Speech-language pathology

Speech-language pathology professionals (speech-language pathologists (SLPs), or informally speech therapists) specialize in communication disorders as well as swallowing disorders. The main components of speech production include: phonation, the process of sound production; resonance; intonation, the variation of pitch; and voice, including the aeromechanical components of respiration. The main components of language include: phonology, the manipulation of sound according to the rules of the language; morphology, the understanding and use of the minimal units of meaning; syntax, the grammar rules for constructing sentences in language; semantics, the interpretation of meaning from the signs or symbols of communication; and pragmatics, the social aspects of communication.[1]
National approaches to speech and language pathology

Speech-language pathology is known by a variety of names in various countries around the world:
Speech-language pathology (SLP) in the United States,[1] Canada,[2] Malta,[3] Italy,[4] and the Philippines
Speech and language therapy (SLT) in the United Kingdom, Ireland,[5] and South Africa.[6] Within the United Kingdom, a speech and language therapy team is sometimes referred to as the "SALT" team, to avoid confusion with the senior leadership team. "SLT" is preferable, however, and closer to the official abbreviation SLT used by the RCSLT (Royal College of Speech and Language Therapists).[7]
Speech pathology in Australia[8] and the Philippines
Speech-language therapy in New Zealand
Speech therapy in India,[9] Hong Kong,[10] and other Asian countries
Speech and language pathologist in the Netherlands, the title for university graduates who can participate in research; speech and language therapists (logopedists) are educated to give therapy in the Netherlands
Prior to 2006, the practice of speech-language pathology in the United States was regulated by the individual states. Since January 2006, the 2005 "Standards and Implementation Procedures for the Certificate of Clinical Competence in Speech-Language Pathology" guidelines, as set out by the American Speech-Language-Hearing Association (ASHA), have determined the qualification requirements for the Speech-Language Pathology Clinical Fellowship. First, the individual must obtain an undergraduate degree, which may be in a field related to speech-language-hearing sciences. Second, the individual must graduate from an accredited master's program in speech-language pathology. Many graduate programs will allow coursework not done in undergraduate years to be completed during graduate study. Various states have different regulations regarding licensure. The Certificate of Clinical Competence (CCC) is granted after the clinical fellowship year (CFY), during which the individual provides services under the supervision of an experienced and licensed SLP. After a Certificate of Clinical Competence in Speech-Language Pathology is awarded, continuing education is required every three years for maintenance of the certificate.[2] Post-master's graduate study for a speech-language pathologist may consist of academic, research, and clinical practice. A doctoral degree (Ph.D. or Speech-Language Pathology Doctorate) is currently optional for clinicians wishing to serve the public.
The speech-language pathology vocation

Speech-language pathologists provide a wide range of services, mainly on an individual basis, but also for families and groups, and they provide information for the general public. Speech services begin with initial screening for communication and swallowing disorders and continue with assessment and diagnosis, consultation for the provision of advice regarding management, intervention and treatment, and the provision of counseling and other follow-up services for these disorders.
Services address:

cognitive aspects of communication (e.g., attention, memory, problem solving, executive functions);
speech (i.e., phonation, articulation, fluency, resonance, and voice, including aeromechanical components of respiration);
language (i.e., phonology, morphology, syntax, semantics, and pragmatic/social aspects of communication), including comprehension and expression in oral, written, graphic, and manual modalities; language processing; preliteracy and language-based literacy skills; and phonological awareness;
swallowing or other upper aerodigestive functions such as infant feeding and aeromechanical events (evaluation of esophageal function is for the purpose of referral to medical professionals);
voice (i.e., hoarseness (dysphonia), poor vocal volume (hypophonia), and abnormal (e.g., rough, breathy, strained) vocal quality). Research has demonstrated that voice therapy is especially helpful for certain patient populations, such as individuals with Parkinson's disease, who often develop voice issues as a result of their disease;
sensory awareness related to communication, swallowing, or other upper aerodigestive functions.
Multidisciplinary collaboration
Speech-language pathologists collaborate with other health care professionals, often working as part of a multidisciplinary team, providing referrals to audiologists and others, and providing information to health care professionals (including doctors, nurses, occupational therapists, and dietitians), educators, and parents as dictated by the individual client's needs. In relation to auditory processing disorders,[3] they collaborate in assessment and provide intervention where there is evidence of speech, language, and/or other cognitive-communication disorders. The treatment of patients with cleft lip and palate has an obvious interdisciplinary character, and speech therapy outcomes are better when surgical treatment is performed earlier.[4]
Healthcare
Promote healthy lifestyle practices for the preservation of communication, hearing, or swallowing, or for the treatment of other upper aerodigestive disorders. Recognize the need to provide and appropriately accommodate diagnostic and treatment services to individuals from diverse cultural backgrounds, and adjust treatment and assessment services accordingly. Advocate for individuals through community awareness, education, and training programs to promote and facilitate access to full participation in communication, including the elimination of societal barriers.
Research
Conduct research related to communication sciences and disorders, swallowing disorders, or other upper aerodigestive functions.
Training
Education:
Master's degree in speech-language pathology (M.A. or M.S.) or a clinical doctorate in speech-language pathology (SLP-D).
A passing score on the national speech-language pathology board exam (Praxis).
Successful completion of a clinical fellowship year (CFY).
American Speech-Language-Hearing Association (ASHA) Certificate of Clinical Competence (CCC) and full state licensure to practice, following successful completion of the clinical fellowship year (CFY).
Credentials of a clinical fellow typically read as M.A. CFY-SLP. Credentials of a licensed SLP are commonly written as M.A./M.S. CCC-SLP or Ph.D. CCC-SLP, to indicate the practitioner's graduate degree and successful completion of the fellowship year/board exams to obtain the Certificate of Clinical Competence (CCC).
Continuing Education and Training Obligations:
Educate, supervise, and mentor future speech-language pathologists.[5] Participate in continuing education. Educate and provide in-service training to families, caregivers, and other professionals. Train, supervise, and manage speech-language pathology assistants and other personnel. Educate and counsel individuals, families, co-workers, educators, and other persons in the community regarding acceptance, adaptation, and decisions about communication and swallowing.[6]
Working environments
Speech-language pathologists work in a variety of clinical and educational settings. SLPs work in public and private hospitals, skilled nursing facilities (SNFs), long-term acute care (LTAC) facilities, hospice,[7] and home healthcare. SLPs may also work within the education system, in both public and private schools, colleges, and universities.[8] Some speech-language pathologists also work in community health, providing services at prisons and young offenders' institutions, or provide expert testimony in applicable court cases.[9] Subsequent to ASHA's 2005 approval of the delivery of speech-language pathology services via video conference, or telepractice,[10] SLPs have begun delivering services via this method.
Methods of assessment

For more details on this topic, see Speech and language assessment.
Assessment of speech, language, cognition, and swallowing can consist of informal (non-standardized or criterion-based) assessments, formal standardized tests, instrumental measures, language sample analyses, and oral motor mechanism exams. Informal assessments rely on a clinician's knowledge and experience to evaluate an individual's abilities across areas of concern. Formal standardized testing is used to measure an individual's abilities against peers. Instrumental measures (e.g., the nasometer) use equipment to measure physiological or anatomical impairments (e.g., Fiberoptic Endoscopic Evaluation of Swallowing (FEES) or the Modified Barium Swallow Study (MBS)). Oral motor assessments review the strength, coordination, range of movement, symmetry, and speed of cranial nerves V, VII, IX, X, and XII. The Australian National Guidelines for Stroke Management state that the presence or absence of a gag reflex in an oro-motor examination is not sufficient evidence to determine whether someone has a swallowing disorder. Referrals to speech-language pathologists should be made if there are any concerns regarding slow or limited communication development in children, cognition (e.g., limited attention or disorganization following a traumatic brain injury), difficulty with word-finding, errors in speech sound production, or augmentative and alternative communication needs.
Clients and patients requiring speech and language pathology services

Speech-language pathologists work with clients and patients who can present a wide range of issues.

Infants and children
Infants with injuries due to complications at birth, or feeding and swallowing difficulties, including dysphagia
Children with mild, moderate, or severe:
o Genetic disorders that adversely affect speech, language, and/or cognitive development, including cleft palate, Down's syndrome, and DiGeorge syndrome [11][12]
o Attention deficit hyperactivity disorder [13][14]
o Autism, including Asperger syndrome
o Developmental delay
o Cranial nerve damage
o Hearing loss
o Craniofacial anomalies that adversely affect speech, language, and/or cognitive development
o Language delay
o Specific language impairment
o Specific difficulties in producing sounds, called articulation disorders (including vocalic /r/ and lisps)
o Injuries in some infants due to paralysis of the brain
o Pediatric traumatic brain injury
o Childhood apraxia of speech
Some children are eligible to receive speech therapy services, including assessment and lessons, through the public school system. If not, private therapy is readily available through personal lessons with a qualified speech-language pathologist or the growing field of telepractice.[15] More at-home or combination treatments have become readily available to address specific types of articulation disorders. The use of mobile applications in speech therapy is also growing as an avenue to bring treatment into the home.

Children and adults

Cerebral palsy
Head injury (traumatic brain injury)
Hearing loss and impairments
Learning difficulties, including [16][17]
o Dyslexia
o Specific language impairment (SLI)
o Auditory processing disorder[18]
Physical disabilities
Speech disorders
Stammering, stuttering (dysfluency)
Stroke
Voice disorders (dysphonia)
Language disorders (dysphasia)
Motor speech disorders (dysarthria or dyspraxia)
Naming difficulties (anomia)
Dysgraphia, agraphia
Cognitive communication disorders
Pragmatics
Laryngectomies
Tracheostomies
Oncology (ear, nose, or throat cancer)
Adults
Adults with mild, moderate, or severe eating, feeding, and swallowing difficulties, including dysphagia
Adults with mild, moderate, or severe language difficulties as a result of:
o Stroke
o Progressive neurological conditions (Alzheimer's disease, dementia, Huntington's disease, multiple sclerosis, motor neuron diseases, Parkinson's disease, etc.)
o Cancer of the head, neck, and throat (including laryngectomy)
o Mental health issues
Transgender voice therapy (usually for male-to-female individuals)
Communication disorder

Classification and external resources: ICD-9 315.3; MeSH D003147
A communication disorder is a speech and language disorder which refers to problems in communication and in related areas such as oral motor function. The delays and disorders can range from simple sound substitution to the inability to understand or use one's native language.[1]
General definition

Disorders and tendencies included and excluded under the category of communication disorders may vary by source. For example, the definitions offered by the American Speech-Language-Hearing Association differ from those of the Diagnostic and Statistical Manual, 4th edition (DSM-IV). Gleason (2001) defines a communication disorder as a speech and language disorder which refers to problems in communication and in related areas such as oral motor function. The delays and disorders can range from simple sound substitution to the inability to understand or use one's native language.[1] In general, communication disorders commonly refer to problems in speech (comprehension and/or expression) that significantly interfere with an individual's achievement and/or quality of life. One may find it important to know the operational definition of the agency performing an assessment or giving a diagnosis. Persons who speak more than one language, or who are considered to have an accent in their location of residence, do not have speech disorders if they are speaking in a manner consistent with their home environment or a blending of their home and foreign environments.[2]
DSM-IV's Diagnostic Criteria

Communication disorders are usually first diagnosed in childhood or adolescence, though they are not limited to childhood and may persist into adulthood (DSM-IV-TR; Rapoport, DSM-IV Training Guide for Diagnosis of Childhood Disorders). They may also occur with other disorders (co-occurring disorders).
Diagnosis involves testing and evaluation, during which it is determined whether the scores/performance are "substantially below" developmental expectations and whether they "significantly" interfere with academic achievement, social interactions, and daily living. This assessment may also determine whether the characteristic is deviant or delayed. Therefore, it may be possible for an individual to have communication challenges but not meet the DSM's "substantially below" criteria (DSM-IV-TR). It should also be noted that the DSM categories do not comprise a complete list of all communication disorders; for example, auditory processing disorders (APD) are not classified under the DSM or ICD-10.[3]
DSM-IV Communication Disorder Categories
expressive language disorder -- characterized by difficulty expressing oneself beyond simple sentences and a limited vocabulary. An individual understands language better than they are able to express it; they may have a lot to say but have difficulty organizing and retrieving the words to get an idea across beyond what is expected for their developmental stage.[4]
mixed receptive-expressive language disorder -- problems comprehending the commands of others.
stuttering--a speech disorder characterized by a break in fluency, where sounds, syllables or words may be repeated or prolonged.[5]
phonological disorder -- a speech sound disorder characterized by patterns of sound errors, e.g., "dat" for "that".
Communication Disorder NOS (Not Otherwise Specified)—the DSM-IV category in which disorders that do not meet the specific criteria for the disorder listed above may be classified. (DSM-IV-TR)
Changes Being Considered for the DSM-V

The DSM-V proposed categories for communication disorders completely rework the ones stated above. It appears that the framers are making the categories more general, in a way that captures the various aspects of communication disorders, emphasizes their childhood onset, and differentiates these communication disorders from those associated with other disorders (e.g., autism spectrum disorders). The new categories are as follows; a complete view of the revisions and the rationale for each may be found at dsm5.org. The following partial definitions are taken directly from this source. A 02-08 Communication Disorders
A 02 Language Impairment -- "diagnosed based on language abilities that are below age expectations in one or more language domains; LI is likely to persist into adolescence and adulthood, although the symptoms, domains, and modalities involved may shift with age"[6]
A 03 Late Language Emergence -- "a delay in language onset with no other diagnosed disabilities or developmental delays in other cognitive or motor domains"[7]
A 04 Specific Language Impairment -- language abilities are below age expectations, but nonlinguistic developmental abilities are within age expectations
A 05 Social Communication Disorder -- an impairment of pragmatics, diagnosed based on difficulty in the social uses of verbal and nonverbal communication in naturalistic contexts, which affects the development of social relationships and discourse comprehension and cannot be explained by low abilities in the domains of word structure and grammar or general cognitive ability
A 06 Speech Sound Disorder -- formerly Phonological Disorder[8]
A 07 Childhood Onset Fluency Disorder -- formerly Stuttering[9]
A 08 Voice Disorder -- "a voice disorder is diagnosed based on abnormal production and/or absence of vocal quality, pitch, loudness, resonance, and/or duration, which usually persists over time and is inappropriate for an individual's age or sex"
Examples

Examples of disorders that may include or create challenges in language and communication and/or may co-occur with the above disorders:
autism spectrum disorder -- autistic disorder (also called "classic" autism), pervasive developmental disorder, and Asperger syndrome are developmental disorders that affect the brain's normal development of social and communication skills.
expressive language disorder -- affects speaking and understanding where there is no delay in non-verbal intelligence.
mixed receptive-expressive language disorder -- affects speaking, understanding, reading, and writing where there is no delay in non-verbal intelligence.
semantic pragmatic disorder -- challenges with the semantic and pragmatic aspects of language.
specific language impairment -- a language disorder that delays the mastery of language skills in children who have no hearing loss or other developmental delays. SLI is also called developmental language disorder, language delay, or developmental dysphasia.[10]
Sensory impairments
blindness -- a link between communication skills and visual impairment in children who are blind is currently being investigated (James, D. M. and Stojanovik, V. (2007), Communication skills in blind children: a preliminary investigation. Child: Care, Health and Development, 33: 4-10).
deafness/frequent ear infections -- trouble with hearing during language acquisition may lead to spoken language problems. Children who suffer from frequent ear infections may temporarily develop problems pronouncing words correctly. It should also be noted that some of the above communication disorders can occur in people who use sign language. The inability to hear is not in itself a communication disorder.
Aphasia
Aphasia is the loss of the ability to produce or comprehend language. There are acute aphasias, which result from stroke or brain injury, and primary progressive aphasias, caused by progressive illnesses such as dementia.
Acute aphasias
o Expressive aphasia -- also known as Broca's aphasia, expressive aphasia is a non-fluent aphasia characterized by damage to the frontal lobe region of the brain. A person with expressive aphasia usually speaks in short sentences that make sense but take great effort to produce, and understands another person's speech but has trouble responding quickly.[11]
o Receptive aphasia -- also known as Wernicke's aphasia, receptive aphasia is a fluent aphasia characterized by damage to the temporal lobe region of the brain. A person with receptive aphasia usually speaks in long sentences that have no meaning or content. People with this type of aphasia often have trouble understanding others' speech and generally do not realize that they are not making any sense.[11]
o Conduction aphasia[11]
o Anomic aphasia[11]
o Global aphasia[11]
Primary progressive aphasias
o Progressive nonfluent aphasia[12]
o Semantic dementia[12]
o Logopenic progressive aphasia[12]
Learning disability
dyscalculia -- a defect of the systems used in communicating numbers
dyslexia -- a defect of the systems used in reading
dysgraphia -- a defect of the systems used in writing
Speech disorders
cluttering -- a syndrome characterized by a speech delivery rate which is either abnormally fast, irregular, or both.[13]
dysarthria -- a condition that occurs when problems with the muscles that help you talk make it difficult to pronounce words.[14]
esophageal voice -- involves the patient injecting or swallowing air into the esophagus. Once the patient has forced the air into the esophagus, the air vibrates a muscle and creates esophageal voice. Esophageal voice tends to be difficult to learn, and patients are often only able to talk in short phrases with a quiet voice.
lisp -- a speech impediment that is also known as sigmatism.
speech sound disorder -- speech-sound disorders (SSD) involve impairments in speech-sound production and range from mild articulation issues involving a limited number of speech sounds to more severe phonologic disorders involving multiple errors in speech-sound production and reduced intelligibility.[15]
stuttering - a speech disorder in which sounds, syllables, or words are repeated or last longer than normal. These problems cause a break in the flow of speech (called disfluency).
Dysphagia

Not to be confused with Dysphasia.

Classification and external resources: ICD-10 R13; ICD-9 438.82, 787.2; DiseasesDB 17942; MedlinePlus 003115; eMedicine pmr/194; MeSH D003680
Dysphagia is the medical term for the symptom of difficulty in swallowing.[1][2][3] Although classified under "symptoms and signs" in ICD-10,[4] the term is sometimes used as a condition in its own right.[5][6][7] Sufferers are sometimes unaware of their dysphagia.[8][9] The term is derived from the Greek dys, meaning bad or disordered, and phago, meaning "eat". It may be a sensation that suggests difficulty in the passage of solids or liquids from the mouth to the stomach,[10] a lack of pharyngeal sensation, or various other inadequacies of the swallowing mechanism. Dysphagia is distinguished from other symptoms including odynophagia, which is defined as painful swallowing,[11] and globus, which is the sensation of a lump in the throat. A psychogenic dysphagia is known as phagophobia. Individuals who suffer from dysphagia are often placed on thickened fluids. The thicker consistency makes it less likely that an individual with dysphagia will aspirate while drinking. Individuals with difficulty swallowing may find that liquids cause coughing, spluttering, or even choking, and thickening drinks enables them to swallow safely. A range of commercial thickening agents is available to purchase for the dietary management of dysphagia. It is also worthwhile to refer to the physiology of swallowing in understanding dysphagia.
Causes

Classification
Dysphagia is classified into two major types: oropharyngeal dysphagia and esophageal dysphagia.[12] In some patients, no organic cause for dysphagia can be found; this is termed functional dysphagia.
Signs and symptoms

Some patients have limited awareness of their dysphagia, so lack of the symptom does not exclude an underlying disease.[13] When dysphagia goes undiagnosed or untreated, patients are at high risk of pulmonary aspiration and subsequent aspiration pneumonia secondary to food or liquids going the wrong way into the lungs. Some people present with "silent aspiration" and do not cough or show outward signs of aspiration. Undiagnosed dysphagia can also result in dehydration, malnutrition, and renal failure. Some signs and symptoms of oropharyngeal dysphagia include difficulty controlling food in the mouth, inability to control food or saliva in the mouth, difficulty initiating a swallow, coughing, choking, frequent pneumonia, unexplained weight loss, a gurgly or wet voice after swallowing, nasal regurgitation, and dysphagia (patient complaint of swallowing difficulty).[13] When asked where the food is getting stuck, patients will often point to the cervical (neck) region as the site of the obstruction; however, the actual site of obstruction is always at or below the level at which the obstruction is perceived. The most common symptom of esophageal dysphagia is the inability to swallow solid food, which the patient will describe as "becoming stuck" or "held up" before it either passes into the stomach or is regurgitated. Pain on swallowing, or odynophagia, is a distinctive symptom that can be highly indicative of carcinoma, although it also has numerous other causes that are not related to cancer. Achalasia is a major exception to the usual pattern of dysphagia in that swallowing of fluid tends to cause more difficulty than swallowing solids. In achalasia, there is idiopathic destruction of the parasympathetic ganglia of the myenteric (Auerbach's) plexus of the entire esophagus, which results in functional narrowing of the lower esophagus and peristaltic failure throughout its length.
Dehydration and/or undernutrition caused by dietary restrictions may result in weight loss and worsen the risk of aspiration pneumonia.
Differential diagnosis

All causes of dysphagia are considered as differential diagnoses. Some common ones are:
Esophageal atresia
Paterson-Kelly syndrome
Zenker's diverticulum
Benign strictures
Achalasia
Esophageal diverticula
Scleroderma
Diffuse esophageal spasm
Webs and rings
Esophageal cancer
Hiatus hernia, especially the paraesophageal type
Dysphagia lusoria
Gastroesophageal reflux
Esophageal dysphagia is almost always caused by disease in or adjacent to the esophagus, but occasionally the lesion is in the pharynx or stomach. In many of the pathological conditions causing dysphagia, the lumen becomes progressively narrowed and indistensible. Initially only fibrous solids cause difficulty, but later the problem can extend to all solids and eventually even to liquids. Patients with difficulty swallowing may benefit from thickened fluids if they are more comfortable with those liquids, although, so far, there are no scientific studies proving that thickened liquids are beneficial.
Diagnostic approach

The gold standard for diagnosing oropharyngeal dysphagia in Commonwealth countries is the Modified Barium Swallow Study or Videofluoroscopic Swallow Study (fluoroscopy). This is a lateral video X-ray (with an AP view in some cases) that provides objective information on bolus transport, the safest bolus consistency (among consistencies including honey, nectar, thin, pudding, puree, and regular), and possible head positioning and/or maneuvers that may facilitate swallow function, depending on each individual's anatomy and physiology. In Zenker's diverticulum, a barium meal first fills the pouch, then overflows from the top. In achalasia, it shows "bird-beak" tapering of the distal esophagus. In esophageal cancer, it shows a characteristic filling defect ("rat-tail" deformity). In leiomyoma, there is a smooth filling defect. Reflux can be demonstrated on fluoroscopy. In strictures, the meal is initially arrested above the stricture, then gradually trickles down. Esophagoscopy and laryngoscopy can give a direct view of the lumen.
A chest radiograph may show an air-fluid level in the mediastinum. Pott's disease and calcified aneurysms of the aorta can be easily diagnosed. An esophageal motility study is useful in cases of achalasia and diffuse esophageal spasm. Exfoliative cytology can be performed on esophageal lavage obtained by esophagoscopy; it can detect malignant cells at an early stage. Ultrasonography and CT scans are not very useful in finding the cause of dysphagia, but can detect masses in the mediastinum and aortic aneurysms. A FEES (fibreoptic endoscopic evaluation of swallowing), sometimes with sensory evaluation, is usually done by an otorhinolaryngologist (ear, nose and throat specialist). This procedure involves the patient eating different consistencies, as above, and the results are then analyzed.
Epidemiology

Swallowing disorders can occur in all age groups, resulting from congenital abnormalities, structural damage, and/or medical conditions.[13] Swallowing problems are a common complaint among older individuals, and the incidence of dysphagia is higher in the elderly,[14] in patients who have had strokes,[15] and in patients who are admitted to acute care hospitals or chronic care facilities. Dysphagia is a symptom of many different causes, which can usually be elicited through a careful history by the treating physician. A formal oropharyngeal dysphagia evaluation is performed by a speech-language pathologist.[16]
Speech

For other uses, see Speech (disambiguation).
Speech production (English) visualized by Real-time MRI
Speech production (Chinese) visualized by Real-time MRI
Speech production (German) visualized by Real-time MRI
Speech is the vocalized form of human communication. It is based upon the syntactic combination of lexicals and names that are drawn from very large (usually about 10,000 different words) vocabularies. Each spoken word is created out of the phonetic combination of a limited set of vowel and consonant speech sound units. These vocabularies, the syntax which structures them, and their set of speech sound units differ, creating the existence of many thousands of different types of mutually unintelligible human languages. Most human speakers (polyglots) are able to communicate in two or more of them.[1] The vocal abilities that enable humans to produce speech also provide humans with the ability to sing. A gestural form of human communication exists for the deaf in the form of sign language. Speech in some cultures has become the basis of a written language, often one that differs in its vocabulary, syntax and phonetics from its associated spoken one, a situation called diglossia. Speech in addition to its use in communication, it is suggested by some psychologists such as Vygotsky is internally used by mental processes to enhance and organize cognition in the form of an interior monologue. Speech is researched in of the speech production and speech perception of the sounds used in vocal language. Other research topics concern speech repetition, the ability to map heard spoken words into the vocalizations needed to recreated that plays a key role in the vocabulary expansion in children and speech errors. Several academic disciplines study these including acoustics, psychology, speech pathology, linguistics, cognitive science, communication studies, otolaryngology and computer science. Another area of research is how the human brain in its different areas such as the Broca's area and Wernicke's area underlies speech. It is controversial how far human speech is unique in that other animals also communicate with vocalizations. 
While no animals in the wild have comparably large vocabularies, research on the nonverbal abilities of language-trained apes such as Washoe and Kanzi raises the possibility that they might have these capabilities. The origins of speech are unknown and subject to much debate and speculation.
Speech production Main article: Speech production
In linguistics (articulatory phonetics), manner of articulation describes how the tongue, lips, jaw, and other speech organs are involved in making a sound. Often the concept is used only for the production of consonants. For any place of articulation, there may be several manners, and therefore several homorganic consonants. Normal human speech is produced with pulmonary pressure provided by the lungs, which creates phonation in the glottis in the larynx that is then modified by the vocal tract into different vowels and consonants. However, humans can pronounce words without the use of the lungs and glottis in alaryngeal speech, of which there are three types: esophageal speech, pharyngeal speech and buccal speech (better known as Donald Duck talk).
Speech perception Speech perception refers to the processes by which humans are able to interpret and understand the sounds used in language. The study of speech perception is closely linked to the fields of phonetics and phonology in linguistics and cognitive psychology and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech research has applications in building computer systems that can recognize speech, as well as improving speech recognition for hearing- and language-impaired listeners.
Speech repetition Spoken vocalizations are quickly turned from sensory inputs into motor instructions needed for their immediate or delayed (in phonological memory) vocal imitation. This occurs independently of speech perception. This mapping plays a key role in enabling children to expand their spoken vocabulary and hence the ability of human language to transmit across generations.[2]
Speech errors Speech is a complex activity; as a result, errors are often made in speech. Speech errors have been analyzed by scientists to understand the nature of the processes involved in the production of speech.
Problems involving speech There are several organic and psychological factors that can affect speech. Among these are:
1. Diseases and disorders of the lungs or the vocal cords, including paralysis, respiratory infections (bronchitis), vocal fold nodules and cancers of the lungs and throat.
2. Diseases and disorders of the brain, including alogia, aphasias, dysarthria, dystonia and speech processing disorders, where impaired motor planning, nerve transmission, phonological processing or perception of the message (as opposed to the actual sound) leads to poor speech production.
3. Hearing problems, such as otitis media with effusion, and listening problems, such as auditory processing disorders, which can lead to phonological problems.
4. Articulatory problems, such as stuttering, lisping, cleft palate, ataxia, or nerve damage leading to problems in articulation. Tourette syndrome and tics can also affect speech. Many speakers also have a slur in their speech.
5. In addition to dysphasia, anomia and auditory processing disorder can impede the quality of auditory perception, and therefore, expression. Those who are hard of hearing or deaf may be considered to fall into this category.
Speech and the brain
Paul Broca
Two areas of the cerebral cortex are necessary for speech. Broca's area, named after its discoverer, French neurologist Paul Broca (1824-1880), is in the frontal lobe, usually on the left, near the motor cortex controlling muscles of the lips, jaws, soft palate and vocal cords. When it is damaged by a stroke or injury, comprehension is unaffected but speech is slow and labored and the sufferer will talk in "telegramese". Wernicke's area, discovered in 1874 by German neurologist Carl Wernicke (1848-1905), lies to the back of the temporal lobe, again usually on the left, near the areas receiving auditory and visual information. Damage to it destroys comprehension: the sufferer speaks fluently but nonsensically.
Phonation
The term phonation has slightly different meanings depending on the subfield of phonetics. Among some phoneticians, phonation is the process by which the vocal folds produce certain sounds through quasi-periodic vibration. This is the definition used among those who study laryngeal anatomy and physiology and speech production in general. Other phoneticians, though, call this process voicing, and they use the term phonation to refer to any oscillatory state of any part of the larynx that modifies the airstream, of which voicing is just one example. As such,
voiceless and supra-glottal phonation are included under this definition, which is common in the field of linguistic phonetics.
Voicing The phonatory process, or voicing, occurs when air is expelled from the lungs through the glottis, creating a pressure drop across the larynx. When this drop becomes sufficiently large, the vocal folds start to oscillate. The minimum pressure drop required to achieve phonation is called the phonation threshold pressure; for humans with normal vocal folds, it is approximately 2–3 cm H2O. The motion of the vocal folds during oscillation is mostly lateral, though there is also some superior component. However, there is almost no motion along the length of the vocal folds. The oscillation of the vocal folds serves to modulate the pressure and flow of the air through the larynx, and this modulated airflow is the main component of the sound of most voiced phones. The sound that the larynx produces is a harmonic series. In other words, it consists of a fundamental tone (the fundamental frequency, the main acoustic cue for the percept pitch) accompanied by harmonic overtones, which are multiples of the fundamental frequency.[1][2] According to the source-filter theory, the resulting sound excites the resonance chamber that is the vocal tract to produce the individual speech sounds.[3][4] The vocal folds will not oscillate if they are not sufficiently close to one another, are not under sufficient tension or are under too much tension, or if the pressure drop across the larynx is not sufficiently large. In linguistics, a phone is called voiceless if there is no phonation during its occurrence.[4] In speech, voiceless phones are associated with vocal folds that are elongated, highly tensed, and placed laterally (abducted) when compared to vocal folds during phonation.[5] Fundamental frequency can be varied through a variety of means. Large-scale changes are accomplished by increasing the tension in the vocal folds through contraction of the cricothyroid muscle.
Smaller changes in tension can be effected by contraction of the thyroarytenoid muscle or changes in the relative position of the thyroid and cricoid cartilages, as may occur when the larynx is lowered or raised, either volitionally or through movement of the tongue to which the larynx is attached via the hyoid bone.[5] In addition to tension changes, fundamental frequency is also affected by the pressure drop across the larynx, which is mostly affected by the pressure in the lungs, and will also vary with the distance between the vocal folds. Variation in fundamental frequency is used linguistically to produce intonation and tone. There are currently two main theories as to how vibration of the vocal folds is initiated: the myoelastic theory and the aerodynamic theory.[6] These two theories are not in contention with one another and it is quite possible that both theories are true and operating simultaneously to initiate and maintain vibration. A third theory, the neurochronaxic theory, was in considerable vogue in the 1950s, but has since been largely discredited.[7]
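The harmonic-series description above can be sketched numerically. The following is a minimal, illustrative Python script, not a physiological model: it builds a source signal as a sum of harmonics of a fundamental frequency, in the spirit of the source part of the source-filter theory. All numbers (sample rate, 1/n amplitude roll-off) are illustrative assumptions.

```python
import math

def harmonic_source(f0, n_harmonics, duration, sample_rate=16000):
    """Sum of sinusoids at integer multiples of f0 (a crude glottal source).

    Harmonic amplitudes fall off as 1/n here, a rough stand-in for the
    spectral roll-off of a real glottal source.
    """
    n_samples = int(duration * sample_rate)
    signal = []
    for i in range(n_samples):
        t = i / sample_rate
        s = sum(math.sin(2 * math.pi * n * f0 * t) / n
                for n in range(1, n_harmonics + 1))
        signal.append(s)
    return signal

# A 100 Hz fundamental with 10 harmonics: components at 100, 200, ..., 1000 Hz.
wave = harmonic_source(100, 10, 0.05)
print(len(wave))  # 800 samples: 50 ms at 16 kHz
```

In a fuller source-filter sketch, this source would then be passed through a filter whose resonances model the vocal tract.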
Myoelastic and aerodynamic theory
The myoelastic theory states that when the vocal cords are brought together and breath pressure is applied to them, the cords remain closed until the pressure beneath them—the subglottic pressure—is sufficient to push them apart, allowing air to escape and reducing the pressure enough for the muscle tension recoil to pull the folds back together again. Pressure builds up once again until the cords are pushed apart, and the whole cycle keeps repeating itself. The rate at which the cords open and close—the number of cycles per second—determines the pitch of the phonation.[7] The aerodynamic theory is based on the Bernoulli energy law in fluids. The theory states that when a stream of breath is flowing through the glottis while the arytenoid cartilages are held together by the action of the interarytenoid muscles, a push-pull effect is created on the vocal fold tissues that maintains self-sustained oscillation. The push occurs during glottal opening, when the glottis is convergent, whereas the pull occurs during glottal closing, when the glottis is divergent. During glottal closure, the air flow is cut off until breath pressure pushes the folds apart and the flow starts up again, causing the cycles to repeat.[7] The textbook entitled Myoelastic Aerodynamic Theory of Phonation[6] by Ingo Titze credits Janwillem van den Berg as the originator of the theory and provides detailed mathematical development of the theory.
Neurochronaxic theory
This theory states that the frequency of the vocal fold vibration is determined by the chronaxy of the recurrent nerve, and not by breath pressure or muscular tension. Advocates of this theory thought that every single vibration of the vocal folds was due to an impulse from the recurrent laryngeal nerves and that the acoustic center in the brain regulated the speed of vocal fold vibration.[7] Speech and voice scientists have long since left this theory as the muscles have been shown to not be able to contract fast enough to accomplish the vibration. In addition, persons with paralyzed vocal folds can produce phonation, which would not be possible according to this theory. Phonation occurring in excised larynges would also not be possible according to this theory.
State of the glottis
A continuum from closed glottis to open. The black triangles represent the arytenoid cartilages, the sail shapes the vocal cords, and the dotted circle the windpipe.
In linguistic phonetic treatments of phonation, such as those of Peter Ladefoged, phonation was considered to be a matter of points on a continuum of tension and closure of the vocal cords. More intricate mechanisms were occasionally described, but they were difficult to investigate, and until recently the state of the glottis and phonation were considered to be nearly synonymous.[8]
If the vocal cords are completely relaxed, with the arytenoid cartilages apart for maximum airflow, the cords do not vibrate. This is voiceless phonation, and is extremely common with obstruents. If the arytenoids are pressed together for glottal closure, the vocal cords block the airstream, producing stop sounds such as the glottal stop. In between there is a sweet spot of maximum vibration. This is modal voice, and is the normal state for vowels and sonorants in all the world's languages. However, the aperture of the arytenoid cartilages, and therefore the tension in the vocal cords, is one of degree between the end points of open and closed, and there are several intermediate situations utilized by various languages to make contrasting sounds.[8] For example, Gujarati has vowels with a partially lax phonation called breathy voice or murmured, while Burmese has vowels with a partially tense phonation called creaky voice or laryngealized. Both of these phonations have dedicated IPA diacritics, an under-umlaut and an under-tilde. The Jalapa dialect of Mazatec is unusual in contrasting both with modal voice in a three-way distinction. (Note that Mazatec is a tonal language, so the glottis is making several tonal distinctions simultaneously with the phonation distinctions.)[8]
Mazatec:
breathy voice [ja̤] 'he wears'
modal voice [já] 'tree'
creaky voice [ja̰] 'he carries'
Note: There was an editing error in the source of this information. The latter two translations may have been mixed up.
Javanese does not have modal voice in its stops, but contrasts two other points along the phonation scale, with more moderate departures from modal voice, called slack voice and stiff voice. The "muddy" consonants in Shanghainese are slack voice; they contrast with tenuis and aspirated consonants.[8] Although each language may be somewhat different, it is convenient to classify these degrees of phonation into discrete categories. A series of seven alveolar stops, with phonations ranging from an open/lax to a closed/tense glottis:
Open glottis: [t] voiceless (full airstream)
[d̤] breathy voice
[d̥] slack voice
Sweet spot: [d] modal voice (maximum vibration)
[d̬] stiff voice
[d̰] creaky voice
Closed glottis: [ʔ͡t] glottal closure (blocked airstream)
The IPA diacritics under-ring and subscript wedge, commonly called "voiceless" and "voiced", are sometimes added to the symbol for a voiced sound to indicate more lax/open (slack) and tense/closed (stiff) states of the glottis, respectively. (Ironically, adding the 'voicing' diacritic to the symbol for a voiced consonant indicates less modal voicing, not more, because a modally voiced sound is already fully voiced, at its sweet spot, and any further tension in the vocal cords dampens their vibration.)[8] Alsatian, like several Germanic languages, has a typologically unusual phonation in its stops. The consonants transcribed /b/, /d/, /ɡ/ (ambiguously called "lenis") are partially voiced: the vocal cords are positioned as for voicing, but do not actually vibrate. That is, they are technically voiceless, but without the open glottis usually associated with voiceless stops. They contrast with both modally voiced /b, d, ɡ/ and modally voiceless /p, t, k/ in French borrowings, as well as aspirated /kʰ/ word-initially.[8]
Glottal consonants
It has long been noted that in many languages, both phonologically and historically, the glottal consonants [ʔ, ɦ, h] do not behave like other consonants. Phonetically, they have no manner or place of articulation other than the state of the glottis: glottal closure for [ʔ], breathy voice for [ɦ], and open airstream for [h]. Some phoneticians have described these sounds as neither glottal nor consonantal, but instead as instances of pure phonation, at least in many European languages. However, in Semitic languages they do appear to be true glottal consonants.[8]
Supra-glottal phonation In the last few decades it has become apparent that phonation may involve the entire larynx, with as many as six valves and muscles working either independently or together. From the glottis upward, these articulations are:[9]
1. glottal (the vocal cords), producing the distinctions described above
2. ventricular (the 'false vocal cords', partially covering and damping the glottis)
3. arytenoid (sphincteric compression forwards and upwards)
4. epiglotto-pharyngeal (retraction of the tongue and epiglottis, potentially closing onto the pharyngeal wall)
5. raising or lowering of the entire larynx
6. narrowing of the pharynx
Until the development of fiber-optic laryngoscopy, the full involvement of the larynx during speech production was not observable, and the interactions among the six laryngeal articulators are still poorly understood. However, at least two supra-glottal phonations appear to be widespread in the world's languages. These are harsh voice ('ventricular' or 'pressed' voice), which involves overall constriction of the larynx, and faucalized voice ('hollow' or 'yawny' voice), which involves overall expansion of the larynx.[9] The Bor dialect of Dinka has contrastive modal, breathy, faucalized, and harsh voice in its vowels, as well as three tones. The ad hoc diacritics employed in the literature are a subscript double quotation mark for faucalized voice and underlining for harsh voice.[9] Examples from Bor Dinka:
modal voice: tɕìt 'diarrhea'
breathy voice: tɕì̤t 'go ahead'
harsh voice: tɕì̱t 'scorpions'
faucalized voice: tɕì͈t 'to swallow'
Other languages with these contrasts are Bai (modal, breathy, and harsh voice), Kabiye (faucalized and harsh voice, previously seen as ±ATR), Somali (breathy and harsh voice).[9] Elements of laryngeal articulation or phonation may occur widely in the world's languages as phonetic detail even when not phonemically contrastive. For example, simultaneous glottal, ventricular, and arytenoid activity (for something other than epiglottal consonants) has been observed in Tibetan, Korean, Nuuchahnulth, Nlaka'pamux, Thai, Sui, Amis, Pame, Arabic, Tigrinya, Cantonese, and Yi.[9]
Familiar language examples In languages such as French, all obstruents occur in pairs, one modally voiced and one voiceless.[citation needed] In English, every voiced fricative corresponds to a voiceless one. For the pairs of English stops, however, the distinction is better specified as voice onset time rather than simply voice: in initial position /b d g/ are only partially voiced (voicing begins during the hold of the consonant), while /p t k/ are aspirated (voicing doesn't begin until well after their release).[citation needed] Certain English morphemes have voiced and voiceless allomorphs, such as the plural, verbal, and possessive endings spelled -s (voiced in kids /kɪdz/ but voiceless in kits /kɪts/) and the past-tense ending spelled -ed (voiced in buzzed /bʌzd/ but voiceless in fished /fɪʃt/).[citation needed] A few European languages, such as Finnish, have no phonemically voiced obstruents but pairs of long and short consonants instead. Outside of Europe, a lack of voicing distinctions is not uncommon; indeed, in Australian languages it is nearly universal. In languages without the distinction between voiceless and voiced obstruents, it is often found that they are realized as voiced in voiced environments such as between vowels, and voiceless elsewhere.
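The voicing assimilation in the English plural described above can be stated as a tiny rule: the suffix copies the voicing of the stem-final sound. The following hypothetical Python sketch illustrates just that rule; it deliberately ignores the /ɪz/ allomorph used after sibilants, and the consonant inventory is simplified.

```python
# Simplified set of voiceless consonants (illustrative, not exhaustive).
VOICELESS = set("ptkf\u03b8s\u0283h")  # p t k f θ s ʃ h

def plural_suffix(stem_final_sound):
    """Choose /s/ or /z/ for the English plural by voicing assimilation.

    Simplification: the /\u026az/ allomorph after sibilants is ignored.
    """
    return "s" if stem_final_sound in VOICELESS else "z"

print(plural_suffix("t"))  # kits -> /kɪts/: voiceless stem, voiceless suffix
print(plural_suffix("d"))  # kids -> /kɪdz/: voiced stem, voiced suffix
```

The same pattern, with the voicing of the final sound deciding between /t/ and /d/, covers the -ed examples (buzzed vs. fished).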
Vocal registers See also: Speech register, a subset of a language used in a particular social setting. In phonology Main article: Register (phonology)
In phonology, a register is a combination of tone and vowel phonation into a single phonological parameter. For example, among its vowels, Burmese combines modal voice with low tone, breathy voice with falling tone, creaky voice with high tone, and glottal closure with high tone. These four registers contrast with each other, but no other combination of phonation (modal, breathy, creaky, closed) and tone (high, low, falling) is found. In pedagogy and speech pathology Main article: Vocal registration
Among vocal pedagogues and speech pathologists, a vocal register also refers to a particular phonation limited to a particular range of pitch, which possesses a characteristic sound quality.[10] The term "register" may be used for several distinct aspects of the human voice:[7]
A particular part of the vocal range, such as the upper, middle, or lower registers, which may be bounded by vocal breaks
A particular phonation
A resonance area such as chest voice or head voice
A certain vocal timbre
Four combinations of these elements are identified in speech pathology: the vocal fry register, the modal register, the falsetto register, and the whistle register.

Resonance
This article is about resonance in physics. For other uses, see Resonance (disambiguation). "Resonant" redirects here. For the phonological term, see Sonorant.
Increase of amplitude as damping decreases and frequency approaches resonant frequency of a driven damped simple harmonic oscillator.[1][2]
In physics, resonance is the tendency of a system to oscillate with greater amplitude at some frequencies than at others. Frequencies at which the response amplitude is a relative maximum are known as the system's resonant frequencies, or resonance frequencies. At these frequencies, even small periodic driving forces can produce large amplitude oscillations, because the system stores vibrational energy. Resonance occurs when a system is able to store and easily transfer energy between two or more different storage modes (such as kinetic energy and potential energy in the case of a pendulum). However, there are some losses from cycle to cycle, called damping. When damping is small, the resonant frequency is approximately equal to the natural frequency of the system, which is a frequency of unforced vibrations. Some systems have multiple, distinct, resonant frequencies. Resonance phenomena occur with all types of vibrations or waves: there is mechanical resonance, acoustic resonance, electromagnetic resonance, nuclear magnetic resonance (NMR), electron spin resonance (ESR) and resonance of quantum wave functions. Resonant systems can be used to generate vibrations of a specific frequency (e.g. musical instruments), or pick out specific frequencies from a complex vibration containing many frequencies (e.g. filters). Resonance was recognized by Galileo Galilei with his investigations of pendulums and musical strings beginning in 1602.[3][4]
Examples
Pushing a person in a swing is a common example of resonance. The loaded swing, a pendulum, has a natural frequency of oscillation, its resonant frequency, and resists being pushed at a faster or slower rate.
One familiar example is a playground swing, which acts as a pendulum. Pushing a person in a swing in time with the natural interval of the swing (its resonant frequency) will make the swing go higher and higher (maximum amplitude), while attempts to push the swing at a faster or slower tempo will result in smaller arcs. This is because the energy the swing absorbs is maximized when the pushes are 'in phase' with the swing's natural oscillations, while some of the swing's energy is actually extracted by the opposing force of the pushes when they are not. Resonance occurs widely in nature, and is exploited in many man-made devices. It is the mechanism by which virtually all sinusoidal waves and vibrations are generated. Many sounds we hear, such as when hard objects of metal, glass, or wood are struck, are caused by brief resonant vibrations in the object. Light and other short wavelength electromagnetic radiation is produced by resonance on an atomic scale, such as electrons in atoms. Other examples are: Mechanical and acoustic resonance
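The swing behaviour can be checked with a small numeric sketch. For a sinusoidally driven, damped harmonic oscillator, the steady-state amplitude is F/√((ω₀² − ω²)² + (2ζω₀ω)²); the Python helper below (all parameter values are made up for illustration) shows the amplitude peaking when the driving frequency matches the natural frequency, and falling off for faster or slower pushes.

```python
import math

def steady_state_amplitude(omega, omega0=2.0, zeta=0.05, force=1.0):
    """Steady-state amplitude of x'' + 2*zeta*omega0*x' + omega0**2*x = force*cos(omega*t)."""
    return force / math.sqrt((omega0**2 - omega**2)**2
                             + (2 * zeta * omega0 * omega)**2)

# Driving at the natural frequency (omega = omega0 = 2.0) gives a much
# larger response than driving below (1.0) or above (3.0) it.
for w in (1.0, 2.0, 3.0):
    print(w, round(steady_state_amplitude(w), 3))
```

With light damping (small ζ) the peak grows sharply, which is the quantitative version of "pushes in phase make the swing go higher".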
the timekeeping mechanisms of modern clocks and watches, e.g. the balance wheel in a mechanical watch and the quartz crystal in a quartz watch the tidal resonance of the Bay of Fundy acoustic resonances of musical instruments and human vocal cords the shattering of a crystal wineglass when exposed to a musical tone of the right pitch (its resonant frequency)
Electrical resonance
electrical resonance of tuned circuits in radios and TVs that allow radio frequencies to be selectively received
Optical resonance
creation of coherent light by optical resonance in a laser cavity
Orbital resonance in astronomy
orbital resonance as exemplified by some moons of the solar system's gas giants
Atomic, particle, and molecular resonance
material resonances in atomic scale are the basis of several spectroscopic techniques that are used in condensed matter physics:
Nuclear magnetic resonance
Mössbauer effect
Electron spin resonance
Theory
"Universal Resonance Curve", a symmetric approximation to the normalized response of a resonant circuit; abscissa values are deviation from center frequency, in units of center frequency divided by 2Q; ordinate is relative amplitude, and phase in cycles; dashed curves compare the range of responses of real two-pole circuits for a Q value of 5; for higher Q values, there is less deviation from the universal curve. Crosses mark the edges of the 3-dB bandwidth (gain 0.707, phase shift 45 degrees or 0.125 cycle).
The exact response of a resonance, especially for frequencies far from the resonant frequency, depends on the details of the physical system, and is usually not exactly symmetric about the resonant frequency, as illustrated for the simple harmonic oscillator above. For a lightly damped linear oscillator with a resonance frequency Ω, the intensity of oscillations I when the system is driven with a driving frequency ω is typically approximated by a formula that is symmetric about the resonance frequency:[5]
I(ω) ∝ 1 / ((ω − Ω)² + (Γ/2)²)
The intensity is defined as the square of the amplitude of the oscillations. This is a Lorentzian function, and this response is found in many physical situations involving resonant systems. Γ is a parameter dependent on the damping of the oscillator, and is known as the linewidth of the resonance. Heavily damped oscillators tend to have broad linewidths, and respond to a wider range of driving frequencies around the resonant frequency. The linewidth is inversely proportional to the Q factor, which is a measure of the sharpness of the resonance. In electrical engineering, this approximate symmetric response is known as the universal resonance curve, a concept introduced by Frederick E. Terman in 1932 to simplify the approximate analysis of radio circuits with a range of center frequencies and Q values.[6][7]
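A Lorentzian response of the form I(ω) ∝ 1/((ω − Ω)² + (Γ/2)²) makes Γ the full width at half maximum: the intensity falls to half its peak value at ω = Ω ± Γ/2. A small Python sketch with illustrative values confirms this:

```python
def lorentzian(omega, omega_res=10.0, gamma=0.5):
    """Lorentzian intensity profile; gamma is the full width at half maximum."""
    return 1.0 / ((omega - omega_res)**2 + (gamma / 2)**2)

peak = lorentzian(10.0)            # on resonance
half = lorentzian(10.0 + 0.5 / 2)  # one half-linewidth off resonance
print(half / peak)  # -> 0.5: the half-power point
```

Doubling gamma halves the peak height and doubles the width, which is the broad-linewidth behaviour of heavily damped oscillators described above.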
Resonators A physical system can have as many resonant frequencies as it has degrees of freedom; each degree of freedom can vibrate as a harmonic oscillator. Systems with one degree of freedom, such as a mass on a spring, pendulums, balance wheels, and LC tuned circuits have one resonant frequency. Systems with two degrees of freedom, such as coupled pendulums and resonant transformers, can have two resonant frequencies. As the number of coupled harmonic oscillators grows, the time it takes to transfer energy from one to the next becomes significant. The vibrations in them begin to travel through the coupled harmonic oscillators in waves, from one oscillator to the next. Extended objects that experience resonance due to vibrations inside them are called resonators, such as organ pipes, vibrating strings, quartz crystals, microwave cavities, and laser rods. Since these can be viewed as being made of millions of coupled moving parts (such as atoms), they can have millions of resonant frequencies. The vibrations inside them travel as waves, at an approximately constant velocity, bouncing back and forth between the sides of the resonator. If the distance between the sides is L, the length of a round trip is 2L. In order to cause resonance, the phase of a sinusoidal wave after a round trip has to be equal to the initial phase, so the waves will reinforce. So the condition for resonance in a resonator is that the round trip distance, 2L, be equal to an integer number N of wavelengths λ of the wave:
2L = Nλ,  N ∈ {1, 2, 3, …}
If the velocity of the wave is v, the frequency is f = v/λ, so the resonant frequencies are:
f = Nv / 2L
So the resonant frequencies of resonators, called normal modes, are equally spaced multiples of a lowest frequency called the fundamental frequency. The multiples are often called overtones. There may be several such series of resonant frequencies, corresponding to different modes of vibration.
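The equal spacing of normal modes follows directly from the resonance condition 2L = Nλ together with f = v/λ, giving f_N = N·v/(2L). A minimal Python helper (the wave speed and length are example numbers only) tabulates the first few modes:

```python
def normal_modes(wave_speed, length, n_modes):
    """Resonant frequencies f_N = N * v / (2 * L) of a simple two-ended resonator."""
    fundamental = wave_speed / (2 * length)
    return [n * fundamental for n in range(1, n_modes + 1)]

# Example: v = 343 m/s (speed of sound in air), L = 0.5 m.
print(normal_modes(343.0, 0.5, 4))  # -> [343.0, 686.0, 1029.0, 1372.0]
```

The first entry is the fundamental frequency; the rest are its overtones, each an integer multiple of the fundamental.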
Q factor Main article: Q factor
The quality factor or Q factor is a dimensionless parameter that describes how under-damped an oscillator or resonator is,[8] or equivalently, characterizes a resonator's bandwidth relative to its center frequency.[9] Higher Q indicates a lower rate of energy loss relative to the stored energy of the oscillator, i.e. the oscillations die out more slowly. A pendulum suspended from a high-quality bearing, oscillating in air, has a high Q, while a pendulum immersed in oil has a low Q. In order to sustain a system in resonance at constant amplitude by providing power externally, the energy that has to be provided within each cycle is less than the energy stored in the system (i.e. the sum of the potential and kinetic) by a factor of Q/2π. Oscillators with high quality factors have low damping, which tends to make them ring longer. Sinusoidally driven resonators having higher Q factors resonate with greater amplitudes (at the resonant frequency) but have a smaller range of frequencies around the frequency at which they resonate. The range of frequencies at which the oscillator resonates is called the bandwidth. Thus, a high-Q tuned circuit in a radio receiver would be more difficult to tune, but would have greater selectivity; it would do a better job of filtering out signals from other stations that lie nearby on the spectrum. High-Q oscillators operate over a smaller range of frequencies and are more stable. (See oscillator phase noise.) The quality factors of oscillators vary substantially from system to system. Systems for which damping is important (such as dampers keeping a door from slamming shut) have Q = ½. Clocks, lasers, and other systems that need either strong resonance or high frequency stability need high quality factors. Tuning forks have quality factors around Q = 1000.
The quality factor of atomic clocks and some high-Q lasers can reach as high as 10^11[10] and higher.[11] There are many alternate quantities used by physicists and engineers to describe how damped an oscillator is that are closely related to its quality factor. Important examples include: the damping ratio, relative bandwidth, linewidth and bandwidth measured in octaves.
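The bandwidth relation implicit in the passage, Δf = f₀/Q, and the standard link between Q and the damping ratio for light damping, ζ = 1/(2Q), can be sketched as two tiny hypothetical helpers (the example numbers are illustrative):

```python
def bandwidth_from_q(center_freq, q):
    """-3 dB bandwidth of a resonator: delta_f = f0 / Q."""
    return center_freq / q

def damping_ratio_from_q(q):
    """Damping ratio zeta = 1 / (2 * Q); Q = 1/2 corresponds to critical damping."""
    return 1.0 / (2.0 * q)

# A 1 MHz resonator with Q = 1000 passes a band roughly 1 kHz wide.
print(bandwidth_from_q(1e6, 1000))  # -> 1000.0
# A door damper with Q = 1/2 is critically damped (zeta = 1).
print(damping_ratio_from_q(0.5))    # -> 1.0
```

This makes concrete why a high-Q radio circuit is selective (narrow Δf) and why Q = ½ is the boundary where a system stops ringing at all.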
Types of resonance Mechanical and acoustic resonance Main articles: Mechanical resonance, Acoustic resonance, and String resonance
Mechanical resonance is the tendency of a mechanical system to absorb more energy when the frequency of its oscillations matches the system's natural frequency of vibration than it does at other frequencies. It may cause violent swaying motions and even catastrophic failure in improperly constructed structures including bridges, buildings, trains, and aircraft. When designing objects, engineers must ensure the mechanical resonance frequencies of the component parts do not match driving vibrational frequencies of motors or other oscillating parts, to avoid a phenomenon known as resonance disaster. Avoiding resonance disasters is a major concern in every building, tower and bridge construction project. As a countermeasure, shock mounts can be installed to absorb resonant frequencies and thus dissipate the absorbed energy. The Taipei 101 building relies on a 660-tonne (730-short-ton) pendulum, a tuned mass damper, to cancel resonance. Furthermore, the structure is designed to resonate at a frequency which does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. Many clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. Acoustic resonance is a branch of mechanical resonance that is concerned with the mechanical vibrations across the frequency range of human hearing, in other words sound. For humans, hearing is normally limited to frequencies between about 20 Hz and 20,000 Hz (20 kHz).[12] Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of, and tension on, a drum membrane. Like mechanical resonance, acoustic resonance can result in catastrophic failure of the object at resonance. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass, although this is difficult in practice.[13] Electrical resonance Main article: Electrical resonance
Electrical resonance occurs in an electric circuit at a particular resonant frequency when the impedance of the circuit is at a minimum in a series circuit or at a maximum in a parallel circuit (or when the transfer function is at a maximum). Optical resonance Main article: Optical cavity
An optical cavity or optical resonator is an arrangement of mirrors that forms a standing wave cavity resonator for light waves. Optical cavities are a major component of lasers, surrounding the gain medium and providing feedback of the laser light. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times, producing standing waves for certain resonant frequencies. The standing wave patterns produced are called modes. Longitudinal modes differ only in frequency, while transverse modes differ for different frequencies and have different intensity patterns across the cross section of the
beam. Ring resonators and whispering galleries are examples of optical resonators that do not form standing waves. Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them. (Flat mirrors are not often used because of the difficulty of aligning them precisely.) The geometry (resonator type) must be chosen so the beam remains stable, i.e. the beam size does not continue to grow with each reflection. Resonator types are also designed to meet other criteria such as minimum beam waist or having no focal point (and therefore intense light at that point) inside the cavity. Optical cavities are designed to have a very large Q factor;[14] a beam will reflect a very large number of times with little attenuation. Therefore, the frequency line width of the beam is very small compared to the frequency of the laser. Additional optical resonances are guided-mode resonances and surface plasmon resonance, which result in anomalous reflection and high evanescent fields at resonance. In this case, the resonant modes are guided modes of a waveguide or surface plasmon modes of a dielectric-metallic interface. These modes are usually excited by a subwavelength grating. Orbital resonance Main article: Orbital resonance
In celestial mechanics, an orbital resonance occurs when two orbiting bodies exert a regular, periodic gravitational influence on each other, usually due to their orbital periods being related by a ratio of two small integers. Orbital resonances greatly enhance the mutual gravitational influence of the bodies. In most cases, this results in an unstable interaction, in which the bodies exchange momentum and shift orbits until the resonance no longer exists. Under some circumstances, a resonant system can be stable and self-correcting, so that the bodies remain in resonance. Examples are the 1:2:4 resonance of Jupiter's moons Ganymede, Europa, and Io, and the 2:3 resonance between Pluto and Neptune. Unstable resonances with Saturn's inner moons give rise to gaps in the rings of Saturn. The special case of 1:1 resonance (between bodies with similar orbital radii) causes large Solar System bodies to clear the neighborhood around their orbits by ejecting nearly everything else around them; this effect is used in the current definition of a planet. Atomic, particle, and molecular resonance Main articles: Nuclear magnetic resonance and Resonance (particle)
NMR Magnet at HWB-NMR, Birmingham, UK. In its strong 21.2-tesla field, the proton resonance is at 900 MHz.
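The caption's 900 MHz figure follows from the Larmor relation: the proton resonant frequency is directly proportional to the applied field, with the proportionality constant being the proton gyromagnetic ratio divided by 2π (about 42.58 MHz/T, a standard reference value). A quick check:

```python
GAMMA_BAR_PROTON = 42.577  # proton gyromagnetic ratio / (2*pi), in MHz per tesla

def larmor_mhz(b_tesla):
    # Larmor relation: resonant frequency is proportional to field strength.
    return GAMMA_BAR_PROTON * b_tesla

print(round(larmor_mhz(21.2)))  # 903, matching the "900 MHz" magnet in the caption
print(round(larmor_mhz(1.5)))   # 64, a typical clinical MRI field strength
```

This proportionality is exactly the feature exploited in MRI: a spatially varying field makes resonant frequency a function of position.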
Nuclear magnetic resonance (NMR) is the name given to a physical resonance phenomenon involving the observation of specific quantum mechanical magnetic properties of an atomic nucleus in the presence of an applied, external magnetic field. Many scientific techniques exploit NMR phenomena to study molecular physics, crystals and non-crystalline materials through NMR spectroscopy. NMR is also routinely used in advanced medical imaging techniques, such as in magnetic resonance imaging (MRI). All nuclei containing odd numbers of nucleons have an intrinsic magnetic moment and angular momentum. A key feature of NMR is that the resonant frequency of a particular substance is directly proportional to the strength of the applied magnetic field. It is this feature that is exploited in imaging techniques; if a sample is placed in a non-uniform magnetic field then the resonant frequencies of the sample's nuclei depend on where in the field they are located. Therefore, the particle can be located quite precisely by its resonant frequency. Electron paramagnetic resonance, otherwise known as Electron Spin Resonance (ESR) is a spectroscopic technique similar to NMR, but uses unpaired electrons instead. Materials for which this can be applied are much more limited since the material needs to both have an unpaired spin and be paramagnetic. The Mössbauer effect is the resonant and recoil-free emission and absorption of gamma ray photons by atoms bound in a solid form. Resonance (particle physics): In quantum mechanics and quantum field theory resonances may appear in similar circumstances to classical physics. However, they can also be thought of as
unstable particles, with the formula above valid if Γ is the decay rate and Ω is replaced by the particle's mass M. In that case, the formula comes from the particle's propagator, with its mass replaced by the complex number M + iΓ. The formula is further related to the particle's decay rate by the optical theorem. Failure of the original Tacoma Narrows Bridge Main article: Tacoma Narrows Bridge (1940)
The dramatically visible, rhythmic twisting that resulted in the 1940 collapse of "Galloping Gertie," the original Tacoma Narrows Bridge, has sometimes been characterized in physics textbooks as a classical example of resonance. However, this description is misleading. The catastrophic vibrations that destroyed the bridge were not due to simple mechanical resonance, but to a more complicated interaction between the bridge and the winds passing through it, a phenomenon known as aeroelastic flutter. Robert H. Scanlan, father of bridge aerodynamics, has written an article about this misunderstanding.[15] For more details on this topic, see Mechanical resonance. Resonance causing a vibration on the International Space Station
The rocket engines for the International Space Station are controlled by an autopilot. Ordinarily the parameters for controlling the engine control system for the Zvezda module will cause the rocket engines to boost the International Space Station to a higher orbit. The rocket engines are hinge-mounted, and ordinarily the operation is not noticed by the crew. But on January 14, 2009, the parameters caused the autopilot to swing the rocket engines in larger and larger oscillations, at a frequency of 0.5 Hz. These oscillations were captured on video, and lasted for 142 seconds.[16]
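Growing oscillations like these are characteristic of a system driven near its natural frequency: the steady-state amplitude of a damped, driven harmonic oscillator peaks sharply when the driving frequency matches the natural frequency. A minimal sketch (the numbers here are illustrative, not ISS values):

```python
import math

def amplitude(omega, omega0=10.0, zeta=0.05, f0_over_m=1.0):
    # Steady-state amplitude of a damped, driven harmonic oscillator:
    # A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (2*zeta*w0*w)^2)
    return f0_over_m / math.sqrt(
        (omega0**2 - omega**2) ** 2 + (2 * zeta * omega0 * omega) ** 2
    )

at_resonance = amplitude(10.0)   # driving exactly at the natural frequency
off_resonance = amplitude(5.0)   # driving well below it
print(at_resonance / off_resonance)  # the response at resonance is several times larger
```

The smaller the damping ratio (zeta), the taller and narrower the resonance peak, which is why lightly damped structures are the ones at risk.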
Intonation (linguistics)
Global rise [↗◌] and global fall [↘◌]: IPA numbers 510 and 511; Unicode U+2197 (↗) and U+2198 (↘).
Not to be confused with inflection, tone (linguistics), or pitch accent.
In linguistics, intonation is variation of pitch while speaking which is not used to distinguish words. It contrasts with tone, in which pitch variation does distinguish words. Intonation, rhythm, and stress are the three main elements of linguistic prosody. Intonation patterns in some languages, such as Swedish and Swiss German, can lead to conspicuous fluctuations in pitch, giving speech a sing-song quality.[1] Fluctuations in pitch either involve a rising pitch or a falling pitch. Intonation is found in every language and even in tonal languages, but the realisation and function are seemingly different. It is used in non-tonal languages to add attitudes to words (attitudinal function) and to differentiate between wh-questions, yes-no questions, declarative statements, commands, requests, etc. Intonation can also be used for discourse analysis where new information is realised by means of intonation. It can also be used for emphatic/contrastive purposes. All languages use pitch pragmatically as intonation — for instance for emphasis, to convey surprise or irony, or to pose a question. Tonal languages such as Chinese and Hausa use pitch for distinguishing words in addition to providing intonation. Generally speaking, the following intonations are distinguished:
Rising Intonation means the pitch of the voice rises over time [↗]; Falling Intonation means that the pitch falls with time [↘]; Dipping Intonation falls and then rises [↘↗]; Peaking Intonation rises and then falls [↗↘].
Those with congenital amusia show impaired ability to discriminate, identify and imitate the intonation of the final words in sentences.[2]
Transcription In the International Phonetic Alphabet, global rising and falling intonation are marked with a diagonal arrow rising left-to-right [↗] and falling left-to-right [↘], respectively. These may be written as part of a syllable, or separated with a space when they have a broader scope: He found it on the street? [hiː ˈfaʊnd ɪt | ɒn ðə ↗ˈstɹiːt ‖ ]
Here the rising pitch on street indicates that the question hinges on that word, on where he found it, not whether he found it. Yes, he found it on the street. [↘ˈjɛs ‖ hi ˈfaʊnd ɪt | ɒn ðə ↘ˈstɹiːt ‖ ] How did you ever escape? [↗ˈhaʊ dɪd juː | ˈɛvɚ | ə↘ˈskeɪp ‖ ]
Here, as is common with wh- questions, there is a rising intonation on the question word, and a falling intonation at the end of the question. More detailed transcription systems for intonation have also been developed, such as ToBI (Tones and Break Indices), RaP (Rhythm and Pitch), and INTSINT [3].
Uses of intonation The uses of intonation can be divided into six categories:[4]:ch.6
informational: for example, in English I saw a ↘man in the garden answers "Whom did you see?" or "What happened?", while I ↘saw a man in the garden answers "Did you hear a man in the garden?"
grammatical: for example, in English a rising pitch turns a statement into a yes-no question, as in He's going ↗home? This use of intonation to express grammatical mood is its primary grammatical use (though whether this grammatical function actually exists is controversial).[4]:pp.140, 151 Some languages, like Chickasaw and Kalaallisut, have the opposite pattern from English: rising for statements and falling with questions.
illocution: the intentional force is signaled in, for example, English Why ↘don't you move to California? (a question) versus Why don't you ↗move to California? (a suggestion).
attitudinal: high declining pitch signals more excitement than does low declining pitch, as in English Good ↗morn↘ing versus Good morn↘ing.
textual: linguistic organization beyond the sentence is signaled by the absence of a statement-ending decline in pitch, as in English The lecture was canceled [high pitch on both syllables of "canceled", indicating continuation]; the speaker was ill. versus The lecture was can↘celed. [high pitch on first syllable of "canceled", but declining pitch on the second syllable, indicating the end of the first thought] The speaker was ill.
indexical: group membership can be indicated by the use of intonation patterns adopted specifically by that group, such as street vendors, preachers, and possibly women in some cases (see high rising terminal).
Intonation in English Halliday and Greaves[5] have made a detailed case that three types of meanings—textual, interpersonal, and logical—are all in part achieved through intonation. This is done, they have argued, through the choices we make in terms of (i) rising and falling pitch contour; (ii) where we locate that contour: as part of a clause, throughout a whole clause, or over more than a single clause; and (iii) the shape of the contour. According to some accounts, American English pitch has four levels: low (1), middle (2), high (3), and very high (4). Normal conversation is usually at middle or high pitch; low pitch occurs at the end of utterances other than yes-no questions, while high pitch occurs at the end of yes-no questions. Very high pitch is for strong emotion or emphasis.[1]:p.184 Pitch can indicate attitude: for example, Great uttered in isolation can indicate weak emotion (with pitch starting medium and dropping to low), enthusiasm (with pitch starting very high and ending low), or sarcasm (with pitch starting and remaining low). Declarative sentences show a 2-3-1 pitch pattern. If the last syllable is prominent the final decline in pitch is a glide. For example, in This is fun, this is is at pitch 2, and fun starts at level 3 and glides down to level 1. But if the last prominent syllable is not the last syllable of the utterance, the pitch fall-off is a step. For example, in That can be frustrating, That can be has pitch 2, frus- has level 3, and both syllables of -trating have pitch 1.[1]:p.185 Wh-questions work the same way, as in Who (2) will (2) help (3↘1)? and Who (2) did (3) it (1)? But if something is left unsaid, the final pitch level 1 is replaced by pitch 2. Thus in John's (2) sick (3↘2) ..., with the speaker indicating more to come, John's has pitch 2 while sick starts at pitch 3 and drops only to pitch 2. Yes-no questions with a 2↗3 intonation pattern[3] usually have subject-verb inversion, as in Have (2) you (2) got (2) a (2) minute (3, 3)?
(Here a 2↗4 contour would show more emotion, while a 1↗2 contour would show uncertainty.) Another example is Has (2) the (2) plane (3) left (3) already (3, 3, 3)?, which, depending on the word to be emphasized, could move the location of the rise, as in Has (2) the (2) plane (2) left (3) already (3, 3, 3)? or Has (2) the (2) plane (2) left (2) already (2, 3, 3)? And for example the latter question could also be framed without subject-verb inversion but with the same pitch contour: The (2) plane (2) has (2) left (2) already (2, 3, 3)?
Tag questions with declarative intent at the end of a declarative statement follow a 3↘1 contour rather than a rising contour, since they are not actually intended as yes-no questions, as in We (2) should (2) visit (3, 1) him (1), shouldn't (3, 1) we (1)? But tag questions exhibiting uncertainty, which are interrogatory in nature, have the usual 2↗3 contour, as in We (2) should (2) visit (3, 1) him (1), shouldn't (3, 3) we (3)? Questions with or can be ambiguous in English writing with regard to whether they are either-or questions or yes-no questions. But intonation in speech eliminates the ambiguity. For example, Would (2) you (2) like (2) juice (3) or (2) soda (3, 1)? emphasizes juice and soda separately and equally and ends with a decline in pitch, thus indicating that this is not a yes-no question but rather a choice question equivalent to Which would you like: juice or soda? In contrast, Would (2) you (2) like (2) juice (3) or (3) soda (3, 3)? has yes-no intonation and thus is equivalent to Would you like something to drink (such as juice or soda)? Thus the two basic sentence pitch contours are rising-falling and rising. However, other within-sentence rises and falls result from the placement of prominence on the stressed syllables of certain words. Note that for declaratives or wh-questions with a final decline, the decline is located as a step-down to the syllable after the last prominently stressed syllable, or as a down-glide on the last syllable itself if it is prominently stressed. But for final rising pitch on yes-no questions, the rise always occurs as an upward step to the last stressed syllable, and the high (3) pitch is retained through the rest of the sentence. Pitch also plays a role in distinguishing acronyms that might otherwise be mistaken for common words.
For example, in the phrase "Nike asks that you PLAY—Participate in the Lives of America's Youth",[6] the acronym PLAY may be pronounced with a high tone to distinguish it from the verb 'play', which would also make sense in this context. Alternatively, each letter could be said individually, so PLAY might become "P-L-A-Y" or "P.L.A.Y.". However, the high tone is only employed for disambiguation and is therefore contrastive intonation rather than true lexical tone. Dialects of British and Irish English vary substantially,[7] with rises on many statements in urban Belfast, and falls on most questions in urban Leeds.[3]
Intonation in French Summary
French intonation differs substantially from that of English.[8] There are four primary patterns.
The continuation pattern is a rise in pitch occurring in the last syllable of a rhythm group (typically a phrase). The finality pattern is a sharp fall in pitch occurring in the last syllable of a declarative statement. The yes/no intonation is a sharp rise in pitch occurring in the last syllable of a yes/no question.
The information question intonation is a rapid fall-off from high pitch on the first word of a nonyes/no question, often followed by a small rise in pitch on the last syllable of the question.
Detail Continuation pattern
The most distinctive feature of French intonation is the continuation pattern. While many languages, such as English and Spanish, place stress on a particular syllable of each word, and while many speakers of languages such as English may accompany this stress with a rising intonation, French has neither stress nor distinctive intonation on a given syllable. Instead, a rising pitch is placed on the final syllable of every "rhythm group" except the last one in a sentence. For example[8]:p.35 (note that as before the pitch change arrows ↘ and ↗ apply to the syllable immediately following the arrow):
Hier ↗soir, il m'a off↗ert une ciga↘rette. (The English equivalent would be "Last eve↗ning, he offered ↗me a cigar↘ette.") Le lendemain ma↗tin, après avoir changé le pansement du ma↗lade, l'infir↗mier est ren↗tré chez ↘lui.
Adjectives are in the same rhythm group as their noun. Each item in a list forms its own rhythm group:
Chez le frui↗tier on trouve des ↗pommes, des o↗ranges, des ba↗nanes, des ↗fraises et des abri↘cots.
Side comments inserted into the middle of a sentence form their own rhythm group:
La grande ↗guerre, si j'ai bonne mé↗moire, a duré quatre ↘ans.
Finality pattern
As can be seen in the example sentences above, a sharp fall in pitch is placed on the last syllable of a declarative statement. The preceding syllables of the final rhythm group are at a relatively high pitch. Yes/no pattern
It is most common in informal speech to indicate a yes/no question with a sharply rising pitch alone, without any change or rearrangement of words. For example[8]:p.65
Il est ↗riche?
A form found in both spoken and written French is the Est-ce que ... ("Is it that ...") construction, in which the spoken question can end in either a rising or a falling pitch:
Est-ce qu'il est ↗riche? OR Est-ce qu'il est ↘riche?
The most formal form for a yes/no question, which is also found in both spoken and written French, inverts the order of the subject and verb. In this case too the spoken question can end in either a rising or a falling pitch:
Est-il ↗riche? OR Est-il ↘riche?
Sometimes yes/no questions begin with a topic phrase, specifying the focus of the utterance. In this case the initial topic phrase follows the intonation pattern of a declarative sentence, and the rest of the question follows the usual yes/no question pattern:[8]:p.78
Et cette pho↘to, tu l'as ↗prise?
Information question pattern
Information questions begin with a question word such as qui, pourquoi, combien, etc., often referred to in linguistics as wh-words because most of them start with those letters in English. The question word may be followed in French by est-ce que (as in English "(where) is it that ...") or est-ce qui, or by inversion of the subject-verb order (as in "where goes he?"). The sentence starts at a relatively high pitch which falls away rapidly after the question word, or its first syllable in the case of a polysyllabic question word. There may be a small increase in pitch on the final syllable of the question. For example:[8]:p.88
↗Où ↘part-il ? OR ↗Où ↘part-↗il ? ↗Où ↘est-ce qu'il part ? OR ↗Où ↘est-ce qu'il ↗part ? ↗Com↘bien ça vaut ? OR ↗Com↘bien ça ↗vaut ?
In both cases, the question both begins and ends at higher pitches than does a declarative sentence. In informal speech, the question word is sometimes put at the end of the sentence. In this case, the question ends at a high pitch, often with a slight rise on the high final syllable. The question may also start at a slightly higher pitch:[8]:p.90
Il part ↗où? OR ↗Il ↘part ↗où?
Intonation in Mandarin Chinese Mandarin Chinese is a tonal language, meaning that pitch contours within a word distinguish the word from other words with the same vowels and consonants. Nevertheless, Mandarin also has intonation patterns—patterns of pitch throughout the phrase or sentence—that indicate the nature of the sentence as a whole. There are four basic sentence types having distinctive intonation: declarative sentences, unmarked interrogative questions, yes-no questions marked as such with the sentence-final particle ma, and A-not-A questions of the form "He go not go" (meaning "Does he go or not?").
In the prestigious Beijing dialect these are intonationally distinguished for the average speaker as follows, using a pitch scale from 1 (lowest) to 9 (highest):[9][10]
Declarative sentences go from pitch level 3 to 5 and then down to 2 and 1. A-not-A questions go from 6 to 9 to 2 to 1. Yes-no ma questions go from 6 to 9 to 4 to 5. Unmarked questions go from 6 to 9 to 4 to 6.
Thus questions are begun with a higher pitch than are declarative sentences; pitch rises and then falls in all sentences; and in yes-no questions and unmarked questions pitch rises at the end of the sentence, while for declarative sentences and A-not-A questions the sentence ends at very low pitch. Because Mandarin distinguishes words on the basis of within-syllable tones, these tones create fluctuations of pitch around the sentence patterns indicated above. Thus the sentence patterns can be thought of as bands whose pitch varies over the course of the sentence, while changes of syllable pitch cause fluctuations within the band. Furthermore, the details of Mandarin intonation are affected by various factors, including[9] the tone of the final syllable, the presence or absence of focus (centering of attention) on the final word, and the dialect of the speaker.
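The four Beijing-dialect contours listed above can be captured as simple data, which makes the generalizations in this paragraph (questions start higher; only some types end with a rise) easy to check. A sketch, with the contours written as rough (start, peak, penultimate, final) pitch levels on the 1-to-9 scale:

```python
# Approximate sentence contours from the list above (pitch levels 1-9).
contours = {
    "declarative":        (3, 5, 2, 1),
    "A-not-A question":   (6, 9, 2, 1),
    "yes-no ma question": (6, 9, 4, 5),
    "unmarked question":  (6, 9, 4, 6),
}

for name, levels in contours.items():
    trend = "rises at end" if levels[-1] > levels[-2] else "falls at end"
    print(f"{name}: starts at {levels[0]}, {trend}")
```

Note this is only a coarse band model: the lexical tones of individual syllables fluctuate within these bands, as the following paragraph explains.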
Languages with falling intonation in questions Falling intonation is used at the end of questions in some languages, including Hawaiian, Fijian, Samoan, and Greenlandic. It is also used in Hawaiian Creole English, presumably derived from Hawaiian.
Pitch (music)
In musical notation, the different vertical positions of notes indicate different pitches.
Pitch is a perceptual property that allows the ordering of sounds on a frequency-related scale.[1] Pitches are compared as "higher" and "lower" in the sense associated with musical melodies,[2] which require "sound whose frequency is clear and stable enough to be heard as not noise".[3] Pitch is a major auditory attribute of musical tones, along with duration, loudness, and timbre.[4] Pitch may be quantified as a frequency, but pitch is not a purely objective physical property; it is a subjective psychoacoustical attribute of sound. Historically, the study of pitch and pitch perception has been a central problem in psychoacoustics, and has been instrumental in forming and testing theories of sound representation, processing, and perception in the auditory system.[5]
Perception of pitch Pitch and frequency
Pitch is an auditory sensation in which a listener assigns musical tones to relative positions on a musical scale based primarily on the frequency of vibration.[6] Pitch is closely related to frequency, but the two are not equivalent. Frequency is an objective, scientific concept, whereas pitch is subjective. Sound waves themselves do not have pitch, and their oscillations can be measured to obtain a frequency. It takes a human brain to map the internal quality of pitch. Pitches are usually quantified as frequencies in cycles per second, or hertz, by comparing sounds with pure tones, which have periodic, sinusoidal waveforms. Complex and aperiodic sound waves can often be assigned a pitch by this method.[7][8][9] In most cases, the pitch of complex sounds such as speech and musical notes corresponds very nearly to the repetition rate of periodic or nearly-periodic sounds, or to the reciprocal of the time interval between repeating similar events in the sound waveform.[8][9] The pitch of complex tones can be ambiguous, meaning that two or more different pitches can be perceived, depending upon the observer.[5] When the actual fundamental frequency can be precisely determined through physical measurement, it may differ from the perceived pitch because of overtones, also known as upper partials, harmonic or otherwise. A complex tone composed of two sine waves of 1000 and 1200 Hz may sometimes be heard as up to three pitches: two spectral pitches at 1000 and 1200 Hz, derived from the physical frequencies of the pure tones, and the combination tone at 200 Hz, corresponding to the repetition rate of the waveform. In a situation like this, the percept at 200 Hz is commonly referred to as the missing fundamental, which is often the greatest common divisor of the frequencies present.[10] Pitch depends to a lesser degree on the sound pressure level (loudness, volume) of the tone, especially at frequencies below 1,000 Hz and above 2,000 Hz. 
The pitch of lower tones gets lower as sound pressure increases. For instance, a tone of 200 Hz that is very loud will seem to be one semitone lower in pitch than if it is just barely audible. Above 2,000 Hz, the pitch gets higher as the sound gets louder.[11]
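The missing-fundamental example above (a complex tone of 1000 Hz and 1200 Hz heard with a pitch at 200 Hz) can be checked directly, since the missing fundamental is often the greatest common divisor of the frequencies present:

```python
from math import gcd

# Spectral components of the complex tone, in Hz
components = [1000, 1200]

f = components[0]
for c in components[1:]:
    f = gcd(f, c)  # running GCD over all components
print(f)  # 200: the "missing fundamental" heard as a third pitch
```

The 200 Hz value is also the repetition rate of the combined waveform, which is why the percept tracks it even though no 200 Hz component is physically present.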
Theories of pitch perception
A theory of pitch perception tries to explain how the physical sound and specific physiology of the auditory system work together to yield the various phenomena of pitch. In general, theories of pitch perception can be divided up into those of place coding and those of temporal coding. Place theory holds that the perception of pitch is determined by the place of maximum excitation on the basilar membrane. A place code, taking advantage of the tonotopy in the auditory system, must be in effect for the perception of high frequencies, since neurons have an upper limit on how fast they can phase-lock their action potentials.[6] However, a purely place-based theory cannot account for the accuracy with which pitch is perceived in the low and middle frequency ranges. Temporal theories offer an alternative, appealing to the temporal structure of action potentials, mostly the phase-locking and mode-locking of action potentials to frequencies in a stimulus. The precise way in which this temporal structure is used to code for pitch at higher levels is still a matter of debate, but the processing seems to be based on an autocorrelation of action potentials in the auditory nerve.[12] However, it has long been noted that any neural mechanisms which may accomplish a delay, a necessary operation of a true autocorrelation, have not been found.[6] At least one model shows a temporal delay to be unnecessary to produce an autocorrelation model of pitch perception, appealing to phase shifts between cochlear filters;[13] however, earlier work has shown that certain sounds with a prominent peak in their autocorrelation function do not elicit a corresponding pitch percept,[14][15] and that certain sounds without a peak in their autocorrelation function nevertheless elicit a pitch.[16][17] To be a more complete model, autocorrelation must therefore be applied to signals representing the output of the cochlea, as via auditory-nerve interspike-interval histograms.[15] Some theories of pitch
perception hold that pitch has inherent octave ambiguities, and therefore is best decomposed into a pitch chroma, a periodic value around the octave, like the note names in western music, and a pitch height, which may be ambiguous, indicating which octave the pitch may be in.[5] Just-noticeable difference
The just-noticeable difference (jnd, the threshold at which a change is perceived) depends on the tone's frequency content. Below 500 Hz, the jnd is about 3 Hz for sine waves, and 1 Hz for complex tones; above 1000 Hz, the jnd for sine waves is about 0.6% (about 10 cents).[18] The jnd is typically tested by playing two tones in quick succession with the listener asked if there was a difference in their pitches.[11] The jnd becomes smaller if the two tones are played simultaneously as the listener is then able to discern beat frequencies. The total number of perceptible pitch steps in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000 Hz, is 120.[11] High and low pitch
According to the American National Standards Institute, pitch is the auditory attribute of sound according to which sounds can be ordered on a scale from low to high. Since pitch is such a close proxy for frequency, it is almost entirely determined by how quickly the sound wave is making the air vibrate and has almost nothing to do with the intensity, or amplitude, of the wave. That is, "high" pitch means very rapid oscillation, and "low" pitch corresponds to slower oscillation. Despite that, the idiom relating vertical height to sound pitch is shared by most languages.[19] At
least in English, it is just one of many deep conceptual metaphors that involve up/down. The exact etymological history of the musical sense of high and low pitch is still unclear. There is evidence that humans do actually perceive the source that a sound is coming from to be located slightly higher or lower in vertical space when the sound frequency is increased or decreased.[19] Aural illusions
The relative perception of pitch can be fooled, resulting in "aural illusions". There are several of these, such as the tritone paradox, but most notably the Shepard scale, where a continuous or discrete sequence of specially formed tones can be made to sound as if the sequence continues ascending or descending forever.
Definite and indefinite pitch Not all musical instruments make notes with a clear pitch; unpitched percussion instruments are a class of percussion instruments that do not have a particular pitch. A sound or note of definite pitch is one of which it is possible or relatively easy to discern the pitch. Sounds with definite pitch have harmonic frequency spectra or close to harmonic spectra.[11] A sound generated on any instrument produces many modes of vibration occurring simultaneously. A listener hears numerous frequencies at once. The vibration that has the slowest rate is called the fundamental frequency; the other frequencies are overtones.[20] An important class of overtones is formed by the harmonics, which have frequencies of integer multiples of the fundamental. Whether or not the higher frequencies are integer multiples, they are collectively called the partials, referring to the different parts that make up the total spectrum. A sound or note of indefinite pitch is one of which it is impossible or relatively difficult to discern a pitch. Sounds with indefinite pitch do not have harmonic spectra or have altered harmonic spectra, a characteristic known as inharmonicity. It is still possible for two sounds of indefinite pitch to clearly be higher or lower than one another; for instance, a snare drum invariably sounds higher in pitch than a bass drum, though both have indefinite pitch, because its sound contains higher frequencies. In other words, it is possible and often easy to roughly discern the relative pitches of two sounds of indefinite pitch, but any given sound of indefinite pitch does not neatly correspond to a given definite pitch. A special type of pitch often occurs in free nature when the sound of a sound source reaches the ear of an observer directly and also after being reflected against a sound-reflecting surface.
This phenomenon is called repetition pitch, because the addition of a true repetition of the original sound to itself is the basic prerequisite.
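The relationship between a fundamental and its harmonics described above is simple arithmetic, and can be sketched in a few lines of Python (an illustration of the concept, not code from the article; the function name is our own):

```python
def harmonic_series(fundamental_hz, n=6):
    """Return the first n harmonics of a fundamental frequency.

    Harmonics are the overtones whose frequencies are integer multiples
    of the fundamental; together with any non-integer overtones they
    make up the partials of the sound's spectrum.
    """
    return [k * fundamental_hz for k in range(1, n + 1)]

# For a string tuned to A110, the harmonics fall on musically related
# pitches: 110, 220 (an octave up), 330, 440, 550, 660 Hz.
print(harmonic_series(110.0))
```

A sound of definite pitch has its partials at (or near) such integer multiples; inharmonicity means the partials deviate from this pattern.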
Concert pitch Main article: Concert pitch
Concert pitch is the pitch reference to which a group of musical instruments are tuned for a performance. Concert pitch may vary from ensemble to ensemble, and has varied widely over musical history.
The A above middle C is usually set at 440 Hz (often written as "A = 440 Hz" or sometimes "A440"), although other frequencies, such as 442 Hz, are also often used. Historically, this A has been tuned to a variety of higher and lower pitches. For example, Michael Praetorius proposed a standard of 465 Hz in the early 17th century.[21][not in citation given] The transposing instruments in an orchestra conventionally have their parts transposed into different keys from the other instruments (and even from each other). As a result, musicians need a way to refer to a particular pitch unambiguously when talking to different sections of the orchestra. For example, the most common type of clarinet or trumpet, when playing a note written in its part as C, sounds a pitch that would be called B♭ on a non-transposing instrument like a piano. To refer to that pitch unambiguously, a musician calls it "concert B♭", meaning "the pitch that someone playing a non-transposing instrument like a piano would call B♭".
Labeling pitches For a comprehensive list of frequencies of musical notes, see Scientific pitch notation and Frequencies of notes.
Note frequencies, four-octave C major diatonic scale, starting with C1.
Pitches can be labeled using letters, as in Helmholtz pitch notation; using a combination of letters and numbers, as in scientific pitch notation, where notes are labelled upwards from C0, the 16 Hz C; or by a number representing the frequency in hertz (Hz), the number of cycles per second. For example, one might refer to the A above middle C as "a'", "A4", or "440 Hz". In standard Western equal temperament, the notion of pitch is insensitive to "spelling": the description "G4 double sharp" refers to the same pitch as "A4"; in other temperaments, these may be distinct pitches. Human perception of musical intervals is approximately logarithmic with respect to fundamental frequency: the perceived interval between the pitches "A220" and "A440" is the same as the perceived interval between the pitches "A440" and "A880". Motivated by this logarithmic perception, music theorists sometimes represent pitches using a numerical scale based on the logarithm of fundamental frequency. For example, one can adopt the widely used MIDI standard to map fundamental frequency, f, to a real number, p, as follows:
p = 69 + 12 × log2(f / 440 Hz)
This creates a linear pitch space in which octaves have size 12, semitones (the distance between adjacent keys on the piano keyboard) have size 1, and A440 is assigned the number 69. (See Frequencies of notes.) Distance in this space corresponds to musical intervals as understood by musicians. An equal-tempered semitone is subdivided into 100 cents. The system is flexible enough to include "microtones" not found on standard piano keyboards. For example, the pitch halfway between C (60) and C♯ (61) can be labeled 60.5.
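The MIDI mapping just described, p = 69 + 12·log2(f/440), and its inverse can be sketched in Python (a minimal illustration; the function names are our own):

```python
import math

def frequency_to_midi(f):
    """Map a fundamental frequency f (Hz) to the MIDI pitch number p.

    Standard MIDI mapping: p = 69 + 12 * log2(f / 440), so A440 -> 69,
    each octave spans 12 units, and each semitone spans 1 unit.
    Non-integer results label microtones, e.g. 60.5 is the quarter tone
    halfway between C (60) and C-sharp (61).
    """
    return 69 + 12 * math.log2(f / 440.0)

def midi_to_frequency(p):
    """Inverse mapping: MIDI pitch number back to frequency in Hz."""
    return 440.0 * 2 ** ((p - 69) / 12)
```

For example, frequency_to_midi(880.0) gives 81 (an octave, 12 units above A440's 69), and midi_to_frequency(60) recovers middle C at roughly 261.6 Hz.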
Scales The relative pitches of individual notes in a scale may be determined by one of a number of tuning systems. In the West, the twelve-note chromatic scale is the most common method of organization, with equal temperament now the most widely used method of tuning that scale. In it, the pitch ratio between any two successive notes of the scale is exactly the twelfth root of two (about 1.05946). In well-tempered systems (as used in the time of Johann Sebastian Bach, for example), different methods of musical tuning were used. Almost all of these systems have one interval in common, the octave, where the pitch of one note is double the frequency of another. For example, if the A above middle C is 440 Hz, the A an octave above it will be 880 Hz.
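The twelfth-root-of-two ratio and the octave-doubling property can be checked numerically; here is a small illustrative sketch (the names are our own, not from the article):

```python
# Equal temperament: each semitone multiplies frequency by 2**(1/12) ~ 1.05946.
SEMITONE = 2 ** (1 / 12)

def chromatic_scale(start_hz, steps=12):
    """Frequencies of steps+1 successive equal-tempered notes from start_hz."""
    return [start_hz * SEMITONE ** n for n in range(steps + 1)]

scale = chromatic_scale(440.0)
# Twelve semitone steps compound to exactly one octave: double the frequency.
print(scale[0], scale[12])
```

Because (2**(1/12))**12 is exactly 2, twelve equal-tempered semitones always add up to a pure octave, whatever the starting frequency.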
Other musical meanings of pitch In atonal, twelve-tone, or musical set theory a "pitch" is a specific frequency while a pitch class is all the octaves of a frequency. In many analytic discussions of atonal and post-tonal music, pitches are named with integers because of octave and enharmonic equivalence (for example, in a serial system, C♯ and D♭ are considered the same pitch, while C4 and C5 are functionally the same, one octave apart). Discrete pitches, rather than continuously variable pitches, are virtually universal, with exceptions including "tumbling strains"[22] and "indeterminate-pitch chants".[23] Gliding pitches are used in most cultures, but are related to the discrete pitches they reference or embellish.[24]
Human voice "Voice" redirects here. For other uses, see Voice (disambiguation).
The spectrogram of the human voice reveals its rich harmonic content.
The voice consists of sound made by a human being using the vocal folds for talking, singing, laughing, crying, screaming, etc. Habitual speech frequency ranges from about 60 to 180 Hz for men and 160 to 300 Hz for women, but overall the voice can range from about 60 to 7000 Hz. The human voice is specifically that part of human sound production in which the vocal folds (vocal cords) are the primary sound source. Generally speaking, the mechanism for generating the human voice can be subdivided into three parts: the lungs, the vocal folds within the larynx, and the articulators. The lungs (the pump) must produce adequate airflow and air pressure to vibrate the vocal folds (this air pressure is the fuel of the voice). The vocal folds (vocal cords) are a vibrating valve that chops up the airflow from the lungs into audible pulses that form the laryngeal sound source. The muscles of the larynx adjust the length and tension of the vocal folds to 'fine-tune' pitch and tone. The articulators (the parts of the vocal tract above the larynx, consisting of tongue, palate, cheek, lips, etc.) articulate and filter the sound emanating from the larynx and to some degree can interact with the laryngeal airflow to strengthen or weaken it as a sound source. The vocal folds, in combination with the articulators, are capable of producing highly intricate arrays of sound.[1][2][3] The tone of voice may be modulated to suggest emotions such as anger, surprise, or happiness.[4][5] Singers use the human voice as an instrument for creating music.[6]
Voice types and the folds (cords) themselves Main articles: Vocal folds and Voice types
A labeled anatomical diagram of the vocal folds or cords.
Adult men and women have different vocal fold sizes, reflecting the male-female differences in larynx size. Adult male voices are usually lower-pitched and have larger folds. The male vocal folds (which would be measured vertically in the accompanying diagram) are between 17 mm and 25 mm in length.[7] The female vocal folds are between 12.5 mm and 17.5 mm in length.
The folds in both sexes are within the larynx. They are attached at the back (the side nearest the spinal cord) to the arytenoid cartilages, and at the front (the side under the chin) to the thyroid cartilage. They have no outer edge, as they blend into the side of the breathing tube (the illustration is out of date and does not show this well), while their inner edges or "margins" are free to vibrate (the hole). They have a three-layer construction of an epithelium, vocal ligament, then muscle (vocalis muscle), which can shorten and bulge the folds. They are flat triangular bands and are pearly white in color. Above both sides of the vocal cord is the vestibular fold or false vocal cord, which has a small sac between its two folds (not illustrated). The difference in vocal fold size between men and women means that the sexes have differently pitched voices. Additionally, genetics causes variation within each sex, with men's and women's singing voices being categorized into types. For example, among men, there are bass, baritone, tenor and countertenor (ranging from E2 to even F6), and among women, contralto, mezzo-soprano and soprano (ranging from F3 to C6). There are additional categories for operatic voices; see voice type. This is not the only source of difference between male and female voices. Men, generally speaking, have a larger vocal tract, which essentially gives the resultant voice a lower-sounding timbre. This is mostly independent of the vocal folds themselves.
Voice modulation in spoken language Human spoken language makes use of the ability of almost all persons in a given society to dynamically modulate certain parameters of the laryngeal voice source in a consistent manner. The most important communicative, or phonetic, parameters are the voice pitch (determined by the vibratory frequency of the vocal folds) and the degree of separation of the vocal folds, referred to as vocal fold adduction (coming together) or abduction (separating).[8] The ability to vary the ab/adduction of the vocal folds quickly has a strong genetic component, since vocal fold adduction has a life-preserving function in keeping food from passing into the lungs, in addition to the covering action of the epiglottis. Consequently, the muscles that control this action are among the fastest in the body.[8] Children can learn to use this action consistently during speech at an early age, as they learn to speak the difference between utterances such as "apa" (having an abductory-adductory gesture for the p) and "aba" (having no abductory-adductory gesture).[8] Surprisingly enough, they can learn to do this well before the age of two by listening only to the voices of adults around them who have voices much different from their own, and even though the laryngeal movements causing these phonetic differentiations are deep in the throat and not visible to them. If an abductory movement or adductory movement is strong enough, the vibrations of the vocal folds will stop (or not start). If the gesture is abductory and is part of a speech sound, the sound will be called voiceless. However, voiceless speech sounds are sometimes better identified as containing an abductory gesture, even if the gesture was not strong enough to stop the vocal folds from vibrating. This anomalous feature of voiceless speech sounds is better understood if it
is realized that it is the change in the spectral qualities of the voice as abduction proceeds that is the primary acoustic attribute that the listener attends to when identifying a voiceless speech sound, and not simply the presence or absence of voice (periodic energy).[9] An adductory gesture is also identified by the change in voice spectral energy it produces. Thus, a speech sound having an adductory gesture may be referred to as a "glottal stop" even if the vocal fold vibrations do not entirely stop.[9] Other aspects of the voice, such as variations in the regularity of vibration, are also used for communication, and are important for the trained voice to master, but are more rarely used in the formal phonetic code of a spoken language.
Physiology and vocal timbre The sound of each individual's voice is entirely unique not only because of the actual shape and size of an individual's vocal cords but also due to the size and shape of the rest of that person's body, especially the vocal tract, and the manner in which the speech sounds are habitually formed and articulated. (It is this latter aspect of the sound of the voice that can be mimicked by skilled performers.) Humans have vocal folds that can loosen, tighten, or change their thickness, and over which breath can be transferred at varying pressures. The shape of the chest and neck, the position of the tongue, and the tightness of otherwise unrelated muscles can be altered. Any one of these actions results in a change in pitch, volume, timbre, or tone of the sound produced. Sound also resonates within different parts of the body, and an individual's size and bone structure can affect somewhat the sound produced by an individual. Singers can also learn to project sound in certain ways so that it resonates better within their vocal tract. This is known as vocal resonation. Another major influence on vocal sound and production is the function of the larynx, which people can manipulate in different ways to produce different sounds. These different kinds of laryngeal function are described as different kinds of vocal registers.[10] The primary method for singers to accomplish this is through the use of the singer's formant, which has been shown to be a resonance added to the normal resonances of the vocal tract above the frequency range of most instruments and so enables the singer's voice to carry better over musical accompaniment.[11][12] Vocal registration
Vocal registration refers to the system of vocal registers within the human voice. A register in the human voice is a particular series of tones, produced in the same vibratory pattern of the vocal folds, and possessing the same quality. Registers originate in laryngeal functioning. They occur because the vocal folds are capable of producing several different vibratory patterns. Each of these vibratory patterns appears within a particular range of pitches and produces certain characteristic sounds.[13] The term register can be somewhat confusing, as it encompasses several aspects of the human voice. The term can be used to refer to any of the following:[14]
A particular part of the vocal range such as the upper, middle, or lower registers.
A resonance area such as chest voice or head voice. A phonatory process. A certain vocal timbre. A region of the voice that is defined or delimited by vocal breaks. A subset of a language used for a particular purpose or in a particular social setting.
In linguistics, a register language is a language that combines tone and vowel phonation into a single phonological system. Within speech pathology the term vocal register has three constituent elements: a certain vibratory pattern of the vocal folds, a certain series of pitches, and a certain type of sound. Speech pathologists identify four vocal registers based on the physiology of laryngeal function: the vocal fry register, the modal register, the falsetto register, and the whistle register. This view is also adopted by many vocal pedagogists.[14] Vocal resonation
Vocal resonation is the process by which the basic product of phonation is enhanced in timbre and/or intensity by the air-filled cavities through which it passes on its way to the outside air. Various terms related to the resonation process include amplification, enrichment, enlargement, improvement, intensification, and prolongation, although in strictly scientific usage acoustic authorities would question most of them. The main point to be drawn from these terms by a singer or speaker is that the end result of resonation is, or should be, to make a better sound.[14] There are seven areas that may be listed as possible vocal resonators. In sequence from the lowest within the body to the highest, these areas are the chest, the tracheal tree, the larynx itself, the pharynx, the oral cavity, the nasal cavity, and the sinuses.[15]
Influences of the human voice Main articles: Voice projection and Evolution
The twelve-tone musical scale, upon which a large portion of all music (western popular music in particular) is based, may have its roots in the sound of the human voice during the course of evolution, according to a study reported in New Scientist. Analysis of recorded speech samples found peaks in acoustic energy that mirrored the distances between notes in the twelve-tone scale.[16]
Voice disorders Main articles: Vocal loading and Voice disorders
There are many disorders that affect the human voice; these include speech impediments, and growths and lesions on the vocal folds. Talking improperly for long periods of time causes vocal loading, which is stress inflicted on the speech organs. When vocal injury occurs, an ENT specialist may often be able to help, but the best treatment is the prevention of injuries through good vocal production.[17] Voice therapy is generally delivered by a speech-language pathologist.
Vocal Cord Nodules and Polyps
Vocal nodules are caused over time by repeated abuse of the vocal cords, which results in soft, swollen spots on each vocal cord. These spots develop into harder, callous-like growths called nodules. The longer the abuse occurs, the larger and stiffer the nodules become. Most polyps are larger than nodules and may be called by other names, such as polypoid degeneration or Reinke's edema. Polyps may be caused by a single occurrence and may require surgical removal. Irritation after the removal may then lead to nodules if additional irritation persists. Speech-language therapy teaches the patient how to eliminate the irritations permanently through habit changes and vocal hygiene. Hoarseness or breathiness that lasts for more than two weeks is a common symptom of an underlying voice disorder such as nodules or polyps and should be investigated medically.[18]
Aeromechanics Aeromechanics is the branch of mechanics that deals with the motion of air and other gases, and with their effects on bodies in the flow; it involves aerodynamics, thermophysics and aerostatics. The fluid flow and the structure are interactive systems, and their interaction is dynamic: the fluid force causes the structure to deform, which changes its orientation to the flow and hence the resulting fluid force. Applications include aircraft and helicopter technology, since these use propellers and rotors.
Respiration (physiology) In physiology, respiration (often confused with breathing) is defined as the transport of oxygen from the outside air to the cells within tissues, and the transport of carbon dioxide in the opposite direction. This is in contrast to the biochemical definition of respiration, which refers to cellular respiration: the metabolic process by which an organism obtains energy by reacting oxygen with glucose to give water, carbon dioxide and ATP (energy). Although physiologic respiration is necessary to sustain cellular respiration and thus life in animals, the processes are distinct: cellular respiration takes place in individual cells of the organism, while physiologic respiration concerns the bulk flow and transport of metabolites between the organism and the external environment.
Mechanisms In unicellular organisms, simple diffusion is sufficient for gas exchange: every cell is constantly bathed in the external environment, with only a short distance for gases to flow across.
In plants, oxygen is produced in photosynthesis, but most oxygen used in plant respiration enters passively by diffusion or through structural openings such as lenticels.[1] Complex multicellular animals such as humans have a much greater distance between the environment and their innermost cells; thus, a respiratory system is needed for effective gas exchange. The respiratory system works in concert with a circulatory system to carry gases to and from the tissues. In air-breathing vertebrates such as humans, respiration of oxygen includes four stages:
Ventilation, moving of the ambient air into and out of the alveoli of the lungs.
Pulmonary gas exchange, exchange of gases between the alveoli and the pulmonary capillaries.
Gas transport, movement of gases within the pulmonary capillaries through the circulation to the peripheral capillaries in the organs, and then a movement of gases back to the lungs along the same circulatory route.
Peripheral gas exchange, exchange of gases between the tissue capillaries and the tissues or organs, impacting the cells composing these and the mitochondria within the cells.
Oxygen consumption of various organs

Organ                           Oxygen consumption (ml O2/min per 100 g)[2]
Heart (rest)                    8
Heart (heavy exercise)          70
Brain                           3
Kidney                          5
Skin                            0.2
Resting skeletal muscle         1
Contracting skeletal muscle     50
Note that ventilation and gas transport require energy to power a mechanical pump (the heart) and the muscles of respiration, mainly the diaphragm. In heavy breathing, energy is also required to power additional respiratory muscles such as the intercostal muscles. The energy requirement for ventilation and gas transport is in contrast to the passive diffusion taking place in the gas exchange steps. Respiratory behavior is coordinated with cardiovascular behavior to control the gaseous exchange between cells and blood. Both behaviors are intensified by exercise of the body. However, respiratory activity is partly voluntary, whereas cardiovascular activity is involuntary. Respiratory physiology is the branch of human physiology concerned with respiration.
Classifications of respiration There are several ways to classify the physiology of respiration:
By species
Aquatic respiration, Buccal pumping
By mechanism
Respiration organ, Gas exchange, Arterial blood gas, Control of respiration, Apnea
By experiments
Huff and puff, Spirometry, Selected ion flow tube mass spectrometry, Bell Jar Model Lung
By diseases and disorders
Sudden Infant Death Syndrome, Myasthenia gravis, Asthma, Drowning, Choking, Dyspnea, Anaphylaxis, Pneumonia, Severe acute respiratory syndrome, Pulmonary aspiration, Pulmonary edema, Death
By intensive care and emergency medicine
CPR, Mechanical ventilation, Intubation, Iron lung, Intensive care medicine, Liquid breathing, ECMO, Oxygen toxicity, Medical ventilator, Paramedic, Life support, General anaesthesia, Laryngoscope
By other medical topics
Respiratory therapy, Breathing gases, Hyperbaric oxygen therapy, Hypoxia, Gas embolism, Decompression sickness, Barotrauma, Oxygen toxicity, Nitrogen narcosis, Carbon dioxide poisoning, Carbon monoxide poisoning, HPNS, Salt water aspiration syndrome
Language This article is about Human language in general. For other uses, see Language (disambiguation).
A mural in Teotihuacan, Mexico (ca. 200 AD) depicting a person emitting a speech scroll from his mouth, symbolizing speech.
Cuneiform is the first known form of written language, but spoken language predates writing by at least tens of thousands of years.
Two girls learning American Sign Language.
Braille writing represents language in a tactile form.
Language is the human capacity for acquiring and using complex systems of communication, and a language is any specific example of such a system. The scientific study of language is called linguistics. Any estimate of the precise number of languages in the world depends on a partly arbitrary distinction between languages and dialects. However, estimates vary between around 6,000 and 7,000 languages. Natural languages are spoken or signed, but any language can be encoded into secondary media using auditory, visual or tactile stimuli, for example in graphic writing, braille, or whistling. This is because human language is modality-independent. When used as a general concept, "language" may refer to the cognitive ability to learn and use systems of complex communication, or to describe the set of rules that makes up these systems, or the set of utterances that can be produced from those rules. Human language is unique because it has the properties of productivity, recursivity, and displacement, and because it relies entirely on social convention and learning. Its complex structure therefore affords a much wider range of possible expressions and uses than any known system of animal communication. Language is thought to have originated when early hominins started gradually changing their primate communication systems, acquiring the ability to form a theory of other minds and a shared intentionality. This development is sometimes thought to have coincided with an increase in brain volume, and many linguists see the structures of language as having evolved to serve specific communicative and social functions. Language is processed in many different locations in the human brain, but especially in Broca’s and Wernicke’s areas. Humans acquire language through social interaction in early childhood, and children generally speak fluently when they are around three years old. The use of language is deeply entrenched in human culture.
Therefore, in addition to its strictly communicative uses, language also has many social and cultural uses, such as signifying group identity, social stratification, as well as for social grooming and entertainment. All languages rely on the process of semiosis to relate signs with particular meanings. Oral and sign languages contain a phonological system that governs how symbols are used to form sequences known as words or morphemes, and a syntactic system that governs how words and morphemes are combined to form phrases and utterances. Languages evolve and diversify over time, and the history of their evolution can be reconstructed by comparing modern languages to determine which traits their ancestral languages must have had for the later stages to have occurred. A group of languages that descend from a common ancestor is known as a language family. The languages that are most spoken in the world today belong to the Indo-European family, which includes languages such as English, Spanish, Portuguese, Russian and Hindi; the Sino-Tibetan languages, which include Mandarin Chinese, Cantonese and many others; Semitic languages, which include Arabic, Amharic and Hebrew; and the Bantu languages, which include Swahili, Zulu, Shona and hundreds of other languages spoken throughout Africa. The general consensus is that between 50% and 90% of languages spoken today will probably have become extinct by the year 2100.[1][2]
Definitions Main article: Philosophy of language
The English word "language" derives ultimately from a Proto-Indo-European word meaning "tongue, speech, language" through Latin lingua, "language, tongue", and Old French langage "language".[3] The word is sometimes used to refer to codes, ciphers and other kinds of artificially constructed communication systems such as those used for computer programming. A language in this sense is a system of signs for encoding and decoding information. This article is specifically about the properties of natural human language as it is studied in the discipline of linguistics. As an object of linguistic study "language" has two primary meanings: language as an abstract concept, and "a language" (a specific linguistic system, e.g. "French"). The Swiss linguist Ferdinand de Saussure, who defined the modern discipline of linguistics, first explicitly formulated the distinction, using the French word langage for language as a concept, langue as a specific instance of a language system, and parole for the concrete usage of speech in a particular language.[4] When speaking of language as a general concept, several different definitions can be used that stress different aspects of the phenomenon.[5] These definitions also entail different approaches and understandings of language, and they inform different and often incompatible schools of linguistic theory.[6] Mental faculty, organ or instinct
One definition sees language primarily as the mental faculty that allows humans to undertake linguistic behaviour: to learn languages and produce and understand utterances. This definition stresses the universality of language to all humans and the biological basis of the human capacity for language as a unique development of the human brain. The view that the drive to language acquisition is innate in humans is supported by the fact that all cognitively normal children raised in an environment where language is accessible will acquire language without formal instruction. Languages may even spontaneously develop in environments where people live or grow up together without a common language, for example in the case of creole languages, and the case of spontaneously developed sign languages such as Nicaraguan Sign Language. This view, which can be seen as going back to Kant and Descartes, often understands language to be largely innate, for example as in Chomsky's theory of Universal Grammar or American philosopher Jerry Fodor’s extreme innatist theory. These kinds of definitions are often applied in studies of language within a cognitive science framework and in neurolinguistics.[7][8] Formal symbolic system
Another definition sees language as a formal system of signs governed by grammatical rules of combination to communicate meaning. This definition stresses the fact that human languages can be described as closed structural systems consisting of rules that relate particular signs to particular meanings. This structuralist view of language was first introduced by Ferdinand de Saussure,[9] and his structuralism remains foundational for most approaches to language today.[10]
Some proponents of this view of language have advocated a formal approach which studies language structure by identifying its basic elements and then formulating a formal description of the rules according to which the elements combine to form words and sentences. The main proponent of such a theory is Noam Chomsky, the originator of the generative theory of grammar, who has defined language as a particular set of sentences that can be generated from a particular set of rules. Chomsky considers these rules to be an innate feature of the human mind, and to constitute the essence of what language is.[11] Formal definitions of language are commonly used in formal logic, in formal theories of grammar, and in applied computational linguistics.[12][13] Tool for communication
Two men and a woman having a conversation in American Sign Language.
Yet another definition sees language as a system of communication that enables humans to cooperate. This definition stresses the social functions of language and the fact that humans use it to express themselves and to manipulate objects in their environment. Functional theories of grammar explain grammatical structures by their communicative functions, and understand the grammatical structures of language to be the result of an adaptive process by which grammar was "tailored" to serve the communicative needs of its users.[14][15] This view of language is associated with the study of language in pragmatic, cognitive and interactional frameworks, as well as in socio-linguistics and linguistic anthropology. Functionalist theories tend to study grammar as a dynamic phenomenon, as structures that are always in the process of changing as they are employed by their speakers. This view places importance on the study of linguistic typology, the classification of languages according to structural features, as it can be shown that processes of grammaticalization tend to follow trajectories that are partly dependent on typology. In the philosophy of language these views are often associated with Wittgenstein’s later works and with ordinary language philosophers such as Paul Grice, John Searle and J. L. Austin.[13] What makes human language unique Main articles: Animal language and Great ape language
Human language is unique in comparison to other forms of communication, such as those used by non-human animals. Communication systems used by other animals such as bees or non-human apes are closed systems that consist of a closed number of possible things that can be expressed.[16] In contrast, human language is open-ended and productive, meaning that it allows humans to produce an infinite set of utterances from a finite set of elements, and to create new words and sentences. This is possible because human language is based on a dual code, in which a finite number of meaningless elements (e.g. sounds, letters or gestures) can be combined to form units of meaning (words and sentences).[17] Furthermore, the symbols and grammatical rules of any particular language are largely arbitrary, meaning that the system can only be acquired through social interaction.[18] The known systems of communication used by animals, on the other hand, can only express a finite number of utterances that are mostly genetically transmitted.[19] Several species of animals have proven able to acquire forms of communication through social learning, such as the bonobo Kanzi, who learned to express himself using a set of symbolic lexigrams. Similarly, many species of birds and whales learn their songs by imitating other members of their species.
However, while some animals may acquire large numbers of words and symbols,[20] none have been able to learn as many different signs as are generally known by an average four-year-old human, nor have any acquired anything resembling the complex grammar of human language.[21] Human languages also differ from animal communication systems in that they employ grammatical and semantic categories such as noun and verb, or present and past, to express exceedingly complex meanings.[21] Human language is also unique in having the property of recursivity; this is the way in which, for example, a noun phrase is able to contain another noun phrase (as in "[[the chimpanzee]'s lips]") or a clause to contain a clause (as in "[I see [the dog is running]]").[22] Human language is also the only known natural communication system that is modality-independent, meaning that it can be used not only for communication through one channel or medium, but through several - for example, spoken language uses the auditory modality, whereas sign languages and writing use the visual modality and braille writing uses the tactile modality.[23] With regard to the meaning that it may convey and the cognitive operations that it builds on, human language is also unique in being able to refer to abstract concepts and to imagined or hypothetical events, as well as events that took place in the past or may happen in the future. This ability to refer to events that are not at the same time or place as the speech event is called displacement, and while some animal communication systems can use displacement (such as the communication of bees, which can communicate the location of sources of nectar that are out of sight), the degree to which it is used in human language is also considered unique.[17]
Origin Main articles: Origin of language and Origin of speech
75,000-80,000-year-old artefacts from Blombos Cave, South Africa, including a piece of ochre engraved with diagonal cross-hatch patterns, perhaps the oldest known example of symbolic marking.
"The Tower of Babel" by Pieter Bruegel the Elder. Oil on board, 1563.
Humans have speculated about the origins of language throughout history. The Biblical myth of the Tower of Babel is one such account; other cultures have other stories of how language arose.[24]
Theories about the origin of language can be divided according to their basic assumptions. Some theories are based on the idea that language is so complex that one cannot imagine it simply appearing from nothing in its final form, but that it must have evolved from earlier pre-linguistic systems among our pre-human ancestors. These theories can be called continuity-based theories. The opposite viewpoint is that language is such a unique human trait that it cannot be compared to anything found among non-humans and that it must therefore have appeared fairly suddenly in the transition from pre-hominids to early man. These theories can be defined as discontinuity-based. Similarly, some theories see language mostly as an innate faculty that is largely genetically encoded, while others see it as a system that is largely cultural, that is, learned through social interaction.[25] Currently, the only prominent proponent of a discontinuity-based theory of human language origins is linguist and philosopher Noam Chomsky. Chomsky proposes that 'some random mutation took place, maybe after some strange cosmic ray shower, and it reorganized the brain, implanting a language organ in an otherwise primate brain'. While cautioning against taking this story too literally, Chomsky insists that 'it may be closer to reality than many other fairy tales that are told about evolutionary processes, including language'.[26] Continuity-based theories are currently held by a majority of scholars, but they vary in how they envision this development. Those who see language as being mostly innate, for example psychologist Steven Pinker, hold the precedents to be animal cognition,[8] whereas those who see
language as a socially learned tool of communication, such as psychologist Michael Tomasello, see it as having developed from animal communication, whether primate gestural or vocal communication, to assist in cooperation.[19] Other continuity-based models see language as having developed from music, a view already espoused by Rousseau, Herder, Humboldt and Charles Darwin. A prominent proponent of this view today is archaeologist Steven Mithen.[27] Because the emergence of language is located in the early prehistory of man, the relevant developments have left no direct historical traces, and no comparable processes can be observed today. Theories that stress continuity often look at animals to see if, for example, primates display any traits that can be seen as analogous to what pre-human language must have been like. Alternatively, early human fossils can be inspected to look for traces of physical adaptation to language use or for traces of pre-linguistic forms of symbolic behaviour.[28] It is mostly undisputed that pre-human australopithecines did not have communication systems significantly different from those found in great apes in general, but scholarly opinions vary as to the developments since the appearance of the genus Homo some 2.5 million years ago. Some scholars assume the development of primitive language-like systems (proto-language) as early as Homo habilis (2.3 million years ago), while others place the development of primitive symbolic communication only with Homo erectus (1.8 million years ago) or Homo heidelbergensis (0.6 million years ago), and the development of language proper with anatomically modern Homo sapiens with the Upper Paleolithic revolution less than 100,000 years ago.[29][30]
The study of language Main articles: Linguistics and History of linguistics
William Jones discovered the family relation between Latin and Sanskrit, laying the groundwork for the discipline of historical linguistics.
Ferdinand de Saussure developed the structuralist approach to studying language.
Noam Chomsky is one of the most important linguistic theorists of the 20th century.
The study of language, linguistics, has been developing into a science since the first grammatical descriptions of particular languages in India more than 2000 years ago. Today linguistics is a science that concerns itself with all aspects relating to language, examining it from all of the theoretical viewpoints described above.[31]
Sub-disciplines
The academic study of language is conducted within many different disciplinary areas and from different theoretical angles, all of which inform modern approaches to linguistics. For example, descriptive linguistics examines the grammar of single languages; theoretical linguistics develops theories on how best to conceptualize and define the nature of language, based on data from the various extant human languages; sociolinguistics studies how languages are used for social purposes, informing in turn the study of the social functions of language and grammatical description; neurolinguistics studies how language is processed in the human brain, and allows the experimental testing of theories; computational linguistics builds on theoretical and descriptive linguistics to construct computational models of language, often aimed at processing
natural language, or at testing linguistic hypotheses; and historical linguistics relies on grammatical and lexical descriptions of languages to trace their individual histories and reconstruct trees of language families by using the comparative method.[32]
Early history
The formal study of language is often considered to have started in India with Pāṇini, the 5th century BC grammarian who formulated 3,959 rules of Sanskrit morphology. However, Sumerian scribes were already studying the differences between Sumerian and Akkadian grammar around 1900 BC. Subsequent grammatical traditions developed in all of the ancient cultures that adopted writing.[33] In the 17th century AD, the French Port-Royal grammarians developed the idea that the grammars of all languages were a reflection of the universal basics of thought, and therefore that grammar was universal. In the 18th century, the first use of the comparative method by British philologist and expert on ancient India William Jones sparked the rise of comparative linguistics.[34] The scientific study of language was broadened from Indo-European to language in general by Wilhelm von Humboldt. Early in the 20th century, Ferdinand de Saussure introduced the idea of language as a static system of interconnected units, defined through the oppositions between them.[9] By introducing a distinction between diachronic and synchronic analyses of language, he laid the foundation of the modern discipline of linguistics. Saussure also introduced several basic dimensions of linguistic analysis that are still fundamental in many contemporary linguistic theories, such as the distinctions between syntagm and paradigm, and the langue-parole distinction, distinguishing language as an abstract system (langue) from language as a concrete manifestation of this system (parole).[35]
Contemporary linguistics
In the 1960s Noam Chomsky formulated the generative theory of language. According to this theory, the most basic form of language is a set of syntactic rules that are universal for all humans and which underlie the grammars of all human languages. This set of rules is called Universal Grammar; for Chomsky, describing it is the primary objective of the discipline of linguistics. Thus he considered that the grammars of individual languages are only of importance to linguistics insofar as they allow us to deduce the universal underlying rules from which the observable linguistic variability is generated.[36] In opposition to the formal theories of the generative school, functional theories of language propose that since language is fundamentally a tool, its structures are best analyzed and understood by reference to their functions. Formal theories of grammar seek to define the different elements of language and describe the way they relate to each other as systems of formal rules or operations, while functional theories seek to define the functions performed by language and then relate them to the linguistic elements that carry them out.[13][37] The framework of cognitive linguistics interprets language in terms of the concepts, sometimes
universal, sometimes specific to a particular language, which underlie its forms.[38] Cognitive linguistics is primarily concerned with how the mind creates meaning through language.
Physiological and neural architecture of language and speech
Speaking is the default modality for language in all cultures. The production of spoken language depends on sophisticated capacities for controlling the lips, tongue and other components of the vocal apparatus, the ability to acoustically decode speech sounds, and the neurological apparatus required for acquiring and producing language.[39] The study of the genetic bases for human language is still at a fairly basic level, and the only gene that has been positively implicated in language production is FOXP2, which may cause a kind of congenital language disorder if affected by mutations.[40]
The brain and language Main article: Neurolinguistics
Language Areas of the brain. The Angular Gyrus is represented in orange, Supramarginal Gyrus is represented in yellow, Broca's area is represented in blue, Wernicke's area is represented in green and the Primary Auditory Cortex is represented in pink.
The brain is the coordinating center of all linguistic activity: it controls both the production of linguistic cognition and meaning and the mechanics of speech production. Nonetheless, our knowledge of the neurological bases for language is quite limited, though it has advanced considerably with the use of modern imaging techniques. The discipline of linguistics dedicated to studying the neurological aspects of language is called neurolinguistics.[41] Early work in neurolinguistics involved the study of language in people with brain lesions, to see how lesions in specific areas affect language and speech. In this way, neuroscientists in the 19th century discovered that two areas in the brain are crucially implicated in language processing. The first is Wernicke's area, located in the posterior section of the superior temporal gyrus in the dominant cerebral hemisphere. People with a lesion in this area of the brain develop receptive aphasia, a condition in which there is a major impairment of language comprehension, while speech retains a natural-sounding rhythm and a relatively normal sentence structure. The other area is Broca's area, located in the posterior inferior frontal gyrus of the dominant hemisphere. People with a lesion to this area develop expressive aphasia, meaning that they know "what they
want to say, they just cannot get it out."[42] They are typically able to understand what is being said to them, but unable to speak fluently. Other symptoms that may be present in Broca's aphasia include problems with fluency, articulation, word-finding, word repetition, and producing and comprehending complex grammatical sentences, both orally and in writing. They also exhibit ungrammatical speech and show an inability to use syntactic information to determine the meaning of sentences. Both Broca's and Wernicke's aphasia also affect the use of sign language, in ways analogous to how they affect speech: Broca's aphasia causes signers to sign slowly and with incorrect grammar, whereas a signer with Wernicke's aphasia will sign fluently, but make little sense to others and have difficulties comprehending others' signs. This shows that the impairment is specific to the ability to use language, not to the physiology used for speech production.[43][44] With technological advances in the late 20th century, neurolinguists have also adopted non-invasive techniques such as functional magnetic resonance imaging (fMRI) and electrophysiology to study language processing in individuals without impairments.[41]
Anatomy of speech Main articles: Speech production, Phonetics, and Articulatory phonetics
The human vocal tract.
Spectrogram of American English vowels [i, u, ɑ] showing the formants f1 and f2
Real time MRI scan of a person speaking in Mandarin Chinese.
Spoken language relies on our physical ability to produce sound, which is a longitudinal wave propagated through the air at a frequency capable of vibrating the human ear drum. This ability depends on the physiology of the human speech organs. These organs consist of the lungs, the voice box (larynx) and the upper vocal tract - the throat, the mouth and the nose. By controlling the different parts of the speech apparatus, the airstream can be manipulated to produce different speech sounds.[45] The sound of speech can be analyzed as a combination of segmental and suprasegmental elements. The segmental elements are those that follow each other in sequences, and which are usually represented by distinct letters in alphabetic scripts such as the Roman script. In free-flowing speech, there are no clear boundaries between one segment and the next, nor usually any audible pauses between words. Segments are therefore distinguished by their distinct sounds, which result from their different articulations, and they can be either vowels or consonants. Suprasegmental phenomena encompass such elements as stress, phonation type, voice timbre and prosody or intonation, all of which may have effects across multiple segments.[46] Consonant and vowel segments combine to form syllables, which in turn combine to form utterances; these can be distinguished phonetically as the space between two inhalations. Acoustically, these different segments are characterized by different formant structures that are visible in a spectrogram of the recorded sound wave (see the illustration of the formant structures of three English vowels). Formants are the amplitude peaks in the frequency spectrum of a specific sound.[46][47] Vowels are those sounds that have no audible friction caused by the narrowing or obstruction of some part of the upper vocal tract.
They vary in quality according to the degree of aperture and the placement of the tongue within the oral cavity.[46] Vowels are called close when the tongue is positioned close to the roof of the mouth, as in the pronunciation of the vowel [i] (English "ee"), or open when the mouth is relatively open, as in the vowel [a] (English "ah"). If the tongue is located towards the back of the mouth the quality changes, creating vowels such as [u] (English "oo"). The quality also changes depending on whether the lips are rounded as opposed to unrounded,
creating distinctions such as that between [i] (unrounded front vowel such as English "ee") and [y] (rounded front vowel such as German "ü").[48] Consonants are those sounds that have audible friction or closure at some point within the upper vocal tract. Consonant sounds vary by place of articulation, i.e. the place in the vocal tract where the airflow is obstructed - commonly at the lips, teeth, alveolar ridge, palate, velum, uvula or glottis. Each place of articulation produces a different set of consonant sounds, which are further distinguished by manner of articulation - the kind of friction - whether full closure, in which case the consonant is called an occlusive or stop, or different degrees of aperture, creating fricatives and approximants. Consonants can also be either voiced or unvoiced, depending on whether the vocal cords are set in vibration by the airflow during the production of the sound. Voicing is what separates English [s] in bus (unvoiced sibilant) from [z] in buzz (voiced sibilant).[49] Some speech sounds, both vowels and consonants, involve release of air flow through the nasal cavity, and these are called nasals or nasalized sounds. Other sounds are defined by the way the tongue moves within the mouth: such as the l-sounds (called laterals, because the air flows along both sides of the tongue), and the r-sounds (called rhotics), which are characterized by how the tongue is positioned relative to the air stream.[47] By using these speech organs, humans can produce hundreds of distinct sounds: some appear very often in the world's languages, whereas others are much more common in certain language families or language areas, or are even specific to a single language.[50]
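The three-way description of consonants above - place of articulation, manner of articulation, and voicing - can be sketched as a small lookup table. The feature values below are a hand-picked illustration covering only four consonants, not a full phonetic inventory:

```python
# Tiny hand-made feature table for four consonants, following the
# place/manner/voicing description above (an illustration only).
CONSONANTS = {
    "p": ("bilabial", "stop", "voiceless"),
    "b": ("bilabial", "stop", "voiced"),
    "s": ("alveolar", "fricative", "voiceless"),
    "z": ("alveolar", "fricative", "voiced"),
}

def contrast(c1, c2):
    """Return the feature dimensions on which two consonants differ."""
    names = ("place", "manner", "voicing")
    return [n for n, a, b in zip(names, CONSONANTS[c1], CONSONANTS[c2]) if a != b]

print(contrast("s", "z"))  # ['voicing'] - the bus/buzz distinction
print(contrast("p", "z"))  # ['place', 'manner', 'voicing']
```

As the first call shows, English [s] and [z] differ only in voicing, which is exactly what separates bus from buzz in the paragraph above.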
Structure
When described as a system of symbolic communication, language is traditionally seen as consisting of three parts: signs, meanings and a code connecting signs with their meanings. The study of the process of semiosis, how signs and meanings are combined, used and interpreted, is called semiotics. Signs can be composed of sounds, gestures, letters or symbols, depending on whether the language is spoken, signed or written, and they can be combined into complex signs such as words and phrases. When used in communication, a sign is encoded and transmitted by a sender through a channel to a receiver who decodes it.[51]
Ancient Tamil inscription at Thanjavur
Some of the properties that define human language as opposed to other communication systems are: the arbitrariness of the linguistic sign, meaning that there is no predictable connection between a linguistic sign and its meaning; the duality of the linguistic system, meaning that linguistic structures are built by combining elements into larger structures that can be seen as layered, e.g. how sounds build words and words build phrases; the discreteness of the elements of language, meaning that the elements out of which linguistic signs are constructed are discrete units, e.g. sounds and words, that can be distinguished from each other and rearranged in different patterns; and the productivity of the linguistic system, meaning that the finite number of linguistic elements can be combined into a theoretically infinite number of combinations.[51] The rules under which signs can be combined to form words and phrases are called syntax or grammar. The meaning that is connected to individual signs, morphemes, words, phrases and texts is called semantics.[52] The division of language into separate but connected systems of sign and meaning goes back to the first linguistic studies of de Saussure and is now used in almost all branches of linguistics.[53]
Semantics Main articles: Semantics, Semiotics, and Meaning (linguistics)
Languages express meaning by relating a sign form to a meaning, its content. Sign forms must be something that can be perceived, for example in sounds, images or gestures, and they come to be related to a specific meaning by social convention. Because the basic relation of meaning for most linguistic signs is based on social convention, linguistic signs can be considered arbitrary, in the sense that the convention is established socially and historically, rather than by means of a natural relation between a specific sign form and its meaning. Thus languages must have a vocabulary of signs related to specific meanings—the English sign "dog" denotes, for example, a member of the species Canis familiaris. In a language, the array of arbitrary signs connected to specific meanings is called the lexicon, and a single sign connected to a meaning is called a lexeme. Not all meanings in a language are represented by single words; often, semantic concepts are embedded in the morphology or syntax of the language in the form of grammatical categories.[54] All languages contain the semantic structure of predication—a structure that predicates a property, state or action. Traditionally, semantics has been understood as the study of how speakers and interpreters assign truth values to statements, so that meaning is understood as the process by which a predicate can be said to be true or false about an entity, e.g. "[x [is y]]" or "[x [does y]]." Recently, this model of semantics has been complemented with more dynamic models of meaning that incorporate shared knowledge about the context in which a sign is interpreted into the production of meaning. Such models of meaning are explored in the field of pragmatics.[55]
Sounds and Symbols Main articles: Phonology and Writing
A spectrogram showing the sound of the spoken English word "man", which is written phonetically as [mæn]. Note that in flowing speech there is no clear division between segments, only a smooth transition as the vocal apparatus moves.
The letter "wi" in the Hangul script.
The sign for "wi" in Korean Sign Language
Depending on modality, language structure can be based on systems of sounds (speech), gestures (sign languages) or graphic or tactile symbols (writing). The ways in which languages use sounds or signs to construct meaning are studied in phonology.[56] The study of how humans produce and perceive vocal sounds is called phonetics.[57] In spoken language, meaning is produced when sounds become part of a system in which some sounds can contribute to expressing meaning while others do not. In any given language, only a limited number of the many distinct sounds that can be created by the human vocal apparatus contribute to constructing meaning.[58] Sounds as part of a linguistic system are called phonemes.[59] Phonemes are abstract units of sound, defined as the smallest units in a language that can serve to distinguish between the
meaning of a pair of minimally different words, a so-called minimal pair. In English, for example, the words /bat/ [bat] and /pat/ [pʰat] form a minimal pair in which the distinction between /b/ and /p/ differentiates the two words as having different meanings. But each language contrasts sounds in different ways: for example, in a language that does not distinguish between voiced and unvoiced consonants, the sounds [p] and [b] would be considered a single phoneme, and consequently the two pronunciations would have the same meaning. Similarly, the English language does not distinguish phonemically between aspirated and non-aspirated pronunciations of consonants, as many other languages do: the unaspirated /p/ in /spin/ [spin] and the aspirated /p/ in /pin/ [pʰin] are considered merely different ways of pronouncing the same phoneme (such variants of a single phoneme are called allophones), whereas in Mandarin Chinese the same difference in pronunciation distinguishes between the words [pʰá] "crouch" and [pá] "eight" (the accent above the á means that the vowel is pronounced with a high tone).[60] All spoken languages have phonemes of at least two different categories, vowels and consonants, that can be combined to form syllables.[46] As well as segments such as consonants and vowels, some languages also use sound in other ways to convey meaning. Many languages, for example, use stress, pitch, duration and tone to distinguish meaning.
Because these phenomena operate outside of the level of single segments, they are called suprasegmental.[61] Some languages have only a few phonemes, for example the Rotokas and Pirahã languages with 11 and 10 phonemes respectively, whereas languages like Taa may have as many as 141 phonemes.[60] In sign languages, the equivalent of phonemes (formerly called cheremes) are defined by the basic elements of gestures, such as hand shape, orientation, location and motion, which correspond to manners of articulation in spoken language.[62] Writing systems represent language using visual symbols, which may or may not correspond to the sounds of spoken language. The Latin alphabet (and those on which it is based or that have been derived from it) was originally based on the representation of single sounds, so that words were constructed from letters that generally denote a single consonant or vowel in the structure of the word. In syllabic scripts, such as the Inuktitut syllabary, each sign represents a whole syllable. In logographic scripts, each sign represents an entire word,[63] and will generally bear no relation to the sound of that word in spoken language. Because all languages have a very large number of words, no purely logographic scripts are known to exist. Written language represents the way spoken sounds and words follow one after another by arranging symbols according to a pattern that follows a certain direction. The direction used in a writing system is entirely arbitrary and established by convention. Some writing systems use the horizontal axis (left to right as in the Latin script or right to left as in the Arabic script), while others such as traditional Chinese writing use the vertical dimension (from top to bottom).
A few writing systems use opposite directions for alternating lines, and others, such as the ancient Maya script, can be written in either direction and rely on graphic cues to show the reader the direction of reading.[64] In order to represent the sounds of the world's languages in writing, linguists have developed the International Phonetic Alphabet, designed to represent all of the discrete sounds that are known to contribute to meaning in human languages.[65]
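The minimal-pair test described in this section can be sketched in a few lines. Representing each word as a list of segment strings is a simplifying assumption made for this illustration; real phonemic analysis requires a proper segmentation of the transcription:

```python
# Sketch of the minimal-pair test: two words of equal length form a
# minimal pair if they differ in exactly one segment.
def is_minimal_pair(word_a, word_b):
    if len(word_a) != len(word_b):
        return False
    differences = sum(1 for a, b in zip(word_a, word_b) if a != b)
    return differences == 1

# English /bat/ vs /pat/: the /b/-/p/ contrast alone separates them.
print(is_minimal_pair(["b", "a", "t"], ["p", "a", "t"]))   # True
# /spin/ vs /pin/ differ in length, so they are not a minimal pair.
print(is_minimal_pair(["s", "p", "i", "n"], ["p", "i", "n"]))  # False
```

Which segment pairs actually distinguish phonemes is language-specific, as the Mandarin aspiration example above shows; the test only locates candidate contrasts.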
Grammar Main article: grammar
Grammar is the study of how meaningful elements, called morphemes, can be combined into utterances within a language. Morphemes can be either free or bound. If they are free to be moved around within an utterance, they are usually called words, and if they are bound to other words or morphemes, they are called affixes. The way in which meaningful elements can be combined within a language is governed by rules. The rules for the internal structure of words are called morphology. The rules for the internal structure of phrases and sentences are called syntax.[66]
Grammatical categories
Grammar can be described as a system of categories and a set of rules that determine how categories combine to form different aspects of meaning.[67] Languages differ widely in whether meanings are encoded through grammatical categories or lexical units. However, several categories are so common as to be nearly universal. Such universal categories include the encoding of the grammatical relations of participants and predicates by grammatically distinguishing between their relations to a predicate, the encoding of temporal and spatial relations on predicates, and a system of grammatical person governing reference to and distinction between speakers and addressees and those about whom they are speaking.[68]
Word classes
Languages organize their parts of speech into classes according to their functions and positions relative to other parts. All languages, for instance, make a basic distinction between a group of words that prototypically denote things and concepts and a group of words that prototypically denote actions and events. The first group, which includes English words such as "dog" and "song", are usually called nouns. The second, which includes "run" and "sing", are called verbs. Another common category is the adjective: words that describe properties or qualities of nouns, such as "red" or "big". Word classes can be "open", if new words can continuously be added to the class, or relatively "closed", if there is a fixed number of words in a class. In English, the class of pronouns is closed, whereas the class of adjectives is open, since an unlimited number of adjectives can be constructed from verbs (e.g. "saddened") or nouns (e.g. with the -like suffix, "noun-like"). In other languages such as Korean, the situation is the opposite: new pronouns can be constructed, whereas the number of adjectives is fixed.[69] Word classes also carry out differing functions in grammar. Prototypically, verbs are used to construct predicates, while nouns are used as arguments of predicates. In a sentence such as "Sally runs", the predicate is "runs", because it is the word that predicates a specific state about its argument "Sally". Some verbs such as "curse" can take two arguments, e.g. "Sally cursed John". A predicate that can take only a single argument is called intransitive, while a predicate that can take two arguments is called transitive.[70] Many other word classes exist in different languages, such as conjunctions that serve to join two sentences, articles that introduce a noun, interjections such as "agh!" or "wow!", or
ideophones that mimic the sound of some event. Some languages have positionals, that describe the spatial position of an event or entity. Many languages have classifiers that identify countable nouns as belonging to a particular type or having a particular shape. For instance, in Japanese, the general noun classifier for humans is nin (人), and it is used for counting humans, whatever they are called: san-nin no gakusei (三人の学生) lit. "3 human-classifier of student" — three students
While for trees, it would be: san-bon no ki (三本の木) lit. "3 classifier-for-long-objects of tree" — three trees
Morphology
In linguistics, the study of the internal structure of complex words and the processes by which words are formed is called morphology. In most languages, it is possible to construct complex words that are built of several morphemes. For instance, the English word "unexpected" can be analyzed as being composed of the three morphemes "un-", "expect" and "-ed".[71] Morphemes can be classified according to whether they are independent morphemes, so-called roots, or whether they can only co-occur attached to other morphemes. These bound morphemes or affixes can be classified according to their position in relation to the root: prefixes precede the root, suffixes follow the root, and infixes are inserted in the middle of a root. Affixes serve to modify or elaborate the meaning of the root. Some languages change the meaning of words by changing the phonological structure of a word, for example the English word "run", which in the past tense is "ran". This process is called ablaut. Furthermore, morphology distinguishes between the process of inflection, which modifies or elaborates on a word, and the process of derivation, which creates a new word from an existing one. In English, the verb "sing" has the inflectional forms "singing" and "sung", which are both verbs, and the derivational form "singer", which is a noun derived from the verb with the agentive suffix "-er".[72][73] Languages differ widely in how much they rely on morphological processes of word formation. In some languages, for example Chinese, there are almost no morphological processes, and all grammatical information is encoded syntactically by forming strings of single words. This type of morpho-syntax is often called isolating, or analytic, because there is almost a full correspondence between a single word and a single aspect of meaning. Most languages have words consisting of several morphemes, but they vary in the degree to which morphemes are discrete units.
In many languages, notably in most Indo-European languages, single morphemes may have several distinct meanings that cannot be analyzed into smaller segments. For example, in Latin the word bonus "good" consists of the root bon- meaning "good" and the suffix -us, which marks masculine gender, singular number and nominative case. These languages are called fusional languages, because several meanings may be fused into a single morpheme. The opposite of fusional languages are agglutinative languages, which construct words by stringing morphemes together in chains, but with each morpheme as a discrete semantic unit. An example of such a language is Turkish, where for example the word evlerinizden "from your houses" consists of the morphemes ev-ler-iniz-den, with the meanings house-plural-your-from.
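The agglutinative segmentation just described can be made concrete with a short sketch. The segmentation and glosses are the ones given in the text for Turkish evlerinizden; the code itself is purely illustrative and not part of any linguistic toolkit:

```python
# The Turkish example from the text: evlerinizden "from your houses".
# Each morpheme is a discrete unit paired with its gloss.
word = "evlerinizden"
morphemes = [("ev", "house"), ("ler", "plural"),
             ("iniz", "your"), ("den", "from")]

# The agglutinative property: concatenating the morphemes
# reassembles the surface word exactly.
assert "".join(m for m, _ in morphemes) == word

segmented = "-".join(m for m, _ in morphemes)
gloss = "-".join(g for _, g in morphemes)
print(f"{segmented}  '{gloss}'")  # ev-ler-iniz-den  'house-plural-your-from'
```

A fusional language like Latin would resist this treatment: the suffix -us in bonus cannot be split into separate pieces for gender, number and case.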
The languages that rely on morphology to the greatest extent are traditionally called polysynthetic languages. They may express the equivalent of an entire English sentence in a single word. For example, the Yupik word tuntussuqatarniksaitengqiggtuq means "He had not yet said again that he was going to hunt reindeer." The word consists of the morphemes tuntu-ssur-qatar-ni-ksaite-ngqiggte-uq with the meanings reindeer-hunt-future-say-negation-again-third.person.singular.indicative, and except for the morpheme tuntu "reindeer", none of the other morphemes can appear in isolation.[74]

Many languages use morphology to cross-reference words within a sentence. This is sometimes called agreement. For example, in many Indo-European languages adjectives must cross-reference the noun they modify in terms of number, case and gender, so that the Latin adjective bonus "good" is inflected to agree with a noun that is masculine gender and singular. In many polysynthetic languages verbs cross-reference their subjects and objects. In these types of languages, a single verb may include information that would require an entire sentence in English. For example, in the Basque phrase ikusi nauzu "you saw me", the past tense auxiliary verb n-au-zu (similar to English "do") agrees with both the subject (you), expressed by the -zu suffix, and the object (me), expressed by the n- prefix. The sentence could be directly transliterated as "see you-did-me".[75]

Syntax
Main article: syntax
In addition to word classes, a sentence can be analyzed in terms of grammatical functions: "The cat" is the subject of the phrase, "on the mat" is a locative phrase, and "sat" is the core of the predicate.
Another way in which languages convey meaning is through the order of words within a sentence. The grammatical rules for how to produce new sentences from words that are already known are called syntax. It is the syntactical rules of a language that determine why a sentence in English such as "I love you" is meaningful but "*love you I" is not:[76] syntactical rules determine
how word order and sentence structure are constrained, and how those constraints contribute to meaning.[77] For example, in English the two sentences "the slaves were cursing the master" and "the master was cursing the slaves" mean different things, because the role of the grammatical subject is encoded by the noun being in front of the verb, and the role of object is encoded by the noun appearing after the verb. But in Latin both Dominus servos vituperabat and Servos vituperabat dominus mean "the master was reprimanding the slaves", because servos "slaves" is in the accusative case, showing that they are the grammatical object of the sentence, and dominus "master" is in the nominative case, showing that he is the subject.[78] Latin uses morphology to express the distinction between subject and object, whereas English uses word order. Another example of how syntactic rules contribute to meaning is the rule of inverse word order in questions, which exists in many languages. This rule is the reason that in English, when the phrase "John is talking to Lucy" is turned into a question it becomes "Who is John talking to?" and not "John is talking to who?". The latter example may be used as a way of placing special emphasis on "who", thereby slightly altering the meaning of the question. Syntax also includes the rules for how complex sentences are structured by grouping words together in units, called phrases, that can occupy different places in a larger syntactic structure. Sentences can be described as consisting of phrases connected in a tree structure, connecting the phrases to each other at different levels.[79] To the right is a graphic representation of the syntactic analysis of the English sentence "the cat sat on the mat".
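Since the graphic itself is not reproduced here, the tree can be written out as nested tuples. This is a minimal sketch; the category labels (S, NP, Det, N, V, PP, P) are the conventional constituency labels, not taken from the figure:

```python
# Constituency tree for "the cat sat on the mat": a sentence (S) made of a
# noun phrase, a verb, and a prepositional phrase; the PP contains a
# preposition and another noun phrase; each NP is an article plus a noun.
tree = ("S",
        ("NP", ("Det", "the"), ("N", "cat")),
        ("V", "sat"),
        ("PP", ("P", "on"),
               ("NP", ("Det", "the"), ("N", "mat"))))

def leaves(node):
    """Collect the words at the leaves of the tree, left to right."""
    if isinstance(node, str):
        return [node]
    label, *children = node
    return [word for child in children for word in leaves(child)]

print(" ".join(leaves(tree)))  # the cat sat on the mat
```

Treating "on the mat" as one node is what licenses moving it as a unit, as in "[And] on the mat, the cat sat".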
The sentence is analysed as being constituted by a noun phrase, a verb and a prepositional phrase; the prepositional phrase is further divided into a preposition and a noun phrase; and the noun phrases consist of an article and a noun.[80] The reason sentences can be seen as composed of phrases is that each phrase would be moved around as a single element if syntactic operations were carried out. For example, "the cat" is one phrase and "on the mat" is another, because they would be treated as single units if we decided to emphasize the location by moving the prepositional phrase forward: "[And] on the mat, the cat sat".[81] There are many different formalist and functionalist frameworks that propose theories for describing syntactic structures, based on different assumptions about what language is and how it should be described. Each of them would analyze a sentence such as this in a different manner.[13]

Typology: universals and diversity
Main articles: Linguistic typology, Language universals, and Universal Grammar
Languages can be classified in relation to their grammatical types. Languages that belong to different families nonetheless often have features in common, and these shared features tend to correlate.[82] For example, languages can be classified on the basis of their basic word order, the relative order of the verb and its constituents in a normal indicative sentence. In English the basic order is SVO: "The snake(S) bit(V) the man(O)", whereas the corresponding sentence in the Australian language Gamilaraay would have the order snake man bit, i.e. SOV.[83] Word order type is relevant as a typological parameter, because basic word order type corresponds with other syntactic parameters, such as the relative order of nouns and adjectives, or the use of prepositions or postpositions. Such correlations are called implicational universals. For example, most (but not all) languages that are of the SOV type have postpositions rather than prepositions, and have adjectives before nouns.[84]
Through the study of various types of word order it has been discovered that not all languages group the relations between actors and actions as English does into Subject, Object and Verb; this type is called the nominative-accusative type. Some languages, called ergative, Gamilaraay among them, distinguish instead between Agents and Patients. In English, both the subject of intransitive sentences ("I run") and of transitive sentences ("I love you") are treated in the same way, shown here by the nominative pronoun I. In ergative languages the single participant in an intransitive sentence such as "I run" is treated the same as the patient in a transitive sentence, giving the equivalent of "me run" and "you love me"; only in transitive sentences would the equivalent of the pronoun I be used.[83] In this way the semantic roles can map onto the grammatical relations in different ways, grouping an intransitive subject either with Agents (accusative type) or Patients (ergative type), or even treating each of the three roles differently, which is called the tripartite type.[85] The shared features of languages which belong to the same typological class may have arisen completely independently. Their co-occurrence might be due to universal laws governing the structure of natural languages—language universals—or they might be the result of languages evolving convergent solutions to the recurring communicative problems that humans use language to solve.[14]
Social contexts of use and transmission

While all humans have the ability to learn any language, they only do so if they grow up in an environment in which language exists and is used by others. Language is therefore dependent on communities of speakers in which children learn language from their elders and peers, and themselves transmit language to their own children. Languages are used by those who speak them to communicate, and to solve a plethora of social tasks. Many aspects of language use can be seen to be adapted specifically to these purposes.[14] Because of the way in which language is transmitted between generations and within communities, language perpetually changes, diversifying into new languages or converging due to language contact. The process is similar to the process of evolution, where the process of descent with modification leads to the formation of a phylogenetic tree.[86] However, languages differ from biological organisms in that they readily incorporate elements from other languages through the process of diffusion, as speakers of different languages come into contact. Humans also frequently speak more than one language, acquiring their first language or languages as children, or learning new languages as they grow up. Because of the increased language contact in the globalizing world, many small languages are becoming endangered as their speakers shift to other languages that afford the possibility of participating in larger and more influential speech communities.[87]

Usage and meaning
Main article: pragmatics
The semantic study of meaning assumes that meaning is located in a relation between signs and meanings that are firmly established through social convention. But semantics does not study the way in which social conventions are made and affect language. However, when studying the way
in which words and signs are used, it is often the case that words have different meanings depending on the social context of use. And signs also change their meaning over time, as the conventions governing their usage gradually change. The study of how the meaning of linguistic expressions changes depending on context is called pragmatics. Pragmatics is concerned with the ways in which language use is patterned and how these patterns contribute to meaning. For example, in all languages linguistic expressions can be used not just to transmit information, but to perform actions. Certain actions are made only through language, but nonetheless have tangible effects, for example the act of "naming", which creates a new name for some entity, or the act of "pronouncing someone man and wife", which creates a social contract of marriage. These types of acts are called speech acts, although they can of course also be carried out through writing or hand signing.[88] The form of a linguistic expression often does not correspond to the meaning that it actually has in a social context. For example, if at a dinner table a person asks "can you reach the salt?", that is in fact not a question about the length of the arms of the one being addressed, but a request to pass the salt across the table. This meaning is implied by the context in which it is spoken; these kinds of effects of meaning are called conversational implicatures. The social rules governing which ways of using language are considered appropriate in certain situations, and how utterances are to be understood in relation to their context, vary between communities, and learning them is a large part of acquiring communicative competence in a language.[89]

Language acquisition
Main articles: Language acquisition, Second-language acquisition, Second language, and Language education
All normal children acquire language if they are exposed to it in their first years of life, even in cultures where adults rarely address infants and toddlers directly.
All healthy, normally-developing human beings learn to use language. Children acquire the language or languages used around them – whichever languages they receive sufficient exposure to during childhood. The development is essentially the same for children acquiring sign or oral languages.[90] This learning process is referred to as first-language acquisition, since unlike many
other kinds of learning, it requires no direct teaching or specialized study. In The Descent of Man, naturalist Charles Darwin called this process "an instinctive tendency to acquire an art".[8] First language acquisition proceeds in a fairly regular sequence, though there is a wide degree of variation in the timing of particular stages among normally-developing infants. From birth, newborns respond more readily to human speech than to other sounds. Around one month of age, babies appear to be able to distinguish between different speech sounds. Around six months of age, a child will begin babbling, producing the speech sounds or handshapes of the languages used around them. Words appear around the age of 12 to 18 months; the average vocabulary of an eighteen-month-old child is around 50 words. A child's first utterances are holophrases (literally "whole-sentences"), utterances that use just one word to communicate some idea. Several months after a child begins producing words, she or he will produce two-word utterances, and within a few more months begin to produce telegraphic speech, short sentences that are less grammatically complex than adult speech, but that do show regular syntactic structure. From roughly the age of three to five years, a child's ability to speak or sign is refined to the point that it resembles adult language.[91] Acquisition of second and additional languages can come at any age, through exposure in daily life or courses. Children learning a second language are more likely to achieve native-like fluency than adults, but in general it is very rare for someone speaking a second language to pass completely for a native speaker. An important difference between first language acquisition and additional language acquisition is that the process of additional language acquisition is influenced by languages that the learner already knows.

Language and culture
See also: Culture and Speech community
Arnold Lakhovsky, The Conversation (circa 1935)
Languages, understood as the particular set of speech norms of a particular community, are also a part of the larger culture of the community that speaks them. Humans use language as a way of signalling identity with one cultural group and difference from others. Even among speakers of
one language several different ways of using the language exist, and each is used to signal affiliation with particular subgroups within a larger culture. Linguists and anthropologists, particularly sociolinguists, ethnolinguists and linguistic anthropologists, have specialized in studying how ways of speaking vary between speech communities.[92] A community's way of using language is a part of the community's culture, just as other shared practices are; it is a way of displaying group identity. Ways of speaking function not only to facilitate communication, but also to identify the social position of the speaker. In many languages there are stylistic or even grammatical differences between the ways men and women speak, just as some languages employ different words depending on who is listening. For example, in the Australian language Dyirbal a married man must use a special set of words to refer to everyday items when speaking in the presence of his mother-in-law.[93] Linguists use the term "varieties" to refer to the different ways of speaking a language. This term includes geographically or socioculturally defined dialects as well as the jargons or styles of subcultures. Linguistic anthropologists and sociologists of language define communicative style as the ways that language is used and understood within a particular culture.[93] Languages do not differ only in pronunciation, vocabulary or grammar, but also through having different "cultures of speaking". Some cultures, for example, have elaborate systems of "social deixis", systems of signalling social distance through linguistic means.[94] In English, social deixis is shown mostly through distinguishing between addressing some people by first name and others by surname, and also in titles such as "Mrs.", "boy", "Doctor" or "Your Honor", but in other languages such systems may be highly complex and codified in the entire grammar and vocabulary of the language.
For instance, in several languages of east Asia, such as Thai, Burmese and Javanese, different words are used according to whether a speaker is addressing someone of higher or lower rank than oneself, in a ranking system with animals and children ranking the lowest and gods and royalty the highest.[94]

Writing, literacy and technology
Main articles: Writing and Literacy
An inscription of Swampy Cree using Canadian Aboriginal syllabics, an abugida developed by Christian missionaries for Indigenous Canadian languages
Throughout history a number of different ways of representing language in graphic media have been invented. These are called writing systems.
The use of writing has made language even more useful to humans. It makes it possible to store large amounts of information outside of the human body and retrieve it again, and it allows communication across distances that would otherwise be impossible. Many languages conventionally employ different genres, styles and registers in written and spoken language, and in some communities writing traditionally takes place in an entirely different language than the one spoken. There is some evidence that the use of writing also has effects on the cognitive development of humans, perhaps because acquiring literacy generally requires explicit and formal education.[95] The invention of the first writing systems is roughly contemporary with the beginning of the Bronze Age in the late Neolithic of the late 4th millennium BC. The Sumerian archaic cuneiform script and the Egyptian hieroglyphs are generally considered the earliest writing systems, both emerging out of their ancestral proto-literate symbol systems from 3400–3200 BC, with the earliest coherent texts from about 2600 BC. It is generally agreed that Sumerian writing was an independent invention; however, it is debated whether Egyptian writing was developed completely independently of Sumerian, or was a case of cultural diffusion. A similar debate exists for the Chinese script, which developed around 1200 BC. The pre-Columbian Mesoamerican writing systems (including among others the Olmec and Maya scripts) are generally believed to have had independent origins.[64]

Language change
Main articles: Language change and Grammaticalization
The first page of the Beowulf poem, written in Old English in the early medieval period (800–1100 AD). Although Old English is the direct ancestor of modern English, language change has rendered it unintelligible to contemporary English speakers.
All languages change as speakers adopt or invent new ways of speaking and pass them on to other members of their speech community. Language change happens at all levels, from the phonological level to the levels of vocabulary, morphology, syntax and discourse. Even though language change is often initially evaluated negatively by speakers of the language, who often consider changes to be "decay" or a sign of slipping norms of language usage, it is natural and inevitable.[96][97] Changes may affect specific sounds or the entire phonological system. Sound change can consist of the replacement of one speech sound or phonetic feature by another, of the complete loss of the affected sound, or even of the introduction of a new sound in a place where there previously was none. Sound changes can be conditioned, in which case a sound is changed only if it occurs in the vicinity of certain other sounds. Sound change is usually assumed to be regular, which means that it is expected to apply mechanically whenever its structural conditions are met, irrespective of any non-phonological factors. On the other hand, sound changes can sometimes be sporadic, affecting only one particular word or a few words, without any seeming regularity. Sometimes a simple change triggers a chain shift in which the entire phonological system is affected. This happened in the Germanic languages when the sound change known as Grimm's law affected all the stop consonants in the system: the original consonant *bʰ became /b/ in the Germanic languages, the previous *b in turn became /p/, and the previous *p became /f/.
The same process applied to all stop consonants and explains why Italic languages such as Latin have p in words like pater and pisces, whereas Germanic languages like English have father and fish.[98] Another example is the Great Vowel Shift in English, which is the reason that the spelling of English vowels does not correspond well to their current pronunciation: the vowel shift brought the already established orthography out of synchronization with pronunciation. Another source of sound change is the erosion of words, as pronunciation gradually becomes increasingly indistinct and shortens words, leaving out syllables or sounds. This kind of change caused Latin mea domina to eventually become the French madame and American English ma'am.[99] Change also happens in the grammar of languages, as discourse patterns such as idioms or particular constructions become grammaticalized. This frequently happens when words or morphemes erode and the grammatical system is unconsciously rearranged to compensate for the lost element. For example, in some varieties of Caribbean Spanish word-final /s/ has eroded away. Since in Standard Spanish final /s/ is the morpheme marking the second person subject "you" on verbs, the Caribbean varieties now have to express the second person using the pronoun tú. This means that the sentence "what's your name" is ¿como te llamas? ['komo te 'jamas] in Standard Spanish, but ['komo 'tu te 'jama] in Caribbean Spanish. The simple sound change has affected both morphology and syntax.[100] Another common cause of grammatical change is the gradual petrification of idioms into new grammatical forms, for example the way the English "going to" construction lost its aspect of movement and in some varieties of English has almost become a full-fledged future tense (e.g. I'm gonna).
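The chain-shift character of Grimm's law can be sketched as a simple correspondence table. Only the labial series described in the text (*bʰ > b, *b > p, *p > f) is modeled here, and applying it segment by segment is purely illustrative; real sound change operates on phonemes in context, not on spellings:

```python
# The labial part of the Grimm's-law chain shift from the text.
# Each pre-Germanic segment maps to its Germanic outcome.
GRIMM_LABIALS = {"bʰ": "b", "b": "p", "p": "f"}

def shift(segments):
    """Apply the correspondence to a list of reconstructed segments,
    leaving segments outside the chain unchanged."""
    return [GRIMM_LABIALS.get(s, s) for s in segments]

# Latin preserves the older *p where Germanic shows f (pater ~ father).
print(shift(["p", "a", "t", "e", "r"]))  # ['f', 'a', 't', 'e', 'r']
```

Because the shift is regular, the same mapping applies mechanically to every word meeting its conditions, which is why pater/father and pisces/fish show the same p ~ f correspondence.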
Language change may be motivated by "language internal" factors, such as changes in pronunciation motivated by certain sounds being difficult to distinguish auditorily or to produce, or by certain patterns of change that cause certain rare types of constructions to drift towards more common types.[101] Other causes of language change are social, such as when certain pronunciations become emblematic of membership in certain groups, such as social classes, or of certain ideologies, and are therefore adopted by those who wish to identify with those groups or ideas. In this way issues of identity and politics can have profound effects on language structure.[102]

Language contact
Main articles: Language contact, Pidgin, Creole language, and Sprachbund
One important source of language change is contact between different languages and the resulting diffusion of linguistic traits between them. Language contact occurs when speakers of two or more languages or varieties interact on a regular basis.[103] Multilingualism is likely to have been the norm throughout human history, and today most people in the world are multilingual. Before the rise of the concept of the ethno-national state, monolingualism was characteristic mainly of populations inhabiting small islands. But with the ideology that made one people, one state, and one language the most desirable political arrangement, monolingualism started to spread throughout the world. Nonetheless, there are only some 250 countries in the world, corresponding to some 6,000 languages, so most countries are multilingual and most languages therefore exist in close contact with other languages.[104] When speakers of different languages interact closely, it is typical for their languages to influence each other. Through sustained language contact over long periods, linguistic traits diffuse between languages, and languages belonging to different families may converge to become more similar. In areas where many languages are in close contact, this may lead to the formation of language areas in which unrelated languages share a number of linguistic features. A number of such language areas have been documented, among them the Balkan language area, the Mesoamerican language area, and the Ethiopian language area. Larger areas such as South Asia, Europe and South East Asia have also sometimes been considered language areas, because of the widespread diffusion of specific areal features.[105][106] Language contact may also lead to a variety of other linguistic phenomena, including language convergence, borrowing, and relexification (replacement of much of the native vocabulary with that of another language).
In situations of extreme and sustained language contact, it may lead to the formation of new mixed languages that cannot be considered to belong to a single language family. One type of mixed language, called a pidgin, occurs when adult speakers of two different languages interact on a regular basis, but in a situation where neither group learns to speak the language of the other group fluently. In such a case they will often construct a communication form that has traits of both languages, but which has a simplified grammatical and phonological structure: the language comes to contain mostly the grammatical and phonological categories that exist in both languages. Pidgin languages are defined by having no native speakers, being spoken only by people who have another language as their first language. But if a pidgin language becomes the main language of a speech community, then eventually children will grow up learning the pidgin as their first language. As the generation of child learners grows up, the pidgin will often be seen to change its structure and acquire a greater degree of complexity. This type of language is generally called a creole language. An example of such a mixed language is Tok Pisin, the official language of Papua New Guinea, which originally arose as a pidgin based on English and Austronesian languages; others are Kreyòl ayisyen, the French-based creole language spoken in Haiti, and Michif, a mixed language of Canada, based on the Native American language Cree and French.[107][108]
Linguistic diversity
See also: List of languages and List of languages by number of speakers
Language: Native speakers (in millions)[109]
Mandarin: 845
Spanish: 329[110]
English: 328
Arabic languages: 221
Hindi: 182
Bengali: 181
Portuguese: 178
Russian: 144
Japanese: 122
German: 90.3
A "living language" is simply one which is in wide use as a primary form of communication by a specific group of living people. The exact number of known living languages varies from 6,000 to 7,000, depending on the precision of one's definition of "language", and in particular on how one defines the distinction between languages and dialects. As of 2009, SIL Ethnologue catalogued 6,909 living human languages. The Ethnologue establishes linguistic groups based on studies of mutual intelligibility, and therefore often includes more categories than more conservative classifications. For example, Danish, which most scholars consider a single language with several dialects, is classified as three distinct languages by the Ethnologue.[109]
The Ethnologue is also sometimes criticized for using cumulative data gathered over many decades, meaning that exact speaker numbers are frequently out of date, and some languages classified as living may have already become extinct. According to the Ethnologue, 389 languages (or nearly 6%) have more than a million speakers. These languages together account for 94% of the world's population, whereas 94% of the world's languages account for the remaining 6% of the global population. To the right is a table of the world's 10 most spoken languages, with population estimates from the Ethnologue (2009 figures).[109]

Languages and dialects
There is no clear distinction between a language and a dialect, notwithstanding a famous aphorism attributed to linguist Max Weinreich that "a language is a dialect with an army and navy".[111] For example, national boundaries frequently override linguistic difference in determining whether two linguistic varieties are languages or dialects. Cantonese and Mandarin are, for example, often classified as "dialects" of Chinese, even though they are more different from each other than Swedish is from Norwegian. Before the Yugoslav civil war, Serbo-Croatian was considered a single language with two dialects, but now Croatian and Serbian are considered different languages and employ different writing systems. In other words, the distinction may hinge on political considerations as much as on cultural differences, distinctive writing systems, or degree of mutual intelligibility.[112]

Language families of the world
Main articles: Language family, dialectology, Historical linguistics, and List of language families
Principal language families of the world (and in some cases geographic groups of families). For greater detail, see Distribution of languages in the world.
The world's languages can be grouped into language families consisting of languages that can be shown to have common ancestry. Linguists currently recognize many hundreds of language families, although some of them can possibly be grouped into larger units as more evidence becomes available and in-depth studies are carried out. At present there are also dozens of language isolates: languages that cannot be shown to be related to any other language in the world. Among them are Basque, spoken in Europe, Zuni of New Mexico, P'urhépecha of Mexico, Ainu of Japan, Burushaski of Pakistan, and many others. The language family with the most speakers is Indo-European, spoken by 46% of the world's population. This family includes major world languages like English, Spanish, Russian and Hindi/Urdu. The Indo-European family achieved prevalence first during the Eurasian Migration Period (c. 400–800 AD), and subsequently through the European colonial expansion, which brought the Indo-European languages to a politically and often numerically dominant position in the Americas and much of Africa. The Sino-Tibetan languages are spoken by 21% of the world's population and include many of the languages of East Asia, including Mandarin Chinese, Cantonese and hundreds of smaller languages. Africa is home to a large number of language families, the largest of which is the Niger–Congo family, which includes such languages as Kiswahili, Shona and Yoruba. Speakers of the Niger–Congo languages account for 6.4% of the world's population. A similar number of people speak the Afroasiatic languages, which include the populous Semitic languages such as Arabic and Hebrew, and the languages of the Sahara region, such as the Berber languages and Hausa. The Austronesian languages are spoken by 5.9% of the world's population and stretch from Madagascar to maritime Southeast Asia all the way to Oceania. The family includes such languages as Malagasy, Māori, Samoan, and many of the indigenous languages of Indonesia and Taiwan. The Austronesian languages are considered to have originated in Taiwan around 3000 BC
and spread through the Oceanic region through island-hopping, based on advanced nautical technology. Other populous language families are the Dravidian languages of South Asia (among them Tamil and Telugu), the Turkic languages of Central Asia (such as Turkish), and the Austroasiatic (among them Khmer) and Tai–Kadai languages of Southeast Asia (including Thai).[113] The areas of the world with the greatest linguistic diversity, such as the Americas, Papua New Guinea, West Africa and South Asia, contain hundreds of small language families. These areas together account for the majority of the world's languages, though not the majority of speakers. In the Americas some of the largest language families include the Quechumaran, Arawak, and Tupi-Guarani families of South America, the Uto-Aztecan, Oto-Manguean and Mayan families of Mesoamerica, and the Na-Dene and Algonquian language families of North America. In Australia, most indigenous languages belong to the Pama-Nyungan family, whereas Papua New Guinea is home to a large number of small families and isolates, as well as a number of Austronesian languages.[114]

Language endangerment
Main articles: Endangered language, language loss, language shift, and language death
Together, the eight countries in red contain more than 50% of the world's languages. The areas in blue are the most linguistically diverse in the world, and the locations of most of the world's endangered languages.
Language endangerment occurs when a language is at risk of falling out of use as its speakers die out or shift to speaking another language. Language loss occurs when the language has no more native speakers and becomes a dead language. If eventually no one speaks the language at all, it becomes an extinct language. While languages have always gone extinct throughout human history, they are currently disappearing at an accelerated rate due to the processes of globalization and neo-colonialism, in which economically powerful languages dominate other languages.[1] The more commonly spoken languages dominate the less commonly spoken ones, which as a result eventually disappear. The total number of languages in the world is not known, and estimates vary depending on many factors. The general consensus is that there are between 6,000[2] and 7,000 languages currently spoken, and that between 50% and 90% of those will have become extinct by the year 2100.[1] The top 20 languages, those spoken by more than 50 million speakers each, are spoken by 50% of the world's population, whereas many of the other languages are spoken by small communities, most of them with fewer than 10,000 speakers.[1] The United Nations Educational, Scientific and Cultural Organization (UNESCO) operates with five levels of language endangerment: "safe", "vulnerable" (not spoken by children outside the home), "definitely endangered" (not spoken by children), "severely endangered" (only spoken by the oldest generations), and "critically endangered" (spoken by few members of the oldest generation, often semi-speakers). Notwithstanding claims that the world would be better off if most people adopted a single common lingua franca, such as English or Esperanto, there is a general consensus that the loss of languages harms the cultural diversity of the world.
It is a common belief, going back to the biblical narrative of the Tower of Babel, that linguistic diversity causes political conflict,[24] but this belief is contradicted by the fact that many of the world's major episodes of violence have taken place in situations with low linguistic diversity, such as the Yugoslav wars and the American Civil War, or the Nazi and Rwandan genocides, whereas many of the most stable political units have been highly multilingual.[115] Many projects are under way aimed at preventing or slowing this loss by revitalizing endangered languages and promoting education and literacy in minority languages. Across the world, many countries have enacted specific legislation aimed at protecting and stabilizing the languages of indigenous speech communities. A minority of linguists have argued that language loss is a natural process that should not be counteracted, and that documenting endangered languages for posterity is sufficient.[116]
Phonology
Phonology is a branch of linguistics concerned with the systematic organization of sounds in languages. It has traditionally focused largely on the study of the systems of phonemes in particular languages, but it may also cover any linguistic analysis either at a level beneath the word (including syllable, onset and rhyme, articulatory gestures, articulatory features, mora, etc.) or at all levels of language where sound is considered to be structured for conveying linguistic meaning. Phonology also includes the study of equivalent organizational systems in sign languages. The word phonology (as in the phonology of English) can also refer to the phonological system (sound system) of a given language. This is one of the fundamental systems which a language is considered to comprise, like its syntax and its vocabulary. Phonology is often distinguished from phonetics. While phonetics concerns the physical production, acoustic transmission and perception of the sounds of speech,[1][2] phonology describes the way sounds function within a given language or across languages to encode meaning. In other words, phonetics belongs to descriptive linguistics, and phonology to theoretical linguistics. Note that this distinction was not always made, particularly before the development of the modern concept of the phoneme in the mid-20th century. Some subfields of modern phonology have a crossover with phonetics in descriptive disciplines such as psycholinguistics and speech perception, resulting in specific areas like articulatory phonology or laboratory phonology.
Derivation and definitions The word phonology comes from Greek φωνή, phōnḗ, "voice, sound", and the suffix -logy (which is from Greek λόγος, lógos, "word, speech, subject of discussion"). Definitions of the term vary. Nikolai Trubetzkoy in Grundzüge der Phonologie (1939) defines phonology as "the study of sound pertaining to the system of language", as opposed to phonetics, which is "the study of sound pertaining to the act of speech" (the distinction between language and speech being basically Saussure's distinction between langue and parole).[3] More recently, Lass (1998) writes that phonology refers broadly to the subdiscipline of linguistics concerned with the sounds of language, while in more narrow terms, "phonology proper is concerned with the function, behaviour and organization of sounds as linguistic items".[1] According to Clark et al. (2007), it means the systematic use of sound to encode meaning in any spoken human language, or the field of linguistics studying this use.[4]
Development of phonology The history of phonology may be traced back to the Ashtadhyayi, the Sanskrit grammar composed by Pāṇini in the 4th century BC. In particular the Shiva Sutras, an auxiliary text to the Ashtadhyayi, introduces what can be considered a list of the phonemes of the Sanskrit language,
with a notational system for them that is used throughout the main text, which deals with matters of morphology, syntax and semantics. The Polish scholar Jan Baudouin de Courtenay (together with his former student Mikołaj Kruszewski) introduced the concept of the phoneme in 1876, and his work, though often unacknowledged, is considered to be the starting point of modern phonology. He also worked on the theory of phonetic alternations (what is now called allophony and morphophonology), and had a significant influence on the work of Ferdinand de Saussure.
Nikolai Trubetzkoy, 1920s.
An influential school of phonology in the interwar period was the Prague School. One of its leading members was Prince Nikolai Trubetzkoy, whose Grundzüge der Phonologie (Principles of Phonology),[3] published posthumously in 1939, is among the most important works in the field from this period. Directly influenced by Baudouin de Courtenay, Trubetzkoy is considered the founder of morphophonology, although this concept had also been recognized by de Courtenay. Trubetzkoy also developed the concept of the archiphoneme. Another important figure in the Prague School was Roman Jakobson, who was one of the most prominent linguists of the 20th century. In 1968 Noam Chomsky and Morris Halle published The Sound Pattern of English (SPE), the basis for Generative Phonology. In this view, phonological representations are sequences of segments made up of distinctive features. These features were an expansion of earlier work by Roman Jakobson, Gunnar Fant, and Morris Halle. The features describe aspects of articulation and perception, are drawn from a universally fixed set, and have the binary values + or −. There are at least two levels of representation: underlying representation and surface phonetic representation. Ordered phonological rules govern how underlying representation is transformed into the actual pronunciation (the so-called surface form). An important consequence of the influence SPE had on phonological theory was the downplaying of the syllable and the emphasis on segments. Furthermore, the Generativists folded morphophonology into phonology, which both solved and created problems.
Natural Phonology was a theory based on the publications of its proponent David Stampe in 1969 and (more explicitly) in 1979. In this view, phonology is based on a set of universal phonological processes which interact with one another; which ones are active and which are suppressed are language-specific. Rather than acting on segments, phonological processes act on distinctive features within prosodic groups. Prosodic groups can be as small as a part of a syllable or as large as an entire utterance. Phonological processes are unordered with respect to each other and apply simultaneously (though the output of one process may be the input to another). The second-most prominent Natural Phonologist is Stampe's wife, Patricia Donegan; there are many Natural Phonologists in Europe, though also a few others in the U.S., such as Geoffrey Nathan. The principles of Natural Phonology were extended to morphology by Wolfgang U. Dressler, who founded Natural Morphology. In 1976 John Goldsmith introduced autosegmental phonology. Phonological phenomena are no longer seen as operating on one linear sequence of segments, called phonemes or feature combinations, but rather as involving some parallel sequences of features which reside on multiple tiers. Autosegmental phonology later evolved into Feature Geometry, which became the standard theory of representation for theories of the organization of phonology as different as Lexical Phonology and Optimality Theory. Government Phonology, which originated in the early 1980s as an attempt to unify theoretical notions of syntactic and phonological structures, is based on the notion that all languages necessarily follow a small set of principles and vary according to their selection of certain binary parameters. That is, all languages' phonological structures are essentially the same, but there is restricted variation that accounts for differences in surface realizations.
Principles are held to be inviolable, though parameters may sometimes come into conflict. Prominent figures include Jonathan Kaye, Jean Lowenstamm, Jean-Roger Vergnaud, Monik Charette, John Harris, and many others. In a course at the LSA summer institute in 1991, Alan Prince and Paul Smolensky developed Optimality Theory, an overall architecture for phonology according to which languages choose a pronunciation of a word that best satisfies a list of constraints ordered by importance: a lower-ranked constraint can be violated when the violation is necessary in order to obey a higher-ranked constraint. The approach was soon extended to morphology by John McCarthy and Alan Prince, and has become a dominant trend in phonology. Though this usually goes unacknowledged, Optimality Theory was strongly influenced by Natural Phonology; both view phonology in terms of constraints on speakers and their production, though these constraints are formalized in very different ways.[citation needed] The appeal to phonetic grounding of constraints in various approaches has been criticized by proponents of 'substance-free phonology'.[5] Broadly speaking, Government Phonology (or its descendant, Strict-CV Phonology) has a greater following in the United Kingdom, whereas Optimality Theory is predominant in North America.[citation needed]
Analysis of phonemes An important part of traditional, pre-generative, schools of phonology is studying which sounds can be grouped into distinctive units within a language; these units are known as phonemes. For example, in English, the "p" sound in pot is aspirated (pronounced [pʰ]), while that in spot is not aspirated (pronounced [p]). However, English speakers intuitively treat both sounds as variations (allophones) of the same phonological category, that is, of the phoneme /p/. (Traditionally, it would be argued that if a word-initial aspirated [pʰ] were interchanged with the unaspirated [p] in spot, native speakers of English would still hear the same words; that is, the two sounds are perceived as "the same" /p/.) In some other languages, however, these two sounds are perceived as different, and they are consequently assigned to different phonemes in those languages. For example, in Thai, Hindi, and Quechua, there are minimal pairs of words for which aspiration is the only contrasting feature (two words with different meanings that are identical except that one has an aspirated sound where the other has an unaspirated one).
The vowels of modern (Standard) Arabic and (Israeli) Hebrew from the phonemic point of view. Note the intersection of the two circles—the distinction between short a, i and u is made by both speakers, but Arabic lacks the mid articulation of short vowels, while Hebrew lacks the distinction of vowel length.
The vowels of Modern Standard Arabic and Israeli Hebrew from the phonetic point of view. Note that the two circles are totally separate—none of the vowel-sounds made by speakers of one language is
made by speakers of the other. One modern theory is that Israeli Hebrew's phonology reflects Yiddish elements, not Semitic ones.
Part of the phonological study of a language therefore involves looking at data (phonetic transcriptions of the speech of native speakers) and trying to deduce what the underlying phonemes are and what the sound inventory of the language is. The presence or absence of minimal pairs, as mentioned above, is a frequently used criterion for deciding whether two sounds should be assigned to the same phoneme. However, other considerations often need to be taken into account as well. The particular sounds which are phonemic in a language can change over time. At one time, [f] and [v] were allophones in English, but these later changed into separate phonemes. This is one of the main factors of historical change of languages as described in historical linguistics. The findings and insights of speech perception and articulation research complicate the traditional and somewhat intuitive idea of interchangeable allophones being perceived as the same phoneme. First, interchanged allophones of the same phoneme can result in unrecognizable words. Second, actual speech, even at a word level, is highly co-articulated, so it is problematic to expect to be able to splice words into simple segments without affecting speech perception. Different linguists therefore take different approaches to the problem of assigning sounds to phonemes. For example, they differ in the extent to which they require allophones to be phonetically similar. There are also differing ideas as to whether this grouping of sounds is purely a tool for linguistic analysis, or reflects an actual process in the way the human brain processes a language. Since the early 1960s, theoretical linguists have moved away from the traditional concept of a phoneme, preferring to consider basic units at a more abstract level, as a component of morphemes; these units can be called morphophonemes, and analysis using this approach is called morphophonology.
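The minimal-pair criterion described above lends itself to a mechanical check over phonemic transcriptions: two words that differ in exactly one segment are evidence that the differing segments contrast. The sketch below is illustrative only; the toy lexicon, its segmentations, and the glosses are invented for the example, not drawn from a real analysis.

```python
from itertools import combinations

def minimal_pairs(lexicon):
    """Return pairs of words whose transcriptions differ in exactly one segment.

    `lexicon` maps a word to its transcription, given as a list of segments.
    Such pairs suggest that the two differing segments are distinct phonemes.
    """
    pairs = []
    for (w1, t1), (w2, t2) in combinations(lexicon.items(), 2):
        if len(t1) == len(t2):
            diffs = [(a, b) for a, b in zip(t1, t2) if a != b]
            if len(diffs) == 1:
                pairs.append((w1, w2, diffs[0]))
    return pairs

# Toy data in the spirit of the Thai example: aspiration alone distinguishes
# "paa" from "phaa", so [p] and [ph] would be assigned to different phonemes.
lexicon = {
    "paa":  ["p", "aa"],
    "phaa": ["ph", "aa"],
    "maa":  ["m", "aa"],
}
print(minimal_pairs(lexicon))
```

Real analyses must also weigh complementary distribution and phonetic similarity, which this purely combinatorial check ignores.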
Other topics in phonology In addition to the minimal units that can serve the purpose of differentiating meaning (the phonemes), phonology studies how sounds alternate, i.e. replace one another in different forms of the same morpheme (allomorphs), as well as, for example, syllable structure, stress, accent, and intonation. Phonology also includes topics such as phonotactics (the phonological constraints on what sounds can appear in what positions in a given language) and phonological alternation (how the pronunciation of a sound changes through the application of phonological rules, sometimes in a given order which can be feeding or bleeding,[6]) as well as prosody, the study of suprasegmentals and topics such as stress and intonation. The principles of phonological analysis can be applied independently of modality because they are designed to serve as general analytical tools, not language-specific ones. The same principles
have been applied to the analysis of sign languages (see Phonemes in sign languages), even though the sub-lexical units are not instantiated as speech sounds.
Morphology (linguistics)
In linguistics, morphology is the identification, analysis and description of the structure of a given language's morphemes and other linguistic units, such as root words, affixes, parts of speech, intonation/stress, or implied context (words in a lexicon are the subject matter of lexicology). Morphological typology represents a method for classifying languages according to the ways by which morphemes are used in a language: from analytic languages, which use only isolated morphemes, through agglutinative ("stuck-together") and fusional languages, which use bound morphemes (affixes), up to polysynthetic languages, which compress many separate morphemes into single words.
While words are generally accepted as being (with clitics) the smallest units of syntax, it is clear that in most languages, if not all, words can be related to other words by rules (grammars). For example, English speakers recognize that the words dog and dogs are closely related, differentiated only by the plurality morpheme "-s", which is only found bound to nouns and is never separate. Speakers of English (a fusional language) recognize these relations from their tacit knowledge of the rules of word formation in English. They infer intuitively that dog is to dogs as cat is to cats; similarly, dog is to dog catcher as dish is to dishwasher, in one sense. The rules understood by the speaker reflect specific patterns, or regularities, in the way words are formed from smaller units and how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word formation within and across languages, and attempts to formulate rules that model the knowledge of the speakers of those languages. A language like Classical Chinese instead uses unbound ("free") morphemes and depends on post-phrase affixes and word order to convey meaning. However, this cannot be said of present-day Mandarin, in which most words are compounds (around 80%) and most roots are bound. In the Chinese languages, these are understood as grammars that represent the morphology of the language. Beyond the agglutinative languages, a polysynthetic language like Chukchi will have words composed of many morphemes: the word "təmeyŋəlevtpəγtərkən" is composed of eight morphemes, t-ə-meyŋ-ə-levt-pəγt-ə-rkən, that can be glossed 1.SG.SUBJ-great-head-hurt-PRES.1, meaning 'I have a fierce headache.' The morphology of such languages allows for each consonant and vowel to be understood as morphemes, just as the grammars of the language key the usage and understanding of each morpheme.
The discipline that deals specifically with the sound changes occurring within morphemes is called morphophonology.
History The history of morphological analysis dates back to the ancient Indian linguist Pāṇini, who formulated the 3,959 rules of Sanskrit morphology in the text Aṣṭādhyāyī by using a constituency grammar. The Greco-Roman grammatical tradition also engaged in morphological analysis. Studies in Arabic morphology, conducted by Marāḥ al-arwāḥ and Aḥmad b. ‘Alī Mas‘ūd, date back to at least 1200 CE.[1] The term morphology was coined by August Schleicher in 1859.[2]
Fundamental concepts Lexemes and word forms
The distinction between these two senses of "word" is arguably the most important one in morphology. The first sense of "word", the one in which dog and dogs are "the same word", is called a lexeme. The second sense is called word form. We thus say that dog and dogs are different forms of the same lexeme. Dog and dog catcher, on the other hand, are different lexemes, as they refer to two different kinds of entities. The form of a word that is chosen conventionally to represent the canonical form of a word is called a lemma, or citation form. Prosodic word vs. morphological word
Here are examples from other languages of the failure of a single phonological word to coincide with a single morphological word form. In Latin, one way to express the concept of 'NOUN-PHRASE1 and NOUN-PHRASE2' (as in "apples and oranges") is to suffix '-que' to the second noun phrase: "apples oranges-and", as it were. An extreme level of this theoretical quandary posed by some phonological words is provided by the Kwak'wala language.[3] In Kwak'wala, as in a great many other languages, meaning relations between nouns, including possession and "semantic case", are formulated by affixes instead of by independent "words". The three-word English phrase "with his club", where 'with' identifies its dependent noun phrase as an instrument and 'his' denotes a possession relation, would consist of two words or even just one word in many languages. Unlike most languages, Kwak'wala attaches semantic affixes phonologically not to the lexeme they pertain to semantically, but to the preceding lexeme. Consider the following example (in Kwak'wala, sentences begin with what corresponds to an English verb):[4] kwixʔid-i-da bəgwanəma-χ-a q'asa-s-is t'alwagwayu Morpheme-by-morpheme translation: kwixʔid-i-da = clubbed-PIVOT-DETERMINER; bəgwanəma-χ-a = man-ACCUSATIVE-DETERMINER; q'asa-s-is = otter-INSTRUMENTAL-3SG-POSSESSIVE; t'alwagwayu = club. "the man clubbed the otter with his club"
(Notation notes: 1. accusative case marks an entity that something is done to. 2. determiners are words such as "the", "this", "that". 3. the concept of "pivot" is a theoretical construct that is not relevant to this discussion.)
That is, to the speaker of Kwak'wala, the sentence does not contain the "words" 'him-the-otter' or 'with-his-club'. Instead, the markers -i-da (PIVOT-'the'), referring to 'man', attach not to bəgwanəma ('man') but to the "verb"; the markers -χ-a (ACCUSATIVE-'the'), referring to 'otter', attach to bəgwanəma instead of to q'asa ('otter'), and so on. To summarize differently: a speaker of Kwak'wala does not perceive the sentence to consist of these phonological words: kwixʔid i-da-bəgwanəma χ-a-q'asa s-is-t'alwagwayu clubbed PIVOT-the-man hit-the-otter with-his-club A central publication on this topic is the volume edited by Dixon and Aikhenvald (2007), examining the mismatch between prosodic-phonological and grammatical definitions of "word" in various Amazonian, Australian Aboriginal, Caucasian, Eskimo, Indo-European, Native North American, West African, and sign languages. Apparently, a wide variety of languages make use of the hybrid linguistic unit known as the clitic, which possesses the grammatical features of an independent word but the prosodic-phonological lack of freedom of a bound morpheme. The intermediate status of clitics poses a considerable challenge to linguistic theory. Inflection vs. word formation
Given the notion of a lexeme, it is possible to distinguish two kinds of morphological rules. Some morphological rules relate different forms of the same lexeme, while other rules relate different lexemes. Rules of the first kind are called inflectional rules, while those of the second kind are called word formation. The English plural, as illustrated by dog and dogs, is an inflectional rule; compound phrases and words like dog catcher or dishwasher provide an example of a word formation rule. Informally, word formation rules form "new words" (that is, new lexemes), while inflection rules yield variant forms of the "same" word (lexeme). There is a further distinction between two kinds of word formation: derivation and compounding. Compounding is a process of word formation that involves combining complete word forms into a single compound form; dog catcher is therefore a compound, because both dog and catcher are complete word forms in their own right before the compounding process has been applied, and are subsequently treated as one form. Derivation involves affixing bound (non-independent) forms to existing lexemes, whereby the addition of the affix derives a new lexeme. One example of derivation is clear in this case: the word independent is derived from the word dependent by prefixing it with the derivational prefix in-, while dependent itself is derived from the verb depend. The distinction between inflection and word formation is not at all clear cut. There are many examples where linguists fail to agree whether a given rule is inflection or word formation. The next section will attempt to clarify this distinction. Word formation is a process, as we have said, in which two complete words are combined, whereas inflection combines a suffix with a stem, for instance to make a verb agree with the subject of the sentence.
For example: in the present indefinite, we use ‘go’ with subject I/we/you/they and plural nouns, whereas for third person singular pronouns (he/she/it) and singular nouns we use
‘goes’. So this ‘-es’ is an inflectional marker, used to agree with the subject. A further difference is that in word formation, the resultant word may differ from its source word’s grammatical category, whereas in the process of inflection the word never changes its grammatical category.
Paradigms and morphosyntax
A linguistic paradigm is the complete set of related word forms associated with a given lexeme. The familiar examples of paradigms are the conjugations of verbs, and the declensions of nouns. Accordingly, the word forms of a lexeme may be arranged conveniently into tables, by classifying them according to shared inflectional categories such as tense, aspect, mood, number, gender or case. For example, the personal pronouns in English can be organized into tables, using the categories of person (first, second, third), number (singular vs. plural), gender (masculine, feminine, neuter), and case (nominative, oblique, genitive). See English personal pronouns for the details.
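The paradigm-as-table idea can be made concrete with a small data structure. The sketch below encodes a partial paradigm of English personal pronouns, indexed by the inflectional categories person, number and case; the selection of cells is deliberately incomplete, since the layout, not the coverage, is the point.

```python
# A (partial) paradigm for English personal pronouns. Each cell of the
# table is the word form realizing one bundle of inflectional categories.
pronouns = {
    ("1", "sg"): {"nominative": "I",    "oblique": "me",   "genitive": "my"},
    ("1", "pl"): {"nominative": "we",   "oblique": "us",   "genitive": "our"},
    ("2", "sg"): {"nominative": "you",  "oblique": "you",  "genitive": "your"},
    ("3", "sg"): {"nominative": "she",  "oblique": "her",  "genitive": "her"},
    ("3", "pl"): {"nominative": "they", "oblique": "them", "genitive": "their"},
}

def form(person, number, case):
    """Look up the word form that realizes a bundle of category values."""
    return pronouns[(person, number)][case]

print(form("3", "pl", "oblique"))  # them
```

Organizing forms this way makes the syntactic relevance of the categories visible: agreement rules consult exactly the keys used to index the table.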
The inflectional categories used to group word forms into paradigms cannot be chosen arbitrarily; they must be categories that are relevant to stating the syntactic rules of the language. For example, person and number are categories that can be used to define paradigms in English, because English has grammatical agreement rules that require the verb in a sentence to appear in an inflectional form that matches the person and number of the subject. In other words, the syntactic rules of English care about the difference between dog and dogs, because the choice between these two forms determines which form of the verb is to be used. In contrast, however, no syntactic rule of English cares about the difference between dog and dog catcher, or dependent and independent. The first two are just nouns, and the second two just adjectives, and they generally behave like any other noun or adjective behaves. An important difference between inflection and word formation is that inflected word forms of lexemes are organized into paradigms, which are defined by the requirements of syntactic rules, whereas the rules of word formation are not restricted by any corresponding requirements of syntax. Inflection is therefore said to be relevant to syntax, and word formation is not. The part of morphology that covers the relationship between syntax and morphology is called morphosyntax, and it concerns itself with inflection and paradigms, but not with word formation or compounding. Allomorphy
In the exposition above, morphological rules are described as analogies between word forms: dog is to dogs as cat is to cats, and as dish is to dishes. In this case, the analogy applies both to the form of the words and to their meaning: in each pair, the first word means "one of X", while the second "two or more of X", and the difference is always the plural form -s affixed to the second word, signaling the key distinction between singular and plural entities. One of the largest sources of complexity in morphology is that this one-to-one correspondence between meaning and form scarcely applies to every case in the language. In English, there are word form pairs like ox/oxen, goose/geese, and sheep/sheep, where the difference between the singular and the plural is signaled in a way that departs from the regular pattern, or is not signaled at all. Even cases considered "regular", with the final -s, are not so simple; the -s in dogs is not pronounced the same way as the -s in cats, and in a plural like dishes, an "extra" vowel appears before the -s. These cases, where the same distinction is effected by alternative forms of a "word", are called allomorphy. Phonological rules constrain which sounds can appear next to each other in a language, and morphological rules, when applied blindly, would often violate phonological rules, by resulting in sound sequences that are prohibited in the language in question. For example, to form the plural of dish by simply appending an -s to the end of the word would result in the form *[dɪʃs], which is not permitted by the phonotactics of English. In order to "rescue" the word, a vowel sound is inserted between the root and the plural marker, and [dɪʃɪz] results. Similar rules apply to the pronunciation of the -s in dogs and cats: it depends on the quality (voiced vs. unvoiced) of the final preceding phoneme.
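The regular-plural allomorphy just described, voicing agreement plus vowel epenthesis after sibilants, can be sketched as a decision over the stem's final sound. The phoneme classes below are simplified assumptions written in rough ASCII notation rather than IPA, not a complete phonology of English.

```python
# Simplified sound classes (ASCII stand-ins for IPA symbols).
SIBILANTS = {"s", "z", "sh", "zh", "ch", "j"}            # dish, bus, judge...
VOICELESS = {"p", "t", "k", "f", "th", "s", "sh", "ch"}  # cat, book, cliff...

def plural_allomorph(final_phoneme):
    """Pick the surface form of the plural morpheme from the stem's last sound."""
    if final_phoneme in SIBILANTS:
        return "-iz"   # epenthetic vowel rescues prohibited clusters: dishes
    if final_phoneme in VOICELESS:
        return "-s"    # voiceless after a voiceless consonant: cats
    return "-z"        # voiced elsewhere: dogs, bees

print(plural_allomorph("sh"), plural_allomorph("t"), plural_allomorph("g"))
# -iz -s -z
```

Irregular plurals like oxen or geese would have to be listed as exceptions before a rule like this applies, mirroring the text's point that the one-to-one mapping between meaning and form is only a partial generalization.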
Lexical morphology
Lexical morphology is the branch of morphology that deals with the lexicon, which, morphologically conceived, is the collection of lexemes in a language. As such, it concerns itself primarily with word formation: derivation and compounding.
Models There are three principal approaches to morphology, each of which tries to capture the distinctions above in different ways. These are:
Morpheme-based morphology, which makes use of an Item-and-Arrangement approach. Lexeme-based morphology, which normally makes use of an Item-and-Process approach. Word-based morphology, which normally makes use of a Word-and-Paradigm approach.
Note that while the associations indicated between the concepts in each item in that list are very strong, they are not absolute. Morpheme-based morphology
In morpheme-based morphology, word forms are analyzed as arrangements of morphemes. A morpheme is defined as the minimal meaningful unit of a language. In a word like independently, we say that the morphemes are in-, depend, -ent, and -ly; depend is the root and the other morphemes are, in this case, derivational affixes.[5] In a word like dogs, we say that dog is the root, and that -s is an inflectional morpheme. In its simplest (and most naïve) form, this way of analyzing word forms treats words as if they were made of morphemes put one after another like beads on a string; this approach is called Item-and-Arrangement. More modern and sophisticated approaches seek to maintain the idea of the morpheme while accommodating non-concatenative, analogical, and other processes that have proven problematic for Item-and-Arrangement theories and similar approaches. Morpheme-based morphology presumes three basic axioms (cf. Beard 1995 for an overview and references):
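The "beads on a string" idea can be illustrated by segmenting a word form into a linear sequence of morphemes by stripping known affixes. The affix inventory below is a hypothetical mini-lexicon for this one example; it is deliberately naïve, in the spirit of the simplest Item-and-Arrangement analysis, not a real morphological analyzer.

```python
# Toy Item-and-Arrangement segmentation: a word form is treated as a
# linear arrangement of morphemes. The tiny affix/root inventory is a
# hypothetical assumption covering only the example in the text.
PREFIXES = ["in"]
SUFFIXES = ["ly", "ent"]
ROOTS = {"depend"}

def segment(word: str) -> list:
    """Greedily strip known prefixes and suffixes until a root remains."""
    front, back = [], []
    changed = True
    while changed and word not in ROOTS:
        changed = False
        for p in PREFIXES:
            if word.startswith(p):
                front.append(p + "-")     # record the prefix morpheme
                word = word[len(p):]
                changed = True
        for s in SUFFIXES:
            if word.endswith(s):
                back.insert(0, "-" + s)   # record the suffix morpheme
                word = word[:-len(s)]
                changed = True
    return front + [word] + back

print(segment("independently"))  # ['in-', 'depend', '-ent', '-ly']
```

Greedy affix stripping like this is exactly the kind of procedure that non-concatenative and analogical processes break, which is the motivation for the more sophisticated approaches mentioned above.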
Baudouin’s single morpheme hypothesis: Roots and affixes have the same status as morphemes. Bloomfield’s sign base morpheme hypothesis: As morphemes, they are dualistic signs, since they have both (phonological) form and meaning. Bloomfield’s lexical morpheme hypothesis: The morphemes, affixes and roots alike, are stored in the lexicon.
Morpheme-based morphology comes in two flavours, one Bloomfieldian and one Hockettian (cf. Bloomfield 1933 and Charles F. Hockett 1947). For Bloomfield, the morpheme was the minimal form with meaning, but it was not meaning itself. For Hockett, morphemes are meaning elements, not form elements. For him, there is a morpheme plural, with the allomorphs -s, -en, -ren etc. Within much morpheme-based morphological theory, these two views are mixed in
unsystematic ways, so that a writer may talk about "the morpheme plural" and "the morpheme -s" in the same sentence, although these are different things. Lexeme-based morphology
Lexeme-based morphology is (usually) an Item-and-Process approach. Instead of analyzing a word form as a set of morphemes arranged in sequence, a word form is said to be the result of applying rules that alter a word form or stem in order to produce a new one. An inflectional rule takes a stem, changes it as is required by the rule, and outputs a word form; a derivational rule takes a stem, changes it as per its own requirements, and outputs a derived stem; a compounding rule takes word forms, and similarly outputs a compound stem. Word-based morphology
Word-based morphology is (usually) a Word-and-Paradigm approach. This theory takes paradigms as a central notion. Instead of stating rules to combine morphemes into word forms, or to generate word forms from stems, word-based morphology states generalizations that hold between the forms of inflectional paradigms. The major point behind this approach is that many such generalizations are hard to state with either of the other approaches. The examples are usually drawn from fusional languages, where a given "piece" of a word, which a morpheme-based theory would call an inflectional morpheme, corresponds to a combination of grammatical categories, for example, "third person plural." Morpheme-based theories usually have no problems with this situation, since one just says that a given morpheme has two categories. Item-and-Process theories, on the other hand, often break down in cases like these, because they all too often assume that there will be two separate rules here, one for third person, and the other for plural, but the distinction between them turns out to be artificial. Word-and-Paradigm approaches treat these as whole words that are related to each other by analogical rules. Words can be categorized based on the pattern they fit into. This applies both to existing words and to new ones. Application of a pattern different from the one that has been used historically can give rise to a new word, such as older replacing elder (where older follows the normal pattern of adjectival comparatives) and cows replacing kine (where cows fits the regular pattern of plural formation).
Morphological typology Main article: Morphological typology
In the 19th century, philologists devised a now classic classification of languages according to their morphology. According to this typology, some languages are isolating, and have little to no morphology; others are agglutinative, and their words tend to have lots of easily separable morphemes; while others yet are inflectional or fusional, because their inflectional morphemes are "fused" together. This leads to one bound morpheme conveying multiple pieces of information. The classic example of an isolating language is Chinese; the classic example of an agglutinative language is Turkish; both Latin and Greek are classic examples of fusional languages.
Considering the variability of the world's languages, it becomes clear that this classification is not at all clear-cut: many languages do not neatly fit any one of these types, and some fit in more than one way. A continuum of morphological complexity may be adopted when considering languages. The three models of morphology stem from attempts to analyze languages that more or less match the different categories in this typology. The Item-and-Arrangement approach fits very naturally with agglutinative languages, while the Item-and-Process and Word-and-Paradigm approaches usually address fusional languages. The reader should also note that the classical typology mostly applies to inflectional morphology; there is very little fusion going on in word formation. Languages may be classified as synthetic or analytic in their word formation, depending on the preferred way of expressing notions that are not inflectional: either by using word formation (synthetic), or by using syntactic phrases (analytic).
Syntax For other uses, see Syntax (disambiguation). Not to be confused with Sin tax. See also Syntaxis.
In linguistics, syntax (from Ancient Greek σύνταξις "arrangement" from σύν syn, "together", and τάξις táxis, "an ordering") is "the study of the principles and processes by which sentences are constructed in particular languages".[1] In addition to referring to the overarching discipline, the term syntax is also used to refer directly to the rules and principles that govern the sentence structure of any individual language, for example in "the syntax of Modern Irish." Modern research in syntax attempts to describe languages in terms of such rules. Many professionals in this discipline attempt to find general rules that apply to all natural languages. The term syntax is also used to refer to the rules governing the behavior of mathematical systems, such as formal languages used in logic. (See Logical syntax.)
Early history Works on grammar were written long before modern syntax came about; the Aṣṭādhyāyī of Pāṇini is often cited as an example of a premodern work that approaches the sophistication of a modern syntactic theory.[2] In the West, the school of thought that came to be known as "traditional grammar" began with the work of Dionysius Thrax. For centuries, work in syntax was dominated by a framework known as grammaire générale, first expounded in 1660 by Antoine Arnauld in a book of the same title. This system took as its basic premise the assumption that language is a direct reflection of thought processes and therefore there is a single, most natural way to express a thought. (That natural way, coincidentally, was exactly the way it was expressed in French.) The Port-Royal grammar modeled the study of syntax upon that of logic (indeed, large parts of the Port-Royal Logic were copied or adapted from the Grammaire générale[3]). Syntactic categories were identified with logical ones, and all sentences were analyzed in terms of "Subject – Copula – Predicate". Initially, this view was adopted even by the early comparative linguists such as Franz Bopp. However, in the 19th century, with the development of historical-comparative linguistics, linguists began to realize the sheer diversity of human language and to question fundamental assumptions about the relationship between language and logic. It became apparent that there was no such thing as the most natural way to express a thought, and therefore logic could no longer be relied upon as a basis for studying the structure of language. The central role of syntax within theoretical linguistics became clear only in the 20th century, which could reasonably be called the "century of syntactic theory" as far as linguistics is concerned. For a detailed and critical survey of the history of syntax in the last two centuries, see the monumental work by Giorgio Graffi (2001).[4]
Modern theories There are a number of theoretical approaches to the discipline of syntax. One school of thought, founded in the works of Derek Bickerton,[5] sees syntax as a branch of biology, since it conceives of syntax as the study of linguistic knowledge as embodied in the human mind. Other linguists (e.g. Gerald Gazdar) take a more Platonistic view, since they regard syntax to be the study of an abstract formal system.[6] Yet others (e.g. Joseph Greenberg) consider grammar a taxonomical device to reach broad generalizations across languages. Generative grammar Main article: Generative grammar
The hypothesis of generative grammar is that language is a structure of the human mind. The goal of generative grammar is to make a complete model of this inner language (known as i-language). This model could be used to describe all human language and to predict the grammaticality of any given utterance (that is, to predict whether the utterance would sound correct to native speakers of the language). This approach to language was pioneered by Noam Chomsky. Most generative theories (although not all of them) assume that syntax is based upon the constituent structure of sentences. Generative grammars are among the theories that focus primarily on the form of a sentence, rather than its communicative function. Among the many generative theories of linguistics, the Chomskyan theories are:
Transformational grammar (TG) (Original theory of generative syntax laid out by Chomsky in Syntactic Structures in 1957)[7] Government and binding theory (GB) (revised theory in the tradition of TG developed mainly by Chomsky in the 1970s and 1980s)[8] Minimalist program (MP) (a reworking of the theory out of the GB framework published by Chomsky in 1995)[9]
Other theories that find their origin in the generative paradigm are:
Generative semantics (now largely out of date) Relational grammar (RG) (now largely out of date) Arc pair grammar Generalized phrase structure grammar (GPSG; now largely out of date) Head-driven phrase structure grammar (HPSG) Lexical functional grammar (LFG)
Nanosyntax
Categorial grammar Main article: Categorial grammar
Categorial grammar is an approach that attributes the syntactic structure not to rules of grammar, but to the properties of the syntactic categories themselves. For example, rather than asserting that sentences are constructed by a rule that combines a noun phrase (NP) and a verb phrase (VP) (e.g. the phrase structure rule S → NP VP), in categorial grammar such principles are embedded in the category of the head word itself. So the syntactic category for an intransitive verb is a complex formula representing the fact that the verb acts as a function word requiring an NP as an input and produces a sentence-level structure as an output. This complex category is notated as (NP\S) instead of V. NP\S is read as "a category that searches to the left (indicated by \) for a NP (the element on the left) and outputs a sentence (the element on the right)". The category of a transitive verb is defined as an element that requires two NPs (its subject and its direct object) to form a sentence. This is notated as (NP/(NP\S)), which means "a category that searches to the right (indicated by /) for an NP (the object), and generates a function (equivalent to the VP) which is (NP\S), which in turn represents a function that searches to the left for an NP and produces a sentence". Tree-adjoining grammar is a categorial grammar that adds in partial tree structures to the categories. Dependency grammar Main article: Dependency grammar
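The slash notation above can be made concrete with a minimal combinator for backward application, the rule that lets an argument on the left combine with a functor of the form A\B. Categories are encoded here as plain strings; this is an illustrative encoding, not any particular published formalism.

```python
# Minimal sketch of categorial-grammar derivation by backward
# application: a functor of form (A\B) consumes an A on its left
# and yields a B. String encoding is an illustrative assumption.
def backward_apply(left: str, right: str) -> str:
    """Combine a left-hand argument with a right-hand functor (A\\B)."""
    functor = right.strip("()")
    arg, _, result = functor.partition("\\")
    if arg != left:
        raise ValueError(f"{right} expects a {arg} on its left, got {left}")
    return result

# "John sleeps": John has category NP, sleeps has category (NP\S)
print(backward_apply("NP", "(NP\\S)"))  # S
```

The derivation succeeds only when the category of the word on the left matches what the functor demands, which is exactly how categorial grammar replaces phrase structure rules with category-internal requirements.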
Dependency grammar is an approach to sentence structure where syntactic units are arranged according to the dependency relation, as opposed to the constituency relation of phrase structure grammars. Dependencies are directed links between words. The (finite) verb is seen as the root of all clause structure and all the other words in the clause are either directly or indirectly dependent on this root. Some prominent dependency-based theories of syntax:
Algebraic syntax Word grammar Operator grammar Meaning–text theory Functional generative description
Lucien Tesnière (1893–1954) is widely seen as the father of modern dependency-based theories of syntax and grammar. He argued vehemently against the binary division of the clause into subject and predicate that is associated with the grammars of his day (S → NP VP) and which remains at the core of all phrase structure grammars, and in the place of this division, he positioned the verb as the root of all clause structure.[10]
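The dependency relation described above can be sketched as directed head–dependent links, with the finite verb as the root of the clause. The example sentence and its link structure below are illustrative assumptions.

```python
# Sketch of a dependency analysis: each word maps to its head, and the
# finite verb is the root (head = None). Sentence is an illustrative example.
deps = {
    "likes": None,      # finite verb = root of the clause
    "Sam": "likes",     # subject depends directly on the verb
    "tea": "likes",     # object depends directly on the verb
    "hot": "tea",       # modifier depends indirectly on the verb, via the noun
}

def root(word: str) -> str:
    """Follow head links upward until the root verb is reached."""
    while deps[word] is not None:
        word = deps[word]
    return word

print(root("hot"))   # likes
print(root("Sam"))   # likes
```

Note there is no subject–predicate split here: every word, directly or through a chain of heads, depends on the verb, which is Tesnière's central claim.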
Stochastic/probabilistic grammars/network theories
Theoretical approaches to syntax that are based upon probability theory are known as stochastic grammars. One common implementation of such an approach makes use of a neural network or connectionism. Some theories based within this approach are:
Optimality theory[citation needed] Stochastic context-free grammar
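The core idea of a stochastic context-free grammar can be sketched by attaching a probability to each rewrite rule; a derivation's probability is then the product of the probabilities of the rules it uses. The toy grammar below is an illustrative assumption, not a grammar from any corpus.

```python
# Toy stochastic context-free grammar: each rule (lhs -> rhs) carries a
# probability; probabilities of rules sharing a left-hand side sum to 1.
rules = {
    ("S", ("NP", "VP")): 1.0,
    ("NP", ("John",)): 0.5,
    ("NP", ("Mary",)): 0.5,
    ("VP", ("sleeps",)): 1.0,
}

def derivation_probability(used_rules) -> float:
    """Probability of a derivation = product of its rule probabilities."""
    p = 1.0
    for r in used_rules:
        p *= rules[r]
    return p

p = derivation_probability([("S", ("NP", "VP")),
                            ("NP", ("John",)),
                            ("VP", ("sleeps",))])
print(p)  # 0.5
```

In a full implementation the rule probabilities would be estimated from a treebank, and a parser would sum or maximize over all derivations of a sentence.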
Functionalist grammars Main article: Functional theories of grammar
Functionalist theories, although focused upon form, are driven by explanation based upon the function of a sentence (i.e. its communicative function). Some typical functionalist theories include:
Functional discourse grammar (Dik) Prague linguistic circle Systemic functional grammar Cognitive grammar Construction grammar (CxG) Role and reference grammar (RRG) Emergent grammar
Semantics
Semantics (from Greek: sēmantikós)[1][2] is the study of meaning. It focuses on the relation between signifiers, such as words, phrases, signs, and symbols, and what they stand for, their denotata. Linguistic semantics is the study of meaning that is used to understand human expression through language. Other forms of semantics include the semantics of programming languages, formal logics, and semiotics. The word semantics itself denotes a range of ideas, from the popular to the highly technical. It is often used in ordinary language to denote a problem of understanding that comes down to word selection or connotation. This problem of understanding has been the subject of many formal enquiries, over a long period of time, most notably in the field of formal semantics. In linguistics, it is the study of interpretation of signs or symbols as used by agents or communities within particular circumstances and contexts.[3] Within this view, sounds, facial expressions, body language, and proxemics have semantic (meaningful) content, and each has several branches of study. In written language, such things as paragraph structure and punctuation have semantic content; in other forms of language, there is other semantic content.[3] The formal study of semantics intersects with many other fields of inquiry, including lexicology, syntax, pragmatics, etymology and others, although semantics is a well-defined field in its own right, often with synthetic properties.[4] In philosophy of language, semantics and reference are closely connected. Further related fields include philology, communication, and semiotics. The formal study of semantics is therefore complex. 
Semantics contrasts with syntax, the study of the combinatorics of units of a language (without reference to their meaning), and pragmatics, the study of the relationships between the symbols of a language, their meaning, and the users of the language.[5] In international scientific vocabulary semantics is also called semasiology.
Linguistics In linguistics, semantics is the subfield that is devoted to the study of meaning, as inherent at the levels of words, phrases, sentences, and larger units of discourse (termed texts). The basic area of study is the meaning of signs, and the study of relations between different linguistic units and compounds: homonymy, synonymy, antonymy, hypernymy, hyponymy, meronymy, metonymy, holonymy, paronyms. A key concern is how meaning attaches to larger chunks of text, possibly as a result of the composition from smaller units of meaning. Traditionally, semantics has included the study of sense and denotative reference, truth conditions, argument structure, thematic roles, discourse analysis, and the linkage of all of these to syntax. Montague grammar
In the late 1960s, Richard Montague proposed a system for defining semantic entries in the lexicon in terms of the lambda calculus. In these terms, the syntactic parse of the sentence John ate every bagel would consist of a subject (John) and a predicate (ate every bagel); Montague showed that the meaning of the sentence as a whole could be decomposed into the meanings of its parts and relatively few rules of combination. The logical predicate thus obtained would be elaborated further, e.g. using truth theory models, which ultimately relate meanings to a set of Tarskian universals, which may lie outside the logic. The notion of such meaning atoms or primitives is basic to the language of thought hypothesis from the 1970s. Despite its elegance, Montague grammar was limited by the context-dependent variability in word sense, and led to several attempts at incorporating context, such as:
Situation semantics (1980s): truth-values are incomplete, they get assigned based on context Generative lexicon (1990s): categories (types) are incomplete, and get assigned based on context
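The compositional idea behind Montague's treatment of John ate every bagel can be sketched with ordinary Python lambdas standing in for lambda-calculus terms. The tiny model (a domain of bagels and an eating relation) is an illustrative assumption, not Montague's actual fragment.

```python
# Sketch of Montague-style composition: the meaning of the sentence is
# built from the meanings of its parts plus a few rules of combination.
# The model below (bagels, the "ate" relation) is a toy assumption.
bagels = {"b1", "b2", "b3"}
ate = {("John", "b1"), ("John", "b2"), ("John", "b3")}  # who ate what

# "every bagel": a function from a property P to a truth value
every_bagel = lambda P: all(P(b) for b in bagels)

# "ate every bagel": a property of individuals x (the predicate)
ate_every_bagel = lambda x: every_bagel(lambda b: (x, b) in ate)

# Sentence meaning = predicate applied to the subject
print(ate_every_bagel("John"))  # True
print(ate_every_bagel("Mary"))  # False
```

The decomposition mirrors the parse: the quantified object combines with the verb to form the predicate, and the predicate applies to the subject, with truth evaluated against the model.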
Dynamic turn in semantics
In Chomskyan linguistics there was no mechanism for the learning of semantic relations, and the nativist view considered all semantic notions as inborn. Thus, even novel concepts were proposed to have been dormant in some sense. This view was also thought unable to address many issues such as metaphor or associative meanings, and semantic change, where meanings within a linguistic community change over time, and qualia or subjective experience. Another issue not addressed by the nativist model was how perceptual cues are combined in thought, e.g. in mental rotation.[6] This view of semantics, as an innate finite meaning inherent in a lexical unit that can be composed to generate meanings for larger chunks of discourse, is now being fiercely debated in the emerging domain of cognitive linguistics[7] and also in the non-Fodorian camp in philosophy of language.[8] The challenge is motivated by:
factors internal to language, such as the problem of resolving indexicality or anaphora (e.g. this x, him, last week). In these situations context serves as the input, but the interpreted utterance also modifies the context, so it is also the output. Thus, the interpretation is necessarily dynamic and the meaning of sentences is viewed as context change potentials instead of propositions. factors external to language, i.e. language is not a set of labels stuck on things, but "a toolbox, the importance of whose elements lie in the way they function rather than their attachments to things."[8] This view reflects the position of the later Wittgenstein and his famous game example, and is related to the positions of Quine, Davidson, and others.
A concrete example of the latter phenomenon is semantic underspecification – meanings are not complete without some elements of context. To take an example of one word, red, its meaning in a phrase such as red book is similar to many other usages, and can be viewed as compositional.[9] However, the colours implied in phrases such as red wine (very dark), and red hair (coppery), or red soil, or red skin are very different. Indeed, these colours by themselves would not be called red by native speakers. These instances are contrastive, so red wine is so called only in comparison with the other kind of wine (which also is not white for the same reasons). This view goes back to de Saussure: Each of a set of synonyms like redouter ('to dread'), craindre ('to fear'), avoir peur ('to be afraid') has its particular value only because they stand in contrast with one another. No word has a value that can be identified independently of what else is in its vicinity.[10]
and may go back to earlier Indian views on language, especially the Nyaya view of words as indicators and not carriers of meaning.[11] An attempt to defend a system based on propositional meaning for semantic underspecification can be found in the generative lexicon model of James Pustejovsky, who extends contextual
operations (based on type shifting) into the lexicon. Thus meanings are generated on the fly based on finite context. Prototype theory
Another set of concepts related to fuzziness in semantics is based on prototypes. The work of Eleanor Rosch in the 1970s led to a view that natural categories are not characterizable in terms of necessary and sufficient conditions, but are graded (fuzzy at their boundaries) and inconsistent as to the status of their constituent members. One may compare this with Jung's archetype, though the concept of the archetype is static. Some post-structuralists are against the fixed or static meaning of words. Derrida, following Nietzsche, talked about slippages in fixed meanings. Some examples of such fuzzy words come from Bangla.[12][13] Systems of categories are not objectively out there in the world but are rooted in people's experience. These categories evolve as learned concepts of the world – meaning is not an objective truth, but a subjective construct, learned from experience, and language arises out of the "grounding of our conceptual systems in shared embodiment and bodily experience".[14] A corollary of this is that the conceptual categories (i.e. the lexicon) will not be identical for different cultures, or indeed, for every individual in the same culture. This leads to another debate (see the Sapir–Whorf hypothesis or Eskimo words for snow). Theories in semantics Model-theoretic semantics Main article: formal semantics (linguistics)
Originates from Montague's work (see above). A highly formalized theory of natural language semantics in which expressions are assigned denotations (meanings) such as individuals, truth values, or functions from one of these to another. The truth of a sentence, and more interestingly, its logical relation to other sentences, is then evaluated relative to a model. Formal (or truth-conditional) semantics Main article: truth-conditional semantics
Pioneered by the philosopher Donald Davidson, another formalized theory, which aims to associate each natural language sentence with a meta-language description of the conditions under which it is true, for example: 'Snow is white' is true if and only if snow is white. The challenge is to arrive at the truth conditions for any sentence from fixed meanings assigned to the individual words and fixed rules for how to combine them. In practice, truth-conditional semantics is similar to model-theoretic semantics; conceptually, however, they differ in that truth-conditional semantics seeks to connect language with statements about the real world (in the form of meta-language statements), rather than with abstract models. Lexical and conceptual semantics Main article: conceptual semantics
This theory is an effort to explain properties of argument structure. The assumption behind this theory is that syntactic properties of phrases reflect the meanings of the words that head them.[15] With this theory, linguists can better deal with the fact that subtle differences in word meaning correlate with other differences in the syntactic structure that the word appears in.[15] The way this is gone about is by looking at the internal structure of words.[16] These small parts that make up the internal structure of words are termed semantic primitives.[16] Lexical semantics Main article: lexical semantics
A linguistic theory that investigates word meaning. This theory understands that the meaning of a word is fully reflected by its context. Here, the meaning of a word is constituted by its contextual relations.[17] Therefore, a distinction between degrees of participation as well as modes of participation are made.[17] In order to accomplish this distinction any part of a sentence that bears a meaning and combines with the meanings of other constituents is labeled as a semantic constituent. Semantic constituents that cannot be broken down into more elementary constituents are labeled minimal semantic constituents.[17] Computational semantics Main article: computational semantics
Computational semantics is focused on the processing of linguistic meaning. In order to do this, concrete algorithms and architectures are described. Within this framework the algorithms and architectures are also analyzed in terms of decidability, time/space complexity, the data structures they require, and communication protocols.[18]
Computer science Main article: semantics (computer science)
In computer science, the term semantics refers to the meaning of languages, as opposed to their form (syntax). According to Euzenat, semantics "provides the rules for interpreting the syntax which do not provide the meaning directly but constrains the possible interpretations of what is declared."[19] In other words, semantics is about interpretation of an expression. Additionally, the term is applied to certain types of data structures specifically designed and used for representing information content. Programming languages
The semantics of programming languages and other languages is an important issue and area of study in computer science. Like the syntax of a language, its semantics can be defined exactly. For instance, the following statements use different syntaxes, but cause the same instructions to be executed:
Statement               Programming languages
x += y                  C, C++, C#, Java, Perl, Python, Ruby, PHP, etc.
x := x + y              ALGOL, BL, Simula, ALGOL 68, SETL, Pascal, Smalltalk, Modula-2, Ada, Standard ML, OCaml, Eiffel, Object Pascal (Delphi), Oberon, Dylan, VHDL, etc.
ADD x, y                Assembly languages: Intel 8086
LET X = X + Y           BASIC: early
x = x + y               BASIC: most dialects; Fortran, MATLAB
Set x = x + y           Caché ObjectScript
ADD Y TO X GIVING X     COBOL
(incf x y)              Common Lisp
Generally these operations would all perform an arithmetical addition of 'y' to 'x' and store the result in a variable called 'x'. Various ways have been developed to describe the semantics of programming languages formally, building on mathematical logic:[20]
Operational semantics: The meaning of a construct is specified by the computation it induces when it is executed on a machine. In particular, it is of interest how the effect of a computation is produced. Denotational semantics: Meanings are modelled by mathematical objects that represent the effect of executing the constructs. Thus only the effect is of interest, not how it is obtained. Axiomatic semantics: Specific properties of the effect of executing the constructs are expressed as assertions. Thus there may be aspects of the executions that are ignored.
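As a sketch of the operational flavour, the differing surface syntaxes in the table above can all be mapped to one and the same state transformation. The mini-interpreter below is a deliberately minimal illustration (it matches whole statements rather than parsing them), not a real language implementation.

```python
# Toy operational semantics: three surface syntaxes from the table,
# one meaning - a transformation of the variable state. The statement
# matching is deliberately minimal; this is not a real parser.
def run(statement: str, state: dict) -> dict:
    """Interpret an add-and-assign statement over a variable state."""
    state = dict(state)                       # leave the input state untouched
    if statement == "x += y":                 # C-family syntax
        state["x"] += state["y"]
    elif statement == "x := x + y":           # ALGOL-family syntax
        state["x"] += state["y"]
    elif statement == "ADD Y TO X GIVING X":  # COBOL syntax
        state["x"] += state["y"]
    return state

s0 = {"x": 2, "y": 3}
for stmt in ("x += y", "x := x + y", "ADD Y TO X GIVING X"):
    print(run(stmt, s0))  # {'x': 5, 'y': 3} each time
```

Identical output for all three statements is the point: syntax differs, the induced computation (the operational meaning) is the same.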
Semantic models
Terms such as semantic network and semantic data model are used to describe particular types of data models characterized by the use of directed graphs in which the vertices denote concepts or entities in the world, and the arcs denote relationships between them. The Semantic Web refers to the extension of the World Wide Web via embedding added semantic metadata, using semantic data modelling techniques such as Resource Description Framework (RDF) and Web Ontology Language (OWL).
Psychology In psychology, semantic memory is memory for meaning – in other words, the aspect of memory that preserves only the gist, the general significance, of remembered experience – while episodic memory is memory for the ephemeral details – the individual features, or the unique particulars of experience. Word meanings are measured by the company they keep, i.e. the relationships among words themselves in a semantic network. The memories may be transferred intergenerationally or isolated in one generation due to a cultural disruption. Different generations may have different experiences at similar points in their own time-lines. This may then create a vertically heterogeneous semantic net for certain words in an otherwise homogeneous culture.[21] In a network created by people analyzing their understanding of the word (such as Wordnet) the links and decomposition structures of the network are few in number and kind, and include part of, kind of, and similar links. In automated ontologies the links are computed vectors without explicit meaning. Various automated technologies are being developed to compute the meaning of words: latent semantic indexing and support vector machines, as well as natural language processing, neural networks, and predicate calculus techniques. Ideasthesia is a rare psychological phenomenon that in certain individuals associates semantic and sensory representations. Activation of a concept (e.g., that of the letter A) evokes sensory-like experiences (e.g., of red color).
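A semantic network with the link types named above ("kind of", "part of") can be sketched as a set of labeled directed edges, with inference by following links transitively. The entries below are a classic textbook-style toy example, not actual WordNet data.

```python
# Tiny semantic network: labeled directed edges between concepts.
# The entries are an illustrative toy example, not WordNet data.
edges = [
    ("canary", "kind_of", "bird"),
    ("bird", "kind_of", "animal"),
    ("wing", "part_of", "bird"),
]

def is_kind_of(concept: str, category: str) -> bool:
    """Follow kind_of links transitively through the network."""
    for src, rel, dst in edges:
        if src == concept and rel == "kind_of":
            if dst == category or is_kind_of(dst, category):
                return True
    return False

print(is_kind_of("canary", "animal"))  # True
print(is_kind_of("wing", "animal"))    # False
```

In the "automated ontology" style mentioned above, these discrete labeled links would instead be replaced by computed vectors, with similarity standing in for explicit relations.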
Pragmatics This article is about the subfield of linguistics. For other uses, see Pragmatic.
Pragmatics is a subfield of linguistics which studies the ways in which context contributes to meaning. Pragmatics encompasses speech act theory, conversational implicature, talk in interaction and other approaches to language behavior in philosophy, sociology, linguistics, and anthropology.[1] Unlike semantics, which examines meaning that is conventional or "coded" in a given language, pragmatics studies how the transmission of meaning depends not only on structural and linguistic knowledge (e.g., grammar, lexicon, etc.) of the speaker and listener, but also on the context of the utterance, any preexisting knowledge about those involved, the inferred intent of the speaker, and other factors.[2] In this respect, pragmatics explains how language users are able to overcome apparent ambiguity, since meaning relies on the manner, place, time etc. of an utterance.[1] The ability to understand another speaker's intended meaning is called pragmatic competence.[3][4][5]
Structural ambiguity The sentence "You have a green light" is ambiguous. Without knowing the context, the identity of the speaker, and his or her intent, it is difficult to infer the meaning with confidence. For example:
It could mean that you have green ambient lighting.
It could mean that you have a green light while driving your car.
It could mean that you can go ahead with the project.
It could mean that your body has a green glow.
It could mean that you possess a light bulb that is tinted green.
Similarly, the sentence "Sherlock saw the man with binoculars" could mean that Sherlock observed the man by using binoculars, or it could mean that Sherlock observed a man who was holding binoculars.[6] The meaning of the sentence depends on an understanding of the context and the speaker's intent. As defined in linguistics, a sentence is an abstract entity — a string of words divorced from non-linguistic context — as opposed to an utterance, which is a concrete
example of a speech act in a specific context. The closer conscious subjects stick to common words, idioms, phrasings, and topics, the more easily others can surmise their meaning; the further they stray from common expressions and topics, the wider the variations in interpretations. This suggests that sentences do not have meaning intrinsically; there is no meaning associated with a sentence or word in itself; they can only symbolically represent an idea. The cat sat on the mat is a sentence in English; if you say to your sister on Tuesday afternoon, "The cat sat on the mat," this is an example of an utterance. Thus, there is no such thing as a sentence, term, expression or word symbolically representing a single true meaning; it is underspecified (which cat sat on which mat?) and potentially ambiguous. The meaning of an utterance, on the other hand, is inferred based on linguistic knowledge and knowledge of the non-linguistic context of the utterance (which may or may not be sufficient to resolve ambiguity). In mathematics, Berry's paradox revealed a systematic ambiguity with the word "definable". The ambiguity of words shows that the descriptive power of any human language is limited.
Etymology The word pragmatics derives via Latin pragmaticus from the Greek πραγματικός (pragmatikos), meaning amongst others "fit for action",[7] which comes from πρᾶγμα (pragma), "deed, act",[8] and that from πράσσω (prassō), "to pass over, to practise, to achieve".[9] Pragmatics was a reaction to structuralist linguistics as outlined by Ferdinand de Saussure. In many cases, it expanded upon his idea that language has an analyzable structure, composed of parts that can be defined in relation to others. Pragmatics first engaged only in synchronic study, as opposed to examining the historical development of language. However, it rejected the notion that all meaning comes from signs existing purely in the abstract space of langue. Meanwhile, historical pragmatics has also come into being.
Areas of interest
The study of the speaker's meaning, not focusing on the phonetic or grammatical form of an utterance, but instead on what the speaker's intentions and beliefs are.
The study of the meaning in context, and the influence that a given context can have on the message. It requires knowledge of the speaker's identities, and the place and time of the utterance.
The study of metapragmatics, i.e. the context in which the speech event took place. Without the context, pure referential meanings elide the complexities of any speech utterance.
The study of implicatures, i.e. the things that are communicated even though they are not explicitly expressed.
The study of relative distance, both social and physical, between speakers in order to understand what determines the choice of what is said and what is not said.
The study of what is not meant, as opposed to the intended meaning, i.e. that which is unsaid and unintended, or unintentional.
Information Structure, the study of how utterances are marked in order to efficiently manage the common ground of referred entities between speaker and hearer.
Formal Pragmatics, the study of those aspects of meaning and use for which context of use is an important factor, by using the methods and goals of formal semantics.
Referential uses of language When we speak of the referential uses of language we are talking about how we use signs to refer to certain items. Below is an explanation of, first, what a sign is and, second, how meanings are accomplished through its usage.

A sign is the link or relationship between a signified and the signifier as defined by Saussure and Huguenin. The signified is some entity or concept in the world. The signifier represents the signified. An example would be:

Signified: the concept cat
Signifier: the word "cat"

The relationship between the two gives the sign meaning. This relationship can be further explained by considering what we mean by "meaning." In pragmatics, there are two different types of meaning to consider: semantico-referential meaning and indexical meaning.

Semantico-referential meaning refers to the aspect of meaning which describes events in the world that are independent of the circumstance they are uttered in. An example would be a proposition such as:

"Santa Claus eats cookies."

In this case, the proposition describes that Santa Claus eats cookies. The meaning of this proposition does not rely on whether or not Santa Claus is eating cookies at the time of its utterance. Santa Claus could be eating cookies at any time and the meaning of the proposition would remain the same. The meaning is simply describing something that is the case in the world. In contrast, the proposition "Santa Claus is eating a cookie right now" describes events that are happening at the time the proposition is uttered. Semantico-referential meaning is also present in meta-semantical statements such as:

Tiger: omnivorous, a mammal

If someone were to say that a tiger is an omnivorous animal in one context and a mammal in another, the definition of tiger would still be the same. The meaning of the sign tiger describes some animal in the world, which does not change in either circumstance.
Indexical meaning, on the other hand, is dependent on the context of the utterance and has rules of use. By rules of use, it is meant that indexicals can tell you when they are used, but not what they actually mean. Example:

"I"

Whom "I" refers to depends on the context and the person uttering it. As mentioned, these meanings are brought about through the relationship between the signified and the signifier. One way to define the relationship is by placing signs in two categories: referential indexical signs, also called "shifters," and pure indexical signs.

Referential indexical signs are signs where the meaning shifts depending on the context, hence the nickname "shifters." "I" would be considered a referential indexical sign. The referential aspect of its meaning would be "1st person singular" while the indexical aspect would be the person who is speaking (refer above for definitions of semantico-referential and indexical meaning). Another example would be:

"This"
Referential: singular count
Indexical: close by

A pure indexical sign does not contribute to the meaning of the proposition at all. It is an example of a "non-referential use of language."

A second way to define the signified–signifier relationship is C. S. Peirce's trichotomy. Its components are the following:

1. Icon: the signified resembles the signifier (signified: a dog's barking noise, signifier: bow-wow)
2. Index: the signified and signifier are linked by proximity, or the signifier has meaning only because it is pointing to the signified
3. Symbol: the signified and signifier are arbitrarily linked (signified: a cat, signifier: the word cat)

These relationships allow us to use signs to convey what we want to say. If two people were in a room and one of them wanted to refer to a characteristic of a chair in the room, he would say "this chair has four legs" instead of "a chair has four legs." The former relies on context (indexical and referential meaning) by referring to a chair specifically in the room at that moment, while the latter is independent of the context (semantico-referential meaning), meaning the concept chair.
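The behavior of shifters can be made concrete with a toy sketch (the `resolve` function and its context fields are hypothetical illustrations, not a standard linguistic formalism): the same indexical sign picks out different referents under different utterance contexts, while a non-indexical sign keeps its fixed, semantico-referential meaning.

```python
# Toy model of indexical ("shifter") resolution: an indexical's
# rule of use maps the utterance context to a referent.
def resolve(sign, context):
    shifters = {
        "I": lambda c: c["speaker"],    # referential aspect: 1st person singular
        "this": lambda c: c["nearby"],  # referential aspect: singular, close by
    }
    if sign in shifters:
        return shifters[sign](context)
    # Non-indexical signs keep their fixed meaning regardless of context.
    return sign

# The same sign, two contexts, two referents:
print(resolve("I", {"speaker": "Alice", "nearby": "chair"}))  # Alice
print(resolve("I", {"speaker": "Bob", "nearby": "mat"}))      # Bob
# A non-indexical sign is unaffected by context:
print(resolve("cat", {"speaker": "Bob", "nearby": "mat"}))    # cat
```

The point of the sketch is the division of labor described above: the rule of use ("whoever is speaking") is fixed, but what the sign actually refers to is supplied only by the context.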
Non-referential uses of language Silverstein's "pure" indexes
Michael Silverstein has argued that "nonreferential" or "pure" indices do not contribute to an utterance's referential meaning but instead "signal some particular value of one or more
contextual variables."[10] Although nonreferential indexes are devoid of semantico-referential meaning, they do encode "pragmatic" meaning. The sorts of contexts that such indexes can mark are varied. Examples include:
Sex indexes are affixes or inflections that index the sex of the speaker, e.g. the verb forms of female Koasati speakers take the suffix "-s".

Deference indexes are words that signal social differences (usually related to status or age) between the speaker and the addressee. The most common example of a deference index is the V form in a language with a T-V distinction, the widespread phenomenon in which there are multiple second-person pronouns that correspond to the addressee's relative status or familiarity to the speaker. Honorifics are another common form of deference index and demonstrate the speaker's respect or esteem for the addressee via special forms of address and/or self-humbling first-person pronouns.

An affinal taboo index is an example of avoidance speech that produces and reinforces sociological distance, as seen in the Aboriginal Dyirbal language of Australia. In this language and some others, there is a social taboo against the use of the everyday lexicon in the presence of certain relatives (mother-in-law, child-in-law, paternal aunt's child, and maternal uncle's child). If any of those relatives are present, a Dyirbal speaker has to switch to a completely separate lexicon reserved for that purpose.
In all of these cases, the semantico-referential meaning of the utterances is unchanged from that of the other possible (but often impermissible) forms, but the pragmatic meaning is vastly different.
The performative
Main articles: Performative utterance, Speech act theory J.L. Austin introduced the concept of the performative, contrasted in his writing with "constative" (i.e. descriptive) utterances. According to Austin's original formulation, a performative is a type of utterance characterized by two distinctive features:
It is not truth-evaluable (i.e. it is neither true nor false)
Its uttering performs an action rather than simply describing one
However, a performative utterance must also conform to a set of felicity conditions. Examples:
"I hereby pronounce you man and wife."
"I accept your apology."
"This meeting is now adjourned."
Jakobson's six functions of language Main article: Jakobson's functions of language
The six factors of effective verbal communication; to each one corresponds a communication function.[11]
Roman Jakobson, expanding on the work of Karl Bühler, described six "constitutive factors" of a speech event, each of which represents the privileging of a corresponding function, and only one of which is the referential (which corresponds to the context of the speech event). The six constitutive factors and their corresponding functions are diagrammed below.

The six constitutive factors of a speech event

              Context
              Message
Addresser---------------------Addressee
              Contact
              Code

The six functions of language

              Referential
              Poetic
Emotive-----------------------Conative
              Phatic
              Metalingual
The Referential Function corresponds to the factor of Context and describes a situation, object or mental state. The descriptive statements of the referential function can consist of both definite descriptions and deictic words, e.g. "The autumn leaves have all fallen now." The Expressive (alternatively called "emotive" or "affective") Function relates to the Addresser and is best exemplified by interjections and other sound changes that do not alter the
denotative meaning of an utterance but do add information about the Addresser's (speaker's) internal state, e.g. "Wow, what a view!" The Conative Function engages the Addressee directly and is best illustrated by vocatives and imperatives, e.g. "Tom! Come inside and eat!" The Poetic Function focuses on "the message for its own sake"[12] and is the operative function in poetry as well as slogans. The Phatic Function is language for the sake of interaction and is therefore associated with the Contact factor. The Phatic Function can be observed in greetings and casual discussions of the weather, particularly with strangers. The Metalingual (alternatively called "metalinguistic" or "reflexive") Function is the use of language (what Jakobson calls "Code") to discuss or describe itself.
Related fields There is considerable overlap between pragmatics and sociolinguistics, since both share an interest in linguistic meaning as determined by usage in a speech community. However, sociolinguists tend to be more interested in variations in language within such communities. Pragmatics helps anthropologists relate elements of language to broader social phenomena; it thus pervades the field of linguistic anthropology. Because pragmatics describes generally the forces in play for a given utterance, it includes the study of power, gender, race, identity, and their interactions with individual speech acts. For example, the study of code switching directly relates to pragmatics, since a switch in code effects a shift in pragmatic force.[12] According to Charles W. Morris, pragmatics tries to understand the relationship between signs and their users, while semantics tends to focus on the actual objects or ideas to which a word refers, and syntax (or "syntactics") examines relationships among signs or symbols. Semantics is the literal meaning of an idea whereas pragmatics is the implied meaning of the given idea. Speech Act Theory, pioneered by J.L. Austin and further developed by John Searle, centers around the idea of the performative, a type of utterance that performs the very action it describes. Speech Act Theory's examination of Illocutionary Acts has many of the same goals as pragmatics, as outlined above.
Pragmatics in literary theory Pragmatics (more specifically, Speech Act Theory's notion of the performative) underpins Judith Butler's theory of gender performativity. In Gender Trouble, she claims that gender and sex are not natural categories, but socially constructed roles produced by "reiterative acting." In Excitable Speech she extends her theory of performativity to hate speech and censorship, arguing that censorship necessarily strengthens any discourse it tries to suppress and therefore, since the state has sole power to define hate speech legally, it is the state that makes hate speech performative.
Jacques Derrida remarked that some work done under Pragmatics aligned well with the program he outlined in his book Of Grammatology. Émile Benveniste argued that the pronouns "I" and "you" are fundamentally distinct from other pronouns because of their role in creating the subject. Gilles Deleuze and Félix Guattari discuss linguistic pragmatics in the fourth chapter of A Thousand Plateaus ("November 20, 1923--Postulates of Linguistics"). They draw three conclusions from Austin: (1) A performative utterance does not communicate information about an act second-hand—it is the act; (2) Every aspect of language ("semantics, syntactics, or even phonematics") functionally interacts with pragmatics; (3) There is no distinction between language and speech. This last conclusion attempts to refute Saussure's division between langue and parole and Chomsky's distinction between surface structure and deep structure simultaneously.[13]
Significant works
J. L. Austin's How To Do Things With Words
Paul Grice's cooperative principle and conversational maxims
Brown & Levinson's Politeness Theory
Geoffrey Leech's politeness maxims
Levinson's Presumptive Meanings
Jürgen Habermas's universal pragmatics
Dan Sperber and Deirdre Wilson's relevance theory
Dallin D. Oaks's Structural Ambiguity in English: An Applied Grammatical Inventory
American Speech–Language–Hearing Association The American Speech–Language–Hearing Association (ASHA) is a professional association for speech–language pathologists, audiologists, and speech, language, and hearing scientists in the United States and internationally. It has more than 140,000 members and affiliates. The mission of the American Speech–Language–Hearing Association is to promote the interests of and provide the highest quality services for professionals in audiology, speech–language pathology, and speech and hearing science, and to advocate for people with communication disabilities. It was founded in 1925 as the American Academy of Speech Correction; the current name was adopted in 1978. The association's national office is located at Gude Drive and Research Boulevard in Rockville, Maryland. Arlene Pietranton is currently serving as the association's executive director.
Manner of articulation
Articulation visualized by real-time MRI.
In linguistics, manner of articulation describes how the tongue, lips, jaw, and other speech organs are involved in making a sound. Often the concept is only used for the production of consonants, even though the movement of the articulators will also greatly alter the resonant properties of the vocal tract, thereby changing the formant structure of speech sounds that is crucial for the identification of vowels. For any place of articulation, there may be several manners, and therefore several homorganic consonants. One parameter of manner is stricture, that is, how closely the speech organs approach one another. Parameters other than stricture are those involved in the r-like sounds (taps and trills), and the sibilancy of fricatives. Often nasality and laterality are included in manner, but phoneticians such as Peter Ladefoged consider them to be independent.
Stricture From greatest to least stricture, speech sounds may be classified along a cline as stop consonants (with occlusion, or blocked airflow), fricative consonants (with partially blocked and therefore strongly turbulent airflow), approximants (with only slight turbulence), and vowels (with full unimpeded airflow). Affricates often behave as if they were intermediate between stops and fricatives, but phonetically they are sequences of stop plus fricative. Historically, sounds may move along this cline toward less stricture in a process called lenition. The reverse process is fortition.
Other parameters Sibilants are distinguished from other fricatives by the shape of the tongue and how the airflow is directed over the teeth. Fricatives at coronal places of articulation may be sibilant or non-sibilant, sibilants being the more common. Taps and flaps are similar to very brief stops. However, their articulation and behavior are distinct enough to be considered a separate manner, rather than just length. Trills involve the vibration of one of the speech organs. Since trilling is a separate parameter from stricture, the two may be combined. Increasing the stricture of a typical trill results in a trilled fricative. Trilled affricates are also known. Nasal airflow may be added as an independent parameter to any speech sound. It is most commonly found in nasal occlusives and nasal vowels, but nasalized fricatives, taps, and approximants are also found. When a sound is not nasal, it is called oral. Laterality is the release of airflow at the side of the tongue. This can also be combined with other manners, resulting in lateral approximants (the most common), lateral flaps, and lateral fricatives and affricates.
Individual manners
Stop, an oral occlusive, where there is occlusion (blocking) of the oral vocal tract, and no nasal air flow, so the air flow stops completely. Examples include English /p t k/ (voiceless) and /b d ɡ/ (voiced). If the consonant is voiced, the voicing is the only sound made during occlusion; if it is voiceless, a stop is completely silent. What we hear as a /p/ or /k/ is the effect that the onset of the occlusion has on the preceding vowel, as well as the release burst and its effect on the following vowel. The shape and position of the tongue (the place of articulation) determine the resonant cavity that gives different stops their characteristic sounds. All languages have stops.
Nasal, a nasal occlusive, where there is occlusion of the oral tract, but air passes through the nose. The shape and position of the tongue determine the resonant cavity that gives different nasals their characteristic sounds. Examples include English /m, n/. Nearly all languages have nasals, the only exceptions being in the area of Puget Sound and a single language on Bougainville Island.
Fricative, sometimes called spirant, where there is continuous frication (turbulent and noisy airflow) at the place of articulation. Examples include English /f, s/ (voiceless), /v, z/ (voiced), etc. Most languages have fricatives, though many have only an /s/. However, the Indigenous Australian languages are almost completely devoid of fricatives of any kind.
Sibilants are a type of fricative where the airflow is guided by a groove in the tongue toward the teeth, creating a high-pitched and very distinctive sound. These are by far the most common fricatives. Fricatives at coronal (front of tongue) places of articulation are usually, though not always, sibilants. English sibilants include /s/ and /z/.
Lateral fricatives are a rare type of fricative, where the frication occurs on one or both sides of the edge of the tongue. The "ll" of Welsh and the "hl" of Zulu are lateral fricatives.
Affricate, which begins like a stop but releases into a fricative rather than having a separate release of its own. The English letters "ch" and "j" represent affricates. Affricates are quite common around the world, though less common than fricatives.
Flap, often called a tap, is a momentary closure of the oral cavity. The "tt" of "utter" and the "dd" of "udder" are pronounced as a flap in North American and Australian English. Many linguists distinguish taps from flaps, but there is no consensus on what the difference might be. No language relies on such a difference. There are also lateral flaps.
Trill, in which the articulator (usually the tip of the tongue) is held in place, and the airstream causes it to vibrate. The double "r" of Spanish "perro" is a trill. Trills and flaps, where there are one or more brief occlusions, constitute a class of consonant called rhotics.
Approximant, where there is very little obstruction. Examples include English /w/ and /r/. In some languages, such as Spanish, there are sounds that seem to fall between fricative and approximant.
One use of the word semivowel, sometimes called a glide, is a type of approximant, pronounced like a vowel but with the tongue closer to the roof of the mouth, so that there is slight turbulence. In English, /w/ is the semivowel equivalent of the vowel /u/, and /j/ (spelled "y") is the semivowel equivalent of the vowel /i/ in this usage. Other descriptions use semivowel for vowel-like sounds that are not syllabic, but do not have the increased stricture of approximants. These are found as elements in diphthongs. The word may also be used to cover both concepts.
Lateral approximants, usually shortened to lateral, are a type of approximant pronounced with the side of the tongue. English /l/ is a lateral. Together with the rhotics, which have similar behavior in many languages, these form a class of consonant called liquids.
Broader classifications Manners of articulation with substantial obstruction of the airflow (stops, fricatives, affricates) are called obstruents. These are prototypically voiceless, but voiced obstruents are extremely common as well. Manners without such obstruction (nasals, liquids, approximants, and also vowels) are called sonorants because they are nearly always voiced. Voiceless sonorants are uncommon, but are found in Welsh and Classical Greek (the spelling "rh"), in Standard Tibetan (the "lh" of Lhasa), and the "wh" in those dialects of English that distinguish "which" from "witch". Sonorants may also be called resonants, and some linguists prefer that term, restricting the word 'sonorant' to non-vocoid resonants (that is, nasals and liquids, but not vowels or semi-vowels).
Another common distinction is between occlusives (stops and nasals) and continuants (all else); affricates are considered to be both, because they are sequences of stop plus fricative.
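The obstruent/sonorant and occlusive/continuant groupings described above can be made concrete with a small sketch. The phoneme sample and function names here are illustrative assumptions, not a standard inventory or API; affricates are deliberately counted as both occlusive and continuant, as the text notes.

```python
# Illustrative classification of a few English consonants by manner.
MANNER = {
    "p": "stop", "b": "stop", "t": "stop", "d": "stop",
    "f": "fricative", "s": "fricative", "z": "fricative",
    "tʃ": "affricate", "dʒ": "affricate",
    "m": "nasal", "n": "nasal",
    "l": "approximant", "w": "approximant", "j": "approximant",
}

def is_obstruent(ph):
    # Obstruents substantially obstruct the airflow:
    # stops, fricatives, and affricates.
    return MANNER[ph] in {"stop", "fricative", "affricate"}

def is_occlusive(ph):
    # Occlusives are stops and nasals; affricates count as both
    # occlusive and continuant, being stop-plus-fricative sequences.
    return MANNER[ph] in {"stop", "nasal", "affricate"}

print(is_obstruent("s"), is_obstruent("m"))   # True False
print(is_occlusive("n"), is_occlusive("f"))   # True False
print(is_obstruent("tʃ"), is_occlusive("tʃ")) # True True
```

Nasals illustrate why the two distinctions cross-cut: /m/ and /n/ are occlusives (full oral closure) yet sonorants rather than obstruents, since air flows freely through the nose.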
Other airstream initiations All of these manners of articulation are pronounced with an airstream mechanism called pulmonic egressive, meaning that the air flows outward, and is powered by the lungs (actually the ribs and diaphragm). Other airstream mechanisms are possible. Sounds that rely on some of these include:
Ejectives, which are glottalic egressive. That is, the airstream is powered by an upward movement of the glottis rather than by the lungs or diaphragm. Stops, affricates, and occasionally fricatives may occur as ejectives. All ejectives are voiceless, or at least transition from voiced to voiceless.
Implosives, which are glottalic ingressive. Here the glottis moves downward, but the lungs may be used simultaneously (to provide voicing), and in some languages no air may actually flow into the mouth. Implosive stops are not uncommon, but implosive affricates and fricatives are rare. Voiceless implosives are also rare.
Clicks, which are lingual ingressive. Here the back of the tongue is used to create a vacuum in the mouth, causing air to rush in when the forward occlusion (tongue or lips) is released. Clicks may be oral or nasal, stop or affricate, central or lateral, voiced or voiceless. They are extremely rare in normal words outside Southern Africa. However, English has a click in its "tsk tsk" (or "tut tut") sound, and another is often used to say "giddy up" to a horse.
Combinations of these may occur, in some analyses, in a single consonant: linguo-pulmonic and linguo-glottalic (ejective) consonants, which are clicks released into either a pulmonic or ejective stop/fricative.
Language processing
Broca's and Wernicke's Areas
Language processing refers to the way human beings use words to communicate ideas and feelings, and how such communications are processed and understood. Thus it is how the brain creates and understands language. Most recent theories consider that this process is carried out entirely by and inside the brain. This is considered one of the most characteristic abilities of the human species - perhaps the most characteristic. However, very little is known about it, and there is huge scope for research on it. Most of the knowledge acquired to date on the subject has come from patients who have suffered some type of significant head injury, whether external (wounds, bullets) or internal (strokes, tumors, degenerative diseases). Studies have shown that most of the language processing functions are carried out in the cerebral cortex. The essential function of the cortical language areas is symbolic representation. Even though language exists in different forms, all of them are based on symbolic representation.[1]
Neural basis for language Much of the language function is processed in several association areas, and there are two well-identified areas that are considered vital for human communication: Wernicke's area and Broca's area. These areas are usually located in the dominant hemisphere (the left hemisphere in 97% of people) and are considered the most important areas for language processing. This is why language is considered a localized and lateralized function.[2] However, the less-dominant hemisphere also participates in this cognitive function, and there is ongoing debate on the level of participation of the less-dominant areas.[3] Other factors are believed to be relevant to language processing and verbal fluency, such as cortical thickness, participation of prefrontal areas of the cortex, and communication between right and left hemispheres.
Wernicke's area
Lateral surface of the brain with Brodmann's areas numbered.
Wernicke's area is classically located in the posterior section of the superior temporal gyrus of the dominant hemisphere (Brodmann area 22), with some branches extending around the posterior section of the lateral sulcus, in the parietal lobe.[4] Considering its position, Wernicke's area is located between the auditory cortex and the visual cortex. The former is located in the transverse temporal gyrus (Brodmann areas 41 and 42), in the temporal lobe, while the latter is located in the posterior section of the occipital lobe (Brodmann areas 17, 18 and 19).[4] While the dominant hemisphere is in charge of most of language comprehension, recent studies have demonstrated that the less dominant (right hemisphere in 97% of people) homologous area participates in the comprehension of ambiguous words, whether they are written or heard.[5] Receptive speech has traditionally been associated with Wernicke's area of the posterior superior temporal gyrus (STG) and surrounding areas. Current models of speech perception include greater Wernicke's area, but also implicate a "dorsal" stream that includes regions also involved in speech motor processing.[6] First identified by Carl Wernicke in 1874, its main function is the comprehension of language and the ability to communicate coherent ideas, whether the language is vocal, written, or signed.[2]
Broca's area Broca's area is usually formed by the pars triangularis and the pars opercularis of the inferior frontal gyrus (Brodmann areas 44 and 45). It follows Wernicke's area, and as such they both are usually located in the left hemisphere of the brain.[4] Broca's area is involved mostly in the production of speech. Given its proximity to the motor cortex, neurons from Broca's area send signals to the larynx, tongue and mouth motor areas, which in turn send the signals to the corresponding muscles, thus allowing the creation of sounds.[4] A recent analysis of the specific roles of these sections of the left inferior frontal gyrus in verbal fluency indicates that Brodmann area 44 (pars opercularis) may subserve phonological fluency, whereas the Brodmann area 45 (pars triangularis) may be more involved in semantic fluency.[7]
Arcuate fasciculus
Diffusion tensor imaging of the brain showing the right and left arcuate fasciculus; also shown are the right and left superior longitudinal fasciculus and the tapetum of the corpus callosum. Image provided by Aaron G. Filler, MD, PhD.
The arcuate fasciculus is a bundle of nerve fibers believed to connect the posterior part of the temporal-parietal junction with the frontal lobe, which roughly translates into connecting Wernicke's area with Broca's area, making it an important association pathway.[8] Newer research demonstrates that the arcuate fasciculus instead connects posterior receptive areas with motor areas generally, and not with Broca's area in particular. Since some branches of the arcuate fasciculus extend into the parietal lobe, it is believed to play an important role in attention.[8]
Cortical thickness and verbal fluency
Recent studies have shown that the rate of increase in raw vocabulary fluency is positively correlated with the rate of cortical thinning; in other words, greater performance improvements were associated with greater thinning. This is most evident in left-hemisphere regions, including the left lateral dorsal frontal and left lateral parietal regions: the usual locations of Broca's area and Wernicke's area, respectively.[7] After Sowell's studies, it was hypothesized that increased performance on the verbal fluency test would correlate with decreased cortical thickness in regions associated with language: the middle and superior temporal cortex, the temporal–parietal junction, and the inferior and middle frontal cortex. Additionally, other areas related to sustained attention for executive tasks were also expected to show cortical thinning.[7] One theory for the relation between cortical thinning and improved language fluency is the effect that synaptic pruning has on signaling between neurons. If cortical thinning reflects synaptic pruning, then pruning may occur relatively early for language-based abilities. The functional benefit would be a tightly honed neural system that is impervious to "neural interference": undesired signals running through the neurons that could worsen verbal fluency.[7] The strongest correlations between language fluency and cortical thickness were found in the temporal lobe and the temporal–parietal junction. Significant correlations were also found in the auditory cortex, in the somatosensory cortex related to the organs responsible for speech (lips, tongue and mouth), and in frontal and parietal regions related to attention and performance monitoring. The frontal and parietal correlations are also evident in the right hemisphere.[7]
Oral language
Speech perception
Main article: Speech perception
Acoustic stimuli are received by the auditory organ and converted to bioelectric signals in the organ of Corti. These electrical impulses are then transported through Scarpa's ganglion (vestibulocochlear nerve) to the primary auditory cortex in both hemispheres. Each hemisphere treats the signal differently, however: the left side recognizes distinctive parts such as phonemes, while the right side takes over prosodic characteristics and melodic information. The signal is then transported to Wernicke's area in the left hemisphere (the information processed in the right hemisphere is able to cross through inter-hemispheric axons), where the analysis noted above takes place. During speech comprehension, activations are focused in and around Wernicke's area. A large body of evidence supports a role for the posterior superior temporal gyrus (pSTG) in acoustic–phonetic aspects of speech processing, whereas more ventral sites such as the posterior middle temporal gyrus (pMTG) are thought to play a higher linguistic role, linking the auditory word form to broadly distributed semantic knowledge.[6] Also, the pMTG site shows significant activation during the semantic association interval of the verb generation and picture naming tasks, in contrast to the pSTG sites, which remain at or below baseline levels during this interval. This is consistent with a greater lexical–semantic role for pMTG relative to a more acoustic–phonetic role for pSTG.[6]
Semantic association
Early auditory processing and word recognition take place in inferior temporal areas ("what" pathway), where the signal arrives from the primary and secondary visual cortices. The representation of the object in the "what" pathway and nearby inferior temporal areas itself constitutes a major aspect of the conceptual–semantic representation. Additional semantic and syntactic associations are also activated, and during this interval of highly variable duration (depending on the subject, the difficulty of the current object, etc.), the word to be spoken is selected. This involves some of the same sites – prefrontal cortex (PFC), supramarginal gyrus (SMG), and other association areas – involved in the semantic selection stage of verb generation.[6]
Speech production
From Wernicke's area, the signal is carried to Broca's area through the arcuate fasciculus. Speech production activations begin prior to the verbal response in the peri-Rolandic cortices (pre- and postcentral gyri). The role of the ventral peri-Rolandic cortices in speech motor functions has long been appreciated (Broca's area). The superior portion of the ventral premotor cortex also exhibits auditory responses preferential to speech stimuli and is part of the dorsal stream.[6] Involvement of Wernicke's area in speech production has been suggested, and recent studies document the participation of the traditional Wernicke's area (mid-to-posterior superior temporal gyrus) only in post-response auditory feedback, while demonstrating a clear pre-response activation from the nearby temporal-parietal junction (TPJ).[6] It is believed that the common route to speech production is through verbal and phonological working memory, using the same dorsal stream areas (temporal-parietal junction, sPMv) implicated in speech perception and phonological working memory. The observed pre-response activations at these dorsal stream sites are suggested to subserve phonological encoding and its translation to the articulatory score for speech. Post-response Wernicke's activations, on the other hand, are involved strictly in auditory self-monitoring.[6] Several authors support a model in which the route to speech production runs essentially in reverse of speech perception, going from the conceptual level to the word form to the phonological representation.[6]
Aphasia
Main article: Aphasia
The acquired language disorders that result from brain damage are called aphasias. Depending on the location of the damage, aphasias can present in several different ways. The aphasias listed below are examples of acute aphasias, which can result from brain injury or stroke.
Expressive aphasia: Usually characterized as a nonfluent aphasia, this language disorder is present when injury or damage occurs to or near Broca's area. Individuals with this disorder have a hard time reproducing speech, although most of their cognitive functions remain intact, and are still able to understand language. They frequently omit small words. They are aware of their language disorder and may get frustrated.[9]
Receptive aphasia: Individuals with receptive aphasia are able to produce speech without a problem. However, much of what they produce lacks coherence. At the same time, they have a hard time understanding what others try to communicate. They are often unaware of their mistakes. This disorder occurs when damage is sustained to Wernicke's area.[10]
Conduction aphasia: Characterized by poor speech repetition, this disorder is rather uncommon and happens when branches of the arcuate fasciculus are damaged. Auditory perception is practically intact, and speech generation is maintained. Patients with this disorder are aware of their errors but show significant difficulty correcting them.[11]
Cleft lip and palate
Classification and external resources
Child with cleft lip and palate.
ICD-10: Q35–Q37
ICD-9: 749
MedlinePlus: 001051
eMedicine: ped/2679
Cleft lip (cheiloschisis) and cleft palate (palatoschisis), which can also occur together as cleft lip and palate, are variations of a type of clefting congenital deformity caused by abnormal facial development during gestation. A cleft is a fissure or opening: a gap. It is the non-fusion of the body's natural structures that form before birth. Approximately 1 in 700 children born have a cleft lip and/or a cleft palate. In decades past, the condition was sometimes referred to as harelip, based on the similarity to the cleft in the lip of a hare, but that term is now generally considered offensive. Clefts can also affect other parts of the face, such as the eyes, ears, nose, cheeks, and forehead. In 1976, Paul Tessier described fifteen lines of cleft. Most of these craniofacial clefts are even rarer and are frequently described as Tessier clefts, using the numerical locator devised by Tessier.[1] A cleft lip or palate can be successfully treated with surgery, especially if performed soon after birth or in early childhood.
Signs and symptoms
Cleft lip
If the cleft does not affect the palate structure of the mouth, it is referred to as cleft lip. A cleft lip is formed in the top of the lip as either a small gap or an indentation (partial or incomplete cleft), or it continues into the nose (complete cleft). A cleft lip can occur as one-sided (unilateral) or two-sided (bilateral). It is due to failure of fusion of the maxillary and medial nasal processes (formation of the primary palate).
Unilateral incomplete
Unilateral complete
Bilateral complete
A mild form of a cleft lip is a microform cleft.[2] A microform cleft can appear as small as a little dent in the red part of the lip or look like a scar running from the lip up to the nostril.[3] In some cases the muscle tissue in the lip underneath the scar is affected and may require reconstructive surgery.[4] It is advised to have newborn infants with a microform cleft examined by a craniofacial team as soon as possible to determine the severity of the cleft.[5]
A 6-month-old girl before surgery to repair her unilateral complete cleft lip.
The same girl, 1 month after the surgery.
Same girl, age 8. Note how the scar is almost gone.
Cleft palate
Cleft palate is a condition in which the two plates of the skull that form the hard palate (roof of the mouth) are not completely joined. The soft palate is in these cases cleft as well. In most cases, cleft lip is also present. Cleft palate occurs in about one in 700 live births worldwide.[6] A palate cleft can be complete (soft and hard palate, possibly including a gap in the jaw) or incomplete (a 'hole' in the roof of the mouth, usually as a cleft soft palate). When cleft palate occurs, the uvula is usually split. It occurs due to failure of fusion of the lateral palatine processes, the nasal septum, and/or the median palatine processes (formation of the secondary palate). The hole in the roof of the mouth caused by a cleft connects the mouth directly to the nasal cavity. Note: the following images show the roof of the mouth; the top shows the nose, and the lips are colored pink. For clarity the images depict a toothless infant.
Incomplete cleft palate
Unilateral complete lip and palate
Bilateral complete lip and palate
An open connection between the oral cavity and nasal cavity is called velopharyngeal inadequacy (VPI). Because of the gap, air leaks into the nasal cavity, resulting in hypernasal voice resonance and nasal emissions while talking.[7] Secondary effects of VPI include speech articulation errors (e.g., distortions, substitutions, and omissions) and compensatory misarticulations and mispronunciations (e.g., glottal stops and posterior nasal fricatives).[8] Possible treatment options include speech therapy, prosthetics, augmentation of the posterior pharyngeal wall, lengthening of the palate, and surgical procedures.[7] Submucous cleft palate can also occur: a cleft of the soft palate with a classic clinical triad of a bifid (split) uvula dangling in the back of the throat, a furrow along the midline of the soft palate, and a notch in the back margin of the hard palate.[9]
Psychosocial
Most children who have their clefts repaired early enough are able to have a happy youth and social life. Having a cleft palate/lip does not inevitably lead to a psychosocial problem. However, adolescents with cleft palate/lip are at an elevated risk for developing psychosocial problems, especially those relating to self-concept, peer relationships and appearance. Adolescents may face psychosocial challenges but can find professional help if problems arise. A cleft palate/lip may impact an individual's self-esteem, social skills and behavior, and there is research dedicated to the psychosocial development of individuals with cleft palate. Self-concept may be adversely affected by the presence of a cleft lip and/or cleft palate, particularly among girls.[10] Research has shown that during the early preschool years (ages 3–5), children with cleft lip and/or cleft palate tend to have a self-concept that is similar to that of their peers without a cleft. However, as they grow older and their social interactions increase, children with clefts tend to report more dissatisfaction with peer relationships and higher levels of social anxiety. Experts conclude that this is probably due to the stigma associated with visible deformities and possible speech impediments. Children who are judged as attractive tend to be perceived as more intelligent, exhibit more positive social behaviors, and are treated more positively than children with cleft lip and/or cleft palate.[11] Children with clefts tend to report feelings of anger, sadness, fear, and alienation from their peers, but these children were similar to their peers in regard to "how well they liked themselves." The relationship between parental attitudes and a child's self-concept is crucial during the preschool years.
It has been reported that elevated stress levels in mothers correlate with reduced social skills in their children.[12] Strong parent networks may help to prevent the development of a negative self-concept in children with cleft palate.[13] In the later preschool and early elementary years, the development of social skills is no longer shaped only by parental attitudes but also by peers. A cleft lip and/or cleft palate may affect the behavior of preschoolers. Experts suggest that parents discuss with their children ways to handle negative social situations related to their cleft lip and/or cleft palate. A child who is entering school should learn the proper (and age-appropriate) terminology related to the cleft. The ability to confidently explain the condition to others may limit feelings of awkwardness and embarrassment and reduce negative social experiences.[14]
As children reach adolescence, the period between ages 13 and 19, the dynamics of the parent-child relationship change as peer groups become the focus of attention. An adolescent with cleft lip and/or cleft palate will deal with the typical challenges faced by most of their peers, including issues related to self-esteem, dating and social acceptance.[15][16][17] Adolescents, however, view appearance as the most important characteristic, above intelligence and humor.[18] This being the case, adolescents are susceptible to additional problems because they cannot hide their facial differences from their peers. Adolescent boys typically deal with issues relating to withdrawal, attention, thought, and internalizing problems, and may possibly develop anxious-depressive and aggressive behaviors.[17] Adolescent girls are more likely to develop problems relating to self-concept and appearance. Individuals with cleft lip and/or cleft palate often deal with threats to their quality of life for multiple reasons, including unsuccessful social relationships, deviance in social appearance, and multiple surgeries.
Complications
A baby being fed using a customized bottle. The upright sitting position allows gravity to help the baby swallow the milk more easily.
Cleft may cause problems with feeding, ear disease, speech and socialization. Due to lack of suction, an infant with a cleft may have trouble feeding. An infant with a cleft palate will have greater success feeding in a more upright position, as gravity helps prevent milk from coming through the baby's nose. Gravity feeding can be accomplished by using specialized equipment, such as the Haberman Feeder, or by using a combination of nipples and bottle inserts like the one shown, as is commonly used with other infants. A large hole, crosscut, or slit in the nipple, a protruding nipple, and rhythmically squeezing the bottle insert can provide controllable flow to the infant without the stigma caused by specialized equipment. Individuals with cleft also face frequent middle ear infections, which can eventually lead to total hearing loss. The Eustachian tubes and external ear canals may be angled or tortuous, leading to food or other contamination of a part of the body that is normally self-cleaning. Hearing is related to learning to speak: babies with palatal clefts may have compromised hearing, and a baby who cannot hear cannot try to mimic the sounds of speech. Thus, even before expressive language acquisition, the baby with the cleft palate is at risk for impaired receptive language acquisition. Because the lips and palate are both used in pronunciation, individuals with cleft usually need the aid of a speech therapist.
Cause
The development of the face is coordinated by complex morphogenetic events and rapid proliferative expansion, and is thus highly susceptible to environmental and genetic factors, rationalising the high incidence of facial malformations. During the first six to eight weeks of pregnancy, the shape of the embryo's head is formed. Five primitive tissue lobes grow:
a) one from the top of the head down towards the future upper lip (frontonasal prominence);
b-c) two from the cheeks, which meet the first lobe to form the upper lip (maxillary prominences);
d-e) and, just below, two additional lobes from each side, which form the chin and lower lip (mandibular prominences).
If these tissues fail to meet, a gap appears where the tissues should have joined (fused). This may happen at any single joining site, or simultaneously at several or all of them. The resulting birth defect reflects the locations and severity of the individual fusion failures (e.g., from a small lip or palate fissure up to a completely malformed face). The upper lip is formed earlier than the palate, from the first three lobes (a to c above). Formation of the palate is the last step in joining the five embryonic facial lobes, and involves the back portions of lobes b and c. These back portions are called palatal shelves, which grow towards each other until they fuse in the middle.[19] This process is very vulnerable to multiple toxic substances, environmental pollutants, and nutritional imbalance. The biologic mechanisms of mutual recognition of the two shelves, and the way they are glued together, are quite complex and obscure despite intensive scientific research.[20]
Genetics
Genetic factors contributing to cleft lip and cleft palate formation have been identified for some syndromic cases, but knowledge about genetic factors that contribute to the more common isolated cases of cleft lip/palate is still patchy. Many clefts run in families, even though in some cases there does not seem to be an identifiable syndrome present,[21] possibly because of the current incomplete genetic understanding of midfacial development. A number of genes are involved including cleft lip and palate transmembrane protein 1 and GAD1,[22] one of the glutamate decarboxylases. Many genes are known to play a role in craniofacial development and are being studied through the FaceBase initiative for their part in clefting. These genes are AXIN2, BMP4, FGFR1, FGFR2, FOXE1, IRF6, MAFB (gene), MMP3, MSX1, MSX2 (Msh homeobox 2), MSX3, PAX7, PDGFC, PTCH1, SATB2, SOX9,
SUMO1 (small ubiquitin-related modifier 1), TBX22, TCOF1 (Treacle protein), TFAP2A, VAX1, TP63, ARHGAP29, NOG, NTN1, WNT genes, and the locus 8q24.[23]
Syndromes
Van der Woude syndrome is caused by a specific variation in the gene IRF6 that increases the occurrence of these deformities threefold.[24][25][26] Another syndrome, Siderius X-linked mental retardation, is caused by mutations in the PHF8 gene (OMIM 300263); in addition to cleft lip and/or palate, symptoms include facial dysmorphism and mild mental retardation.[27]
In some cases, cleft palate is caused by syndromes which also cause other problems.
Stickler syndrome can cause cleft lip and palate, joint pain, and myopia.[28][29]
Loeys–Dietz syndrome can cause cleft palate or bifid uvula, hypertelorism, and aortic aneurysm.[30]
Hardikar syndrome can cause cleft lip and palate, hydronephrosis, intestinal obstruction, and other symptoms.[31]
Cleft lip/palate may be present in many different chromosome disorders, including Patau syndrome (trisomy 13).
Other associated syndromes include Malpuech facial clefting syndrome, hearing loss with craniofacial syndromes, popliteal pterygium syndrome, and Treacher Collins syndrome.
Specific genes
Many genes associated with syndromic cases of cleft lip/palate (see above) have been identified to contribute to the incidence of isolated cases of cleft lip/palate. This includes in particular sequence variants in the genes IRF6, PVRL1 and MSX1.[32] The understanding of the genetic complexities involved in the morphogenesis of the midface, including molecular and cellular processes, has been greatly aided by research on animal models, including of the genes BMP4, SHH, SHOX2, FGF10 and MSX1.[32] Types include:

Type   OMIM    Gene   Locus
OFC1   119530  ?      6p24
OFC2   602966  ?      2p13
OFC3   600757  ?      19q13
OFC4   608371  ?      4q
OFC5   608874  MSX1   4p16.1
OFC6   608864  ?      1q
OFC7   600644  PVRL1  11q
OFC8   129400  TP63   3q27
OFC9   610361  ?      13q33.1-q34
OFC10  601912  SUMO1  2q32.2-q33
OFC11  600625  BMP4   14q22
OFC12  612858  ?      8q24.3
Environment
Environmental influences may also cause, or interact with genetics to produce, orofacial clefting. An example of how environmental factors might be linked to genetics comes from research on mutations in the gene PHF8 that cause cleft lip/palate (see above). It was found that PHF8 encodes a histone lysine demethylase[33] and is involved in epigenetic regulation. The catalytic activity of PHF8 depends on molecular oxygen,[33] a fact considered important with respect to reports of increased incidence of cleft lip/palate in mice exposed to hypoxia early during pregnancy.[34] In humans, fetal cleft lip and other congenital abnormalities have also been linked to maternal hypoxia, caused by, for example, maternal smoking,[35] maternal alcohol abuse, or some forms of maternal hypertension treatment.[36] Other environmental factors that have been studied include seasonal causes (such as pesticide exposure); maternal diet and vitamin intake; retinoids (members of the vitamin A family); anticonvulsant drugs; alcohol; cigarette use; nitrate compounds; organic solvents; parental exposure to lead; and illegal drugs (cocaine, crack cocaine, heroin, etc.). Current research continues to investigate the extent to which folic acid can reduce the incidence of clefting.[37]
Diagnosis
Traditionally, the diagnosis is made at the time of birth by physical examination. Recent advances in prenatal diagnosis have allowed obstetricians to diagnose facial clefts in utero.[38]
Treatment
Cleft lip and palate is very treatable; however, the kind of treatment depends on the type and severity of the cleft.
Most children with a form of clefting are monitored by a cleft palate team or craniofacial team through young adulthood.[39] Care can be lifelong. Treatment procedures can vary between craniofacial teams. For example, some teams wait on jaw correction until the child is aged 10 to 12 (argument: growth is less influential once deciduous teeth are replaced by permanent teeth, saving the child from repeated corrective surgeries), while other teams correct the jaw earlier (argument: less speech therapy is needed than at a later age, when speech therapy becomes harder). Within teams, treatment can differ between individual cases depending on the type and severity of the cleft.
Cleft lip
Within the first 2–3 months after birth, surgery is performed to close the cleft lip. While surgery to repair a cleft lip can be performed soon after birth, the often preferred age is approximately 10 weeks, following the "rule of 10s" coined by surgeons Wilhelmmesen and Musgrave in 1969 (the child is at least 10 weeks of age, weighs at least 10 pounds, and has a hemoglobin of at least 10 g/dL).[40] If the cleft is bilateral and extensive, two surgeries may be required to close it: one side first, and the second side a few weeks later. The most common procedure to repair a cleft lip is the Millard procedure, pioneered by Ralph Millard, who performed the first such procedure at a Mobile Army Surgical Hospital (MASH) unit in Korea.[41] Often an incomplete cleft lip requires the same surgery as a complete cleft. This is done for two reasons. Firstly, the group of muscles required to purse the lips runs through the upper lip; in order to restore the complete group, a full incision must be made. Secondly, to create a less obvious scar, the surgeon tries to line up the scar with the natural lines in the upper lip (such as the edges of the philtrum) and tuck away stitches as far up the nose as possible. An incomplete cleft gives the surgeon more tissue to work with, creating a more supple and natural-looking upper lip.
The blue lines indicate incisions.
Movement of the flaps; flap A is moved between B and C. C is rotated slightly while B is pushed down.
Pre-operation
Post-operation: the lip is swollen from surgery and will take on a more natural look within a couple of weeks. See photos in the section above.
Pre-surgical devices
In some cases of a severe bilateral complete cleft, the premaxillary segment will protrude far outside the mouth. Nasoalveolar molding prior to surgery can improve long-term nasal symmetry in patients with complete unilateral cleft lip and palate, compared with correction by surgery alone, according to a retrospective cohort study.[42] In this study, significant improvements in nasal symmetry were observed in multiple areas, including measurements of the projected length of the nasal ala (lateral surface of the external nose), position of the superoinferior alar groove, position of the mediolateral nasal dome, and nasal bridge deviation. "The nasal ala projection length demonstrated an average ratio of 93.0 percent in the surgery-alone group and 96.5 percent in the nasoalveolar molding group," the study concluded.
Cleft palate
A repaired cleft palate in a 64-year-old woman.
Often a cleft palate is temporarily covered by a palatal obturator (a prosthetic device made to fit the roof of the mouth, covering the gap). Cleft palate can also be corrected by surgery, usually performed between 6 and 12 months of age. Approximately 20–25% of patients require only one palatal surgery to achieve a competent velopharyngeal valve capable of producing normal, non-hypernasal speech. However, combinations of surgical methods and repeated surgeries are often necessary as the child grows. One of the newer innovations in cleft lip and cleft palate repair is the Latham appliance.[43] The Latham is surgically inserted by use of pins during the child's 4th or 5th month. After it is in place, the doctor or parents turn a screw daily to bring the cleft together, to assist with future lip and/or palate repair. If the cleft extends into the maxillary alveolar ridge, the gap is usually corrected by filling it with bone tissue, which can be acquired from the patient's own chin, rib or hip.
Speech and hearing
A tympanostomy tube is often inserted into the eardrum to aerate the middle ear.[44] This is often beneficial for the hearing ability of the child. Children with cleft palate typically have a variety of speech problems. Some speech problems result directly from anatomical differences such as velopharyngeal inadequacy, the inability of the soft palate to close the opening from the throat to the nasal cavity, which is necessary for many speech sounds, such as /p/, /b/, /t/, /d/, /s/, and /z/.[45] These errors typically resolve after palate repair.[46] However, sometimes children with cleft palate also have speech errors which develop as the result of an attempt to compensate for the inability to produce the target phoneme. These are known as compensatory articulations. Compensatory articulations are usually sounds that are non-existent in normal English phonology, often do not resolve automatically after palatal repair, and make a child's speech even more difficult to understand.[46][47][48] Speech-language pathology can be very beneficial in resolving speech problems associated with cleft palate. In addition, research has indicated that children who receive early language intervention are less likely to develop compensatory error patterns later.[49]
Hearing loss
Hearing impairment is particularly prevalent in children with cleft palate. The tensor muscle fibres that open the eustachian tubes lack an anchor to function effectively. In this situation, when the air in the middle ear is absorbed by the mucous membrane, the negative pressure is not compensated, which results in the secretion of fluid into the middle ear space from the mucous membrane.[50] Children with this problem typically have a conductive hearing loss primarily caused by this middle ear effusion.[51]
Sample treatment schedule
Note that each individual patient's schedule is treated on a case-by-case basis and can vary per hospital. The table below shows a common sample treatment schedule; the colored squares in the original table indicate the average timeframe in which the indicated procedure occurs. In some cases this is a single procedure (for example, lip repair); in other cases it is an ongoing therapy (for example, speech therapy).
[Table: timeline from 0 months to 18 years showing the typical timing of: palatal obturator; cleft lip repair; soft palate repair; hard palate repair; tympanostomy tube; speech therapy/pharyngoplasty; bone grafting of the jaw; orthodontics; further cosmetic corrections (including jawbone surgery).]
Craniofacial team
Main article: Craniofacial team
A craniofacial team is routinely used to treat this condition. The majority of hospitals still use craniofacial teams, yet others are shifting towards dedicated cleft lip and palate programs. While craniofacial teams are widely knowledgeable about all aspects of craniofacial conditions, dedicated cleft lip and palate teams are able to devote many of their efforts to staying on the cutting edge of new advances in cleft lip and palate care. Many of the top pediatric hospitals are developing their own CLP clinics in order to provide patients with comprehensive multi-disciplinary care from birth through adolescence. Allowing an entire team to care for a child throughout their cleft lip and palate treatment (which is ongoing) allows for the best outcomes in every aspect of a child's care. While the individual approach can yield significant results, current trends indicate that team-based care leads to better outcomes for CLP patients.[52]
Epidemiology
Main article: Clefting prevalence in different cultures
Prevalence rates reported for live births for cleft lip with or without cleft palate (CL ± P) and for cleft palate alone vary between ethnic groups. The highest prevalence rates for CL ± P are reported for Native Americans and Asians; Africans have the lowest prevalence rates.[53]
Native Americans: 3.74/1000 Japanese: 0.82/1000 to 3.36/1000 Chinese: 1.45/1000 to 4.04/1000 Caucasians: 1.43/1000 to 1.86/1000 Latin Americans: 1.04/1000 Africans: 0.18/1000 to 1.67/1000
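For scale, a rate expressed per 1000 live births can be restated as an approximate "1 in N" figure by taking its reciprocal. As an illustrative calculation (derived here, not quoted from the source), the Native American rate of 3.74 per 1000 corresponds to roughly 1 affected birth in 267:

```latex
N = \frac{1000}{3.74} \approx 267
```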
Rate of occurrence of CP is similar for Caucasians, Africans, North American natives, Japanese and Chinese. The trait is dominant. Prevalence of "cleft uvula" has varied from 0.02% to 18.8%, with the highest numbers found among Chippewa and Navajo and the lowest generally in Africans.[54][55]
Society and culture Controversy
In some countries, cleft lip or palate deformities are considered grounds (either generally tolerated or officially sanctioned) for performing an abortion beyond the legal fetal age limit, even though the fetus is not in jeopardy of life or limb. Some human rights activists contend that this practice of "cosmetic murder" amounts to eugenics. The Japanese anime Ghost Stories caused controversy through an episode featuring a Kuchisake-onna (a ghost with a Glasgow smile) because her scar resembled a cleft lip.[56]

Notable cases

John Henry "Doc" Holliday: American dentist, gambler and gunfighter of the American Old West, usually remembered for his friendship with Wyatt Earp and the Gunfight at the O.K. Corral.[57]
Tutankhamen: Egyptian pharaoh who may have had a slightly cleft palate according to diagnostic imaging.[58]
Thorgils Skarthi: Thorgils "the hare-lipped", a 10th-century Viking warrior and founder of Scarborough, England.[59]
Tad Lincoln: fourth and youngest son of President Abraham Lincoln.[60]
Carmit Bachar: American dancer and singer.[61][62]
Jürgen Habermas: German philosopher and sociologist.[63]
Ljubo Milicevic: Australian professional footballer.[64]
Stacy Keach: American actor and narrator.[65]
Cheech Marin: American actor and comedian.[66]
Chin-Chin: American magician and stage illusionist.[67]
Owen Schmitt: American football fullback.[68]
Tim Lott: English author and journalist.[69]
Richard Hawley: English musician.[70]
In other animals Cleft lips and palates are occasionally seen in cattle and dogs, and rarely in sheep, cats, horses, pandas and ferrets. Most commonly, the defect involves the lip, rhinarium, and premaxilla. Clefts of the hard and soft palate are sometimes seen with a cleft lip. The cause is usually hereditary. Brachycephalic dogs such as Boxers and Boston Terriers are most commonly affected.[71] An inherited disorder with incomplete penetrance has also been suggested in Shih Tzus, Swiss Sheepdogs, Bulldogs, and Pointers.[72] In horses, it is a rare condition usually involving the caudal soft palate.[73] In Charolais cattle, clefts are seen in combination with arthrogryposis, which is inherited as an autosomal recessive trait. It is also inherited as an autosomal recessive trait in Texel sheep. Other contributing factors may include maternal nutritional deficiencies; exposure in utero to viral infections, trauma, drugs, or chemicals; or ingestion of toxins by the mother, such as certain lupines by cattle during the second or third month of gestation.[74] The use of corticosteroids during pregnancy in dogs and the ingestion of Veratrum californicum by pregnant sheep have also been associated with cleft formation.[75] Difficulty with nursing is the most common problem associated with clefts, but aspiration pneumonia, regurgitation, and malnutrition are often seen with cleft palate and are a common cause of death. Providing nutrition through a feeding tube is often necessary, but corrective surgery in dogs can be done by the age of twelve weeks.[71] For cleft palate, there is a high rate of surgical failure resulting in repeated surgeries.[76] Surgical techniques for cleft palate in dogs include prostheses, mucosal flaps, and microvascular free flaps.[77] Affected animals should not be bred, due to the hereditary nature of this condition.
Cleft lip in a Boxer
Cleft lip in a Boxer with premaxillary involvement
Same dog as picture on left, one year later
Speech and language assessment Common speech and language therapy assessments include:
For children Many assessments exist for investigating children's language. Here is a selection of assessments commonly used by speech and language therapy services in the UK:
BPVS: a receptive assessment of vocabulary
TROG: understanding of language (grammar)
PVCS: preverbal communication checklist
Derbyshire Picture Test: simple understanding
Clinical Evaluation of Language Fundamentals (CELF-4): assesses receptive, expressive, and pragmatic language
Clinical Evaluation of Language Fundamentals: Pre-School (CELF-P): assesses receptive, expressive, and pragmatic language skills in pre-school-aged children
ACE 6-11: a battery of receptive, expressive and pragmatic language tests
RAPT: picture naming and grammar/content analysis
RWFVS: picture naming/vocabulary test
STASS: expressive grammar
The Bus Story Test: early narrative assessment
CLEAR: phonology screening assessment
STAP: expressive phonology
DEAP: expressive phonology
Peabody Picture Vocabulary Test
Language for Thinking assessment: inferential thinking and understanding
For adults
PALPA: Psycholinguistic Assessments of Language Processing in Aphasia
Boston diagnostic battery
Confrontation naming tests, such as the Boston Naming Test
WAB: Western Aphasia Battery
Development of speech and language Every child develops at a different rate, but most go through the same stages. Listed below are the average ages of some important language and comprehension milestones, as developed by the American Speech-Language-Hearing Association. Note that, as with any developmental timeline, these stages can vary considerably and may be reached in a different order. A child who accomplishes these milestones differently does not necessarily have a developmental delay or speech disorder (and a child who hits these stages early is not necessarily a prodigy).
birth to 3 months:
- startles to loud sounds
- smiles when spoken to
- responds to pleasure with "cooing" noises

4 months to 6 months:
- notices and pays attention to sounds and music
- shifts eyes in direction of sounds
- makes babbling noises that resemble speech

7 months to 1 year:
- recognizes basic familiar words such as "cup" or "ball"
- imitates different speech sounds
- produces first words such as "bye-bye" or "mama"

1 year to 2 years:
- listens to simple stories
- identifies pictures by name when directed (e.g., "point to the cow")
- speaks two-word sentences such as "more juice" or "where daddy?"

2 years to 3 years:
- understands differences in meaning for basic words (up-down or in-out)
- produces three-word sentences
- can name most objects

3 years to 4 years:
- understands questions
- talks about events
- speech is understood by most people

4 years to 5 years:
- pays attention and responds to stories and questions
- speaks clearly
- tells detailed, ordered stories
Problems can arise at any stage of development, as well as much later in life. They can be the result of a congenital defect, a developmental disorder, or an injury. If a problem is suspected, an assessment should be made by an SLP who can diagnose and treat communication disorders.
Diagnosis of communication disorders In a school setting, children are often screened when they start kindergarten. This process involves a rapid assessment to determine which children need further testing, diagnosis, or treatment. Often, a screening is a sort of informal interview between an SLP and a child or group of children. The child may be asked to give their name, count, pronounce the names of pictured objects, and answer open-ended questions. The purpose of these tasks is to elicit a brief language sample from the child, which the SLP will use to evaluate articulation, fluency, and other aspects of speech. Screenings usually last about five minutes (Oyer 10). After a screening is done, an individual diagnosis must be made. This involves a one-on-one evaluation which may last two hours or more. If an individual has been referred for testing by a doctor, teacher, or other professional, the screening process is skipped and testing starts here. This session allows the SLP to gather information that will help in the diagnosis of a speech or language problem, as well as provide insight into possible causes, appropriate goals and objectives for therapy, and which techniques will work best for that individual. Individual evaluations often include the following components:
A visual examination of the oral cavity and throat (typically with a flashlight and tongue depressor) to determine whether the physical structures appear capable of speech production
Tests of articulation of speech sounds in isolation as well as in words and sentences
A measure of the ability to hear the difference between correct speech sounds and the sounds actually produced
Tests of expressive language and spontaneous speech
Evaluations of fluency and voice
A hearing test
A case history
After this evaluation, the SLP will review the results and information gathered and determine whether the individual would benefit from speech therapy. Goals and objectives of therapy are
outlined and a specific treatment plan is created, drawing on the strengths and weaknesses and unique situation of that individual (Oyer 11).
Common communication and language disorders Disorders that affect children may affect adults differently, or even not at all. As the body grows and develops, the types of disorders that affect an individual change. Children typically exhibit developmental language disorders, but may also experience problems due to illness or injury. In developing children, language disorders are often related to congenital disabilities or neurological or physiological results of childhood illness. These seemingly unrelated problems can have a serious impact on speech and language development. Children that have cognitive impairments are often delayed in development of communication skills. Different genetic syndromes that often cause cognitive impairment, such as Down syndrome or Williams syndrome, often affect different areas of speech. Children with autism tend to have difficulty communicating and expressing their emotions or desires. Sometimes this is due to specific problems with articulation or semantics, but often it is an issue of neurological development directly related to autism. Brain injuries, tumors, or seizures in children can also cause loss of language skills. Children with attention deficit hyperactivity disorder (ADHD) commonly have learning difficulties which also affect their language development. Emotional disturbances early in childhood can also have an impact on the growth of basic communicative skills. Perhaps more obvious are the developmental and communicative consequences of childhood hearing loss (Boone 200-05). Some disorders commonly diagnosed in children: Specific language impairment
Some children have language development deficits that cannot be linked to neurological, intellectual, social, or motor causes. The child's language skills grow much more slowly than those of typically developing children. While other children are speaking in complete sentences, using conjugated verb forms, the SLI child's speech sounds telegraphic, lacking grammatical and functional morphemes (e.g., "He go store" rather than "He goes to the store"). Their vocabulary remains relatively small while other children are adding new words every day. The SLI child often produces short sentences in order to avoid embarrassment and may have problems understanding complex or figurative structures (such as metaphors or multi-clausal sentences). Problems due to SLI can also lead to learning disabilities as the child fails to understand information being presented in science, language arts, or math classes. Studies suggest that the cause of SLI is a biological difference in brain anatomy and development (Boone 204). Treatment objectives generally focus on vocabulary development, verb morphology, memory and recall, and narrative skills (Goffman 154). Articulation disorders
An articulation disorder may be diagnosed when a child has difficulty producing phonemes, or speech sounds, correctly. When classifying a sound, speech pathologists refer to the manner of articulation, the place of articulation, and voicing. A speech sound disorder may include one or more errors of place, manner, or voicing of the phoneme. Different types of articulation disorders include:

omissions: certain sounds are deleted, often at the ends of words; entire syllables or classes of sounds may be deleted (e.g., fi' for fish)
substitutions: one sound is substituted for another, often with a similar place or manner of articulation (e.g., fith for fish)
distortions: sounds are changed slightly by what may seem like the addition of noise or a change in voicing (e.g., filsh for fish)
additions: an extra sound is added to one already produced correctly; often occurs at the ends of words and may include changes in voicing (e.g., fisha for fish) (Boone 256-58)
The phonemes that present the greatest challenge for children include /l/ as in pull, /r/ as in mirror, /ʃ/ ("sh") as in shut, /tʃ/ ("ch") as in church, /dʒ/ ("j") as in fudge, /z/ as in zoo, /ʒ/ ("zh") as in measure, /θ/ ("th") as in math and /ð/ ("th") as in this (Boone 112). Articulation disorders may be attributed to a variety of causes. A child with hearing loss may not be able to hear certain phonemes pronounced at certain frequencies, or hear the error in their own production of sounds. Oral-motor problems may also be at fault, such as apraxia (a problem with coordination of the speech muscles) or dysarthria (abnormal facial muscle tone, often due to neurological problems such as cerebral palsy). Abnormalities in the structure of the mouth and other speech muscles can cause problems with articulation; cleft palate, tongue thrust, and dental-orthodontic abnormalities are some common examples. Finally, it is difficult for children to hear and produce all of the different phonemes of a given language. Development is slow, and may take up to seven years. Sometimes, as children grow, articulation problems fade and disappear without treatment. Often, however, therapy is necessary. Treatment therapies may target semantic differences related to phonemic differences (e.g., teaching a child the difference between toe and toad, underlining the importance of the final consonant), physical-motor differences (e.g., using a mirror to show a child the correct tongue placement for a particular sound), or behavior-modification techniques (e.g., repetitive production through prompts and fun learning games). Reinforcement of therapy practices, both in the classroom and at home, is crucial to the success of articulation disorder treatment (Boone 122-24, 259-62, 274-76). Clinically proven products aimed at correcting articulation disorders include Speech Buddies, which uses tactile feedback to teach correct tongue placement.
It is necessary to note the difference between articulation disorders and dialectal variations. There are several dialects of English spoken in the United States, influenced by socioeconomic status, geographic isolation, and other languages, whether brought to the U.S. by settlers or indigenous Native American languages. These social dialects are rule-governed and should be considered not lesser than, but simply different from, standard English. Examples of dialectal features that may be mistaken for articulation disorders include the 'r-lessness' of New York City speech in words like floor, here, and paper, as well as the reduction of consonant clusters in African-American Vernacular English (AAVE). If a word ends with two or more consonants, such as cold, and is followed by another word that begins with a consonant, such as cuts, cold is shortened to col, producing col cuts. These features alone should not be treated as articulation disorders to be 'cured' by speech therapy. However, it is possible for a child with a dialectal variation to also have a communication disorder, and it is important for a speech pathologist to be able to tell the difference (Oyer 170). Voice disorders
Children may experience problems with their voice due to misuse of or abnormalities in the vocal mechanisms. There are two types of voice disorders: those of phonation and those of resonance. Both types can be the result of either abuse or physical structure. Voice disorders are among the most successfully treated speech and language problems because they can be solved with surgery or reconditioning of the voice (Boone 286).

A phonation disorder is a problem with pitch, loudness, or intensity that originates in the vocal folds of the larynx. Phonation disorders may be functional, caused by continuous yelling or throat clearing, excessive smoking, or speaking at an abnormally low frequency or pitch. The results may be an increased size or thickening of the vocal folds, lesions or polyps on the vocal folds, or problems with elasticity of the larynx. In these cases, the treatment involves resting the voice and learning to speak at optimal pitches and volumes, as well as eliminating external causes such as smoking. Phonation disorders may also be organic, due to viral growths, cancer, paralysis of laryngeal nerves, surgical intubation, or external traumas such as being hit in the throat with a baseball. These problems may require surgical removal of growths or reconstruction of the larynx, accompanied by voice therapy (Boone 287-96).

A resonance disorder occurs when any part of the vocal tract is altered or dysfunctional. In the case of an oral resonance disorder, the tongue sits too high in the front or back of the mouth. When the tongue is too far forward in the mouth, a type of 'baby voice' occurs, and a lisp may also result. Treatment involves practicing back vowels such as /a/ in father, /o/ in boat, and /u/ in spoon, accompanied by back consonants like /k/ in broke and /g/ in bog. When the tongue sits toward the back of the mouth, the voice sounds dull, and problems with articulation at the front of the mouth may also occur.
Treatment focuses on front consonants such as /w/ in where or work, /p/ in pink, /b/ in ball, /f/ in laugh, /v/ in leave, /l/ in mail, and /th/ in with or bath coupled with high-front vowels like /i/ in wheat, /I/ in fit, /e/ in pay, /E/ in bet, and /ae/ in slat. This type of resonance disorder is commonly seen in children with severe hearing impairment.
Nasal resonance disorders occur when the space between the oral and nasal cavities remains open or closed, producing a hypernasal or denasal resonance. Causes of hypernasality include paralysis of the velum, a short velum, or a cleft palate, which allows air to escape into the nasal cavity. The speech of actor James Stewart is a recognizable example of hypernasality (although in his case there was no structural problem; rather, he employed the highly nasal voice as part of his character). Denasality is often caused by a structural blockage which does not allow air to pass between the oral and nasal cavities. A child experiencing denasality may sound like they have a bad cold. If a structural problem is to blame, surgery is the most common treatment. After surgery, or if there is no structural cause, voice therapy is often given, involving extensive practice (Boone 305-12). Fluency disorders
As a child's language and vocabulary grow, they may struggle to locate a particular word or sound. Normal dysfluency occurs in developing children as a repetition of whole words or phrases while the child searches for a particular thought or word. Around age three-and-a-half, children may compulsively repeat words or phrases. This tends to fade by the time the child is five. Stuttering, in contrast, results in repeated or prolonged speech sounds or syllables. Often, involuntary blocks in fluency will be accompanied by muscle tension due to frustration. The mouth may tighten up or the eyes may blink rapidly. A child may become so embarrassed by stuttering that they talk as little as possible to avoid the struggle. This may have serious academic and social implications. The cause of stuttering is unknown, yet widely debated. Most theories suggest emotional, psychological, or neurological origins. Psychological treatment aims at improving the self-image of the child and the child's attitude toward the problem, while other therapies attempt to increase fluency by modifying the rhythm and rate of speech (Boone 316-29, 335-38).
How many people are affected by communication disorders? According to the National Institutes of Health, it is estimated that, in the United States,
between 8 and 10 percent of people have a communication disorder
7.5 million people have voice disorders
cleft palate affects 1 in 700 live births
5 percent of children have noticeable communication disorders
stuttering affects more than 3 million people, mostly children ages 2 through 6
According to the United States Department of Education, speech, language, and hearing impairments account for 20.1 percent of all special education students in the United States.
A typical in-school speech therapy session The treatment of speech, language, and hearing impairments is handled differently by public and private schools throughout the country, although many programs share the same basic components. The model given below is that used by Mary Jablonski, who has worked as a professional speech therapist in an elementary school for 22 years.
Therapy usually takes place in the speech pathologist's office, although it may be conducted in a classroom. Students are put into small groups of three or four students of similar age and severity of disorder. Students meet for 30-minute to one-hour sessions from one to five days a week, depending on the diagnosis and severity of the disorder. The children sit with the therapist and discuss any problems that they may be having or any progress that they have made. Students are encouraged to have a few minutes of conversation to loosen up their speech muscles. Also, the social interaction that results can be extremely beneficial to children with communication disorders who may be shy or socially withdrawn. The rest of the session is spent doing an activity. Students may play games, make crafts, draw pictures, sing songs, or act out short skits or role-playing exercises. These activities focus on improving students' communication skills using several techniques, as noted by ASHA:
improving coordination of speech muscles through strengthening exercises (such as pushing the tongue against a tongue depressor) and training exercises involving sound repetition and imitation
improving communication between the brain and the body through visual and auditory aids, such as mirrors and tape recorders
improving fluency through breathing exercises
Games and activities can be tailored to each individual's problem areas. For example, a game board might have pictures of familiar items along a path. All of the items will have the target sound at the beginning of the word, in the middle, or at the end. A board for the s sound may have images of socks, a whistle, glasses, scissors, or a horse. Players roll dice, and when they land on a particular picture, they have to pronounce the word correctly, focusing on the target sound. This game can be reproduced using pictures with /f/ sounds or /th/ sounds, etc. Games such as The Entire World of R Game Boards and The Entire World of R Say and Sequence Playing Card System treat the difficult /r/ phoneme while keeping the children's energy and interest levels high. They enjoy the friendly competition and small-scale social interaction. Students can use mirrors to look into their mouths as they practice sounds to make sure that their tongue, teeth, and lips are in the right places. A child's speech may be recorded and played back so the child can hear what they are saying. Often, a child may think they're producing sounds the same as everyone else; hearing their recorded voice helps them realize what they are doing wrong. If a child is having a difficult time producing a particular sound, the speech pathologist may remind them of the oral cues that go along with the sound. Here are some common cues:
For /th/, stick out the tongue and blow air through the mouth
For /f/, bite the lower lip and push the air through the teeth
For /r/, raise the back of the tongue to the roof of the mouth, or pretend the back of the tongue is an elevator and there is a little man on it who wants to ride to the top
At the end of the session students are usually given a reward for good behavior. This could be a sticker, a pencil, or a small toy. They are also given worksheets to complete at home with their parents. The worksheets usually involve verbal interaction through games and coloring activities.
Parental involvement and reinforcement play an integral part in a student’s progress. When a child succeeds and improves through therapy, the benefits can be overwhelming.
Benefits of speech therapy Communication skills play an important part in life's experiences. In elementary school, children are developing language and learning to read and write. In order for a child to learn, they have to communicate and interact with peers and adults. Spoken language is the basis for written language. As a child grows and develops, the two types of language interact and build upon each other to improve literacy and language. This process continues throughout a person's life. If a child has a communication disorder, they are often delayed in other areas, such as reading and math. The child may be very bright but unable to express themselves clearly, and the learning process can be affected negatively. Speech therapy can help children learn to communicate effectively with others and learn to solve problems and make decisions independently. Communication with peers and educators is an essential part of a fulfilling educational experience. Also, children who are able to overcome communication disorders feel a great sense of pride and confidence. Children who stutter may be withdrawn socially, but with the help of therapy and improved confidence, they can enjoy a fully active social life (ASHA).

Throughout her many years of working with children, Mary Jablonski recalls two particular success stories:

A. M. had problems with /s/, /th/, /r/, and /l/ sounds when he started kindergarten. I worked with him until he was in sixth grade. He transformed from a shy and quiet child to an outgoing, cheerful, friendly, and intelligent young man. I received a letter from him some time ago, explaining that he was in Europe, performing in operas. He wanted to thank me for his speech therapy experience. Unable to produce clear speech, he would never have had the confidence or ability to be an opera singer.

C. D. had problems producing several speech sounds. She was very difficult to understand when she started therapy.
I worked with her through sixth grade and watched her, too, develop into a very outgoing and enthusiastic young woman. She graduated from high school at the top of her class and was asked to name her most influential teacher. Out of all of the instructors she had in her 13 years of education, she chose me. I had given her the gift of confidence and the ability to communicate effectively and get the most out of her educational experience.
Conclusion Communication is at the heart of human existence. Proper skills are necessary to communicate effectively. When children develop those skills slowly or fail to develop them at all, there may be a communication disorder at fault. Disorders are diagnosed through assessments and tests and can be treated through interactive speech therapy. Over 1 million American students in kindergarten through twelfth grade are being treated for a communication disorder or impairment every year (American Speech-Language-Hearing Association). A speech therapist works with
children in schools to improve their oral motor skills and speech production. Improved communication through speech therapy can result in a better educational, social, and emotional experience for a child.
Specific developmental disorder

Classification and external resources: ICD-10: F80-F83; ICD-9: 307, 315
Specific developmental disorders are disorders in which development is delayed in one specific area or areas,[1] while essentially all other areas of development are unaffected.[2] They stand in contrast to pervasive developmental disorders,[2] which are characterized by delays in the development of multiple basic functions, including socialization and communication.[3]
ICD-10 taxonomy The tenth revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) has four categories of specific developmental disorder: specific developmental disorders of speech and language, specific developmental disorders of scholastic skills, specific developmental disorder of motor function, and mixed specific developmental disorder.[4]
DSM taxonomy In the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III),[5] SDD was contrasted with the pervasive developmental disorders (PDD). Two factors were considered:
The specificity of the impairment: in SDD a single domain is affected, whereas in PDD multiple areas of functioning are affected.[6]
The nature of the impairment: development in SDD is delayed but not otherwise abnormal, whereas in PDD there are behavioral deviations that are not typical for any developmental stage.[6]
In the fourth edition of the DSM specific developmental disorders are no longer grouped together.[7] Instead they are reclassified as communication disorders, learning disorders, and motor skills disorders.[1]
Comparison and conditions

ICD-10[8]

Specific developmental disorders of speech and language (F80):

Specific speech articulation disorder (F80.0)
Expressive language disorder (F80.1)
Receptive language disorder (F80.2)
Acquired aphasia with epilepsy (Landau-Kleffner syndrome) (F80.3)
Other developmental disorders of speech and language (F80.8)
Developmental disorder of speech and language, unspecified (F80.9)

Specific developmental disorders of scholastic skills (F81):

Specific reading disorder (F81.0)
Specific spelling disorder (F81.1)
Specific disorder of arithmetical skills (F81.2)
Mixed disorder of scholastic skills (F81.3)
Other disorders of scholastic skills (F81.8)
Developmental disorder of scholastic skills, unspecified (F81.9)

Specific developmental disorder of motor function (F82)

Mixed specific developmental disorder (F83)

DSM-IV-TR[9]

Communication disorders:

Expressive Language Disorder (315.31)
Mixed Receptive-Expressive Language Disorder (315.32)
Phonological Disorder (315.39)
Stuttering (307.0)
Communication Disorder Not Otherwise Specified (307.9)

Learning disorders:

Reading Disorder (315.0)
Mathematics Disorder (315.1)
Disorder of Written Expression (315.2)
Learning Disorder Not Otherwise Specified (315.9)

Motor skills disorders:

Developmental Coordination Disorder (315.4)

Cranial nerve

[Infobox: inferior view of the brain and brain stem showing the cranial nerves. Latin: nervus cranialis (plural: nervi craniales). TA code: A14.2.00.038.]
Cranial nerves are nerves that emerge directly from the brain, in contrast to spinal nerves, which emerge from segments of the spinal cord. In humans, there are traditionally twelve pairs of cranial nerves. Only the first and the second pair emerge from the cerebrum; the remaining ten pairs emerge from the brainstem.
Cranial nerves in non-human vertebrates Human cranial nerves are similar to those found in many other vertebrates. Cranial nerves XI and XII are present only in amniotes (non-amphibian tetrapods), bringing the total in those species to twelve pairs. In some primitive cartilaginous fishes, such as the spiny dogfish or mud shark (Squalus acanthias), there is a terminal nerve numbered zero, since it exits the brain before the traditionally designated first cranial nerve. The olfactory (I) and optic (II) nerves, which emerge from the cerebrum rather than the brainstem, are generally considered extensions of the central nervous system, while the remaining cranial nerves are part of the peripheral nervous system.
List of cranial nerves (number, name, sensory/motor/both, origin, nuclei, and function):

I. Olfactory. Purely sensory. Origin: telencephalon. Nuclei: anterior olfactory nucleus. Transmits the sense of smell from the nasal cavity.[1] Located in the olfactory foramina in the cribriform plate of the ethmoid.

II. Optic. Purely sensory. Origin: diencephalon. Nuclei: lateral geniculate nucleus.[2] Transmits visual signals from the retina of the eye to the brain.[3] Located in the optic canal.

III. Oculomotor. Mainly motor. Origin: anterior aspect of the midbrain. Nuclei: oculomotor nucleus, Edinger–Westphal nucleus. Innervates the levator palpebrae superioris, superior rectus, medial rectus, inferior rectus, and inferior oblique, which collectively perform most eye movements; also innervates the sphincter pupillae and the muscles of the ciliary body. Located in the superior orbital fissure.

IV. Trochlear. Mainly motor. Origin: dorsal aspect of the midbrain. Nuclei: trochlear nucleus. Innervates the superior oblique muscle, which depresses, rotates laterally, and intorts the eyeball. Located in the superior orbital fissure.

V. Trigeminal. Both sensory and motor. Origin: pons. Nuclei: principal sensory trigeminal nucleus, spinal trigeminal nucleus, mesencephalic trigeminal nucleus, trigeminal motor nucleus. Receives sensation from the face and innervates the muscles of mastication. Located in the superior orbital fissure (ophthalmic nerve, V1), foramen rotundum (maxillary nerve, V2), and foramen ovale (mandibular nerve, V3).

VI. Abducens. Mainly motor. Origin: nuclei lying under the floor of the fourth ventricle, in the pons. Nuclei: abducens nucleus. Innervates the lateral rectus, which abducts the eye. Located in the superior orbital fissure.

VII. Facial. Both sensory and motor. Origin: pons (cerebellopontine angle), above the olive. Nuclei: facial nucleus, solitary nucleus, superior salivary nucleus. Provides motor innervation to the muscles of facial expression, the posterior belly of the digastric muscle, and the stapedius muscle; receives the special sense of taste from the anterior 2/3 of the tongue; and provides secretomotor innervation to the salivary glands (except the parotid) and the lacrimal gland. Located in and runs through the internal acoustic canal to the facial canal, exiting at the stylomastoid foramen.

VIII. Vestibulocochlear (or auditory-vestibular nerve, or acoustic nerve). Mostly sensory. Origin: lateral to CN VII (cerebellopontine angle). Nuclei: vestibular nuclei, cochlear nuclei. Senses sound, rotation, and gravity (essential for balance and movement); more specifically, the vestibular branch carries impulses for equilibrium and the cochlear branch carries impulses for hearing. Located in the internal acoustic canal.

IX. Glossopharyngeal. Both sensory and motor. Origin: medulla. Nuclei: nucleus ambiguus, inferior salivary nucleus, solitary nucleus. Receives taste from the posterior 1/3 of the tongue, provides secretomotor innervation to the parotid gland, and provides motor innervation to the stylopharyngeus; some sensation is also relayed to the brain from the palatine tonsils. Located in the jugular foramen.

X. Vagus. Both sensory and motor. Origin: posterolateral sulcus of the medulla. Nuclei: nucleus ambiguus, dorsal motor vagal nucleus, solitary nucleus. Supplies branchiomotor innervation to most laryngeal and pharyngeal muscles (except the stylopharyngeus, which is innervated by the glossopharyngeal); provides parasympathetic fibers to nearly all thoracic and abdominal viscera down to the splenic flexure; and receives the special sense of taste from the epiglottis. A major function: controls muscles for voice and resonance and the soft palate. Symptoms of damage: dysphagia (swallowing problems), velopharyngeal insufficiency. Located in the jugular foramen.

XI. Accessory or spinal accessory (or cranial accessory nerve, or spinal accessory nerve). Mainly motor. Origin: cranial and spinal roots. Nuclei: nucleus ambiguus, spinal accessory nucleus. Controls the sternocleidomastoid and trapezius muscles, and overlaps with functions of the vagus nerve (CN X). Symptoms of damage: inability to shrug, weak head movement. Located in the jugular foramen.

XII. Hypoglossal. Mainly motor. Origin: medulla. Nuclei: hypoglossal nucleus. Provides motor innervation to the muscles of the tongue (except for the palatoglossus, which is innervated by the vagus nerve) and other glossal muscles; important for swallowing (bolus formation) and speech articulation. Located in the hypoglossal canal.
Some of the major cranial nerves and their ganglia and fiber connections
Mnemonic devices
Main article: List of mnemonics for the cranial nerves
There are many mnemonic devices in circulation to help remember the names and order of the cranial nerves. Because the mind recalls rhymes well, the best mnemonics often use rhyming schemes. An example mnemonic sentence for the initial letters "OOOTTAFAGVSH" is "On old Olympus's towering tops, a Finn and German viewed some hops,"[4] and for the initial letters "OOOTTAFVGVAH" is "Oh, oh, oh, to touch and feel very good velvet...ah, heaven."[5] The differences between these depend on "acoustic" versus "vestibulocochlear" and "spinal accessory" versus "accessory". A useful mnemonic for remembering which nerves are motor (M), sensory (S), or both (B) is "Some Say Money Matters But My Brother Says Big Brains Matter Most". There are many more mnemonics from many sources, for example OLd OPie OCcasionally TRies TRIGonometry And Feels VEry GLOomy, VAGUe, And HYPOactive.[6]
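Mnemonics like these can be checked mechanically. The following sketch (plain Python, using the nerve names and modality classifications from the list above) verifies that the initial letters and the motor/sensory/both pattern line up with the mnemonics quoted in the text:

```python
# Self-check of the mnemonics above: the initial letters of the twelve
# cranial nerve names, and their motor (M) / sensory (S) / both (B)
# pattern, against the "Some Say Money Matters..." mnemonic.
NERVES = [
    ("Olfactory", "S"), ("Optic", "S"), ("Oculomotor", "M"),
    ("Trochlear", "M"), ("Trigeminal", "B"), ("Abducens", "M"),
    ("Facial", "B"), ("Vestibulocochlear", "S"), ("Glossopharyngeal", "B"),
    ("Vagus", "B"), ("Accessory", "M"), ("Hypoglossal", "M"),
]

initials = "".join(name[0] for name, _ in NERVES)
modality = "".join(m for _, m in NERVES)

# Initial-letter string quoted in the text (Vestibulocochlear/Accessory variant).
assert initials == "OOOTTAFVGVAH"

# "Some Say Money Matters But My Brother Says Big Brains Matter Most"
mnemonic = "Some Say Money Matters But My Brother Says Big Brains Matter Most"
assert "".join(word[0] for word in mnemonic.split()) == modality  # SSMMBMBSBBMM
```

The "OOOTTAFAGVSH" variant differs only in using "Acoustic" for VIII and "Spinal accessory" for XI, as noted above.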
Language delay
Language delay is a failure to develop language abilities on the usual developmental timetable. Language delay is distinct from speech delay, in which the speech mechanism itself is the focus of delay. Thus, language delay refers specifically to a delay in the development of the underlying knowledge of language, rather than its implementation. The difference between language and speech can be understood by considering the relationship between a computer program and an output device like a printer. The software running on the computer (a word processing program, for example) is designed to allow a user to create content that is stored in the computer. In order to actually create a physical copy of the file, the computer requires another device: a printer. The printer takes the file and transforms it into a series of commands which control the movement of a print head, thereby making marks on paper. This two-stage process is something like the distinction between language (computer program) and speech (printer). When we want to communicate something, the first stage is to encode the message into a set of words and sentence structures that convey our meaning. These processes are collectively what we refer to as language. In the second stage, language is translated into motor commands that control the articulators, thereby creating speech. Speech refers to the actual process of making sounds, using such organs and structures as the lungs, vocal cords, mouth, tongue, teeth, etc. Because language and speech are two independent stages, they may be individually delayed. For example, a child may be delayed in speech (i.e., unable to produce intelligible speech sounds), but not delayed in language. In this case, the child would be attempting to produce an age-appropriate amount of language, but that language would be difficult or impossible to understand.
Conversely, a child with a language delay typically has not yet had the opportunity to produce speech sounds; he or she is therefore likely to have a delay in speech as well. Language delay is commonly divided into receptive and expressive categories. Receptive language refers to the process of understanding what is said to us. Expressive language refers to the use of words and sentences to communicate what we think, need, or want. Both categories are fundamental to communicating with others and to understanding when others communicate with us. Language delay is a risk factor for other types of developmental delay, including social, emotional, and cognitive delay, though some children may grow out of these deficits, even excelling where they once lagged, while others may not. One particularly common result of language delay is delayed or inadequate acquisition of reading skills. Reading depends upon an ability to code and decode script (i.e., match speech sounds with symbols, and vice versa). If a child is still struggling to master language and speech, it is very difficult to then learn another level of complexity (writing). Thus, it is crucial that children have facility with language to be successful readers. Cognitive scientist Steven Pinker postulates that a certain form of language delay may be associated with exceptional and innate analytical prowess in some individuals, such as Albert Einstein, Richard Feynman and Edward Teller.[1]
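The computer-and-printer analogy can be made concrete with a toy two-stage pipeline. The function names and the trivial intent-to-words mapping below are invented placeholders for illustration, not real linguistics:

```python
# Toy illustration of the two independent stages described above.
# Stage 1 ("language"): encode an intended meaning as words.
# Stage 2 ("speech"): turn those words into articulated output.
# All mappings here are hypothetical placeholders.

def encode_language(intent: str) -> list:
    """Language stage: meaning -> words (like software producing a file)."""
    lexicon = {"greeting": ["hello", "there"], "request": ["more", "juice"]}
    return lexicon.get(intent, [])

def articulate(words: list) -> str:
    """Speech stage: words -> sounds (like a printer producing marks)."""
    return " ".join(words)

# A speech delay impairs only stage 2: language output is age-appropriate
# but hard to understand. A language delay impairs stage 1, and usually
# drags stage 2 along with it, since there is little to articulate.
print(articulate(encode_language("request")))  # more juice
```

Because the two stages are independent, either function could fail while the other works, which is the point the analogy is making.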
In 2005, researchers found a connection between expressive language delay and a genetic abnormality: a duplicate set of the same genes that are missing in sufferers of Williams–Beuren syndrome.[2] Reviews have found no clear evidence that language delay can be prevented by training or educating medical home visitors or health care professionals. Overall, some reviews report positive results for interventions in language delay, but the interventions are not curative. (Commentary - Early Identification of Language Delays, 2005)
Television and language delay
Television viewing is associated with delayed language development. Research on early brain development shows that babies and toddlers have a critical need for direct interactions with parents and other significant caregivers for healthy brain growth and the development of appropriate social, emotional, and cognitive skills.[3] Children who watched television alone were 8.47 times more likely to have language delay compared to children who interacted with their caregivers during television viewing.[4] The American Academy of Pediatrics (AAP) recommends that children under the age of 2 watch no television at all, and that after age 2 they watch no more than one to two hours of quality programming a day. Exposing such young children to television programs should therefore be discouraged. Parents should engage children in more conversational activities to avoid television-related delays in their children's language development, which could impair their intellectual performance.
Causes
Stress during pregnancy is associated with language delay.[5] There is strong evidence that autism and ADHD are also commonly associated with language delay.[6][7] Asperger syndrome, which is on the autistic spectrum, is, however, not associated with language delay.[8]
Specific language impairment
Specific language impairment (SLI) is diagnosed when a child's language does not develop normally and the difficulties cannot be accounted for by generally slow development (mental retardation), physical abnormality of the speech apparatus, autistic disorder, acquired brain damage or hearing loss.
Classification
Specific language impairment (SLI) is diagnosed when a child has delayed or disordered language development for no apparent reason.[1] Usually the first indication of SLI is that the child is later than usual in starting to speak and subsequently is delayed in putting words together to form sentences. Spoken language that remains immature throughout childhood corresponds to an expressive language impairment. In many children with SLI, understanding of language, or receptive language, is also impaired, though this may not be obvious unless the child is given a formal assessment.[2] Although difficulties with use and understanding of complex sentences are a common feature of SLI, the diagnostic criteria encompass a wide range of problems, and for some children other aspects of language are problematic (see below). In general, the term SLI is reserved for children whose language difficulties persist into school age, and so it would not be applied to toddlers who are late to start talking, most of whom catch up with their peer group after a late start.[3]
Terminology
The terminology for children's language disorders is extremely wide-ranging and confusing, with many labels that have overlapping but not necessarily identical meanings. In part this confusion reflects uncertainty about the boundaries of SLI, and the existence of different subtypes. Historically, the terms developmental dysphasia and developmental aphasia were used to describe children with the clinical picture of SLI.[4] These have, however, largely been abandoned, as they suggest parallels with adult acquired aphasia. This is misleading, as SLI is not caused by brain damage. In medical circles, terms such as specific developmental language disorder are often used, but these have the disadvantage of being wordy, and are also rejected by some people who think SLI should not be seen as a 'disorder'. In the UK educational system, speech, language and communication needs (SLCN) is currently the term of choice, but this is far broader than SLI, and includes children with speech and language difficulties arising from a wide range of causes.
Subtypes (Rapin and Allen 1983)
Although most experts agree that children with SLI are quite variable, there is little agreement on how best to subtype them.[5] There is no widely accepted classification system. In 1983 Rapin and Allen[6] proposed a classification of developmental language disorders based on the linguistic features of language impairment, which was subsequently updated by Rapin.[7] Note that Rapin is a child neurologist, and she refers to different subtypes as 'syndromes'; many of those coming from the perspective of education or speech-language therapy reject this kind of medical label, and argue that there is not a clear dividing line between SLI and normal variation.[8] Also, although most experts would agree that children with characteristics of the Rapin subtypes can be identified, there are many children who are less easy to categorise, and there is also evidence that categorisation can change over time.[9] Rapin's subgroups fall into three broad categories:
Receptive/expressive developmental language disorder
Receptive/expressive phonologic/syntactic deficit syndrome. This is the most common form of SLI, in which the child's most obvious problems are a tendency to speak in short, simplified sentences, with omission of some grammatical features, such as past tense -ed.[10] It is common also to see simplified speech production when the child is young. For instance, clusters of consonants may be reduced, so that 'string' is pronounced as 'ting'. Vocabulary is often limited, with a tendency to use 'general all-purpose' words rather than more specific ones.[11][12]
Verbal auditory agnosia. This is a very rare form of language impairment, in which the child appears unable to make sense of speech sounds. It typically occurs as a symptom of Landau–Kleffner syndrome, in which case a diagnosis of SLI would not be appropriate, as there is a known neurological origin of the language difficulties.
Expressive developmental language disorder syndromes
Developmental verbal dyspraxia (DVD). In the child with DVD, comprehension is adequate; the onset of speech is very delayed and extremely limited, with impaired production of speech sounds and short utterances. The poor speech production cannot be explained in terms of structural or neurological damage of the articulators. There is much disagreement about diagnostic criteria, but the label is most often used for children whose intelligibility declines markedly when they attempt complex utterances, compared to when they are producing individual sounds or syllables. Another key feature is inconsistency of speech sound production from one occasion to another. Although the term 'dyspraxia' suggests a pure output disorder,[13] many, perhaps all, of these children have difficulty in doing tasks that involve mentally manipulating speech sounds, such as phonological awareness tasks. Children with verbal dyspraxia also typically have major literacy problems, and receptive language levels may be poor on tests of vocabulary and grammar.[14]
Phonologic programming deficit syndrome. The child speaks in long but poorly intelligible utterances, producing what sounds like jargon. Outside Rapin's group, little has been written about this subtype, which is not generally recognised in diagnostic frameworks.
Higher order processing disorders
Lexical deficit disorder. The child has word finding problems and difficulty putting ideas into words. There is poor comprehension for connected speech. Again, there is little research on this subtype, which is not widely recognised. Semantic-pragmatic deficit disorder. The child speaks in fluent and well-formed utterances with adequate articulation; content of language is unusual; comprehension may be over-literal; language use is odd; the child may chatter incessantly, be poor at turn-taking in conversation and maintaining a topic. There has been a great deal of controversy about this category, which is termed pragmatic language impairment (PLI) in the UK. Debate has centred over the question of whether it is a subtype of SLI, part of the autistic spectrum, or a separate condition.[15]
Relationship with other neurodevelopmental disorders
Although textbooks draw clear boundaries between different neurodevelopmental disorders, there is much debate about overlaps between them.[16] Many children with SLI meet diagnostic criteria for developmental dyslexia,[17] and others have features of autism.[18]
Diagnosis
SLI is defined purely in behavioural terms: there is no biological test for SLI. Three points need to be met for a diagnosis of SLI:
The child has language difficulties that interfere with daily life or academic progress.
Other causes are excluded: the problems cannot be explained in terms of hearing loss, general developmental delay, autism, or physical difficulty in speaking.
Performance on a standardized language test (see Assessment, below) is significantly below age level.
There is considerable variation in how this last criterion is implemented. Tomblin et al. (1996) proposed the EpiSLI criterion, based on five composite scores representing performance in three domains of language (vocabulary, grammar, and narration) and two modalities (comprehension and production). Children scoring in the lowest 10% on two or more composite scores are identified as having language disorder.[19]
Assessment
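The EpiSLI counting rule (lowest 10% on two or more of five composite scores) can be sketched directly. The function name and the example percentile values below are hypothetical, for illustration only:

```python
# Sketch of the EpiSLI-style counting rule: five composite scores;
# a child scoring in the lowest 10% on two or more composites is
# identified as having language disorder. Percentile inputs are
# hypothetical example values, not real data.

def meets_episli_criterion(percentiles, cutoff=10.0, min_low_scores=2):
    """percentiles: five composite-score percentile ranks (0-100)."""
    assert len(percentiles) == 5, "EpiSLI uses five composite scores"
    return sum(p < cutoff for p in percentiles) >= min_low_scores

# Hypothetical children, percentile ranks on the five composites:
assert meets_episli_criterion([5, 8, 40, 55, 60])       # two low scores
assert not meets_episli_criterion([9, 30, 45, 70, 88])  # only one low score
```

In practice the composites are derived from standardized test scores, and a clinical diagnosis would also apply the exclusionary criteria listed above.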
Assessment will usually include an interview with the child's caregiver, observation of the child in an unstructured setting, a hearing test, and standardized tests of language and nonverbal ability. There is a wide range of language assessments in English. Some are restricted for use by speech and language professionals (therapists or SALTs in the UK; speech-language pathologists, SLPs, in the US and Australia). A commonly used test battery for diagnosis of SLI is the Clinical Evaluation of Language Fundamentals (CELF). Assessments that can be completed by a parent or teacher can be useful to identify children who may require more in-depth evaluation. The Grammar and Phonology Screening (GAPS) test is a quick (ten-minute), simple and accurate screening test developed and standardized in the UK. It is suitable for children from 3;4 to 6;8 (years;months) and can be administered by professionals and non-professionals (including parents) alike,[20] and has been demonstrated to be highly accurate (98% accuracy) in identifying impaired children who need specialist help versus non-impaired children.[21] This makes it potentially a feasible test for widespread screening. The Children's Communication Checklist (CCC–2) is a parent questionnaire suitable for testing language skills in school-aged children.
Prevalence
Epidemiological surveys in the US[22] and Canada[23] estimated the prevalence of SLI in 5-year-olds at around 7 per cent. However, neither study adopted the stringent 'discrepancy' criteria of the Diagnostic and Statistical Manual of Mental Disorders or ICD-10; SLI was diagnosed if the
child scored below cut-off on standardized language tests, but had a nonverbal IQ of 90 or above and no other exclusionary criteria.
Developmental course and outcome
Longitudinal studies indicate that problems are largely resolved by 5 years of age in around 40% of 4-year-olds with SLI.[24] However, for children who still have significant language difficulties at school entry, low levels of literacy are common, even for children who receive specialist help,[25] and educational attainments are typically poor.[26] Poor outcomes are most common in cases where comprehension as well as expressive language is affected.[27] SLI is associated with a high rate of psychiatric disorder.[28] For instance, Conti-Ramsden and Botting (2004) found that 64% of a sample of 11-year-olds with SLI scored above a clinical threshold on a questionnaire for psychiatric difficulties, and 36% were regularly bullied, compared with 12% of comparison children.[29] In the longer term, studies of adult outcomes of children with SLI find elevated rates of unemployment, social isolation and psychiatric disorder.[30] However, most studies focused on children with severe problems, where comprehension as well as expressive language was affected. Better outcomes are found for children who have milder difficulties and do not require special educational provision.[31]
Genetic and environmental risks
It is now generally accepted that SLI is a strongly genetic disorder.[32] The best evidence comes from studies of twins. Two twins growing up together are exposed to the same home environment, yet may differ radically in their language skills. Such different outcomes are, however, seen almost exclusively in fraternal (non-identical) twins, who are genetically different. Identical twins share the same genes and tend to be much more similar in language ability. There can be some variation in the severity and persistence of SLI in identical twins, indicating that environmental factors affect the course of the disorder, but it is unusual to find a child with SLI who has an identical twin with normal language. SLI is not usually caused by a mutation in a single gene. Current evidence suggests that there are many different genes that can influence language learning, and that SLI results when a child inherits a particularly detrimental combination of risk factors, each of which may have only a small effect.[33] Only a handful of non-genetic factors have been found to selectively impact language development in children. Later-born children in large families are at greater risk than earlier-born.[34]
Causal theories
Much research has focussed on trying to identify what makes language learning so hard for some children. A major divide is between theories that attribute the difficulties to a low-level problem with auditory temporal processing,[35][36] and those that propose there is a deficit in a specialised language-learning system.[37][38] Other accounts emphasise deficits in specific aspects of memory.[39][40][41][42][43] It can be difficult to choose between theories because they do not always make distinctive predictions, and there is considerable heterogeneity among children with SLI. It has also been suggested that SLI may only arise when more than one underlying deficit is present.[44][45][46]
Associated factors
Males are more affected by SLI than females; in clinical samples, the ratio of affected males to females is around 3 or 4:1.[47] The reason for this association is not known: no linkage has been found to genes on the sex chromosomes. Poor motor skills are commonly found in children with SLI.[13] Brain scans do not usually reveal any obvious abnormalities in children with SLI, although quantitative comparisons have found differences in brain size or in the relative proportions of white or grey matter in specific regions.[48] In some cases, unusual brain gyri are found.[49] To date, no consistent 'neural signature' for SLI has been found. Differences in the brains of children with SLI versus typically developing children are subtle and may overlap with atypical patterns seen in other neurodevelopmental disorders.[50][51]
Intervention
Intervention is usually carried out by speech and language therapists, who use a wide range of techniques to stimulate language learning. In the past, there was a vogue for drilling children in grammatical exercises, using imitation and elicitation methods, but such methods fell into disuse when it became apparent that there was little generalisation to everyday situations. Contemporary approaches to enhancing development of language structure are more likely to adopt 'milieu' methods, in which the intervention is interwoven into natural episodes of communication, and the therapist builds on the child's utterances, rather than dictating what will be talked about. In addition, there has been a move away from a focus solely on grammar and phonology toward interventions that develop children's social use of language, often working in small groups that may include typically developing as well as language-impaired peers.[52] Another way in which modern approaches to remediation differ from the past is that parents are more likely to be directly involved, particularly with preschool children.[53] A radically different approach has been developed by Tallal and colleagues, who have devised a computer-based intervention, Fast ForWord, that involves prolonged and intensive training on specific components of language and auditory processing.[54] The theory underlying this approach maintains that language difficulties are caused by a failure to make fine-grained auditory discriminations in the temporal dimension, and the computerised training materials are designed to sharpen perceptual acuity.
For all these types of intervention, there are few adequately controlled trials that allow one to assess clinical efficacy.[55] In general, where studies have been done, results have been disappointing,[56] though some more positive outcomes have been reported.[57] In 2010, a systematic review of clinical trials assessing the Fast ForWord approach was published, and reported no significant gains relative to a control group.[58]
Traumatic brain injury
Classification and external resources
CT scan showing cerebral contusions, hemorrhage within the hemispheres, subdural hematoma, and skull fractures[1]
ICD-10: S06
ICD-9: 800.0–801.9, 803.0–804.9, 850.0–854.1
DiseasesDB: 5671
MedlinePlus: 000028
eMedicine: med/2820, neuro/153, ped/929
MeSH: D001930
Traumatic brain injury (TBI), also known as intracranial injury, occurs when an external force traumatically injures the brain. TBI can be classified based on severity, mechanism (closed or penetrating head injury), or other features (e.g., occurring in a specific location or over a widespread area). Head injury usually refers to TBI, but is a broader category because it can involve damage to structures other than the brain, such as the scalp and skull. TBI is a major cause of death and disability worldwide, especially in children and young adults. Causes include falls, vehicle accidents, and violence. Prevention measures include use of technology to protect those involved in automobile accidents, such as seat belts and sports or motorcycle helmets, as well as efforts to reduce the number of automobile accidents, such as safety education programs and enforcement of traffic laws. Brain trauma can be caused by a direct impact or by acceleration alone. In addition to the damage caused at the moment of injury, brain trauma causes secondary injury, a variety of
events that take place in the minutes and days following the injury. These processes, which include alterations in cerebral blood flow and the pressure within the skull, contribute substantially to the damage from the initial injury. TBI can cause a host of physical, cognitive, social, emotional, and behavioral effects, and outcome can range from complete recovery to permanent disability or death. The 20th century saw critical developments in diagnosis and treatment that decreased death rates and improved outcome. Current imaging techniques used for diagnosis and treatment include computed tomography (CT) scans and magnetic resonance imaging (MRI). Depending on the injury, treatment required may be minimal or may include interventions such as medications, emergency surgery or surgery years later. Physical therapy, speech therapy, recreation therapy, and occupational therapy may be employed for rehabilitation.
Classification
Traumatic brain injury is defined as damage to the brain resulting from external mechanical force, such as rapid acceleration or deceleration, impact, blast waves, or penetration by a projectile.[2] Brain function is temporarily or permanently impaired, and structural damage may or may not be detectable with current technology.[3] TBI is one of two subsets of acquired brain injury (brain damage that occurs after birth); the other subset is non-traumatic brain injury, which does not involve external mechanical force (examples include stroke and infection).[4][5] All traumatic brain injuries are head injuries, but the latter term may also refer to injury to other parts of the head.[6][7][8] However, the terms head injury and brain injury are often used interchangeably.[9] Similarly, brain injuries fall under the classification of central nervous system injuries[10] and neurotrauma.[11] In the neuropsychology research literature, the term "traumatic brain injury" is generally used to refer to non-penetrating traumatic brain injuries. TBI is usually classified based on severity, anatomical features of the injury, and the mechanism (the causative forces).[12] Mechanism-related classification divides TBI into closed and penetrating head injury.[2] A closed (also called nonpenetrating, or blunt)[6] injury occurs when the brain is not exposed.[7] A penetrating, or open, head injury occurs when an object pierces the skull and breaches the dura mater, the outermost membrane surrounding the brain.[7]
Severity
Severity of traumatic brain injury[13]
          GCS    PTA              LOC
Mild      13–15  <1 day           0–30 minutes
Moderate  9–12   >1 to <7 days    >30 min to <24 hours
Severe    3–8    >7 days          >24 hours
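The GCS bands in the severity table can be expressed as a simple classifier. This is a sketch of the GCS criterion only; as the text explains, a full assessment also weighs post-traumatic amnesia and loss of consciousness, and the function name here is invented:

```python
# Sketch of the GCS severity bands shown in the table above.
# GCS criterion only; PTA and LOC are also used in the DoD/VA model.

def tbi_severity_from_gcs(gcs: int) -> str:
    if not 3 <= gcs <= 15:
        raise ValueError("GCS ranges from 3 to 15")
    if gcs >= 13:       # 13-15: mild
        return "mild"
    if gcs >= 9:        # 9-12: moderate
        return "moderate"
    return "severe"     # 3-8: severe

assert tbi_severity_from_gcs(14) == "mild"
assert tbi_severity_from_gcs(10) == "moderate"
assert tbi_severity_from_gcs(5) == "severe"
```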
Brain injuries can be classified into mild, moderate, and severe categories.[12] The Glasgow Coma Scale (GCS), the most commonly used system for classifying TBI severity, grades a person's level of consciousness on a scale of 3–15 based on verbal, motor, and eye-opening reactions to stimuli.[14] It is generally agreed that a TBI with a GCS of 13 or above is mild, 9–12 is moderate, and 8 or below is severe.[3][8][15] Similar systems exist for young children.[8] However, the GCS grading system has limited ability to predict outcomes. Because of this, other classification systems, such as the one shown in the table, are also used to help determine severity. A current model developed by the Department of Defense and Department of Veterans Affairs uses all three criteria: GCS after resuscitation, duration of post-traumatic amnesia (PTA), and loss of consciousness (LOC).[13] It has also been proposed to use changes that are visible on neuroimaging, such as swelling, focal lesions, or diffuse injury, as a method of classification.[2] Grading scales also exist to classify the severity of mild TBI, commonly called concussion; these use duration of LOC, PTA, and other concussion symptoms.[16]
Pathological features
Main article: Focal and diffuse brain injury
CT scan showing spread of the subdural hematoma (single arrows) and midline shift (double arrows)
Systems also exist to classify TBI by its pathological features.[12] Lesions can be extra-axial, (occurring within the skull but outside of the brain) or intra-axial (occurring within the brain tissue).[17] Damage from TBI can be focal or diffuse, confined to specific areas or distributed in a
more general manner, respectively.[18] However, it is common for both types of injury to exist in a given case.[18] Diffuse injury manifests with little apparent damage in neuroimaging studies, but lesions can be seen with microscopy techniques post-mortem,[18][19] and in the early 2000s, researchers discovered that diffusion tensor imaging (DTI), a way of processing MRI images that shows white matter tracts, was an effective tool for displaying the extent of diffuse axonal injury.[20][21] Types of injuries considered diffuse include concussion, edema (swelling), and diffuse axonal injury, which is widespread damage to axons including white matter tracts and projections to the cortex.[22][23] Focal injuries often produce symptoms related to the functions of the damaged area.[10] Research shows that the most common areas to have focal lesions in non-penetrating traumatic brain injury are the orbitofrontal cortex (the lower surface of the frontal lobes) and the anterior temporal lobes, areas that are involved in social behavior, emotion regulation, olfaction, and decision-making, hence the common social/emotional and judgment deficits following moderate-to-severe TBI.[24][25][26][27] Symptoms such as hemiparesis or aphasia can also occur when less commonly affected areas such as motor or language areas are, respectively, damaged.[28][29] One type of focal injury, cerebral laceration, occurs when the tissue is cut or torn.[30] Such tearing is common in the orbitofrontal cortex in particular, because of bony protrusions on the interior skull ridge above the eyes.[24] In a similar injury, cerebral contusion (bruising of brain tissue), blood is mixed among tissue.[15] In contrast, intracranial hemorrhage involves bleeding that is not mixed with tissue.[30] Hematomas, also focal lesions, are collections of blood in or
around the brain that can result from hemorrhage.[3] Intracerebral hemorrhage, with bleeding in the brain tissue itself, is an intra-axial lesion. Extra-axial lesions include epidural hematoma, subdural hematoma, subarachnoid hemorrhage, and intraventricular hemorrhage.[31] Epidural hematoma involves bleeding into the area between the skull and the dura mater, the outermost of the three membranes surrounding the brain.[3] In subdural hematoma, bleeding occurs between the dura and the arachnoid mater.[15] Subarachnoid hemorrhage involves bleeding into the space between the arachnoid membrane and the pia mater.[15] Intraventricular hemorrhage occurs when there is bleeding in the ventricles.[31]
Signs and symptoms
Unequal pupil size is potentially a sign of a serious brain injury.[32]
Symptoms are dependent on the type of TBI (diffuse or focal) and the part of the brain that is affected.[33] Unconsciousness tends to last longer for people with injuries on the left side of the brain than for those with injuries on the right.[7] Symptoms are also dependent on the injury's severity. With mild TBI, the patient may remain conscious or may lose consciousness for a few seconds or minutes.[34] Other symptoms of mild TBI include headache, vomiting, nausea, lack of motor coordination, dizziness, difficulty balancing,[35] lightheadedness, blurred vision or tired eyes, ringing in the ears, bad taste in the mouth, fatigue or lethargy, and changes in sleep patterns.[34] Cognitive and emotional symptoms include behavioral or mood changes, confusion, and trouble with memory, concentration, attention, or thinking.[34] Mild TBI symptoms may also be present in moderate and severe injuries.[34] A person with a moderate or severe TBI may have a headache that does not go away, repeated vomiting or nausea, convulsions, an inability to awaken, dilation of one or both pupils, slurred speech, aphasia (word-finding difficulties), dysarthria (muscle weakness that causes disordered speech), weakness or numbness in the limbs, loss of coordination, confusion, restlessness, or agitation.[34] Common long-term symptoms of moderate to severe TBI are changes in appropriate social behavior, deficits in social judgment, and cognitive changes, especially problems with sustained attention, processing speed, and executive functioning.[27][36][37][38][39] Alexithymia, a deficiency in identifying, understanding, processing, and describing emotions, occurs in 60.9% of individuals with TBI.[40] Cognitive and social deficits have long-term consequences for the daily lives of people with moderate to severe TBI, but can be improved with appropriate rehabilitation.[39][41][42][43] When the pressure within the skull (intracranial pressure, abbreviated ICP) rises too high, it can be deadly.[44]
Signs of increased ICP include decreasing level of consciousness, paralysis or weakness on one side of the body, and a blown pupil, one that fails to constrict in response to light or is slow to do so.[44] Cushing's triad, a slow heart rate with high blood pressure and respiratory depression, is a classic manifestation of significantly raised ICP.[3] Anisocoria, unequal pupil size, is another sign of serious TBI.[32] Abnormal posturing, a characteristic positioning of the limbs caused by severe diffuse injury or high ICP, is an ominous sign.[3] Small children with moderate to severe TBI may have some of these symptoms but have difficulty communicating them.[45] Other signs seen in young children include persistent crying, inability to be consoled, listlessness, refusal to nurse or eat,[45] and irritability.[3]
Causes
The most common causes of TBI in the U.S. include violence, transportation accidents, construction, and sports.[35][46] Motor bikes are major causes, increasing in significance in developing countries as other causes decline.[47] It is estimated that between 1.6 and 3.8 million traumatic brain injuries each year are a result of sports and recreation activities in the US.[48] In children aged two to four, falls are the most common cause of TBI, while in older children traffic accidents compete with falls for this position.[49] TBI is the third most common injury to result from child abuse.[50] Abuse causes 19% of cases of pediatric brain trauma, and the death rate is higher among these cases.[51] Domestic violence is another cause of TBI,[52] as are work-related and industrial accidents.[53] Firearms[7] and blast injuries from explosions[54] are other causes of
TBI, which is the leading cause of death and disability in war zones.[55] According to Representative Bill Pascrell (Democrat, NJ), TBI is "the signature injury of the wars in Iraq and Afghanistan."[56] There is a promising technology called activation database-guided EEG biofeedback, which has been documented to return a TBI patient's auditory memory ability to above the control group's performance.[57][58]
Mechanism
Physical forces
Ricochet of the brain within the skull may account for the coup-contrecoup phenomenon.[59]
The type, direction, intensity, and duration of forces all contribute to the characteristics and severity of TBI.[2] Forces that may contribute to TBI include angular, rotational, shear, and translational forces.[30] Even in the absence of an impact, significant acceleration or deceleration of the head can cause TBI; however, in most cases a combination of impact and acceleration is probably to blame.[30] Forces involving the head striking or being struck by something, termed contact or impact loading, are the cause of most focal injuries, and movement of the brain within the skull, termed noncontact or inertial loading, usually causes diffuse injuries.[12] The violent shaking of an infant that causes shaken baby syndrome commonly manifests as diffuse injury.[60] In impact loading, the force sends shock waves through the skull and brain, resulting in tissue damage.[30] Shock waves caused by penetrating injuries can also destroy tissue along the path of a projectile, compounding the damage caused by the missile itself.[15] Damage may occur directly under the site of impact, or it may occur on the side opposite the impact (coup and contrecoup injury, respectively).[59] When a moving object impacts the stationary head, coup injuries are typical,[61] while contrecoup injuries are usually produced when the moving head strikes a stationary object.[62]
Primary and secondary injury
MRI scan showing damage due to brain herniation after TBI[1]
Main article: Primary and secondary brain injury
A large percentage of the people killed by brain trauma do not die right away but rather days to weeks after the event;[63] rather than improving after being hospitalized, some 40% of TBI patients deteriorate.[64] Primary brain injury (the damage that occurs at the moment of trauma when tissues and blood vessels are stretched, compressed, and torn) is not adequate to explain this deterioration; rather, it is caused by secondary injury, a complex set of cellular processes and biochemical cascades that occur in the minutes to days following the trauma.[65] These secondary processes can dramatically worsen the damage caused by primary injury[55] and account for the greatest number of TBI deaths occurring in hospitals.[32] Secondary injury events include damage to the blood–brain barrier, release of factors that cause inflammation, free radical overload, excessive release of the neurotransmitter glutamate (excitotoxicity), influx of calcium and sodium ions into neurons, and dysfunction of mitochondria.[55] Injured axons in the brain's white matter may separate from their cell bodies as a result of secondary injury,[55] potentially killing those neurons.
Other factors in secondary injury are changes in the blood flow to the brain; ischemia (insufficient blood flow); cerebral hypoxia (insufficient oxygen in the brain); cerebral edema (swelling of the brain); and raised intracranial pressure (the pressure within the skull).[66] Intracranial pressure may rise due to swelling or a mass effect from a lesion, such as a hemorrhage.[44] As a result, cerebral perfusion pressure (the pressure of blood flow in the brain) is reduced; ischemia results.[32][67] When the pressure within the skull rises too high, it can cause brain death or herniation, in which parts of the brain are squeezed by structures in the skull.[44] A particularly weak part of the skull, vulnerable to fracture causing extradural haematoma, is the pterion, deep to which lies the middle meningeal artery, which is easily damaged in fractures of the pterion. Since the pterion is so weak, this type of injury can occur secondary to trauma to other parts of the skull, when the impact forces spread to the pterion.
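The relationship between intracranial pressure and cerebral perfusion described here is commonly summarized by the standard formula CPP = MAP − ICP (cerebral perfusion pressure equals mean arterial pressure minus intracranial pressure). A minimal sketch of that arithmetic follows; the example pressure values are illustrative assumptions only, not clinical guidance:

```python
def cerebral_perfusion_pressure(map_mmhg: float, icp_mmhg: float) -> float:
    """Cerebral perfusion pressure (CPP) in mmHg: mean arterial
    pressure (MAP) minus intracranial pressure (ICP)."""
    return map_mmhg - icp_mmhg

# With a fixed MAP, a rising ICP lowers CPP, which is why raised
# intracranial pressure leads to ischemia.
print(cerebral_perfusion_pressure(90, 10))  # 80 mmHg at a normal ICP
print(cerebral_perfusion_pressure(90, 40))  # 50 mmHg at a sharply raised ICP
```

This is why treatments in the acute stage target both blood pressure (keeping MAP up) and intracranial pressure (bringing ICP down): either side of the subtraction affects perfusion.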
Diagnosis
CT scan showing epidural hematoma (arrow)
Diagnosis is suspected based on lesion circumstances and clinical evidence, most prominently a neurological examination, for example checking whether the pupils constrict normally in response to light and assigning a Glasgow Coma Score.[15] Neuroimaging helps in determining the diagnosis and prognosis and in deciding what treatments to give.[68] The preferred radiologic test in the emergency setting is computed tomography (CT): it is quick, accurate, and widely available.[69] Follow-up CT scans may be performed later to determine whether the injury has progressed.[2] Magnetic resonance imaging (MRI) can show more detail than CT, and can add information about expected outcome in the long term.[15] It is more useful than CT for detecting injury characteristics such as diffuse axonal injury in the longer term.[2] However, MRI is not used in the emergency setting for reasons including its relative inefficacy in detecting bleeds and fractures, its lengthy acquisition of images, the inaccessibility of the patient in the machine, and its incompatibility with metal items used in emergency care.[15] Other techniques may be used to confirm a particular diagnosis. X-rays are still used for head trauma, but evidence suggests they are not useful; head injuries are either so mild that they do not need imaging or severe enough to merit the more accurate CT.[69] Angiography may be used to detect blood vessel pathology when risk factors such as penetrating head trauma are involved.[2] Functional imaging can measure cerebral blood flow or metabolism, inferring neuronal activity in specific regions and potentially helping to predict outcome.[70] Electroencephalography and transcranial Doppler may also be used. The most sensitive physical measure to date is the quantitative EEG, which has documented an 80% to 100% ability to discriminate between normal and traumatic brain-injured subjects.[71][72]
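The Glasgow Coma Score mentioned above is the sum of three sub-scores (eye opening 1–4, verbal response 1–5, motor response 1–6, giving a total of 3–15), and TBI severity is conventionally banded as mild (13–15), moderate (9–12), or severe (3–8). A small sketch of that classification, for illustration only and not for clinical use:

```python
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Sum the three Glasgow Coma Scale sub-scores.

    Valid ranges: eye opening 1-4, verbal response 1-5,
    motor response 1-6; the total therefore spans 3-15.
    """
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS sub-score out of range")
    return eye + verbal + motor

def tbi_severity(gcs: int) -> str:
    """Conventional severity band for a total GCS score."""
    if gcs >= 13:
        return "mild"
    if gcs >= 9:
        return "moderate"
    return "severe"

print(tbi_severity(gcs_total(4, 5, 6)))  # mild (GCS 15, fully responsive)
print(tbi_severity(gcs_total(2, 2, 4)))  # severe (GCS 8)
```

Note that severity grading in practice also weighs factors beyond the score itself, such as duration of loss of consciousness and post-traumatic amnesia.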
Neuropsychological assessment can be performed to evaluate the long-term cognitive sequelae and to aid in the planning of the rehabilitation.[68] Instruments range from short measures of general mental functioning to complete batteries formed of different domain-specific tests.
Prevention
Protective sports equipment such as helmets can protect athletes from head injury.
Since vehicle accidents are a major cause of TBI, their prevention or the amelioration of their consequences can reduce both the incidence and gravity of TBI. In accidents, damage can be reduced by use of seat belts, child safety seats[48] and motorcycle helmets,[73] and presence of roll bars and airbags.[30] Education programs exist to lower the number of crashes.[68] In addition, changes to public policy and safety laws can be made; these include speed limits, seat belt and helmet laws, and road engineering practices.[55] Changes to common practices in sports have also been discussed. An increase in use of helmets could reduce the incidence of TBI.[55] Due to the possibility that repeatedly "heading" a ball while practicing soccer could cause cumulative brain injury, the idea of introducing protective headgear for players has been proposed.[74] Improved equipment design can enhance safety; softer baseballs reduce head injury risk.[75] Rules against dangerous types of contact, such as "spear tackling" in American football, when one player tackles another head first, may also reduce head injury rates.[75] Falls can be avoided by installing grab bars in bathrooms and handrails on stairways; removing tripping hazards such as throw rugs; or installing window guards and safety gates at the top and bottom of stairs around young children.[48] Playgrounds with shock-absorbing surfaces such as mulch or sand also prevent head injuries.[48] Child abuse prevention is another tactic; programs exist to prevent shaken baby syndrome by educating about the dangers of shaking children.[51] Gun safety, including keeping guns unloaded and locked, is another preventative measure.[76] Studies on the effect of laws that aim to control access to guns in the United States have been insufficient to determine their effectiveness in preventing deaths or injuries.[77] Recent clinical and laboratory research by neurosurgeon Julian Bailes, M.D., and his colleagues from West Virginia University, has
resulted in papers showing that dietary supplementation with
omega-3 DHA offers protection against the biochemical brain damage that occurs after a traumatic injury.[78] Rats given DHA prior to induced brain injuries suffered smaller increases in two key markers for brain damage (APP and caspase-3), as compared with rats given no DHA.[79] “The potential for DHA to provide prophylactic benefit to the brain against traumatic injury appears promising and requires further investigation. The essential concept of daily dietary supplementation with DHA, so that those at significant risk may be preloaded to provide protection against the acute effects of TBI, has tremendous public health implications.”[80]
Treatment
It is important to begin emergency treatment within the so-called "golden hour" following the injury.[81] People with moderate to severe injuries are likely to receive treatment in an intensive care unit followed by a neurosurgical ward.[82] Treatment depends on the recovery stage of the patient. In the acute stage the primary aim of the medical personnel is to stabilize the patient and focus on preventing further injury because little can be done to reverse the initial damage caused by trauma.[82] Rehabilitation is the main treatment for the subacute and chronic stages of recovery.[82] International clinical guidelines have been proposed with the aim of guiding decisions in TBI treatment, as defined by an authoritative examination of current evidence.[2]
Acute stage
Certain facilities are equipped to handle TBI better than others; initial measures include transporting patients to an appropriate treatment center.[44][83] Both during transport and in hospital the primary concerns are ensuring proper oxygen supply, maintaining adequate cerebral blood flow, and controlling raised intracranial pressure (ICP),[3] since high ICP deprives the brain of badly needed blood flow[84] and can cause deadly brain herniation. Other methods to prevent damage include management of other injuries and prevention of seizures.[15][68] Neuroimaging is helpful but not flawless in detecting raised ICP.[85] A more accurate way to measure ICP is to place a catheter into a ventricle of the brain,[32] which has the added benefit of allowing cerebrospinal fluid to drain, releasing pressure in the skull.[32] Treatment of raised ICP may be as simple as tilting the patient's bed and straightening the head to promote blood flow through the veins of the neck. Sedatives, analgesics and paralytic agents are often used.[44] Hypertonic saline can improve ICP by reducing the amount of cerebral water (swelling), though it is used with caution to avoid electrolyte imbalances or heart failure.[2] Mannitol, an osmotic diuretic,[2] was also studied for this purpose,[86][87][88] but such studies have been heavily questioned.[89] Diuretics, drugs that increase urine output to reduce excessive fluid in the system, may be used to treat high intracranial pressures, but may cause hypovolemia (insufficient blood volume).[32] Hyperventilation (larger and/or faster breaths) reduces carbon dioxide levels and causes blood vessels to constrict; this decreases blood flow to the brain and reduces ICP, but it potentially causes ischemia[3][32][90] and is, therefore, used only in the short term.[3] Endotracheal intubation and mechanical ventilation may be used to ensure proper oxygen supply and provide a secure airway.[68] Hypotension (low blood pressure), which has a devastating outcome in TBI, can
be prevented by giving intravenous fluids to maintain a normal blood pressure. Failing to maintain blood pressure can result in inadequate blood flow to the brain.[15]
Blood pressure may be kept at an artificially high level under controlled conditions by infusion of norepinephrine or similar drugs; this helps maintain cerebral perfusion.[91] Body temperature is carefully regulated because increased temperature raises the brain's metabolic needs, potentially depriving it of nutrients.[92] Seizures are common. While they can be treated with benzodiazepines, these drugs are used carefully because they can depress breathing and lower blood pressure.[44] TBI patients are more susceptible to side effects and may react adversely or be inordinately sensitive to some pharmacological agents.[82] During treatment monitoring continues for signs of deterioration such as a decreasing level of consciousness.[2][3] Traumatic brain injury may cause a range of serious coincidental complications which include cardiac arrhythmias[93] and neurogenic pulmonary edema.[94] These conditions must be adequately treated and stabilised as part of the core care for these patients. Surgery can be performed on mass lesions or to eliminate objects that have penetrated the brain. 
Mass lesions such as contusions or hematomas causing a significant mass effect (shift of intracranial structures) are considered emergencies and are removed surgically.[15] For intracranial hematomas, the collected blood may be removed using suction or forceps or it may be floated off with water.[15] Surgeons look for hemorrhaging blood vessels and seek to control bleeding.[15] In penetrating brain injury, damaged tissue is surgically debrided.[15] Craniotomy, in which part of the skull is removed, may be needed to remove pieces of fractured skull or objects embedded in the brain.[95] Decompressive craniectomy (DC) is performed routinely in the very short period following TBI during operations to treat hematomas; part of the skull is removed temporarily (primary DC).[96] DC performed hours or days after TBI in order to control high intracranial pressure (secondary DC) has not been shown to improve outcome in some trials and may be associated with severe side effects.[2][96]
Chronic stage
Physical therapy will commonly include muscle strengthening exercises.
Once medically stable, patients may be transferred to a subacute rehabilitation unit of the medical center or to an independent rehabilitation hospital.[82] Rehabilitation aims to improve independent function at home and in society and to help adapt to disabilities,[82] and has demonstrated its general effectiveness when conducted by a team of health professionals who specialise in head trauma.[97] As for any patient with neurologic deficits, a multidisciplinary approach is key to optimising outcome. Physiatrists or neurologists are likely to be the key
medical staff involved, but depending on the patient, doctors of other medical specialties may also be helpful. Allied health professions such as physiotherapy, speech and language therapy, cognitive rehabilitation therapy, and occupational therapy will be essential to assess function and design the rehabilitation activities for each patient. Treatment of neuropsychiatric symptoms such as emotional distress and clinical depression may involve mental health professionals such as therapists, psychologists, and psychiatrists, while neuropsychologists can help to evaluate and manage cognitive deficits.[82] After discharge from the inpatient rehabilitation treatment unit, care may be given on an outpatient basis. Community-based rehabilitation will be required for a high proportion of patients, including vocational rehabilitation; this supported employment matches job demands to the worker's abilities.[98] People with TBI who cannot live independently or with family may require care in assisted living facilities such as group homes.[98] Respite care, including day centers and leisure facilities for the disabled, offers time off for caregivers, and activities for people with TBI.[98] Pharmacological treatment can help to manage psychiatric or behavioral problems.[99] Medication is also used to control post-traumatic epilepsy; however, the preventive use of antiepileptics is not recommended.[100] In those cases where the person is bedridden due to a reduction of consciousness, has to remain in a wheelchair because of mobility problems, or has any other problem heavily impacting self-caring capacities, caregiving and nursing are critical. The most effective research-documented intervention approach is the activation database-guided EEG biofeedback approach, which has shown significant improvements in the memory abilities of TBI subjects, far superior to those of traditional approaches (strategies, computers, medication intervention). Gains of 2.61 standard deviations have been documented.
The TBI subjects' auditory memory ability was superior to that of the control group after the treatment.[57]
Prognosis
Prognosis worsens with the severity of injury.[101] Most TBIs are mild and do not cause permanent or long-term disability; however, all severity levels of TBI have the potential to cause significant, long-lasting disability.[102] Permanent disability is thought to occur in 10% of mild injuries, 66% of moderate injuries, and 100% of severe injuries.[103] Most mild TBI is completely resolved within three weeks, and almost all people with mild TBI are able to live independently and return to the jobs they had before the injury, although a portion have mild cognitive and social impairments.[76] Over 90% of people with moderate TBI are able to live independently, although a portion require assistance in areas such as physical abilities, employment, and financial management.[76] Most people with severe closed head injury either die or recover enough to live independently; middle ground is less common.[2] Coma, as it is closely related to severity, is a strong predictor of poor outcome.[3] Prognosis differs depending on the severity and location of the lesion, and access to immediate, specialised acute management. Subarachnoid hemorrhage approximately doubles mortality.[104] Subdural hematoma is associated with worse outcome and increased mortality, while people with epidural hematoma are expected to have a good outcome if they receive surgery quickly.[68] Diffuse axonal injury may be associated with coma when severe, and poor outcome.[2] Following
the acute stage, prognosis is strongly influenced by the patient's involvement in activity that promotes recovery, which for most patients requires access to a specialised, intensive rehabilitation service. Medical complications are associated with a bad prognosis. Examples are hypotension (low blood pressure), hypoxia (low blood oxygen saturation), lower cerebral perfusion pressures and longer times spent with high intracranial pressures.[2][68] Patient characteristics also influence prognosis. Factors thought to worsen it include abuse of substances such as illicit drugs and alcohol and age over sixty or under two years (in children, younger age at time of injury may be associated with a slower recovery of some abilities).[68]
Complications
Main article: Complications of traumatic brain injury
The relative risk of post-traumatic seizures increases with the severity of traumatic brain injury.[105]
A CT of the head years after a traumatic brain injury, showing an empty space (marked by the arrow) where the damage occurred.
Improvement of neurological function usually occurs for two or more years after the trauma. For many years it was believed that recovery was fastest during the first six months, but there is no evidence to support this; it may be related to services commonly being withdrawn after this period, rather than any physiological limitation to further progress.[2] Children recover better in the immediate time frame and improve for longer periods.[3] Complications are distinct medical problems that may arise as a result of the TBI. The results of traumatic brain injury vary widely in type and duration; they include physical, cognitive, emotional, and behavioral complications. TBI can cause prolonged or permanent effects on consciousness, such as coma, brain death, persistent vegetative state (in which patients are unable to achieve a state of alertness to interact with their surroundings),[106] and minimally conscious state (in which patients show minimal signs of being aware of self or environment).[107][108] Lying still for long periods can cause complications including pressure sores, pneumonia or other infections, progressive multiple organ failure,[82] and deep venous thrombosis, which can cause pulmonary embolism.[15] Infections that can follow skull fractures and penetrating injuries include meningitis and abscesses.[82] Complications involving the blood vessels include vasospasm, in which vessels constrict and restrict blood flow, the formation of aneurysms, in which the side of a vessel weakens and balloons out, and stroke.[82] Movement disorders that may develop after TBI include tremor, ataxia (uncoordinated muscle movements), myoclonus (shock-like contractions of muscles), and loss of movement range and control (in particular with a loss of movement repertoire).[82] The risk of post-traumatic seizures increases with severity of trauma (image at right) and is particularly elevated with certain types of brain trauma such as cerebral contusions or hematomas.[103] People
with early seizures, those occurring within a week of injury, have an increased risk of post-traumatic epilepsy (recurrent seizures occurring more than a week after the initial trauma).[109] People may lose or experience altered vision, hearing, or smell.[3] Hormonal disturbances may occur secondary to hypopituitarism, occurring immediately or years after injury in 10 to 15% of TBI patients. Development of diabetes insipidus or an electrolyte abnormality acutely after injury indicates the need for an endocrinologic work-up. Signs and symptoms of hypopituitarism may develop and be screened for in adults with moderate TBI and in mild TBI with imaging abnormalities. Children with moderate to severe head injury may also develop hypopituitarism. Screening should take place at 3 to 6 months and at 12 months after injury, but problems may occur more remotely.[110] Cognitive deficits that can follow TBI include impaired attention; disrupted insight, judgement, and thought; reduced processing speed; distractibility; and deficits in executive functions such as abstract reasoning, planning, problem-solving, and multitasking.[111] Memory loss, the most common cognitive impairment among head-injured people, occurs in 20–79% of people with closed head trauma, depending on severity.[112] People who have suffered TBI may also have difficulty with understanding or producing spoken or written language, or with more subtle aspects of communication such as body language.[82] Post-concussion syndrome, a set of lasting
symptoms experienced after mild TBI, can include physical, cognitive, emotional and behavioral problems such as headaches, dizziness, difficulty concentrating, and depression.[3] Multiple TBIs may have a cumulative effect.[108] A young person who receives a second concussion before symptoms from another one have healed may be at risk for developing a very rare but deadly condition called second-impact syndrome, in which the brain swells catastrophically after even a mild blow, with debilitating or deadly results. About one in five career boxers is affected by chronic traumatic brain injury (CTBI), which causes cognitive, behavioral, and physical impairments.[113] Dementia pugilistica, the severe form of CTBI, affects primarily career boxers years after a boxing career. It commonly manifests as dementia, memory problems, and parkinsonism (tremors and lack of coordination).[114] TBI may cause emotional, social, or behavioral problems and changes in personality.[115][116][117][118] These may include emotional instability, depression, anxiety, hypomania, mania, apathy, irritability, problems with social judgment, and impaired conversational skills.[115][118][119] TBI appears to predispose survivors to psychiatric disorders including obsessive compulsive disorder, substance abuse, dysthymia, clinical depression, bipolar disorder, and anxiety disorders.[120] In patients who have depression after TBI, suicidal ideation is not uncommon; the suicide rate among these persons is increased 2- to 3-fold.[121] Social and behavioral symptoms that can follow TBI include disinhibition, inability to control anger, impulsiveness, lack of initiative, inappropriate sexual activity, poor social judgment, and changes in personality.[115][117][118][122] TBI also has a substantial impact on the functioning of family systems.[123] Caregiving family members and TBI survivors often significantly alter their familial roles and responsibilities following injury, creating significant change and strain on
a family system. Typical challenges identified by families recovering from TBI include: frustration and impatience with one another, loss of former lives and relationships, difficulty setting reasonable goals, inability to effectively solve problems as a family, increased level of stress and household tension, changes in emotional dynamics, and an overwhelming desire to return to pre-injury status. In addition, families may exhibit less effective functioning in areas including coping, problem solving and communication. Psychoeducation and counseling models have been demonstrated to be effective in minimizing family disruption.[124]
Epidemiology
Causes of TBI fatalities in the US[125]
TBI is a leading cause of death and disability around the globe[126] and presents a major worldwide social, economic, and health problem.[2] It is the number one cause of coma,[127] it plays the leading role in disability due to trauma,[68] and is the leading cause of brain damage in children and young adults.[7] In Europe it is responsible for more years of disability than any other cause.[2] It also plays a significant role in half of trauma deaths.[15] Findings on the frequency of each level of severity vary based on the definitions and methods used in studies. A World Health Organization study estimated that between 70 and 90% of head injuries that receive treatment are mild,[128] and a US study found that moderate and severe injuries each account for 10% of TBIs, with the rest mild.[64] The incidence of TBI varies by age, gender, region and other factors.[129] Findings of incidence and prevalence in epidemiological studies vary based on such factors as which grades of severity are included, whether deaths are included, whether the study is restricted to hospitalized people, and the study's location.[7] The annual incidence of mild TBI is difficult to determine but may be 100–600 people per 100,000.[55]
Mortality
In the US, the mortality (death rate) is estimated to be 21% by 30 days after TBI.[83] A study on Iraq War soldiers found that severe TBI carries a mortality of 30–50%.[55] Deaths have declined due to improved treatments and systems for managing trauma in societies wealthy enough to provide modern emergency and neurosurgical services.[92] The fraction of those who die after being hospitalized with TBI fell from almost half in the 1970s to about a quarter at the beginning of the 21st century.[68] This decline in mortality has led to a concomitant increase in the number of people living with disabilities that result from TBI.[130] Biological, clinical, and demographic factors contribute to the likelihood that an injury will be fatal.[125] In addition, outcome depends heavily on the cause of head injury. In the US, patients with fall-related TBIs have an 89% survival rate, while only 9% of patients with firearm-related TBIs survive.[131] In the US, firearms are the most common cause of fatal TBI, followed by vehicle accidents and then falls.[125] Of deaths from firearms, 75% are considered to be suicides.[125] The incidence of TBI is increasing globally, due largely to an increase in motor vehicle use in low- and middle-income countries.[2] In developing countries, automobile use has increased faster than safety infrastructure could be introduced.[55] In contrast, vehicle safety laws have decreased rates of TBI in high-income countries,[2] which have seen decreases in traffic-related TBI since the 1970s.[47] Each year in the United States about two million people suffer a TBI[13] and about 500,000 are hospitalized.[129] The yearly incidence of TBI is estimated at 180–250 per 100,000 people in the US,[129] 281 per 100,000 in , 361 per 100,000 in South Africa, 322
per 100,000 in Australia,[7] and 430 per 100,000 in England.[53] In the European Union the yearly aggregate incidence of TBI hospitalizations and fatalities is estimated at 235 per 100,000.[2]
Demographics
TBI is present in 85% of traumatically injured children, either alone or with other injuries.[132] The greatest number of TBIs occur in people aged 15–24.[5][30] Because TBI is more common in young people, its costs to society are high due to the loss of productive years to death and disability.[2] The age groups most at risk for TBI are children ages five to nine and adults over age 80,[101] and the highest rates of death and hospitalization due to TBI are in people over age 65.[102] The incidence of fall-related TBI in First World countries is increasing as the population ages; thus the median age of people with head injuries has increased.[2] Regardless of age, TBI rates are higher in males.[30] Men suffer twice as many TBIs as women do and have a fourfold risk of fatal head injury,[101] and males account for two thirds of childhood and adolescent head trauma.[133] However, when matched for severity of injury, women appear to fare more poorly than men.[84] Socioeconomic status also appears to affect TBI rates; people with lower levels of education and employment and lower socioeconomic status are at greater risk.[7]
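The per-100,000 incidence figures quoted above translate into expected annual case counts by simple proportion. A minimal illustrative sketch (the rate is from the text; the population figure is a hypothetical round number, not a quoted statistic):

```python
def expected_cases(incidence_per_100k: float, population: int) -> float:
    """Expected annual cases given an incidence rate per 100,000 people."""
    return incidence_per_100k / 100_000 * population

# EU aggregate estimate: 235 TBI hospitalizations and fatalities per 100,000.
# For a hypothetical population of 500 million:
print(round(expected_cases(235, 500_000_000)))  # 1175000
```

The same proportion works in reverse to recover a rate from observed counts, which is how the per-100,000 figures in epidemiological studies are derived.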
History
The Edwin Smith Papyrus
Head injury is present in ancient myths that may date back before recorded history.[134] Skulls found in battleground graves with holes drilled over fracture lines suggest that trepanation may have been used to treat TBI in ancient times.[135] Ancient Mesopotamians knew of head injury and some of its effects, including seizures, paralysis, and loss of sight, hearing or speech.[136] The Edwin Smith Papyrus, written around 1650–1550 BC, describes various head injuries and symptoms and classifies them based on their presentation and tractability.[137] Ancient Greek physicians including Hippocrates understood the brain to be the center of thought, probably due to their experience with head trauma.[138]
Medieval and Renaissance surgeons continued the practice of trepanation for head injury.[138] In the Middle Ages, physicians further described head injury symptoms and the term concussion became more widespread.[139] Concussion symptoms were first described systematically in the 16th century by Berengario da Carpi.[138] It was first suggested in the 18th century that intracranial pressure rather than skull damage was the cause of pathology after TBI. This hypothesis was confirmed around the end of the 19th century, and opening the skull to relieve pressure was then proposed as a treatment.[135] In the 19th century it was noted that TBI is related to the development of psychosis.[140] At that time a debate arose around whether post-concussion syndrome was due to a disturbance of the brain tissue or psychological factors.[139] The debate continues today.
Phineas Gage carrying the rod that caused his TBI
Perhaps the first reported case of personality change after brain injury is that of Phineas Gage, who survived an accident in which a large iron rod was driven through his head, destroying one or both of his frontal lobes; numerous cases of personality change after brain injury have been reported since.[24][26][27][36][37][41][141][142] The 20th century saw the advancement of technologies that improved treatment and diagnosis such as the development of imaging tools including CT and MRI, and, in the 21st century, diffusion tensor imaging (DTI). The introduction of intracranial pressure monitoring in the 1950s
has been credited with beginning the "modern era" of head injury.[92][143] Until the 20th century, the mortality rate of TBI was high and rehabilitation was uncommon; improvements in care made during World War I reduced the death rate and made rehabilitation possible.[134] Facilities dedicated to TBI rehabilitation were probably first established during World War I.[134] Explosives used in World War I caused many blast injuries; the large number of TBIs that resulted allowed researchers to learn about localization of brain functions.[144] Blast-related injuries are now common problems in veterans returning from Iraq and Afghanistan; research shows that the symptoms of such TBIs are largely the same as those of TBIs involving a physical blow to the head.[145] In the 1970s, awareness of TBI as a public health problem grew,[146] and a great deal of progress has been made since then in brain trauma research,[92] such as the discovery of primary and secondary brain injury.[135] The 1990s saw the development and dissemination of standardized guidelines for treatment of TBI, with protocols for a range of issues such as drugs and management of intracranial pressure.[92] Research since the early 1990s has improved TBI survival;[135] that decade was known as the "Decade of the Brain" for advances made in brain research.[147]
Research
No medication to halt the progression of secondary injury exists,[55] but the variety of pathological events presents opportunities to find treatments that interfere with the damage processes.[2] Neuroprotection, the use of methods to halt or mitigate secondary injury, has been the subject of great interest for its potential to limit the damage that follows TBI. However, clinical trials to test agents that could halt these cellular mechanisms have met largely with failure.[2] For example, interest existed in hypothermia, cooling the injured brain to limit TBI damage, but clinical trials showed that it is not useful in the treatment of TBI.[92] In addition, drugs such as NMDA receptor antagonists, intended to halt neurochemical cascades such as excitotoxicity, showed promise in animal trials but failed in clinical trials.[92] These failures could be due to factors including faults in the trials' design or the insufficiency of a single agent to prevent the array of injury processes involved in secondary injury.[92] Recent research has gone into monitoring brain metabolism for ischaemia, in particular glucose, glycerol, and glutamate levels measured through microdialysis.[citation needed] Developments in technology may provide doctors with valuable medical information. For example, work has been done to design a device to monitor oxygenation that could be attached to a probe placed into the brain; such probes are currently used to monitor intracranial pressure.[92] Research is also planned to clarify factors correlated with outcome in TBI and to determine in which cases it is best to perform CT scans and surgical procedures.[148] Hyperbaric oxygen therapy (HBO) has been evaluated as an adjunctive treatment following TBI; a Cochrane review concluded that its use could not be justified.[149] HBO for TBI has remained controversial, as studies have looked for improvement mechanisms[150][151][152] and further evidence suggests that it may have potential as a treatment.[153][154]
Apraxia of speech
Apraxia of speech (AOS) is an oral motor speech disorder affecting an individual's ability to translate conscious speech plans into motor plans, which results in limited and difficult speech ability. In adults, the disorder is caused by illness or injury, while the cause of AOS in children is unknown. Like other apraxias, AOS only affects volitional (willful or purposeful) movement patterns. Individuals with this disorder have difficulty connecting speech messages from the brain to the mouth.[1] The disorder can be divided into two specific types: acquired apraxia of speech (AOS) and childhood apraxia of speech (CAS).[citation needed] Acquired apraxia of speech is a loss of prior speech ability resulting from a brain illness or injury, and occurs in both children and adults. Childhood apraxia of speech is an inability to utilize motor planning to perform movements necessary for speech during a child's language-learning process. Although the age of onset differs between the two forms, the main characteristics and treatments are similar.[1]
Characteristics
Apraxia of speech (AOS) is a neurogenic communication disorder affecting the motor programming system for speech production.[2][3] Individuals with AOS demonstrate difficulty in speech production, specifically with sequencing and forming sounds. The individual knows exactly what they want to say, but there is a disruption in the part of the brain that sends the signal to the muscle for the specific movement.[3] Individuals with acquired AOS demonstrate hallmark characteristics of articulation and prosody (rhythm, stress or intonation) errors.[2][3] Coexisting characteristics may include groping and effortful speech production with self-correction, difficulty initiating speech, abnormal stress, intonation and rhythm errors, and inconsistency with articulation.[4] Wertz et al. (1984) describe the following five speech characteristics that an individual with apraxia of speech may exhibit:[4]
Effortful trial and error with groping
Groping is when the mouth searches for the position needed to create a sound. When this trial and error process occurs, sounds may be held out longer, repeated or silently voiced. In some cases, an AOS sufferer may be able to produce certain sounds on their own, easily and unconsciously, but when prompted by another to produce the same sound the patient may grope with their lips, using volitional control (conscious awareness of the attempted speech movements), while struggling to produce the sound.[3][5]
Self-correction of errors
Patients are aware of their speech errors and can attempt to correct themselves. This can involve distorted consonants, vowels, and sound substitutions. People with AOS often have a much greater understanding of speech than they are able to express. This receptive ability allows them to attempt self-correction.[6]
Abnormal rhythm, stress and intonation
Sufferers of AOS present with prosodic errors, including irregular pitch, rate, and rhythm. This impaired prosody causes their speech to be too slow or too fast and highly segmented (many pauses). An AOS speaker may also stress syllables incorrectly and speak in a monotone. As a result, the speech is often described as 'robotic'. When words are produced in a monotone with equal syllabic stress, a word such as 'tectonic' may sound like 'tec-ton-ic' as opposed to 'tec-TON-ic'. These patterns occur even though the speakers are aware of the prosodic patterns that should be used.[7]
Inconsistent articulation errors on repeated speech productions of the same utterance
When producing the same utterance in different instances, a person with AOS may have difficulty using and maintaining the same articulation that was previously used for that utterance. On some days, people with AOS may have more errors, or seem to "lose" the ability to produce certain sounds for an amount of time. Articulation also becomes more difficult when a word or phrase requires an articulation adjustment, in which the lips and tongue must move in order to shift between sounds. For example, the word "baby" requires less mouth adjustment than the word "dog", since producing "dog" requires two tongue/lip movements to articulate.[2]
Difficulty initiating utterances
Producing utterances becomes a difficult task in patients with AOS, which results in various speech errors. The errors in completing a speech movement gesture may increase as the length of the utterance increases. Since multisyllabic words are difficult, those with AOS use simple syllables and a limited range of consonants and vowels.[2][3]
Diagnosis
Apraxia of speech can be diagnosed by a speech-language pathologist (SLP) through specific exams that measure oral mechanisms of speech. The exam involves tasks such as pursing the lips, blowing, licking the lips, and elevating the tongue, and also includes an examination of the mouth and observation of the patient eating and talking. Tests such as the Kaufman Speech Praxis Test, a more formal examination, are also used in diagnosis.[8] SLPs do not agree on a specific set of characteristics that make up the apraxia of speech diagnosis,[citation needed] so any of the characteristics from the section above could be used to form a diagnosis.[1] For acquired AOS, patients may be asked to perform other daily tasks such as reading, writing, and conversing with others. In situations involving brain damage, an MRI brain scan also helps identify damaged areas of the brain.[1]
A differential diagnosis must be used in order to rule out other similar or alternative disorders. Although disorders such as expressive aphasia, conduction aphasia, and dysarthria involve symptoms similar to those of apraxia of speech, the disorders must be distinguished in order to treat patients correctly.[citation needed] While apraxias involve the planning aspect of speech, aphasic disorders such as these involve the content of the language.[citation needed] A differential diagnosis of AOS is often not possible for children under the age of 2. Even when children are between 2 and 3 years old, a clear diagnosis cannot always be made, because at this age they may still be unable to focus on, or cooperate with, diagnostic testing.[5][9]
Possible co-morbid aphasias
AOS and expressive aphasia (also known as Broca's aphasia) are commonly mistaken for the same disorder, mainly because they often occur together in patients. Although both disorders present with symptoms such as difficulty producing sounds due to damage in the language parts of the brain, they are not the same. The main difference between these disorders lies in the ability to comprehend spoken language: patients with apraxia are able to fully comprehend speech, while patients with aphasia are not always fully able to comprehend others' speech.[10] Conduction aphasia is another speech disorder that is similar to, but not the same as, apraxia of speech. Although patients who suffer from conduction aphasia have full comprehension of speech, as do AOS sufferers, there are differences between the two disorders.[11] Patients with conduction aphasia are typically able to speak fluently, but they do not have the ability to repeat what they hear.[12] Similarly, dysarthria, another motor speech disorder, is characterized by difficulty articulating sounds. With dysarthria, however, the difficulty in articulation does not arise from planning the motor movement, as happens with AOS. Instead, dysarthria is caused by inability in, or weakness of, the muscles in the mouth, face, and respiratory system.[13]
Causes of acquired apraxia of speech
AOS can be caused by any type of brain damage affecting the speech controls in the brain. Brain damage can occur as a result of stroke, head injury, tumor, or a progressive illness affecting brain functioning.[1]
Apraxia of speech (AOS)
Stroke-associated AOS is the most common form of acquired AOS, making up about 60% of all reported acquired AOS cases. It is one of several possible disorders that can result from a stroke, but only about 11% of stroke cases involve this disorder. Damage to the neural connections, and especially the neural synapses, during the stroke can lead to acquired AOS. Most cases of stroke-associated AOS are minor, but in the most severe cases all linguistic motor function can be lost and must be relearned. Since most people with this form of AOS are at least fifty years old, few fully recover to their previous level of ability to produce speech.[14]
Progressive apraxia of speech
Recent research has established the existence of primary progressive apraxia of speech caused by neuroanatomic motor atrophy.[15][16]
Management of acquired apraxia of speech
In cases of acute AOS (stroke), spontaneous recovery may occur, in which previous speech abilities reappear on their own. All other cases of acquired AOS require a form of therapy; however, the therapy varies with the individual needs of the patient. Typically, treatment involves one-on-one therapy with a speech-language pathologist (SLP).[1] For severe forms of AOS, therapy may involve multiple sessions per week, which is reduced as speech improves.[17] Another main theme in AOS treatment is the use of repetition in order to achieve a large number of target utterances, or desired speech usages.[17] Speech-language pathologists use various treatment techniques for AOS. One technique, called the linguistic approach, utilizes the rules for sounds and sequences, focusing on the placement of the mouth in forming speech sounds. Another type of treatment is the motor-programming approach, in which the motor movements necessary for speech are practiced. This technique utilizes a great amount of repetition in order to practice the sequences and transitions that are necessary between the production of sounds.[17]
Childhood apraxia of speech
"Childhood apraxia of speech (CAS) is a neurological childhood (pediatric) speech sound disorder in which the precision and consistency of movements underlying speech are impaired in the absence of neuromuscular deficits (e.g., abnormal reflexes, abnormal tone). CAS may occur as a result of known neurological impairment, in association with complex neurobehavioral disorders of known or unknown origin, or as an idiopathic neurogenic speech sound disorder. The core impairment in planning and/or programming spatiotemporal parameters of movement sequences results in errors in speech sound production and prosody." — American Speech-Language-Hearing Association (ASHA) Ad Hoc Committee on Apraxia of Speech in Children (2007) [18]
The cause of childhood apraxia of speech (CAS), also known as developmental verbal dyspraxia (DVD), is unknown.[19] Research on brain structure has not been able to find specific areas indicating lesions or differences in brain structure. Some observations suggest a genetic component to CAS, as many with the disorder have a family history of communication disorders.[1][19][20]
Management of childhood apraxia of speech
CAS requires various forms of therapy, which vary with the individual needs of the patient. Typically, treatment involves one-on-one therapy with a speech-language pathologist (SLP).[1] In children with CAS, consistency is a key element in treatment. Consistency in the form of communication, as well as the development and use of oral communication, are extremely important in aiding a child's speech learning process.
History and terminology
The term apraxia was first defined by Hugo Karl Liepmann in 1908 as the "inability to perform voluntary acts despite preserved muscle strength." In 1969, Frederic L. Darley coined the term "apraxia of speech", replacing Liepmann's original term "apraxia of the glosso-labio-pharyngeal structures." Paul Broca had also identified this speech disorder in 1861, which he referred to as "aphemia": a disorder involving difficulty of articulation despite intact language skills and muscular function.[2] The disorder is currently referred to as "apraxia of speech", but was formerly also termed "verbal dyspraxia". The term apraxia comes from the Greek root "praxis," meaning the performance of action or skilled movement.[4] Adding the prefix "a", meaning absence, or "dys", meaning abnormal or difficult, to the root "praxis" implies speech difficulties related to movement.
Stuttering
Classification and external resources: ICD-10 F98.5; ICD-9 307.0; OMIM 184450, 609261; MedlinePlus 001427; MeSH D013342
Stuttering (/ˈstʌtərɪŋ/; alalia syllabaris), also known as stammering (/ˈstæmərɪŋ/; alalia literalis or anarthria literalis), is a speech disorder in which the flow of speech is disrupted by involuntary repetitions and prolongations of sounds, syllables, words or phrases as well as involuntary silent pauses or blocks in which the person who stutters is unable to produce sounds.[1] The term stuttering is most commonly associated with involuntary sound repetition, but it also encompasses the abnormal hesitation or pausing before speech, referred to by people who stutter as blocks, and the prolongation of certain sounds, usually vowels and semivowels. For many people who stutter, repetition is the primary problem; blocks and prolongations are learned mechanisms to mask repetition, as the fear of repetitive speaking in public is often the main cause of psychological unease. The term "stuttering", as popularly used, covers a wide spectrum of severity: it may encompass individuals with barely perceptible impediments, for whom the disorder is largely cosmetic, as well as others with extremely severe symptoms, for whom the problem can effectively prevent most oral communication. The impact of stuttering on a person's functioning and emotional state can be severe. This may include fears of having to enunciate specific vowels or consonants, fears of being caught stuttering in social situations, self-imposed isolation, anxiety, stress, shame, or a feeling of "loss of control" during speech. Stuttering is sometimes popularly associated with anxiety, but there is actually no such correlation (though, as mentioned, social anxiety may develop in individuals as a result of their stuttering).[2] Despite popular perceptions to the contrary,[3] stuttering is not reflective of intelligence. Stuttering is generally not a problem with the physical production of speech sounds or putting thoughts into words.
Acute nervousness and stress can trigger stuttering in persons predisposed to it, and living with a highly stigmatized disability can result in anxiety and high allostatic stress load (i.e., chronic nervousness and stress) that reduce the amount of acute stress necessary to trigger stuttering in any given person who stutters, exacerbating the problem in the manner of a positive feedback system; the name 'Stuttered Speech Syndrome' has been proposed for this condition.[4][5] Neither acute nor chronic stress, however, itself creates any predisposition to stuttering. The disorder is also variable, which means that in certain situations, such as talking on the telephone, the stuttering might be more or less severe, depending on the anxiety level connected with that activity. Although the exact etiology or cause of stuttering is unknown, both genetics and neurophysiology are thought to contribute. Many treatments and speech therapy techniques are available that may help increase fluency in some people who stutter, to the point where an untrained ear cannot identify a problem; however, there is essentially no "cure" for the disorder at present.[citation needed] The United States is home to the Stuttering Foundation, one of the largest and oldest nonprofit organizations dedicated to helping those who stutter. The Foundation provides free online resources and services to those who stutter and their families, as well as support for research into the causes of stuttering.
Classification
Developmental stuttering is stuttering that originates when a child is learning to speak and develops as the child matures into adulthood. Other disorders with symptoms resembling
stuttering include Asperger's syndrome, cluttering, Parkinson's speech, essential tremor, palilalia, spasmodic dysphonia, selective mutism and social anxiety.
Characteristics
Primary behaviors
Primary stuttering behaviors are the overt, observable signs of speech fluency breakdown, including repeating sounds, syllables, words or phrases, silent blocks and prolongation of sounds. These differ from the normal disfluencies found in all speakers in that stuttering disfluencies may last longer, occur more frequently, and are produced with more effort and strain.[6] Stuttering disfluencies also vary in quality: normal disfluencies tend to be a repetition of words, phrases or parts of phrases, while stuttering is characterized by prolongations, blocks and part-word repetitions.[7]
Repetition occurs when a unit of speech, such as a sound, syllable, word, or phrase, is repeated; repetitions are typical in children who are beginning to stutter. For example, "to-to-to-tomorrow". Prolongations are the unnatural lengthening of continuant sounds, for example, "mmmmmmmmmilk"; these are also common in children beginning to stutter. Blocks are inappropriate cessations of sound and air, often associated with freezing of the movement of the tongue, lips and/or vocal folds. Blocks often develop later, and can be associated with muscle tension and effort.[8]
Variability
The severity of a stutter is often not constant even for people who stutter severely. People who stutter commonly report dramatically increased fluency when talking in unison with another speaker, copying another's speech, whispering, singing, and acting, or when talking to pets, young children, or themselves.[9] Other situations, such as public speaking and speaking on the telephone, are often greatly feared by people who stutter, and increased stuttering is reported.[10]
Feelings and attitudes
Stuttering may have a significant negative cognitive and affective impact on the person who stutters. Joseph Sheehan, a prominent researcher in the field, described stuttering in terms of the well-known analogy of an iceberg, with the immediately visible and audible symptoms of stuttering above the waterline and a broader set of symptoms, such as negative emotions, hidden below the surface.[11] Feelings of embarrassment, shame, frustration, fear, anger, and guilt are frequent in people who stutter,[12] and may actually increase tension and effort, leading to increased stuttering.[13] With time, continued exposure to difficult speaking experiences may crystallize into a negative self-concept and self-image. A person who stutters may project his or her attitudes onto others, believing that they think he or she is nervous or stupid. Such negative feelings and attitudes may need to be a major focus of a treatment program.[13] Many people who stutter report a high emotional cost, including jobs or promotions not received, as well as relationships broken or not pursued.[14]
Fluency and disfluency
Linguistic tasks can invoke speech disfluency. People who stutter may "move along a continuum from fluency to dysfluency."[15] Tasks that trigger disfluency usually require controlled language processing, which involves linguistic planning. Many individuals who stutter are fluent in tasks that allow for automatic processing without substantial planning. For example, singing "Happy Birthday" or other relatively common, repeated linguistic discourses can be fluent in people who stutter. Tasks like this reduce semantic, syntactic, and prosodic planning, whereas spontaneous, "controlled" speech or reading aloud requires thoughts to be transformed into linguistic material and thereafter into syntax and prosody. Some researchers hypothesize that the circuitry activated during controlled-language processing consistently does not function properly in people who stutter, whereas people who do not stutter only sometimes display disfluent speech and abnormal circuitry.[15]
Sub-types
Developmental
Stuttering is typically a developmental disorder beginning in early childhood and continuing into adulthood in at least 20% of affected children.[16][17] The mean onset of stuttering is 30 months.[18] Although there is variability, early stuttering behaviours usually consist of word or syllable repetitions, and secondary behaviours such as tension, avoidance or escape behaviours are absent.[19] Most young children are unaware of the interruptions in their speech.[19] In early stuttering, disfluency may be episodic, and periods of stuttering are followed by periods of relative fluency.[20] Though the rate of early recovery is very high,[16] with time a young person who stutters may transition from easy, relaxed repetition to more tense and effortful stuttering, including blocks and prolongations.[19] Some propose that parental reaction may affect the development of a chronic stutter. Recommendations to slow down, take a breath, say it again, etc. may increase the child's anxiety and fear, leading to more difficulties with speaking and, in the "cycle of stuttering", to ever more fear, anxiety and expectation of stuttering.[21] With time, secondary stuttering behaviours, including escape behaviours such as eye blinking and lip movements, may be used, as well as fear and avoidance of sounds, words, people, or speaking situations. Eventually, many become fully aware of their disorder and begin to identify themselves as "stutterers." With this may come deeper frustration, embarrassment and shame.[22] Other, rarer, patterns of stuttering development have been described, including sudden onset with the child being unable to speak, despite attempts to do so.[23] The child usually is unable to utter the first sound of a sentence, and shows high levels of awareness and frustration.
Another variety also begins suddenly, with frequent word and phrase repetition, and does not develop secondary stuttering behaviours.[23] Stuttering can also emerge in the course of normal child development. Many toddlers and preschool-age children stutter as they are learning to talk, and although many parents worry about it, most of these children will outgrow the stuttering and will have normal speech as they get older. Since most of these children don't stutter as adults, this normal stage of speech development is usually referred to as pseudostuttering or as a normal dysfluency. As children learn to talk, they may repeat certain sounds, stumble on or mispronounce words, hesitate between words, substitute sounds for each other, and be unable to express some sounds. Children with a normal dysfluency usually have brief repetitions of certain sounds, syllables or short words; however, the stuttering usually comes and goes and is most noticeable when a child is excited, stressed or overly tired. Stuttering is also believed to have neurophysiological causes. Neurogenic stuttering is a type of fluency disorder in which a person has difficulty in producing speech in a normal, smooth fashion. Individuals with fluency disorders may have speech that sounds fragmented or halting, with frequent interruptions and difficulty producing words without effort or struggle. Neurogenic stuttering typically appears following some sort of injury or disease to the central nervous system, including injuries to the brain and spinal cord affecting the cortex, subcortex, cerebellum, and even neural pathway regions.[citation needed]
Acquired
In rare cases, stuttering may be acquired in adulthood as the result of a neurological event such as a head injury, tumour, stroke or drug use. This stuttering has different characteristics from its developmental equivalent: it tends to be limited to part-word or sound repetitions, and is associated with a relative lack of anxiety and of secondary stuttering behaviors. Techniques such as altered auditory feedback (see below), which may promote fluency in people with the developmental condition, are not effective with the acquired type.[16][17][24] Psychogenic stuttering may also arise after a traumatic experience such as grief, the breakup of a relationship, or as a psychological reaction to physical trauma. Its symptoms tend to be homogeneous: the stuttering is of sudden onset and associated with a significant event, it is constant and uninfluenced by different speaking situations, and there is little awareness or concern shown by the speaker.[25]
Causes of developmental stuttering
No single, exclusive cause of developmental stuttering is known. A variety of hypotheses and theories suggest multiple factors contributing to stuttering.[16] Among these is strong evidence that stuttering has a genetic basis.[26] Children who have first-degree relatives who stutter are three times as likely to develop a stutter.[27] However, twin and adoption studies suggest that genetic factors interact with environmental factors for stuttering to occur,[28] and many people who stutter have no family history of the disorder.[29] There is evidence that stuttering is more common in children who also have concomitant speech, language, learning or motor difficulties.[30] Robert West, a pioneer of genetic studies in stuttering, suggested that the presence of stuttering is connected to the fact that articulated speech is the last major acquisition in human evolution.[31] Another view is that a stutter is a complex tic.[32] In a 2010 article, three genes were found to correlate with stuttering: GNPTAB, GNPTG, and NAGPA. Researchers estimated that alterations in these three genes were present in 9% of people who stutter who have a family history of stuttering.[33] For some people who stutter, congenital factors may play a role. These may include physical trauma at or around birth, as well as cerebral palsy and mental retardation. For other people who
stutter, there could be added impact due to stressful situations such as the birth of a sibling, moving, or a sudden growth in linguistic ability.[26][28] There is clear empirical evidence for structural and functional differences in the brains of people who stutter. Research is complicated somewhat by the possibility that such differences could be the consequences of stuttering rather than a cause, but recent research on older children confirms structural differences, strengthening the argument that at least some of the differences are not a consequence of stuttering.[34][35] Auditory processing deficits have also been proposed as a cause of stuttering. Stuttering is less prevalent in deaf and hard-of-hearing individuals,[36] and stuttering may be improved when auditory feedback is altered, as with masking, delayed auditory feedback (DAF), or frequency-altered feedback.[16][37] There is some evidence that the functional organization of the auditory cortex may be different in people who stutter.[16] There is evidence of differences in linguistic processing between people who stutter and people who do not stutter.[38] Brain scans of adults who stutter have found greater activation in the right hemisphere, which is associated with emotion, than in the left hemisphere, which is associated with speech. In addition, reduced activation in the left auditory cortex has been observed.[16][28] The capacities and demands model has been proposed to account for the heterogeneity of the disorder. In this approach, speech performance varies depending on the capacity that the individual has for producing fluent speech, and the demands placed upon the person by the speaking situation. Capacity for fluent speech may be affected by a predisposition to the disorder, by auditory processing or motor speech deficits, and by cognitive or affective issues.
Demands may be increased by internal factors such as lack of confidence or self-esteem or inadequate language skills, or by external factors such as peer pressure, time pressure, stressful speaking situations, insistence on perfect speech, and the like. In stuttering, the severity of the disorder is seen as likely to increase when demands placed on the person's speech and language system exceed their capacity to deal with these pressures.[39]
Neuroimaging of developmental stuttering in adults Several neuroimaging studies have sought to identify brain areas associated with stuttering. Brain imaging studies have primarily been focused on adults. In general, cerebral activity during stuttering differs dramatically from that during silent rest or fluent speech, and differs between people who stutter and people who do not. Studies utilizing positron-emission tomography (PET) have found that during tasks that invoke disfluent speech, people who stutter show hypoactivity in cortical areas associated with language processing, such as Broca's area, but hyperactivity in areas associated with motor function.[15] One such study, which evaluated the stutter period, found overactivation in the cerebrum and cerebellum, and relative deactivation of the left hemisphere auditory areas and frontal temporal regions.[40]
In non-stuttering, normal speech, PET scans show that both hemispheres are active but that the left hemisphere may be more active. By contrast, people who stutter show more activity in the right hemisphere, suggesting that it might be interfering with left-hemisphere speech production. Another comparison of scans found that anterior forebrain regions are disproportionately active in stuttering subjects, while post-rolandic regions are relatively inactive.[41] Functional magnetic resonance imaging (fMRI) has found abnormal activation in the right frontal operculum (RFO), an area associated with time-estimation tasks that is occasionally incorporated in complex speech.[15] Researchers have explored temporal cortical activations by utilizing magnetoencephalography (MEG). In single-word-recognition tasks, people who do not stutter showed cortical activation first in occipital areas, then in left inferior-frontal regions such as Broca's area, and finally in motor and premotor cortices. People who stutter also first showed cortical activation in the occipital areas, but the left inferior-frontal regions were activated only after the motor and premotor cortices.[15][40] It is important to note that the neurological abnormalities found in adults do not establish whether childhood stuttering caused these abnormalities or whether the abnormalities cause stuttering.[34] Future longitudinal research could track the development of brain structure in relation to stuttering.
Physiopathology of developmental stuttering Much evidence from neuroimaging techniques has supported the theory that the right hemisphere of people who stutter interferes with left-hemisphere speech production. Additionally, people who stutter seem to activate motor programs before the articulatory or linguistic processing is initiated. Overactivity and underactivity
During speech production, people who stutter show overactivity in the anterior insula, cerebellum and bilateral midbrain. They show underactivity in the ventral premotor, Rolandic opercular and sensorimotor cortex bilaterally and Heschl’s gyrus in the left hemisphere.[42] Additionally, speech production in people who stutter yields underactivity in cortical motor and premotor areas.[34] Anatomical differences
Though neuroimaging studies have not yet found specific cortical correlates, there is much evidence that there are differences in the brain physiology of adults who stutter in comparison to those who do not. Asymmetry has been found between the left and right planum temporale in comparing people who stutter and people who do not stutter.[40] These studies have also found that there are anatomical differences in the Rolandic operculum and arcuate fasciculus.[citation needed]
Dopamine abnormalities
Recent studies have found that adults who stutter have elevated levels of the neurotransmitter dopamine, and have thus found dopamine antagonists that reduce stuttering (see anti-stuttering medication below).[40] Overactivity of the midbrain has been found at the level of the substantia nigra extended to the red nucleus and subthalamic nucleus, which all contribute to the production of dopamine.[34]
Treatment Main article: Stuttering therapy Fluency shaping therapy
Fluency shaping therapy, also known as "speak more fluently", "prolonged speech" or "connected speech", trains people who stutter to speak fluently by controlling their breathing, phonation, and articulation (lips, jaw, and tongue). It is based on operant conditioning techniques.[43] People who stutter are trained to reduce their speaking rate by stretching vowels and consonants, and to use other fluency techniques such as continuous airflow and soft speech contacts. The result is very slow, monotonic, but fluent speech, used only in the speech clinic. After the person who stutters masters these fluency skills, the speaking rate and intonation are increased gradually. This more normal-sounding, fluent speech is then transferred to daily life outside the speech clinic, though lack of speech naturalness at the end of treatment remains a frequent criticism. Fluency shaping approaches are often taught in intensive group therapy programs, which may take two to three weeks to complete, but more recently the Camperdown program, using a much shorter schedule, has been shown to be effective.[44] Stuttering modification therapy
The goal of stuttering modification therapy is not to eliminate stuttering but to modify it so that stuttering is easier and less effortful.[45] The rationale is that since fear and anxiety cause increased stuttering, stuttering in an easier way, with less fear and avoidance, will reduce it. The most widely known approach was published by Charles Van Riper in 1973 and is also known as block modification therapy.[46] However, depending on the patient, speech therapy may be ineffective.[47] Electronic fluency device Main article: Electronic fluency device
Altered auditory feedback, so that people who stutter hear their voice differently, has been used for over 50 years in the treatment of stuttering.[48] The altered auditory feedback effect can be produced by speaking in chorus with another person, by blocking out the voice of the person who stutters while talking (masking), by delaying their voice slightly (delayed auditory feedback) and/or by altering the frequency of the feedback (frequency-altered feedback). Studies of these techniques have had mixed results, with some people who stutter
showing substantial reductions in stuttering, while others improved only slightly or not at all.[48] In a 2006 review of the efficacy of stuttering treatments, none of the studies on altered auditory feedback met the criteria for experimental quality, such as the presence of control groups.[49] Anti-stuttering medications
The effectiveness of pharmacological agents, such as benzodiazepines, anti-convulsants, antidepressants, antipsychotic and antihypertensive medications, and dopamine antagonists in the treatment of stuttering has been evaluated in studies involving both adults and children.[50] A comprehensive review of pharmacological treatments of stuttering in 2006 concluded that few of the drug trials were methodologically sound.[50] Of those that were, only one study, itself not without flaws,[51] showed a reduction in the frequency of stuttering to less than 5% of words spoken. In addition, potentially serious side effects of pharmacological treatments were noted,[50] such as weight gain and the potential for blood pressure increases. One new drug, pagoclone, has been studied specifically for stuttering and was found to be well-tolerated "with only minor side-effects of headache and fatigue reported in a minority of those treated".[52] Support groups and the self-help movement
With existing behavioral, prosthetic, and pharmaceutical treatments providing limited relief from the overt symptoms of stuttering, support groups and the self-help movement continue to gain popularity and support among professionals and people who stutter. One of the basic tenets behind the self-help movement is that since a cure does not exist, quality of life can be improved by not dwelling on the stammer for prolonged periods. Psychoanalysis has claimed success in the treatment of stuttering, such as the therapy used on King George VI as portrayed in The King's Speech. Critics counter that mere acceptance of the disorder displays a lack of self-confidence and a "give-up" attitude. Support groups further focus on the fact that stuttering is not a physical impediment but a psychological one.[53]
Several treatment initiatives advocate diaphragmatic breathing (or costal breathing) as a means by which stuttering can be controlled. Performing vocal artists[clarification needed] who have strengthened their diaphragm tend to stutter when speaking but not when singing, because singing involves voluntary diaphragm usage while speaking primarily involves involuntary diaphragm usage.[54][55]
Prognosis Among preschoolers, the prognosis for recovery is good. Based on research, about 65% of preschoolers who stutter recover spontaneously in the first two years of stuttering,[18][56] and about 74% recover by their early teens.[57] In particular, girls seem to recover well.[57][58] For others, early intervention is effective in helping the child achieve normal fluency.[59] Once stuttering has become established, and the child has developed secondary behaviors, the prognosis is more guarded,[59] and only 18% of children who stutter after five years recover
spontaneously.[60] However, with treatment young children may be left with little evidence of stuttering.[59] Stuttering is more deeply ingrained in adults who stutter. For adults there is no known cure,[57] though they may make a partial or even complete recovery with intervention. People who stutter often learn to stutter less severely and to be less affected emotionally, though others may make no progress with therapy.[59]
Epidemiology The lifetime prevalence, or the proportion of individuals expected to stutter at some time in their lives, is about 5%,[61] and overall males are affected two to five times more often than females.[17][62][63] Most stuttering begins in early childhood, and studies suggest that 2.5% of children under the age of 5 stutter.[64][65] The sex ratio appears to widen as children grow: among preschoolers, boys who stutter outnumber girls who stutter about two to one or less,[63][65] but the ratio widens to three to one at first grade and five to one at fifth grade,[66] due to higher recovery rates in girls.[57] Due to high (approximately 65–75%) rates of early recovery,[62][67] the overall prevalence of stuttering is generally considered to be approximately 1%.[17][68] Cross-cultural studies of stuttering prevalence were very active in the early and middle 20th century, particularly under the influence of the works of Wendell Johnson, who claimed that the onset of stuttering was connected to cultural expectations and the pressure put on young children by anxious parents. Johnson claimed there were cultures where stuttering, and even the word "stutterer", were absent (for example, among some tribes of American Indians). Later studies found that this claim was not supported by the facts, so the influence of cultural factors in stuttering research declined. It is generally accepted by contemporary scholars that stuttering is present in every culture and in every race, although estimates of its actual prevalence differ. Some believe stuttering occurs in all cultures and races[26] at similar rates,[17] about 1% of the general population (and about 5% among young children) all around the world. A US-based study indicated that there were no racial or ethnic differences in the incidence of stuttering in preschool children.[64][65] At the same time, there are cross-cultural studies indicating that differences between cultures may exist.
For example, summarizing prevalence studies, E. Cooper and C. Cooper conclude: "On the basis of the data currently available, it appears the prevalence of fluency disorders varies among the cultures of the world, with some indications that the prevalence of fluency disorders labeled as stuttering is higher among black populations than white or Asian populations" (Cooper & Cooper, 1993:197). Different regions of the world are researched very unevenly. The largest number of studies has been conducted in European countries and in North America, where experts agree on a mean estimate of about 1% of the general population (Bloodstein, 1995. A Handbook on Stuttering). African populations, particularly from West Africa, might have the highest stuttering prevalence in the world, reaching 5%, 6%, and even over 9% in some populations.[69] Many regions of the world are not researched sufficiently, and for some major regions there are no prevalence studies at all (for example, in China). Some claim the reason for this might be a lower incidence in the general population in China.[70]
Lewis Carroll, the well-known author of Alice's Adventures in Wonderland, was afflicted with a stammer, as were his siblings.
History Because of the unusual-sounding speech that is produced and the behaviors and attitudes that accompany a stutter, it has long been a subject of scientific interest and speculation as well as discrimination and ridicule. Accounts of people who stutter go back centuries, to the likes of Demosthenes, who tried to control his disfluency by speaking with pebbles in his mouth.[71] The Talmud interprets Bible passages to indicate that Moses was also a person who stuttered, and that placing a burning coal in his mouth had caused him to be "slow and hesitant of speech" (Exodus 4, v.10).[71] Galen's humoral theories were influential in Europe in the Middle Ages and for centuries afterward. In this theory, stuttering was attributed to imbalances of the four bodily humors: yellow bile, blood, black bile, and phlegm. Hieronymus Mercurialis, writing in the sixteenth century, proposed methods to redress the imbalance including changes in diet, reduced lovemaking (in men only), and purging. Believing that fear aggravated stuttering, he suggested techniques to overcome this. Humoral manipulation continued to be a dominant treatment for stuttering until the eighteenth century.[72] Partly due to a perceived lack of intelligence because of his stutter, the man who became the Roman Emperor Claudius was initially shunned from the public eye and excluded from public office.[71] In eighteenth- and nineteenth-century Europe, surgical interventions for stuttering were recommended, including cutting the tongue with scissors, removing a triangular wedge from the posterior tongue, and cutting nerves, or neck and lip muscles. Others recommended
shortening the uvula or removing the tonsils. All were abandoned due to the high danger of bleeding to death and their failure to stop stuttering. Less drastically, Jean Marc Gaspard Itard placed a small forked golden plate under the tongue in order to support "weak" muscles.[71]
Notker Balbulus, from a medieval manuscript.
Italian pathologist Giovanni Morgagni attributed stuttering to deviations in the hyoid bone, a conclusion he came to via autopsy.[72] Blessed Notker of St. Gall (ca. 840–912), called Balbulus ("The Stutterer") and described by his biographer as being "delicate of body but not of mind, stuttering of tongue but not of intellect, pushing boldly forward in things Divine," was invoked against stammering. Famous Englishmen who stammered were King George VI and Prime Minister Winston Churchill, who led the UK through World War II. George VI went through years of speech therapy, most successfully under Australian speech therapist Lionel Logue, for his stammer. This is dealt with in the Academy Award-winning film The King's Speech (2010), in which Colin Firth plays George VI. The film is based on an original screenplay by David Seidler, who himself stuttered as a child until age 16.[73] Churchill claimed, perhaps not directly discussing himself, that "[s]ometimes a slight and not unpleasing stammer or impediment has been of some assistance in securing the attention of the audience..."[74] However, those who knew Churchill and commented on his stutter believed that it was or had been a significant problem for him. His secretary Phyllis Moir commented in her 1941 book I Was Winston Churchill's Private Secretary that "Winston Churchill was born and grew up with a stutter". Moir also writes about one incident: ‘"It’s s-s-simply s-s-splendid," he
stuttered, as he always did when excited.’ Louis J. Alber, who helped to arrange a lecture tour of the United States wrote in Volume 55 of The American Mercury (1942) ‘Churchill struggled to express his feelings but his stutter caught him in the throat and his face turned purple' and ‘Born with a stutter and a lisp, both caused in large measure by a defect in his palate, Churchill was at first seriously hampered in his public speaking. It is characteristic of the man’s perseverance that, despite his staggering handicap, he made himself one of the greatest orators of our time.’ For centuries "cures" such as consistently drinking water from a snail shell for the rest of one's life, "hitting a stutterer in the face when the weather is cloudy", strengthening the tongue as a muscle, and various herbal remedies were used.[75] Similarly, in the past people have subscribed to theories about the causes of stuttering which today are considered odd. Proposed causes of stuttering have included tickling an infant too much, eating improperly during breastfeeding, allowing an infant to look in the mirror, cutting a child's hair before the child spoke his or her first words, having too small a tongue, or the "work of the devil."[75]
Notable people who stutter Notable personalities who have stuttered include actress Marilyn Monroe, U.S. Vice President Joe Biden, King George VI, Scatman John, British politicians Jack Straw and Ed Balls, actors James Earl Jones, Hrithik Roshan and Sam Neill, authors John Updike and Margaret Drabble, journalist John Stossel,[76] singers Carly Simon and Mel Tillis,[77] musician Noel Gallagher, and sportscaster Bill Walton.[78][79]
Popular culture Main article: Stuttering in popular culture
Jazz and Eurodance musician Scatman John wrote the song "Scatman (Ski Ba Bop Ba Dop Bop)" to help children who stutter overcome adversity. Born John Paul Larkin, Scatman spoke with a stutter himself and won the American Speech-Language-Hearing Association's Annie Glenn Award for outstanding service to the stuttering community.[80] The fictional character Albert Arkwright from the British sitcom Open All Hours stammered, and much of the series' humour revolved around this. Recurring character Reginald Barclay from the Star Trek television franchise and the Emperor Claudius from the I, Claudius series by Robert Graves, played by Derek Jacobi, are portrayed as suffering from and overcoming their stuttering. Cartoon character Porky Pig has a notable stutter. This arose because his original voice artist, Joe Dougherty, had an authentic stammer. However, Dougherty's stutter caused recording sessions to take longer than otherwise necessary, and so Warner Bros. replaced him with Mel Blanc, who provided Porky's voice for the rest of his life. Porky's stutter is probably most pronounced when he says "Th-th-th-that's all, folks!" The cartoon character Keswick from TUFF Puppy also stutters.
Deafness
Hearing loss Classification and external resources
The international symbol of deafness or hard of hearing
Classification codes: ICD-10 H90–H91; ICD-9 389; MedlinePlus 003044; eMedicine article/994159; MeSH D034381
Deafness, hearing impairment, or hearing loss is a partial or total inability to hear.[1]
Definition Hearing loss
Hearing loss exists when there is diminished sensitivity to the sounds normally heard.[2] The term hearing impairment is usually reserved for people who have relative insensitivity to sound in the speech frequencies. The severity of a hearing loss is categorized according to the increase in volume above the usual level necessary before the listener can detect it.
Deafness
Deafness is defined as a degree of impairment such that a person is unable to understand speech even in the presence of amplification.[2] In profound deafness, even the loudest sounds produced by an audiometer (an instrument used to measure hearing by producing pure tone sounds through a range of frequencies) may not be detected. In total deafness, no sounds at all, regardless of amplification or method of production, are heard. Speech perception
Another aspect of hearing involves the perceived clarity of a sound rather than its amplitude. In humans, that aspect is usually measured by tests of speech perception. These tests measure one's ability to understand speech, not to merely detect sound. There are very rare types of hearing impairments which affect speech understanding alone.[3]
Causes The following are some of the major causes of hearing loss. Age
There is a progressive loss of ability to hear high frequencies with increasing age, known as presbycusis. This begins in early adulthood but does not usually interfere with the ability to understand conversation until much later. Although genetically variable, it is a normal concomitant of aging and is distinct from hearing losses caused by noise exposure, toxins or disease agents.[4] Noise Main article: Noise-induced hearing loss
Noise is the cause of approximately half of all cases of hearing loss, causing some degree of problems in 5% of the population globally.[5] Populations living near airports or freeways are exposed to levels of noise typically in the 65 to 75 dB(A) range. If lifestyles include significant outdoor or open window conditions, these exposures over time can degrade hearing. The U.S. EPA and various states have set noise standards to protect people from these adverse health risks. The EPA has identified the level of 70 dB(A) for 24 hour exposure as the level necessary to protect the public from hearing loss and other disruptive effects from noise, such as sleep disturbance, stress-related problems, learning detriment, etc. (EPA, 1974). Noise-induced hearing loss (NIHL) is typically centered at 3000, 4000, or 6000 Hz. As noise damage progresses, damage spreads to affect lower and higher frequencies. On an audiogram, the resulting configuration has a distinctive notch, sometimes referred to as a "noise notch." As aging and other effects contribute to higher frequency loss (6–8 kHz on an audiogram), this notch may be obscured and entirely disappear.
Louder sounds cause damage in a shorter period of time. Estimation of a "safe" duration of exposure is possible using an exchange rate of 3 dB. As 3 dB represents a doubling of intensity of sound, duration of exposure must be cut in half to maintain the same energy dose. For example, the "safe" daily exposure amount at 85 dB(A), known as an exposure action value, is 8 hours, while the "safe" exposure at 91 dB(A) is only 2 hours (National Institute for Occupational Safety and Health, 1998). Note that for some people, sound may be damaging at even lower levels than 85 dB(A). Exposures to other ototoxins (such as pesticides, some medications including chemotherapy agents, solvents, etc.) can lead to greater susceptibility to noise damage, as well as causing their own damage. This is called a synergistic interaction. Some American health and safety agencies, such as the Occupational Safety and Health Administration (OSHA) and the Mine Safety and Health Administration (MSHA), use an exchange rate of 5 dB. While this exchange rate is simpler to use, it drastically underestimates the damage caused by very loud noise. For example, at 115 dB, a 3 dB exchange rate would limit exposure to about half a minute; the 5 dB exchange rate allows 15 minutes. Many people are unaware of the presence of environmental sound at damaging levels, or of the level at which sound becomes harmful. Common sources of damaging noise levels include car stereos, children's toys, transportation, crowds, lawn and maintenance equipment, power tools, gun use, and even hair dryers. Noise damage is cumulative; all sources of damage must be considered to assess risk. Exposure to loud sound (including music) at high levels or for extended durations (85 dB(A) or greater) can cause hearing impairment. Sound levels increase with proximity; as the source is brought closer to the ear, the sound level increases.
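The exchange-rate arithmetic above reduces to a simple formula: permissible exposure time halves (3 dB rule) for each exchange-rate step above the criterion level. A minimal sketch, with the NIOSH criterion of 85 dB(A) for 8 hours and OSHA's 90 dB(A) for 8 hours taken from the text and standard references (the function name and defaults are illustrative, not from any official API):

```python
def permissible_minutes(level_dba, criterion=85.0, exchange_rate=3.0, base_minutes=480.0):
    """Permissible daily noise exposure in minutes for a given sound level.

    Each `exchange_rate` dB above `criterion` halves (or, for a 5 dB rate,
    reduces by a fixed factor) the allowed time. NIOSH uses an 85 dB(A)
    criterion with a 3 dB exchange rate; OSHA uses 90 dB(A) with 5 dB.
    """
    return base_minutes / 2 ** ((level_dba - criterion) / exchange_rate)

# NIOSH 3 dB rule: 85 dB(A) -> 8 hours; 91 dB(A) -> 2 hours, as in the text
print(permissible_minutes(85))   # 480.0 minutes (8 hours)
print(permissible_minutes(91))   # 120.0 minutes (2 hours)

# At 115 dB: about half a minute under the 3 dB rule...
print(round(permissible_minutes(115), 2))              # 0.47 minutes
# ...but 15 minutes under OSHA's 5 dB exchange rate
print(permissible_minutes(115, criterion=90, exchange_rate=5))  # 15.0 minutes
```

This reproduces the text's comparison: the 5 dB exchange rate permits roughly thirty times more exposure at 115 dB than the 3 dB rule does.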
In the USA, 12.5% of children aged 6–19 years have permanent hearing damage from excessive noise exposure.[6] Genetic
Hearing loss can be inherited. Both dominant and recessive genes exist which can cause mild to profound impairment. If a family has a dominant gene for deafness, it will persist across generations because it will manifest itself in the offspring even if it is inherited from only one parent. If a family has genetic hearing impairment caused by a recessive gene, it will not always be apparent, as it must be passed on to offspring from both parents. Dominant and recessive hearing impairment can be syndromic or nonsyndromic. Recent gene mapping has identified dozens of nonsyndromic dominant (DFNA#) and recessive (DFNB#) forms of deafness.
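The dominant/recessive distinction above can be illustrated with simple Mendelian arithmetic. A hedged sketch for a single hypothetical biallelic gene (real deafness genetics, with dozens of loci, is far more complex; the function below is illustrative only):

```python
from itertools import product

def offspring_ratios(parent1, parent2):
    """Enumerate the Punnett square for two parental genotypes (e.g. 'Dd' x 'Dd')
    and return the fraction of offspring with each genotype."""
    combos = [''.join(sorted(alleles)) for alleles in product(parent1, parent2)]
    return {g: combos.count(g) / len(combos) for g in set(combos)}

# Recessive deafness allele 'd': two hearing carrier (Dd) parents
print(offspring_ratios('Dd', 'Dd'))  # {'DD': 0.25, 'Dd': 0.5, 'dd': 0.25} -> 25% affected
# Dominant deafness allele 'D': inheritance from only one parent suffices
print(offspring_ratios('Dd', 'dd'))  # {'Dd': 0.5, 'dd': 0.5} -> 50% affected
```

This shows why recessive impairment "will not always be apparent": two unaffected carriers still produce an affected child a quarter of the time, while a dominant allele manifests in half the offspring of a single heterozygous parent.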
The first gene mapped for non-syndromic deafness, DFNA1, involves a splice site mutation in the formin related homolog diaphanous 1 (DIAPH1). A single base change in a large Costa Rican family was identified as causative in a rare form of low frequency onset progressive hearing loss with autosomal dominant inheritance exhibiting variable age of onset and complete penetrance by age 30.[7] The most common type of congenital hearing impairment in developed countries is DFNB1, also known as Connexin 26 deafness or GJB2-related deafness. The most common dominant syndromic forms of hearing impairment include Stickler syndrome and Waardenburg syndrome.
The most common recessive syndromic forms of hearing impairment are Pendred syndrome, large vestibular aqueduct syndrome and Usher syndrome. The congenital defect microtia can cause full or partial deafness depending upon the severity of the deformity and whether or not certain parts of the inner or middle ear are affected. Mutations in PTPRQ are also a cause of autosomal-recessive nonsyndromic hearing impairment.[8]
Illness
Measles may cause auditory nerve damage. Meningitis may damage the auditory nerve or the cochlea. Autoimmune disease has only recently been recognized as a potential cause of cochlear damage. Although probably rare, it is possible for autoimmune processes to target the cochlea specifically, without symptoms affecting other organs. Wegener's granulomatosis is one of the autoimmune conditions that may precipitate hearing loss. Mumps (epidemic parotitis) may result in profound sensorineural hearing loss (90 dB or more), unilateral (one ear) or bilateral (both ears). Presbycusis is a progressive hearing impairment accompanying age, typically affecting sensitivity to higher frequencies (above about 2 kHz). Adenoids that do not disappear by adolescence may continue to grow and may obstruct the Eustachian tube, causing conductive hearing impairment and nasal infections that can spread to the middle ear. People with HIV/AIDS frequently experience auditory system anomalies. Chlamydia may cause hearing loss in newborns to whom the disease has been passed at birth. Fetal alcohol syndrome is reported to cause hearing loss in up to 64% of infants born to alcoholic mothers, from the ototoxic effect on the developing fetus plus malnutrition during pregnancy from the excess alcohol intake. Premature birth causes sensorineural hearing loss approximately 5% of the time. Syphilis is commonly transmitted from pregnant women to their fetuses, and about a third of infected children will eventually become deaf. Otosclerosis is a hardening of the stapes (or stirrup) in the middle ear and causes conductive hearing loss. Medulloblastoma and other types of brain tumors can cause hearing loss, whether by the placement of the tumor near the vestibulocochlear nerve, surgical resection, or platinum-based chemotherapy drugs such as cisplatin. Superior canal dehiscence, a gap in the bone cover above the inner ear, can lead to low-frequency conductive hearing loss, autophony and vertigo.
Neurological disorders
Neurological disorders such as multiple sclerosis and strokes can have an effect on hearing as well. Multiple sclerosis, or MS, is an autoimmune disease in which the immune system attacks the myelin sheath, a covering that protects the nerves. Once the myelin sheaths are destroyed they cannot be repaired. Without the myelin to protect them, nerves become damaged, creating disorientation for the patient. This process is painful and may end in debilitation, with the affected person eventually paralyzed or deprived of one or more senses, one of which may be hearing. If the auditory nerve becomes damaged then the affected person will become completely
deaf in one or both ears. There is no cure for MS.[9] Depending on what nerves are damaged from a stroke, one of the side effects can be deafness.[10] Medications
Some medications cause irreversible damage to the ear, and are limited in their use for this reason. The most important group is the aminoglycosides (main member gentamicin) and platinum-based chemotherapeutics such as cisplatin. Some medications may reversibly affect hearing. These include some diuretics, aspirin and NSAIDs, and macrolide antibiotics. According to a study by researchers at Brigham and Women's Hospital in Boston, the link between nonsteroidal anti-inflammatory drugs (NSAIDs), such as ibuprofen, and hearing loss tends to be greater in women, especially those who take ibuprofen six or more times a week.[11] Others may cause permanent hearing loss.[12] Extremely heavy hydrocodone use is known to cause hearing impairment. On October 18, 2007, the U.S. Food and Drug Administration (FDA) announced that a warning about possible sudden hearing loss would be added to drug labels of PDE5 inhibitors, which are used for erectile dysfunction.[13] Chemicals Main article: Ototoxicity
In addition to medications, hearing loss can also result from specific chemicals: metals, such as lead; solvents, such as toluene (found in crude oil, gasoline[14] and automobile exhaust,[14] for example); and asphyxiants.[15] Combined with noise, these ototoxic chemicals have an additive effect on a person's hearing loss.[15] Hearing loss due to chemicals starts in the high frequency range and is irreversible. It damages the cochlea with lesions and degrades central portions of the auditory system.[15] For some ototoxic chemical exposures, particularly styrene,[16] the risk of hearing loss can be higher than from noise exposure alone. Controlling noise and using hearing protectors are insufficient to prevent hearing loss from these chemicals, although taking antioxidants helps prevent ototoxic hearing loss, at least to a degree.[16] The following list catalogues known ototoxic chemicals:[15][16]
- Drugs: antimalarials, antibiotics, non-steroidal anti-inflammatories, antineoplastics, diuretics
- Solvents: toluene, styrene, xylene, n-hexane, ethyl benzene, white spirits/Stoddard solvent, carbon disulfide, fuels, perchloroethylene, trichloroethylene, p-xylene
- Asphyxiants: carbon monoxide, hydrogen cyanide
- Metals: lead, mercury, organotins (trimethyltin)
- Pesticides/herbicides: paraquat, organophosphates
Physical trauma
There can be damage either to the ear itself or to the brain centers that process the aural information conveyed by the ears. People who sustain head injury are especially vulnerable to hearing loss or tinnitus, either temporary or permanent. I. King Jordan lost his hearing after suffering a skull fracture as a result of a motorcycle accident at age 21.[17]
Diagnosis
An audiologist conducting an audiometric hearing test in a sound-proof testing booth
The severity of a hearing impairment is ranked according to the additional intensity above a nominal threshold that a sound must reach before being detected by an individual, measured in decibels of hearing loss (dB HL). Hearing impairment may be ranked as mild, moderate, moderately severe, severe or profound, as defined below:
- Mild: for adults, between 26 and 40 dB HL; for children, between 20 and 40 dB HL[2]
- Moderate: between 41 and 54 dB HL[2]
- Moderately severe: between 55 and 70 dB HL[2]
- Severe: between 71 and 90 dB HL[2]
- Profound: 91 dB HL or greater[2]
- Totally deaf: no hearing at all
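The ranking above is a simple threshold lookup over dB HL values. As a minimal sketch (the function name, the "normal" label for values below the scale, and the handling of boundary values are illustrative assumptions, not part of any clinical standard):

```python
def classify_hearing_loss(db_hl, adult=True):
    """Map a dB HL threshold to the severity categories listed above.

    Adults use a 26 dB HL floor for "mild"; children use 20 dB HL.
    Values below the floor fall outside the table and are labelled
    "normal" here purely as an illustrative convention.
    """
    mild_floor = 26 if adult else 20
    if db_hl < mild_floor:
        return "normal"
    if db_hl <= 40:
        return "mild"
    if db_hl <= 54:
        return "moderate"
    if db_hl <= 70:
        return "moderately severe"
    if db_hl <= 90:
        return "severe"
    return "profound"
```

For example, a 22 dB HL threshold counts as mild for a child but falls below the adult scale, reflecting the different floors in the table.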
Hearing sensitivity varies according to the frequency of sounds. To take this into account, hearing sensitivity can be measured for a range of frequencies and plotted on an audiogram. For certain legal purposes such as insurance claims, hearing impairments are described in terms of percentages. Given that hearing impairments can vary by frequency and that audiograms are plotted with a logarithmic scale, the idea of a percentage of hearing loss is somewhat arbitrary; but where decibels of loss are converted via a recognized legal formula, it is possible to calculate a standardized "percentage of hearing loss", which is suitable for legal purposes only.
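The text does not name a specific formula. As one concrete illustration only, the AAO-1979 method widely used in the United States averages thresholds at 500, 1000, 2000 and 3000 Hz and converts decibels above a 25 dB "low fence" into a percentage (treat the choice of formula and its exact parameters here as assumptions, not as the document's own):

```python
def monaural_impairment(thresholds_db_hl):
    """Percentage impairment for one ear from dB HL thresholds at
    500, 1000, 2000 and 3000 Hz (AAO-1979 sketch): 1.5% per dB of
    pure-tone average above a 25 dB fence, clamped to 0-100%."""
    pta = sum(thresholds_db_hl) / len(thresholds_db_hl)
    return min(max(1.5 * (pta - 25.0), 0.0), 100.0)

def binaural_impairment(ear_a, ear_b):
    """Combine two ears, weighting the better ear 5:1 over the worse."""
    a, b = monaural_impairment(ear_a), monaural_impairment(ear_b)
    better, worse = min(a, b), max(a, b)
    return (5.0 * better + worse) / 6.0
```

So an ear averaging 45 dB HL across the four frequencies scores 1.5 × (45 − 25) = 30% monaural impairment, and pairing it with a normal ear yields a much smaller binaural figure, which is the point of the 5:1 weighting.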
Another method for quantifying hearing impairments is a speech-in-noise test. As the name implies, a speech-in-noise test gives an indication of how well one can understand speech in a noisy environment. A person with a hearing loss will often be less able to understand speech, especially in noisy conditions. This is especially true for people who have a sensorineural loss, which is by far the most common type of hearing loss. As such, speech-in-noise tests can provide valuable information about a person's hearing ability, and can be used to detect the presence of a sensorineural hearing loss. A triple-digit speech-in-noise test was developed by RNID as part of Hearcom, an EU-funded project. The RNID version is available over the phone (0844 800 3838, only available in the UK), on the web, and as an app on the iPhone.

Classification
Hearing impairments are categorized by their type, their severity, and the age of onset (before or after language is acquired). Furthermore, a hearing impairment may exist in only one ear (unilateral) or in both ears (bilateral). There are three main types of hearing impairment: conductive hearing impairment, sensorineural hearing impairment, and a combination of the two called mixed hearing loss.[2]

Conductive hearing loss
Main article: Conductive hearing loss
A conductive hearing impairment is present when sound is not reaching the inner ear (the cochlea). This can be due to external ear canal malformation, dysfunction of the eardrum or malfunction of the bones of the middle ear. The eardrum may show defects ranging from small perforations to total loss, resulting in hearing loss of varying degree. Scar tissue after ear infections may also impair eardrum function, as may retraction and adherence of the drum to the medial part of the middle ear. Dysfunction of the three small bones of the middle ear (hammer, anvil and stapes) may cause conductive hearing loss. The mobility of the ossicles may be impaired for different reasons, and disruption of the ossicular chain due to trauma, infection or ankylosis may also cause hearing loss. Many of these conditions can be helped with surgery, and an air conduction hearing aid is often a good choice of treatment. However, in some cases such an aid cannot be used. The most obvious reason is if the patient has no ear canals, because there is nowhere to put the ear mould. A more common reason is in patients with chronic ear infections that drain continuously, or that start to drain when the ear canal is obstructed with an air conduction hearing aid mould. In these patients a direct bone conduction hearing device can be an excellent solution: a titanium implant is placed in the bone behind the external ear and allowed to osseointegrate, and an impedance-matched hearing aid can be attached. There are two such hearing aids on the market: the Baha 3 by Cochlear BAS and the Ponto by Oticon Medical.

Sensorineural hearing loss
Main article: Sensorineural hearing loss
A sensorineural hearing loss is caused by dysfunction of the inner ear (the cochlea), of the nerve that transmits impulses from the cochlea to the hearing centre in the brain, or by damage in the brain itself. The most common cause of sensorineural hearing impairment is damage to the hair cells in the cochlea. As we grow older the hair cells degenerate and lose their function, and our hearing deteriorates. Depending on the definition, it is estimated that more than 50% of the population over the age of 70 has impaired hearing. Impaired hearing is the most common physical handicap in the industrialized world. Another common cause of hair cell damage is noise-induced hearing loss. These types of hearing loss are often most pronounced in the high frequency range, which frequently interferes with speech understanding, as it is in the high frequency range that we find the consonant sounds that matter most, especially in noisy surroundings. Head trauma, ear infections, tumours and ototoxic drugs such as gentamicin are other causes of sensorineural hearing loss. Damaged hair cells cannot be replaced by any surgical procedure, though research into stem cell treatment is ongoing in many institutions; clinical application is, however, likely many years away. Protection from noise exposure is at present the only way to reduce hair cell damage. Conventional air conduction hearing aids are often prescribed for patients with sensorineural hearing loss. The outcome with modern hearing aids is often excellent, but speech understanding can still be a problem in demanding situations. Total or near-total sensorineural deafness can be caused by congenital malformations, head trauma or inner ear infection. In patients with total or near-total deafness, an air conduction aid cannot be used even if the drum and middle ear are normal.
For these patients a cochlear implant can be a treatment option: a thin electrode is placed into the cochlea and stimulated electrically through a small microprocessor under the skin behind the ear.

Mixed hearing loss
Mixed hearing loss is a combination of the two types discussed above. Chronic ear infection (a fairly common diagnosis) can cause a defective eardrum, damage to the middle-ear ossicles, or both. Surgery is often attempted but not always successful. On top of the conductive loss, a sensory component is often added. If the ear is dry and not infected, an air conduction aid can be tried; if the ear is draining, a direct bone conduction hearing aid is often the best solution. If the conductive part of the hearing loss is more than 30–35 dB, an air conduction device can have problems overcoming this gap, and a direct bone conduction aid like the Baha or the Ponto can, in this situation, be a good option.

Before language
Main article: Prelingual deafness
Prelingual deafness is hearing impairment that is sustained before the acquisition of language, which can occur due to a congenital condition or through hearing loss in early infancy. Prelingual deafness impairs an individual's ability to acquire a spoken language. Children born into signing families rarely have delays in language development, but most prelingual hearing impairment is acquired via either disease or trauma rather than genetically inherited, so families with deaf children nearly always lack previous experience with sign language. Cochlear implants allow prelingually deaf children to acquire an oral language with remarkable success if implantation is performed within the first 2–4 years.[18] In children, hearing loss can lead to social isolation for several reasons. First, the child experiences delayed social development that is in large part tied to delayed language acquisition; it is also directly tied to the inability to pick up auditory social cues. This can result in a deaf person becoming generally irritable. A child who uses sign language or identifies with the Deaf sub-culture does not generally experience this isolation, particularly if he or she attends a school for the deaf, but may conversely experience isolation from his or her parents if they do not know sign language.[citation needed] A child who is exclusively or predominantly oral (using speech for communication) can experience social isolation from his or her hearing peers, particularly if no one takes the time to explicitly teach the social skills that other children acquire independently by virtue of having normal hearing.[citation needed] Finally, a child who has a severe impairment and uses some sign language may be rejected by Deaf peers, because of an understandable hesitation to abandon existing verbal and speech-reading skills. Some in the Deaf community can view this as a rejection of their own culture and its mores, and therefore will reject the individual preemptively.[citation needed]

After language
Main article: Post-lingual deafness
Post-lingual deafness is hearing impairment that is sustained after the acquisition of language, which can occur due to disease, trauma, or as a side-effect of a medicine. Typically, hearing loss is gradual and often detected by family and friends of affected individuals long before the patients themselves will acknowledge the disability.[citation needed] Common treatments include hearing aids, cochlear implants and learning lip reading. Post-lingual deafness is far more common than pre-lingual deafness. Those who lose their hearing later in life, such as in late adolescence or adulthood, face their own challenges in living with the adaptations that allow them to live independently. They may have to adapt to using hearing aids or a cochlear implant, develop speech-reading skills, and/or learn sign language. The affected person may need to use a TTY (teletypewriter), an interpreter, or a relay service to communicate over the telephone. Loneliness and depression can arise due to isolation (from the inability to communicate with friends and loved ones) and difficulty in accepting the disability. The challenge is made greater by the need for those around them to adapt to the person's hearing loss. Many relationships can suffer because of the emotional conflicts that arise from general miscommunication between family members. Generally, it is not only the person with the hearing disability who feels isolated, but also others around them, who feel they are not being "heard" or paid attention to, especially when the hearing loss has been gradual. Family members may then feel as if their partner with hearing loss does not care about them enough to make changes that would reduce the disability and make communication easier.
Unilateral and bilateral
People with unilateral hearing loss or single-sided deafness (SSD) have difficulty in:
- hearing conversation on their impaired side
- localizing sound
- understanding speech in the presence of background noise
In quiet conditions, speech discrimination is approximately the same for normal hearing and those with unilateral deafness; however, in noisy environments speech discrimination varies individually and ranges from mild to severe. A similar effect can result from King-Kopetzky syndrome (also known as auditory disability with normal hearing, or obscure auditory dysfunction), which is characterized by an inability to filter out background noise in noisy environments despite normal performance on traditional hearing tests. See also: "cocktail party effect", House Ear Institute's Hearing In Noise Test. One reason for the hearing problems these patients often experience is the head shadow effect. Newborn children with no hearing on one side but one normal ear can still have problems.[19] Speech development can be delayed, and difficulty concentrating in school is common; more children with unilateral hearing loss have to repeat classes than their peers, and taking part in social activities can be a problem. Early aiding is therefore of utmost importance.
Screening
The American Academy of Pediatrics advises that children should have their hearing tested several times throughout their schooling:[6]
- When they enter school
- At ages 6, 8, and 10
- At least once during middle school
- At least once during high school
There is not enough evidence to determine the utility of screening in adults over 50 years old who do not have any symptoms.[20]
Prevention
It is estimated that half of cases of hearing impairment and deafness are preventable.[2] A number of preventive strategies are effective, including immunization against rubella to reduce congenital infections, immunization against H. influenzae and S. pneumoniae to reduce cases of otitis media, and avoiding or protecting against excessive noise exposure.[2] Education on the perils of hazardous noise exposure increases the use of hearing protectors.[21]
Management
Illustration of a cochlear implant
There are a number of devices that can improve hearing in those who are hearing impaired or deaf, or allow people with these conditions to better manage in society. Hearing aids, which amplify the incoming sound, improve hearing ability, but nothing can restore normal hearing. Cochlear implants artificially stimulate the cochlear nerve by providing an electric impulse substitute for the firing of hair cells. Cochlear implants are not only expensive but require sophisticated programming, in conjunction with training, to be effective. Cochlear implant recipients may be at higher risk for meningitis.[22] People who have hearing impairments, especially those who develop a hearing problem in childhood or old age, may need support and technical adaptations as part of the rehabilitation process. Recent research shows variations in efficacy, but some studies[23] show that if implanted at a very young age, some profoundly impaired children can acquire effective hearing and speech, particularly if supported by appropriate rehabilitation.

Assistive devices
Many hearing impaired individuals use assistive devices in their daily lives:
- Individuals can communicate by telephone using a telecommunications device for the deaf (TDD). These devices look like typewriters or word processors and transmit typed text over regular telephone lines. Other names in common use are textphone and minicom.
- There are several newer telecommunications relay service technologies, including IP Relay and captioned telephone technologies. A hearing-impaired person can communicate over the phone with a hearing person via a human translator. Wireless, Internet and mobile phone/SMS text messaging are beginning to take over the role of the TDD.
- Real-time text technologies involve streaming text that is continuously transmitted as it is typed or otherwise composed, allowing conversational use of text.
- Software programs are now available that automatically generate closed captioning of conversations, for example discussions in conference rooms, classroom lectures, and religious services. One such product is Auditory Sciences' Interact-AS suite.[24]
- Instant messaging software. AOL Instant Messenger provides a real-time text feature called Real-Time IM.[25][26]
- Videophones and similar video technologies can be used for distance communication using sign language. Video conferencing technologies permit signed conversations, as well as allowing a sign language–English interpreter to voice and sign conversations between a hearing-impaired person and that person's hearing party, negating the use of a TTY device or computer keyboard.
- Video relay service and video remote interpreting (VRI) services also use a third-party telecommunication service to allow a deaf or hard-of-hearing person to communicate quickly and conveniently with a hearing person through a sign language interpreter.
- Phone captioning is a service in which a hearing person's speech is captioned by a third party, enabling a hearing-impaired person to conduct a conversation with a hearing person over the phone.[27]
- For mobile phones, software apps are available on some carriers/models to provide TDD/textphone functionality for two-way communication.
- Hearing dogs are a specific type of assistance dog selected and trained to assist the deaf and hearing impaired by alerting their handler to important sounds, such as doorbells, smoke alarms, ringing telephones, or alarm clocks.
- Other assistive devices include those that use flashing lights to signal events such as a ringing telephone, a doorbell, or a fire alarm.

The advent of the Internet's World Wide Web and closed captioning has given the hearing impaired unprecedented access to information.
Electronic mail and online chat have reduced the need for deaf and hard-of-hearing people to use a third-party Telecommunications Relay Service to communicate with the hearing and other hearing impaired people.
Resources and interventions
Many different assistive technologies, such as hearing aids, are available to those who are hearing impaired. People with cochlear implants, hearing aids, or neither of these devices can also use additional communication devices to reduce the interference of background sounds, or to mediate the problems of distance from the sound source and the poor sound quality caused by reverberation and the acoustic properties of walls, floors and hard furniture.

Three types of wireless devices exist, along with hard-wired devices. A wireless device used by people who use their residual hearing has two main components. One component sends the sound out to the listener but is not directly connected to the listener with the hearing loss. The second component, the receiver, detects the sound and sends it to the ear of the person with the hearing loss. The three types of wireless devices are the FM system, the audio induction loop and the infrared system, and each has advantages for particular uses. The FM system can easily operate in many environments on battery power; it is thus mobile and does not usually require a sound expert to work properly. The listener with the hearing loss carries a receiver and an earpiece. Another wireless system is the audio induction loop, which frees the listener with hearing loss from wearing a receiver, provided that the listener has a hearing aid or cochlear implant processor with an accessory called a "telecoil". If the listener does not have a telecoil, then he or she must carry a receiver with an earpiece. A third kind of wireless device is the infrared (IR) system, which also requires a receiver to be worn by the listener. Usually the emitter for the IR device, that is, the component that sends out the signal, uses an AC adaptor. The advantage of the IR wireless system is that people in adjoining rooms cannot listen in on conversations, making it useful for situations where privacy and confidentiality are required. Another way to achieve confidentiality is to use a hardwired amplifier, which sends out no signal beyond the earpiece that is plugged directly into it; the amplifier of the hardwired device also has a microphone inside it or plugged into it.

Inside the classroom, children with hearing impairments may also benefit from interventions. These include providing favorable seating for the child, for example having the student sit as close to the teacher as possible so that they can hear the teacher, or read the teacher's lips, more easily. When lecturing, teachers should try to look at the student as much as possible and limit unnecessary noise in the classroom; if a student has a hearing aid, they are likely to hear a lot of unwanted noise. Pairing hearing-impaired students with hearing students is a common technique, allowing the non-hearing student to ask the hearing student questions about concepts they have not understood. When teaching students with hearing impairments, overheads are commonly used, allowing the teacher to write while maintaining visual focus on the hearing-impaired student. For students who are completely deaf, one of the most common interventions is having the child communicate with others through an interpreter using sign language.[28]
Epidemiology
Disability-adjusted life years for hearing loss (adult onset) per 100,000 inhabitants in 2004. (Map legend, per 100,000: no data; <250; 250–295; 295–340; 340–385; 385–430; 430–475; 475–520; 520–565; 565–610; 610–655; 655–700; >700.)
Globally, hearing loss affects about 10% of the population to some degree.[5] It caused moderate to severe disability in 124.2 million people as of 2004, 107.9 million of whom are in low- and middle-income countries.[29] Of these, 65 million acquired the condition during childhood.[2] At birth, about 3 per 1000 children in developed countries and more than 6 per 1000 in developing countries have hearing problems.[2]
Society and culture

Deaf culture
Main article: Deaf culture
Jack Gannon, a professor at Gallaudet University, said this about deaf culture: “Deaf culture is a set of learned behaviors and perceptions that shape the values and norms of deaf people based on their shared or common experiences.” Some doctors believe that being deaf makes a person more social. Dr. Bill Vicars, of ASL University, shared his experiences as a deaf person: “[deaf people] tend to congregate around the kitchen table rather than the living room sofa… our goodbyes take nearly forever, and our hellos often consist of serious hugs. When two of us meet for the first time we tend to exchange detailed biographies.”[30] Deaf culture is not about contemplating what deaf people cannot do and how to fix their problems; that is called a “pathological view of the deaf”. Instead, deaf people celebrate what they can do. There is a strong sense of unity between deaf people, who share the experience of coming through a similar struggle, and this creates a bond even between deaf strangers. Dr. Vicars expresses the power of this bond when stating, “if given the chance to become hearing most [deaf people] would choose to remain deaf.”[31]
There has been considerable controversy within the culturally Deaf community over cochlear implants. For the most part, there is little objection to those who lost their hearing later in life, or to culturally Deaf adults voluntarily choosing to be fitted with a cochlear implant. Many in the Deaf community, however, strongly object to a deaf child being fitted with a cochlear implant (often on the advice of an audiologist, since new parents may not have sufficient information on raising deaf children) and placed in an oral-only program that emphasizes the ability to speak and listen over other forms of communication such as sign language or total communication. Other concerns include loss of Deaf culture and the limitations of hearing restoration. Most parents and doctors tell children with implants not to play sports or take part in activities that can cause injuries to the head, for example soccer, hockey, or basketball. A child with a hearing loss may also prefer to stay away from noisy places, such as rock concerts, football games and airports, as these can cause noise overflow, a type of headache that occurs in many children and adults when they are near loud noises.

Sign language
Main article: Sign language
Sign languages convey meaning by manual communication and body language instead of acoustically conveyed sound patterns. This can involve simultaneously combining hand shapes, orientation and movement of the hands, arms or body, and facial expressions to fluidly express a speaker's thoughts. There is no single "sign language": wherever communities of deaf people exist, sign languages develop. While they use space for grammar in a way that oral languages do not, sign languages exhibit the same linguistic properties and use the same language faculty as oral languages. Hundreds of sign languages are in use around the world and are at the cores of local deaf cultures. Some sign languages have obtained some form of legal recognition, while others have no status at all. Deaf sign languages are not based on the spoken languages of their region, and often have very different syntax, partly but not entirely owing to their ability to use spatial relationships to express aspects of meaning. (See Sign languages' relationships with spoken languages.) Abbé Charles-Michel de l'Épée was the first person to open a deaf school. L'Épée taught French Sign Language to children, and started the spread of many deaf schools across Europe. Thomas Gallaudet traveled to England intending to start a deaf school; his inspiration was a nine-year-old deaf girl who lived next door, and seeing her conquer her struggles made Gallaudet want to teach and to see other children conquer their own disabilities. Gallaudet witnessed a demonstration of deaf teaching skills by Sicard, Massieu, and Clerc, the masters of teaching deaf children at the time. After the demonstration, Gallaudet studied under the French masters and perfected his own teaching skills. Once his training was done, Gallaudet and Clerc traveled to the United States and opened the first American deaf school in Hartford, Connecticut.
American Sign Language (ASL) then evolved, primarily from LSF (French Sign Language) together with other outside influences.[32]

School
Government policies
Texas School for the Deaf
Those who are hearing disabled have access to a free and appropriate public education. If a child qualifies as hearing impaired and receives an individualized education plan (IEP), the IEP team must consider "the child's language and communication needs. The IEP must include opportunities for direct communication with peers and professionals. It must also include the student's academic level, and finally must include the student's full range of needs."[33] The government also distinguishes deafness from hearing loss. The U.S. Department of Education states that deafness is hearing loss so severe that a person cannot process any oral information, even with a hearing-enhancing device, whereas a hearing impairment is hearing loss that affects a person's education but is not included under the term deafness. To qualify for special services, a person must have a hearing loss of more than 20 decibels, and their educational performance must be affected by it.

Inclusion vs. pullout
Alexander Graham Bell with teachers and students of the Scott Circle School for deaf children, Washington, D.C., 1883
There are mixed opinions on the subject between those who live in deaf communities, and those who have deaf family who do not live in deaf communities. Deaf communities are those communities where only sign languages are typically used.
Many parents who have a child with a hearing impairment prefer their child to be in the least restrictive environment of their school. This may be because most children with hearing loss are born to hearing parents, or because of the recent push for inclusion in the public schools. It is commonly misunderstood that least restrictive environment means mainstreaming or inclusion, and sometimes the resources available at the public schools do not match up to the resources at a residential school for the deaf. Many hearing parents choose to have their deaf child educated in the general education classroom as much as possible because they are told that mainstreaming is the least restrictive environment, which is not always the case. However, there are parents living in Deaf communities who feel that the general education classroom is not the least restrictive environment for their child. These parents feel that placing their child in a residential school where all children are deaf may be more appropriate, because the staff tend to be more aware of the needs and struggles of deaf children. Another reason is that in a general education classroom, the student will not be able to communicate with their classmates due to the language barrier; in a residential school where all the children use the same language (whether a school using ASL, Total Communication or Oralism), students can interact normally with other students without having to worry about being criticized. An argument supporting inclusion, on the other hand, is that it exposes the student to people who aren't just like them, preparing them for adult life. Through interacting, children with hearing disabilities can expose themselves to other cultures, which may in the future be beneficial when it comes to finding jobs and living on their own in a society where their disability may put them in the minority.
These are some reasons why a person may or may not want to put their child in an inclusion classroom.[33]

Myths
There are many myths regarding people with hearing loss, including but not limited to:

1. Everyone who is deaf or hard of hearing uses sign language.[34]
   - There are a variety of different sign systems used by hearing-impaired individuals.
   - Individuals who experience hearing loss later in life usually do not know sign language.[35]
   - People who are educated by the oral method or in mainstream settings do not always know sign language.
2. People who cannot hear are not allowed to drive.
   - Deaf people may use special devices to alert them to sirens or other noises, or panoramic mirrors to enable improved visibility.[36]
   - Many countries allow deaf people to drive, although at least 26 countries do not allow deaf citizens to hold a driver's license.[36]
3. All forms of hearing loss can be solved by hearing aids or cochlear implants.
   - While many hearing-impaired individuals do use hearing aids, others may not benefit from them.[34] One reason can be that they have no external ear canals in which to place the moulds; another, that the hearing aids are not powerful enough.
   - For some hearing-impaired individuals who experience distortion of incoming sounds, a cochlear implant may actually worsen the distortion.[34] A bone conduction hearing solution (BAHA), however, will never affect the hearing negatively, since it reroutes the sound through the skull.
4. All deaf/hard of hearing people are experts in Deaf culture.
   - Deaf people may have a variety of different beliefs, experiences, and methods of communication.[35]
   - This may be influenced by the age at which hearing was lost and the individual's personal background.[35]
5. All deaf people want to be hearing.
   - While some individuals with hearing loss want to become hearing, this is not the case for everyone. Some take pride in their deafness or view themselves as part of a minority rather than a disability group.[37]
6. People who can't hear can't use a phone.
   - Teletypewriters, video phones and cell phone text messages are used by deaf people to communicate. A hearing person may use an ordinary telephone and a telecommunications relay service to communicate with a deaf person.
   - Some people with moderate hearing loss may have enough hearing to use amplified telephones, even if they are culturally Deaf and depend primarily on sign language to communicate.
7. Everyone who cannot hear can lip read.[34][35][unreliable source?]
   - Only about 30% of spoken English is visible on the lips.
   - Lip reading requires not only good lighting but also a good understanding of the oral language in question, and may also depend on contextual knowledge about what is being said.[35]
8. Most deaf people have deaf parents.[38]
   - Less than 5% of deaf children in the United States have a deaf parent.
Research

A 2005 study achieved successful regrowth of cochlear cells in guinea pigs.[39] However, the regrowth of cochlear hair cells does not imply the restoration of hearing sensitivity, as the sensory cells may or may not make connections with the neurons that carry signals from hair cells to the brain. A 2008 study has shown that gene therapy targeting Atoh1 can cause hair cell growth and attract neuronal processes in embryonic mice. Some hope that a similar treatment will one day ameliorate hearing loss in humans.[40]
Motor speech disorders

Motor speech disorders are a class of speech disorders that disturb the body's natural ability to speak. These disturbances vary in their etiology based on the integrity and integration of cognitive, neuromuscular, and musculoskeletal activities. Speaking is an act dependent on thought and on the timed execution of airflow and oral motor/oral placement of the lips, tongue, and jaw; it can be disrupted by weakness in the oral musculature (dysarthria) or by an inability to execute the motor movements needed for specific speech sound production (apraxia). Such deficits can be related to pathology of the nervous system (central and/or peripheral systems involved in motor planning) that affects the timing of respiration, phonation, prosody, and articulation, in isolation or in conjunction.
Dysarthria
Main article: Dysarthria
Dysarthria is the reduced ability to produce the volitional movements needed for speech production, as the result of weakness/paresis and/or paralysis of the musculature of the oral mechanism needed for respiration, phonation, resonance, articulation, and/or prosody.
Apraxia
Main article: Apraxia of speech
Apraxia is the inability to motor plan volitional movements for speech production in the absence of muscular weakness.
Anomic aphasia
Main article: Aphasia

Classification and external resources: ICD-9 784.3, 784.69; MeSH D000849

Figure: Diffusion tensor imaging of the brain showing the right and left arcuate fasciculus (Raf and Laf), the right and left superior longitudinal fasciculus (Rslf and Lslf), and the tapetum of the corpus callosum (Ta). Damage to the Laf is known to cause anomic aphasia.
Anomic aphasia, also known as dysnomia, nominal aphasia, and amnesic aphasia, is a severe problem with recalling words or names.
Overview

Anomic aphasia (anomia) is a type of aphasia characterized by problems recalling words or names. Subjects often use circumlocutions (speaking in a roundabout way) to express a word whose name they cannot recall. Sometimes the subject can recall the name when given clues. In addition, patients are able to speak with correct grammar; the main problem is finding the appropriate word to identify an object or person. Sometimes subjects may know what to do with an object but still not be able to name it. For example, if a subject is shown an orange and asked what it is called, the subject may be well aware that the object can be peeled and eaten, and may even be able to demonstrate this by actions or verbal responses; however, they cannot recall that the object is called an "orange." Sometimes, when a person with the condition is fully bilingual, they may confuse the languages they speak when trying to find the right word.
Types of Anomic Aphasia

Color anomia, where the patient can distinguish between colors but cannot identify them by name or name the color of an object.[1] They can separate colors into categories, but they cannot name them.
Causes

Anomia is caused by damage to various parts of the parietal lobe or the temporal lobe of the brain. This damage can result from brain trauma, such as an accident, stroke, or tumor. The phenomenon can be quite complex, and usually involves a breakdown in one or more pathways between various regions of the brain. Although the main causes are not specifically known, many researchers have found factors contributing to anomic aphasia. It is known that people with damage to the left hemisphere of the brain are more likely to have anomic aphasia. Broca's area, the speech production center of the brain, has been linked to problems with speech execution and, with the use of functional magnetic resonance imaging (fMRI), to problems with speech repetition, which is commonly used to study anomic patients.[2] Other experts believe that damage to Wernicke's area, the speech comprehension area of the brain, is connected to anomia because the patients cannot comprehend the words they are hearing.[3] Although many experts have believed that damage to Broca's area or Wernicke's area is the main cause of anomia, current studies have shown that damage in the left parietal lobe is the epicenter of anomic aphasia.[4] One study was conducted using a word repetition test as well as magnetic resonance imaging (MRI) in order to see the highest level of activity as well as where the lesions are in the brain tissue.[4] Fridriksson et al. saw that damage to neither Broca's area nor Wernicke's area was the sole source of anomia in the subjects. Therefore, the original model, which held that the damage causing anomia occurred on the surface of the brain in the grey matter, was debunked: the damage was found deeper in the brain, in the white matter of the left hemisphere.[4] More specifically, the damage involved a part of the nerve tract called the arcuate fasciculus, whose mechanism of action is unknown but which is known to connect the posterior (back) of the brain to the anterior (front) and vice versa.[5] New data have shown that although the arcuate fasciculus's main function is not to connect Wernicke's area and Broca's area, damage to the tract does create speech problems, because the speech comprehension and speech production areas are connected by this tract.[4] Some studies have found that in right-handed people the language center is in the left hemisphere 99% of the time; therefore, anomic aphasia almost exclusively occurs with damage to the left hemisphere. However, in left-handed people the language center is in the left hemisphere only about 60% of the time; thus, anomic aphasia can occur with damage to the right hemisphere in left-handed people. The specific cause of anomia therefore remains unknown, but research is bringing the answer into focus.
Diagnosis

The best way to see whether anomic aphasia has developed is to use verbal as well as imaging tests. The combination of the two tests seems to be most effective; either test done alone can give false positives or false negatives. For example, the verbal test is used to see if there is a speech disorder and whether the problem is in speech production or comprehension. Patients with Alzheimer's disease have speech problems that are linked to dementia or progressive aphasias, which can include anomia.[6][7] The imaging test, mostly done with MRI, is ideal for lesion mapping or viewing deterioration in the brain. However, imaging cannot diagnose anomia on its own, because the lesions may not be located deep enough to damage the white matter or the arcuate fasciculus; moreover, anomic aphasia is the aphasia most difficult to associate with a specific lesion location in the brain. Therefore, the combination of speech tests and imaging tests has the highest sensitivity and specificity.[8] It is also important to do a hearing test, in case the patient cannot hear the words or sentences needed in the speech repetition test.[9] In the speech tests, the person is asked to repeat a sentence with common words; if the person cannot identify a word but can describe it, the person is highly likely to have anomic aphasia. To be completely sure, the test is given while the person is in an MRI scanner, and the exact locations of the lesions and of the areas activated by speech are pinpointed.[4] Although no simpler or cheaper option is available as of now, lesion mapping and speech repetition tests are the main ways of diagnosing anomic aphasia.
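The effect of combining two diagnostic tests on false positives and false negatives can be sketched with the standard serial/parallel rules for combining two conditionally independent tests. This is a minimal illustration of the general idea, not values from the aphasia literature: the sensitivities and specificities below are placeholder assumptions.

```python
def serial_combination(sens_a, spec_a, sens_b, spec_b):
    # Serial rule: classify as positive only if BOTH tests are positive.
    # Specificity rises (fewer false positives), sensitivity falls.
    sens = sens_a * sens_b
    spec = spec_a + (1 - spec_a) * spec_b
    return sens, spec

def parallel_combination(sens_a, spec_a, sens_b, spec_b):
    # Parallel rule: classify as positive if EITHER test is positive.
    # Sensitivity rises (fewer false negatives), specificity falls.
    sens = sens_a + (1 - sens_a) * sens_b
    spec = spec_a * spec_b
    return sens, spec

# Placeholder figures for a speech test (a) and an imaging test (b):
sens, spec = serial_combination(0.80, 0.90, 0.70, 0.85)
# sensitivity falls to 0.56, specificity rises to 0.985
```

The rules assume the two tests err independently given the patient's true status; in practice a verbal test and an MRI probe different failure modes, which is one reason their combination outperforms either alone.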
Management

Unfortunately, no method is available to completely cure anomic aphasia. However, there are treatments that help improve word-finding skills. Although a person with anomia may find it difficult to recall many types of words, such as common nouns, proper nouns, and verbs, many studies have shown that treatment for object words, or nouns, has shown promise in rehabilitation research.[9] The treatment includes visual aids, such as pictures, and the patient is asked to identify the object or activity. If that is not possible, the patient is shown the same picture surrounded by words associated with the object or activity.[10][11] Throughout the process, positive encouragement is provided. The treatment shows an increase in word-finding during treatment; however, word identification decreased two weeks after the rehabilitation period.[9] This shows that rehabilitation must be continuous for word-finding abilities to improve from the baseline. The studies also show that verbs are harder to recall or repeat, even with rehabilitation.[9][12]
Life with Anomic Aphasia

This disorder can be extremely frustrating both for people with the disorder and for those around them. Although the person with anomic aphasia may know a specific word, they may not be able to recall it, which can be very difficult for everyone in the conversation. It is important to be patient and to work with the person so that he or she gains confidence with his or her speech; positive reinforcement is helpful.[9] Although there are not many literary accounts of anomic aphasia specifically, many books have been written about life with aphasia. One of the most notable is The Man Who Lost His Language by Sheila Hale. It is the story of Sheila Hale's husband, John Hale, a prestigious scholar who suffered a stroke and lost the ability to form speech. Hale explains the symptoms and mechanics behind aphasia and speech formation, and also describes the emotional components of dealing with a person with aphasia and how to be patient with their speech and communication.
List of voice disorders

Voice disorders[1] are medical conditions affecting the production of speech. These include:

Chorditis
Vocal fold nodules
Vocal fold cysts
Vocal cord paresis
Reinke's edema
Spasmodic dysphonia
Foreign accent syndrome
Bogart–Bacall syndrome
Laryngeal papillomatosis
Puberphonia
Laryngitis
Stroke
For other uses, see Stroke (disambiguation).
Classification and external resources: ICD-10 I61–I64; ICD-9 434.91; OMIM 601367; DiseasesDB 2247; MedlinePlus 000726; eMedicine neuro/9, emerg/558, emerg/557, pmr/187; MeSH D020521

Figure: CT scan slice of the brain showing a right-hemispheric ischemic stroke (left side of image).
A stroke, or cerebrovascular accident (CVA), is the rapid loss of brain function due to disturbance in the blood supply to the brain. This can be due to ischemia (lack of blood flow) caused by blockage (thrombosis, arterial embolism), or a hemorrhage.[1] As a result, the affected
area of the brain cannot function, which might result in an inability to move one or more limbs on one side of the body, inability to understand or formulate speech, or an inability to see one side of the visual field.[2] A stroke is a medical emergency and can cause permanent neurological damage and death. Risk factors for stroke include old age, high blood pressure, previous stroke or transient ischemic attack (TIA), diabetes, high cholesterol, tobacco smoking and atrial fibrillation.[2] High blood pressure is the most important modifiable risk factor of stroke.[2] Stroke is the second leading cause of death worldwide.[3] An ischemic stroke is occasionally treated in a hospital with thrombolysis (also known as a "clot buster"), and some hemorrhagic strokes benefit from neurosurgery. Treatment to recover any lost function is termed stroke rehabilitation, ideally in a stroke unit and involving health professions such as speech and language therapy, physical therapy and occupational therapy. Prevention of recurrence may involve the administration of antiplatelet drugs such as aspirin and dipyridamole, control and reduction of high blood pressure, and the use of statins. Selected patients may benefit from carotid endarterectomy and the use of anticoagulants.[2]
Classification
A slice of brain from the autopsy of a person who suffered an acute middle cerebral artery (MCA) stroke
Strokes can be classified into two major categories: ischemic and hemorrhagic.[4] Ischemic strokes are those that are caused by interruption of the blood supply, while hemorrhagic strokes are the ones which result from rupture of a blood vessel or an abnormal vascular structure. About 87% of strokes are caused by ischemia, and the remainder by hemorrhage. Some hemorrhages develop inside areas of ischemia ("hemorrhagic transformation"). It is unknown how many hemorrhages actually start as ischemic stroke.[2]

Ischemic
Main articles: Cerebral infarction and Brain ischemia
In an ischemic stroke, blood supply to part of the brain is decreased, leading to dysfunction of the brain tissue in that area. There are four reasons why this might happen:

1. Thrombosis (obstruction of a blood vessel by a blood clot forming locally)
2. Embolism (obstruction due to an embolus from elsewhere in the body, see below)[2]
3. Systemic hypoperfusion (general decrease in blood supply, e.g., in shock)[5]
4. Venous thrombosis[6]
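As a structural summary, the four mechanisms above can be represented as an enumeration. This is a minimal sketch for illustration only; the descriptions are shortened paraphrases of the list in the text.

```python
from enum import Enum

class IschemicMechanism(Enum):
    """The four causes of ischemic stroke listed above."""
    THROMBOSIS = "blood clot forming locally and obstructing a vessel"
    EMBOLISM = "embolus arriving from elsewhere in the body"
    SYSTEMIC_HYPOPERFUSION = "general decrease in blood supply, e.g. shock"
    VENOUS_THROMBOSIS = "clot in the cerebral venous system"

# Example: look up a mechanism by name.
mechanism = IschemicMechanism["EMBOLISM"]
```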
Stroke without an obvious explanation is termed "cryptogenic" (of unknown origin); this constitutes 30–40% of all ischemic strokes.[2][7] There are various classification systems for acute ischemic stroke. The Oxford Community Stroke Project classification (OCSP, also known as the Bamford or Oxford classification) relies primarily on the initial symptoms; based on the extent of the symptoms, the stroke episode is classified as total anterior circulation infarct (TACI), partial anterior circulation infarct (PACI), lacunar infarct (LACI) or posterior circulation infarct (POCI). These four entities predict the extent of the stroke, the area of the brain affected, the underlying cause, and the prognosis.[8][9] The TOAST (Trial of Org 10172 in Acute Stroke Treatment) classification is based on clinical symptoms as well as results of further investigations; on this basis, a stroke is classified as being due to (1) thrombosis or embolism due to atherosclerosis of a large artery, (2) embolism of cardiac origin, (3) occlusion of a small blood vessel, (4) other determined cause, or (5) undetermined cause (two possible causes, no cause identified, or incomplete investigation).[2][10]

Hemorrhagic
Main articles: Intracranial hemorrhage and Intracerebral hemorrhage
An intraparenchymal bleed (bottom arrow) with surrounding edema (top arrow)
Intracranial hemorrhage is the accumulation of blood anywhere within the skull vault. A distinction is made between intra-axial hemorrhage (blood inside the brain) and extra-axial
hemorrhage (blood inside the skull but outside the brain). Intra-axial hemorrhage is due to intraparenchymal hemorrhage or intraventricular hemorrhage (blood in the ventricular system). The main types of extra-axial hemorrhage are epidural hematoma (bleeding between the dura mater and the skull), subdural hematoma (in the subdural space) and subarachnoid hemorrhage (between the arachnoid mater and pia mater). Most of the hemorrhagic stroke syndromes have specific symptoms (e.g., headache, previous head injury).
Signs and symptoms

Stroke symptoms typically start suddenly, over seconds to minutes, and in most cases do not progress further. The symptoms depend on the area of the brain affected. The more extensive the area of brain affected, the more functions that are likely to be lost. Some forms of stroke can cause additional symptoms. For example, in intracranial hemorrhage, the affected area may compress other structures. Most forms of stroke are not associated with headache, apart from subarachnoid hemorrhage and cerebral venous thrombosis and occasionally intracerebral hemorrhage.

Early recognition
Various systems have been proposed to increase recognition of stroke. Different findings are able to predict the presence or absence of stroke to different degrees. Sudden-onset face weakness, arm drift (i.e., if a person, when asked to raise both arms, involuntarily lets one arm drift downward) and abnormal speech are the findings most likely to lead to the correct identification of a case of stroke (increasing the likelihood by 5.5 when at least one of these is present). Similarly, when all three of these are absent, the likelihood of stroke is significantly decreased (likelihood ratio of 0.39).[11] While these findings are not perfect for diagnosing stroke, the fact that they can be evaluated relatively rapidly and easily makes them very valuable in the acute setting. Proposed systems include FAST (face, arm, speech, and time),[12] as advocated by the Department of Health (United Kingdom) and The Stroke Association, the American Stroke Association (www.strokeassociation.org), the National Stroke Association (US, www.stroke.org), the Los Angeles Prehospital Stroke Screen (LAPSS)[13] and the Cincinnati Prehospital Stroke Scale (CPSS).[14] Use of these scales is recommended by professional guidelines.[15] For people referred to the emergency room, early recognition of stroke is deemed important, as this can expedite diagnostic tests and treatments. A scoring system called ROSIER (recognition of stroke in the emergency room) is recommended for this purpose; it is based on features from the medical history and physical examination.[15][16]

Subtypes
If the area of the brain affected contains one of the three prominent central nervous system pathways (the spinothalamic tract, corticospinal tract, and dorsal column/medial lemniscus), symptoms may include:
hemiplegia and muscle weakness of the face
numbness
reduction in sensory or vibratory sensation
initial flaccidity (hypotonicity), replaced by spasticity (hypertonicity), hyperreflexia, and obligatory synergies[17]
In most cases, the symptoms affect only one side of the body (unilateral). Depending on the part of the brain affected, the defect in the brain is usually on the opposite side of the body. However, since these pathways also travel in the spinal cord and any lesion there can also produce these symptoms, the presence of any one of these symptoms does not necessarily indicate a stroke. In addition to the above CNS pathways, the brainstem gives rise to most of the twelve cranial nerves. A stroke affecting the brainstem therefore can produce symptoms relating to deficits in these cranial nerves:
altered smell, taste, hearing, or vision (total or partial)
drooping of eyelid (ptosis) and weakness of ocular muscles
decreased reflexes: gag, swallow, pupil reactivity to light
decreased sensation and muscle weakness of the face
balance problems and nystagmus
altered breathing and heart rate
weakness in sternocleidomastoid muscle with inability to turn head to one side
weakness in tongue (inability to protrude and/or move from side to side)
If the cerebral cortex is involved, the CNS pathways can again be affected, but the stroke can also produce the following symptoms:
aphasia (difficulty with verbal expression, auditory comprehension, reading and/or writing; Broca's or Wernicke's area typically involved)
dysarthria (motor speech disorder resulting from neurological injury)
apraxia (altered voluntary movements)
visual field defect
memory deficits (involvement of temporal lobe)
hemineglect (involvement of parietal lobe)
disorganized thinking, confusion, hypersexual gestures (with involvement of frontal lobe)
lack of insight into his or her, usually stroke-related, disability
If the cerebellum is involved, the patient may have the following:
altered walking gait
altered movement coordination
vertigo and/or disequilibrium
Associated symptoms
Loss of consciousness, headache, and vomiting usually occur more often in hemorrhagic stroke than in thrombosis, because of the increased intracranial pressure from the leaking blood compressing the brain. If symptoms are maximal at onset, the cause is more likely to be a subarachnoid hemorrhage or an embolic stroke.
Causes

Thrombotic stroke
In thrombotic stroke a thrombus[18] (blood clot) usually forms around atherosclerotic plaques. Since blockage of the artery is gradual, onset of symptomatic thrombotic strokes is slower. A thrombus itself (even if non-occluding) can lead to an embolic stroke (see below) if the thrombus breaks off, at which point it is called an "embolus." Two types of thrombosis can cause stroke:
Large vessel disease involves the common and internal carotid arteries, the vertebral arteries, and the Circle of Willis.[19] Diseases that may form thrombi in the large vessels include (in descending incidence): atherosclerosis, vasoconstriction (tightening of the artery), aortic, carotid or vertebral artery dissection, various inflammatory diseases of the blood vessel wall (Takayasu arteritis, giant cell arteritis, vasculitis), noninflammatory vasculopathy, Moyamoya disease and fibromuscular dysplasia.

Small vessel disease involves the smaller arteries inside the brain: branches of the circle of Willis, the middle cerebral artery stem, and arteries arising from the distal vertebral and basilar artery.[20] Diseases that may form thrombi in the small vessels include (in descending incidence): lipohyalinosis (build-up of fatty hyaline matter in the blood vessel as a result of high blood pressure and aging), fibrinoid degeneration (strokes involving these vessels are known as lacunar infarcts), and microatheroma (small atherosclerotic plaques).[21]
Sickle-cell anemia, which can cause blood cells to clump up and block blood vessels, can also lead to stroke. Stroke is the second leading killer of people under 20 who suffer from sickle-cell anemia.[22]

Embolic stroke
An embolic stroke refers to the blockage of an artery by an arterial embolus, a travelling particle or debris in the arterial bloodstream originating from elsewhere. An embolus is most frequently a thrombus, but it can also be a number of other substances including fat (e.g., from bone marrow in a broken bone), air, cancer cells or clumps of bacteria (usually from infectious endocarditis).[citation needed] Because an embolus arises from elsewhere, local therapy solves the problem only temporarily. Thus, the source of the embolus must be identified. Because the embolic blockage is sudden in
onset, symptoms usually are maximal at the start. Also, symptoms may be transient as the embolus is partially resorbed and moves to a different location or dissipates altogether. Emboli most commonly arise from the heart (especially in atrial fibrillation) but may originate from elsewhere in the arterial tree. In paradoxical embolism, a deep vein thrombosis embolizes through an atrial or ventricular septal defect in the heart into the brain.[citation needed] Cardiac causes can be divided into high-risk and low-risk sources:[23]
High risk: atrial fibrillation and paroxysmal atrial fibrillation, rheumatic disease of the mitral or aortic valve, artificial heart valves, known cardiac thrombus of the atrium or ventricle, sick sinus syndrome, sustained atrial flutter, recent myocardial infarction, chronic myocardial infarction together with ejection fraction <28 percent, symptomatic congestive heart failure with ejection fraction <30 percent, dilated cardiomyopathy, Libman–Sacks endocarditis, marantic endocarditis, infective endocarditis, papillary fibroelastoma, left atrial myxoma, and coronary artery bypass graft (CABG) surgery.

Low risk/potential: calcification of the annulus (ring) of the mitral valve, patent foramen ovale (PFO), atrial septal aneurysm, atrial septal aneurysm with patent foramen ovale, left ventricular aneurysm without thrombus, isolated left atrial "smoke" on echocardiography (no mitral stenosis or atrial fibrillation), and complex atheroma in the ascending aorta or proximal arch.
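The stratification above amounts to a lookup from embolic source to risk tier. A minimal sketch, with the condition names shortened from the text for illustration (the strings and helper function are assumptions, not a clinical tool):

```python
# Hypothetical mapping of cardiac embolic sources to the risk tiers above.
CARDIAC_SOURCE_RISK = {
    "atrial fibrillation": "high",
    "rheumatic mitral or aortic valve disease": "high",
    "artificial heart valve": "high",
    "cardiac thrombus of the atrium or ventricle": "high",
    "sick sinus syndrome": "high",
    "recent myocardial infarction": "high",
    "dilated cardiomyopathy": "high",
    "infective endocarditis": "high",
    "mitral annular calcification": "low",
    "patent foramen ovale": "low",
    "atrial septal aneurysm": "low",
    "left ventricular aneurysm without thrombus": "low",
    "complex atheroma in the ascending aorta": "low",
}

def embolic_risk(source):
    """Return 'high', 'low', or 'unknown' for a cardiac embolic source."""
    return CARDIAC_SOURCE_RISK.get(source, "unknown")
```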
Systemic hypoperfusion
Systemic hypoperfusion is the reduction of blood flow to all parts of the body. It is most commonly due to cardiac pump failure from cardiac arrest or arrhythmias, or from reduced cardiac output as a result of myocardial infarction, pulmonary embolism, pericardial effusion, or bleeding.[citation needed] Hypoxemia (low blood oxygen content) may precipitate the hypoperfusion. Because the reduction in blood flow is global, all parts of the brain may be affected, especially "watershed" areas: border zone regions supplied by the major cerebral arteries. A watershed stroke refers to the condition when blood supply to these areas is compromised. Blood flow to these areas does not necessarily stop, but instead it may lessen to the point where brain damage can occur.

Venous thrombosis
Cerebral venous sinus thrombosis leads to stroke due to locally increased venous pressure, which exceeds the pressure generated by the arteries. Infarcts are more likely to undergo hemorrhagic transformation (leaking of blood into the damaged area) than other types of ischemic stroke.[6]

Intracerebral hemorrhage
Intracerebral hemorrhage generally occurs in small arteries or arterioles and is commonly due to hypertension,[24] intracranial vascular malformations (including cavernous angiomas or arteriovenous malformations), cerebral amyloid angiopathy, or infarcts into which secondary haemorrhage has occurred.[2] Other potential causes are trauma, bleeding disorders, amyloid angiopathy, and illicit drug use (e.g., amphetamines or cocaine). The hematoma enlarges until pressure from surrounding tissue limits its growth, or until it decompresses by emptying into the ventricular system, the CSF, or the pial surface. A third of intracerebral bleeds are into the brain's ventricles. ICH has a mortality rate of 44 percent after 30 days, higher than ischemic stroke or subarachnoid hemorrhage (which technically may also be classified as a type of stroke[2]).

Silent stroke
A silent stroke is a stroke that does not have any outward symptoms, and the patients are typically unaware they have suffered a stroke. Despite not causing identifiable symptoms, a silent stroke still causes damage to the brain, and places the patient at increased risk for both transient ischemic attack and major stroke in the future. Conversely, those who have suffered a major stroke are also at risk of having silent strokes.[25] In a broad study in 1998, more than 11 million people were estimated to have experienced a stroke in the United States. Approximately 770,000 of these strokes were symptomatic and 11 million were first-ever silent MRI infarcts or hemorrhages. Silent strokes typically cause lesions which are detected via the use of neuroimaging such as MRI. Silent strokes are estimated to occur at five times the rate of symptomatic strokes.[26][27] The risk of silent stroke increases with age, but may also affect younger adults and children, especially those with acute anemia.[26][28]
Pathophysiology

Ischemic
Micrograph showing cortical pseudolaminar necrosis, a finding seen in strokes on medical imaging and at autopsy. H&E-LFB stain.
Micrograph of the superficial cerebral cortex showing neuron loss and reactive astrocytes in a person who suffered a stroke. H&E-LFB stain.
Ischemic stroke occurs because of a loss of blood supply to part of the brain, initiating the ischemic cascade.[29] Brain tissue ceases to function if deprived of oxygen for more than 60 to 90 seconds, and after approximately three hours will suffer irreversible injury, possibly leading to death of the tissue, i.e., infarction. (This is why fibrinolytics such as alteplase are given only within three hours of the onset of the stroke.) Atherosclerosis may disrupt the blood supply by narrowing the lumen of blood vessels, leading to a reduction of blood flow, by causing the formation of blood clots within the vessel, or by releasing showers of small emboli through the disintegration of atherosclerotic plaques.[30] Embolic infarction occurs when emboli formed elsewhere in the circulatory system, typically in the heart as a consequence of atrial fibrillation, or in the carotid arteries, break off, enter the cerebral circulation, then lodge in and occlude brain blood vessels. Since blood vessels in the brain are now occluded, the brain becomes low in energy, and thus resorts to using anaerobic metabolism within the region of brain tissue affected by ischemia. Unfortunately, this kind of metabolism produces less adenosine triphosphate (ATP) and releases a by-product called lactic acid. Lactic acid is an irritant that can destroy cells, since it is an acid and disrupts the normal acid-base balance in the brain. The surrounding region of reduced blood flow, where tissue is at risk but still potentially salvageable, is referred to as the "ischemic penumbra".[31]

Then, as oxygen or glucose becomes depleted in ischemic brain tissue, the production of high-energy phosphate compounds such as adenosine triphosphate (ATP) fails, leading to failure of energy-dependent processes (such as ion pumping) necessary for tissue cell survival. This sets off a series of interrelated events that result in cellular injury and death. A major cause of neuronal injury is the release of the excitatory neurotransmitter glutamate.
The concentration of glutamate outside the cells of the nervous system is normally kept low by so-called uptake carriers, which are powered by the concentration gradients of ions (mainly Na+) across the cell
membrane. However, stroke cuts off the supply of oxygen and glucose that powers the ion pumps maintaining these gradients. As a result, the transmembrane ion gradients run down, and glutamate transporters reverse their direction, releasing glutamate into the extracellular space. Glutamate acts on receptors in nerve cells (especially NMDA receptors), producing an influx of calcium which activates enzymes that digest the cells' proteins, lipids and nuclear material. Calcium influx can also lead to the failure of mitochondria, which can lead further toward energy depletion and may trigger cell death through apoptosis.[citation needed] Ischemia also induces production of oxygen free radicals and other reactive oxygen species. These react with and damage a number of cellular and extracellular elements. Damage to the blood vessel lining, or endothelium, is particularly important. In fact, many antioxidant neuroprotectants such as uric acid and NXY-059 work at the level of the endothelium and not in the brain per se. Free radicals also directly initiate elements of the apoptosis cascade by means of redox signaling.[citation needed] These processes are the same for any type of ischemic tissue and are referred to collectively as the ischemic cascade. However, brain tissue is especially vulnerable to ischemia since it has little respiratory reserve and is completely dependent on aerobic metabolism, unlike most other organs. In addition to injurious effects on brain cells, ischemia and infarction can result in loss of structural integrity of brain tissue and blood vessels, partly through the release of matrix metalloproteases, which are zinc- and calcium-dependent enzymes that break down collagen, hyaluronic acid, and other elements of connective tissue. Other proteases also contribute to this process.
The loss of vascular structural integrity results in a breakdown of the protective blood brain barrier that contributes to cerebral edema, which can cause secondary progression of the brain injury.[citation needed]
Hemorrhagic
Hemorrhagic strokes result in tissue injury by causing compression of tissue from an expanding hematoma or hematomas. This can distort and injure tissue. In addition, the pressure may lead to a loss of blood supply to affected tissue with resulting infarction, and the blood released by brain hemorrhage appears to have direct toxic effects on brain tissue and vasculature.[22][32] Inflammation contributes to the secondary brain injury after hemorrhage.[32]
Diagnosis
A CT showing early signs of a middle cerebral artery stroke with loss of definition of the gyri and the grey-white boundary
Stroke is diagnosed through several techniques: a neurological examination (such as the NIHSS), CT scans (most often without contrast enhancement) or MRI scans, Doppler ultrasound, and arteriography. The diagnosis of stroke itself is clinical, with assistance from the imaging techniques. Imaging techniques also assist in determining the subtypes and cause of stroke. There is as yet no commonly used blood test for the stroke diagnosis itself, though blood tests may be of help in finding out the likely cause of stroke.[33]
Definition
The traditional definition of stroke, devised by the World Health Organization in the 1970s,[34] is a "neurological deficit of cerebrovascular cause that persists beyond 24 hours or is interrupted by death within 24 hours". This definition was intended to reflect the reversibility of tissue damage, with the 24-hour time frame chosen arbitrarily. The 24-hour limit divides stroke from transient ischemic attack, a related syndrome of stroke symptoms that resolve completely within 24 hours.[2] With the availability of treatments that, when given early, can reduce stroke severity, many now prefer alternative concepts, such as brain attack and acute ischemic cerebrovascular syndrome (modeled after heart attack and acute coronary syndrome, respectively), which reflect the urgency of stroke symptoms and the need to act swiftly.[35]
Physical examination
A physical examination, including a medical history of the symptoms and a neurological examination, helps evaluate the location and severity of a stroke. It can yield a standardized score, e.g., on the NIH stroke scale.
Imaging
For diagnosing ischemic stroke in the emergency setting:[36]
CT scan (without contrast enhancement): sensitivity 16%, specificity 96%
MRI scan: sensitivity 83%, specificity 98%
For diagnosing hemorrhagic stroke in the emergency setting:
CT scan (without contrast enhancement): sensitivity 89%, specificity 100%
MRI scan: sensitivity 81%, specificity 100%
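Sensitivity and specificity alone do not say how likely a positive scan is to be correct; that depends on the pre-test probability of stroke. A minimal sketch using Bayes' theorem (the 30% pre-test probability below is an assumed illustration value, not a figure from the text):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Derive positive/negative predictive values from test characteristics
    and a pre-test probability (prevalence), via Bayes' theorem."""
    tp = sensitivity * prevalence              # true positive mass
    fp = (1 - specificity) * (1 - prevalence)  # false positive mass
    fn = (1 - sensitivity) * prevalence        # false negative mass
    tn = specificity * (1 - prevalence)        # true negative mass
    return tp / (tp + fp), tn / (tn + fn)

# Non-contrast CT for ischemic stroke: sensitivity 16%, specificity 96%
ppv, npv = predictive_values(0.16, 0.96, 0.30)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.63, NPV = 0.73
```

On these assumptions, a positive CT is reasonably informative despite the low sensitivity, while a negative CT barely lowers the pre-test probability, consistent with CT being poor at ruling out ischemic stroke.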
For detecting chronic hemorrhages, an MRI scan is more sensitive.[37] For the assessment of stable stroke, the nuclear medicine scans SPECT and PET/CT may be helpful. SPECT documents cerebral blood flow, and PET with the FDG isotope documents the metabolic activity of the neurons.
Underlying cause
12-lead ECG of a patient with a stroke, showing large deeply inverted T-waves. Various ECG changes may occur in people with strokes and other brain disorders.
When a stroke has been diagnosed, various other studies may be performed to determine the underlying cause. With the current treatment and diagnosis options available, it is of particular importance to determine whether there is a peripheral source of emboli. Test selection may vary, since the cause of stroke varies with age, comorbidity and the clinical presentation. Commonly used techniques include:
an ultrasound/Doppler study of the carotid arteries (to detect carotid stenosis) or dissection of the precerebral arteries; an electrocardiogram (ECG) and echocardiogram (to identify arrhythmias and resultant clots in the heart which may spread to the brain vessels through the bloodstream); a Holter monitor study to identify intermittent arrhythmias; an angiogram of the cerebral vasculature (if a bleed is thought to have originated from an aneurysm or arteriovenous malformation); blood tests to determine hypercholesterolemia, bleeding diathesis, and some rarer causes such as homocystinuria.
Prevention
Given the disease burden of strokes, prevention is an important public health concern.[38] Primary prevention is less effective than secondary prevention (as judged by the number needed to treat to prevent one stroke per year).[38] Recent guidelines detail the evidence for primary prevention of stroke.[39] Because stroke may indicate underlying atherosclerosis, it is important to determine the patient's risk for other cardiovascular diseases such as coronary heart disease. Aspirin confers some protection against a first stroke in people who have had a myocardial infarction or who are at high cardiovascular risk.[40][41] In those who have previously had a stroke, medications such as aspirin, clopidogrel and dipyridamole may be given to prevent platelets from aggregating.[40]
Risk factors
The most important modifiable risk factors for stroke are high blood pressure and atrial fibrillation (although the magnitude of this effect is small: the evidence from the Medical Research Council trials is that 833 patients have to be treated for one year to prevent one stroke[42][43]). Other modifiable risk factors include high blood cholesterol levels, diabetes, cigarette smoking[44][45] (active and passive), heavy alcohol consumption[46] and drug use,[47] lack of physical activity, obesity, processed red meat consumption[48] and unhealthy diet.[49] Alcohol use could predispose to ischemic stroke, and to intracerebral and subarachnoid hemorrhage, via multiple mechanisms (for example via hypertension, atrial fibrillation, rebound thrombocytosis and platelet aggregation, and clotting disturbances).[50] The drugs most commonly associated with stroke are cocaine and amphetamines, which can cause hemorrhagic stroke, but also over-the-counter cough and cold drugs containing sympathomimetics.[51][52] No high-quality studies have shown the effectiveness of interventions aimed at weight reduction, promotion of regular exercise, reduction of alcohol consumption, or smoking cessation.[53] Nonetheless, given the large body of circumstantial evidence, best medical management for stroke includes advice on diet, exercise, smoking and alcohol use.[54] Medication or drug therapy is the most common method of stroke prevention; carotid endarterectomy can be a useful surgical method of preventing stroke.
Blood pressure
Hypertension (high blood pressure) accounts for 35-50% of stroke risk.[55] A blood pressure reduction of 10 mmHg systolic or 5 mmHg diastolic reduces the risk of stroke by about 40%.[56] Lowering blood pressure has been conclusively shown to prevent both ischemic and hemorrhagic strokes.[57][58] It is equally important in secondary prevention.[59] Even patients older than 80 years and those with isolated systolic hypertension benefit from antihypertensive therapy.[60][61][62] The available evidence does not show large differences in stroke prevention between antihypertensive drugs; therefore, other factors, such as cost and protection against other forms of cardiovascular disease, should be considered.[63][64]
Atrial fibrillation
Those with atrial fibrillation have a 5% per year risk of stroke, and this risk is higher in those with valvular atrial fibrillation.[65] Depending on the stroke risk, anticoagulation with medications such as warfarin, or antiplatelet therapy with aspirin, is warranted for stroke prevention.[66]
Blood lipids
High cholesterol levels have been inconsistently associated with (ischemic) stroke.[58][67] Statins have been shown to reduce the risk of stroke by about 15%.[68] Since earlier meta-analyses of other lipid-lowering drugs did not show a decreased risk,[69] statins might exert their effect through mechanisms other than their lipid-lowering effects.[68]
Diabetes mellitus
Diabetes mellitus increases the risk of stroke by 2 to 3 times. While intensive control of blood sugar has been shown to reduce microvascular complications such as nephropathy and retinopathy, it has not been shown to reduce macrovascular complications such as stroke.[70][71]
Anticoagulation drugs
Oral anticoagulants such as warfarin have been the mainstay of stroke prevention for over 50 years. However, several studies have shown that aspirin and other antiplatelet drugs are highly effective in secondary prevention after a stroke or transient ischemic attack.[40] Low doses of aspirin (for example 75–150 mg) are as effective as high doses but have fewer side effects; the lowest effective dose remains unknown.[72] Thienopyridines (clopidogrel, ticlopidine) "might be slightly more effective" than aspirin and have a decreased risk of gastrointestinal bleeding, but they are more expensive.[73] Their exact role remains controversial. Ticlopidine is associated with more skin rash, diarrhea, neutropenia and thrombotic thrombocytopenic purpura.[73] Dipyridamole can be added to aspirin therapy to provide a small additional benefit, even though headache is a common side effect.[74] Low-dose aspirin is also effective for stroke prevention after sustaining a
myocardial infarction.[41] Except in atrial fibrillation, oral anticoagulants are not advised for stroke prevention; any benefit is offset by bleeding risk.[75] In primary prevention, however, antiplatelet drugs did not reduce the risk of ischemic stroke but did increase the risk of major bleeding.[76][77] Further studies are needed to investigate a possible protective effect of aspirin against ischemic stroke in women.[78][79]
Surgery
Carotid endarterectomy or carotid angioplasty can be used to remove atherosclerotic narrowing (stenosis) of the carotid artery. There is evidence supporting this procedure in selected cases.[54] Endarterectomy for a significant stenosis has been shown to be useful in secondary prevention after a previous stroke.[80] Carotid artery stenting has not been shown to be equally useful.[81][82] Patients are selected for surgery based on age, gender, degree of stenosis, time since symptoms, and patients' preferences.[54] Surgery is most effective when not delayed too long: the risk of recurrent stroke in a patient with a 50% or greater stenosis is up to 20% after 5 years, but endarterectomy reduces this risk to around 5%. The number of procedures needed to prevent one stroke was 5 for early surgery (within two weeks after the initial stroke), but 125 if delayed longer than 12 weeks.[83][84] Screening for carotid artery narrowing has not been shown to be useful in the general population.[85] Studies of surgical intervention for carotid artery stenosis without symptoms have shown only a small decrease in the risk of stroke.[86][87] To be beneficial, the complication rate of the surgery should be kept below 4%. Even then, of 100 surgeries, 5 patients will benefit by avoiding stroke, 3 will develop stroke despite surgery, 3 will develop stroke or die due to the surgery itself, and 89 will remain stroke-free but would have done so without intervention.[54]
Diet
Nutrition, specifically a Mediterranean-style diet, has the potential to decrease the risk of stroke by more than half.[88] Lowering levels of homocysteine with folic acid does not appear to affect the risk of stroke.[89][90]
Secondary prevention of ischemic stroke
Anticoagulation can prevent recurrent stroke. Among patients with nonvalvular atrial fibrillation, anticoagulation can reduce stroke by 60%, while antiplatelet agents can reduce stroke by 20%.[91] However, a recent meta-analysis suggests harm from anticoagulation started early after an embolic stroke.[92] Stroke prevention treatment for atrial fibrillation is determined according to the CHADS/CHADS2 system. The most widely used anticoagulant to prevent thromboembolic stroke in patients with nonvalvular atrial fibrillation is the oral agent warfarin, while dabigatran is a newer alternative that does not require prothrombin time monitoring.
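The CHADS2 score mentioned above is a simple additive index. A minimal sketch of the standard scoring rules (congestive heart failure, hypertension, age 75 or older, and diabetes score one point each; prior stroke or TIA scores two); how a given score maps to treatment is a clinical judgment and is not specified here:

```python
def chads2(chf, hypertension, age, diabetes, prior_stroke_or_tia):
    """CHADS2 stroke-risk score for nonvalvular atrial fibrillation."""
    score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
    score += 2 * int(prior_stroke_or_tia)
    return score

# Hypothetical patient: 78 years old, hypertensive, prior TIA
print(chads2(chf=False, hypertension=True, age=78,
             diabetes=False, prior_stroke_or_tia=True))  # 4
```

Higher scores indicate higher annual stroke risk and generally tip the balance from antiplatelet therapy toward oral anticoagulation.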
If studies show carotid stenosis, and the patient has residual function in the affected side, carotid endarterectomy (surgical removal of the stenosis) may decrease the risk of recurrence if performed rapidly after stroke.
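Number needed to treat (NNT), used throughout the prevention discussion above, is simply the reciprocal of the absolute risk reduction. A minimal sketch using the five-year recurrence figures quoted earlier for significant symptomatic carotid stenosis (roughly 20% untreated versus 5% after endarterectomy); the trial-derived NNTs of 5 and 125 cited earlier come from separate time-to-surgery analyses and are not reproduced by this simple calculation:

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / (control_event_rate - treated_event_rate)

# ~20% five-year recurrence without surgery vs ~5% with endarterectomy
print(round(nnt(0.20, 0.05), 1))  # 6.7 operations per recurrent stroke prevented
```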
Management
Ischemic stroke
Definitive therapy is aimed at removing the blockage by breaking the clot down (thrombolysis) or by removing it mechanically (thrombectomy). The more rapidly blood flow is restored to the brain, the fewer brain cells die.[93] Tight control of blood sugar in the first few hours does not improve outcomes and may cause harm.[94] High blood pressure is also not typically lowered, as this has not been found to be helpful.
Thrombolysis
Thrombolysis with recombinant tissue plasminogen activator (rtPA) in acute ischemic stroke, when given within three hours of symptom onset, increases the risk of death in the short term but in the long term improves the rate of independence; the increase in long-term mortality is not significant.[95] When broken down by time to treatment, it increases the chance of being alive and living independently by 9% in those treated within three hours; the benefit for those treated between three and six hours is not significant.[95] These benefits, or lack of benefits, occurred regardless of the age of the person treated.[95] There is no reliable way to determine who will have an intracranial hemorrhage post-treatment and who will not.[96] Its use is endorsed by the American Heart Association and the American Academy of Neurology as the recommended treatment for acute stroke within three hours of onset of symptoms, as long as there are no other contraindications (such as abnormal lab values, high blood pressure, or recent surgery). This position for tPA is based upon the findings of two studies by one group of investigators[97] which showed that tPA improves the chances for a good neurological outcome.
When administered within the first three hours, thrombolysis improves functional outcome without affecting mortality.[98] Of people with large strokes given tPA, 6.4% developed substantial brain hemorrhage as a complication, which partly explains the increased short-term mortality.[99] Additionally, it is the position of the American Academy of Emergency Medicine that objective evidence regarding the efficacy, safety, and applicability of tPA for acute ischemic stroke is insufficient to warrant its classification as standard of care.[100] Intra-arterial fibrinolysis, in which a catheter is passed up an artery into the brain and the medication is injected at the site of thrombosis, has been found to improve outcomes in people with acute ischemic stroke.[101]
Mechanical thrombectomy
Merci Retriever L5.
Removal of the clot may be attempted in those in whom it lodges within a large blood vessel, and may be an option for those who either are not eligible for or do not improve with intravenous thrombolytics.[102] Significant complications occur in about 7% of cases.[103] A randomized controlled trial of these procedures had not been done as of 2011.[104]
Hemicraniectomy
Large territory strokes can cause significant edema of the brain, with secondary brain injury in surrounding tissue. This phenomenon is mainly encountered in strokes of the middle cerebral artery territory and is also called "malignant cerebral infarction" because it carries a dismal prognosis. Relief of the pressure may be attempted with medication, but some cases require hemicraniectomy, the temporary surgical removal of the skull on one side of the head. This decreases the risk of death, although some people who would otherwise have died survive with disability.[105]
Hemorrhagic stroke
People with intracerebral hemorrhage require neurosurgical evaluation to detect and treat the cause of the bleeding, although many may not need surgery. Anticoagulants and antithrombotics, key in treating ischemic stroke, can make bleeding worse. People are monitored for changes in the level of consciousness, and their blood pressure, blood sugar, and oxygenation are kept at optimum levels.[citation needed]
Stroke unit
Ideally, people who have had a stroke are admitted to a "stroke unit", a ward or dedicated area in a hospital staffed by nurses and therapists with experience in stroke treatment. It has been shown that people admitted to a stroke unit have a higher chance of surviving than those admitted elsewhere in the hospital, even if they are being cared for by doctors without experience in stroke.[2]
When an acute stroke is suspected by history and physical examination, the goal of early assessment is to determine the cause. Treatment varies according to the underlying cause of the stroke, whether thromboembolic (ischemic) or hemorrhagic.
Rehabilitation
Stroke rehabilitation is the process by which those with disabling strokes undergo treatment to help them return to normal life as much as possible by regaining and relearning the skills of everyday living. It also aims to help the survivor understand and adapt to difficulties, prevent secondary complications, and educate family members to play a supporting role. A rehabilitation team is usually multidisciplinary, as it involves staff with different skills working together to help the person. These include nursing staff, physiotherapists, occupational therapists, speech and language therapists, orthotists, and usually a physician trained in rehabilitation medicine. Some teams may also include psychologists, social workers, and pharmacists, since at least one third of people manifest post-stroke depression. Validated instruments such as the Barthel scale may be used to assess the likelihood of a stroke patient being able to manage at home, with or without support, subsequent to discharge from hospital. Good nursing care is fundamental in maintaining skin care, feeding, hydration, positioning, and monitoring vital signs such as temperature, pulse, and blood pressure. Stroke rehabilitation begins almost immediately. For most people with stroke, physical therapy (PT), occupational therapy (OT) and speech-language pathology (SLP) are the cornerstones of the rehabilitation process. Often, assistive technology such as wheelchairs, walkers and canes may be beneficial. Many mobility problems can be improved by the use of ankle-foot orthoses.[106] PT and OT have overlapping areas of expertise; however, PT focuses on joint range of motion and strength by performing exercises and re-learning functional tasks such as bed mobility, transferring, walking and other gross motor functions. Physiotherapists can also work with patients to improve awareness and use of the hemiplegic side. Rehabilitation involves working on the ability to produce strong movements or the ability to perform tasks using normal patterns.
Emphasis is often concentrated on functional tasks and the patient's goals. One example physiotherapists employ to promote motor learning involves constraint-induced movement therapy. Through continuous practice the patient relearns to use and adapt the hemiplegic limb during functional activities to create lasting changes.[107] OT is involved in training to help relearn everyday activities, known as activities of daily living (ADLs), such as eating, drinking, dressing, bathing, cooking, reading and writing, and toileting. Speech and language therapy is appropriate for patients with the speech production disorders dysarthria and apraxia of speech, aphasia, cognitive-communication impairments and/or dysphagia (problems with swallowing). Patients may have particular problems, such as dysphagia, which can cause swallowed material to pass into the lungs and cause aspiration pneumonia. The condition may improve with time, but in the interim, a nasogastric tube may be inserted, enabling liquid food to be given directly into the stomach. If swallowing is still deemed unsafe, then a percutaneous endoscopic gastrostomy (PEG) tube is inserted, and this can remain indefinitely.
Treatment of spasticity related to stroke often involves early mobilisations, commonly performed by a physiotherapist, combined with elongation of spastic muscles and sustained stretching through various positionings.[17] Gaining initial improvements in range of motion is often achieved through rhythmic rotational patterns associated with the affected limb.[17] After full range has been achieved by the therapist, the limb should be positioned in the lengthened positions to prevent further contractures, skin breakdown, and disuse of the limb, with the use of splints or other tools to stabilize the joint.[17] Cold in the form of ice wraps or ice packs has been shown to briefly reduce spasticity by temporarily dampening neural firing rates.[17] Electrical stimulation of the antagonist muscles or vibration has also been used with some success.[17] Stroke rehabilitation should be started as quickly as possible and can last anywhere from a few days to over a year. Most return of function is seen in the first few months, after which improvement falls off, with the "window" considered officially by U.S. state rehabilitation units and others to be closed after six months, with little chance of further improvement. However, patients have been known to continue to improve for years, regaining and strengthening abilities like writing, walking, running, and talking. Daily rehabilitation exercises should continue to be part of the stroke patient's routine. Complete recovery is unusual but not impossible, and most patients will improve to some extent: proper diet and exercise are known to help the brain recover. Some current and future therapy methods include the use of virtual reality and video games for rehabilitation.
These forms of rehabilitation offer potential for motivating patients to perform specific therapy tasks that many other forms do not.[108] Many clinics and hospitals are adopting the use of these off-the-shelf devices for exercise, social interaction and rehabilitation because they are affordable, accessible and can be used within the clinic and home.[108] Other novel non-invasive rehabilitation methods are currently being developed to augment physical therapy to improve motor function of stroke patients, such as transcranial magnetic stimulation (TMS) and transcranial direct-current stimulation (tDCS)[109] and robotic therapies.[110]
Prognosis
Disability affects 75% of stroke survivors enough to decrease their employability.[111] Stroke can affect people physically, mentally, emotionally, or through a combination of the three. The results of stroke vary widely depending on the size and location of the lesion.[112] Dysfunctions correspond to areas of the brain that have been damaged. Some of the physical disabilities that can result from stroke include muscle weakness, numbness, pressure sores, pneumonia, incontinence, apraxia (inability to perform learned movements), difficulties carrying out daily activities, appetite loss, speech loss, vision loss, and pain. If the stroke is severe enough, or occurs in a certain location such as parts of the brainstem, coma or death can result.
Emotional problems resulting from stroke can result from direct damage to emotional centers in the brain or from frustration and difficulty adapting to new limitations. Post-stroke emotional difficulties include anxiety, panic attacks, flat affect (failure to express emotions), mania, apathy, and psychosis. 30 to 50% of stroke survivors suffer post-stroke depression, which is characterized by lethargy, irritability, sleep disturbances, lowered self-esteem, and withdrawal.[113] Depression can reduce motivation and worsen outcome, but can be treated with antidepressants. Emotional lability, another consequence of stroke, causes the patient to switch quickly between emotional highs and lows and to express emotions inappropriately, for instance with an excess of laughing or crying with little or no provocation. While these expressions of emotion usually correspond to the patient's actual emotions, a more severe form of emotional lability causes patients to laugh and cry pathologically, without regard to context or emotion.[111] Some patients show the opposite of what they feel, for example crying when they are happy.[114] Emotional lability occurs in about 20% of stroke patients. Cognitive deficits resulting from stroke include perceptual disorders, aphasia, dementia, and problems with attention and memory. A stroke sufferer may be unaware of his or her own disabilities, a condition called anosognosia. In a condition called hemispatial neglect, a patient is unable to attend to anything on the side of space opposite to the damaged hemisphere. Up to 10% of people develop seizures following a stroke, most commonly in the week subsequent to the event; the severity of the stroke increases the likelihood of a seizure.[115][116]
Epidemiology
Disability-adjusted life years for cerebrovascular disease per 100,000 inhabitants in 2004.[117] (Map legend: no data; <250; 250-425; 425-600; 600-775; 775-950; 950-1125; 1125-1300; 1300-1475; 1475-1650; 1650-1825; 1825-2000; >2000.)
Stroke was the second most common cause of death worldwide in 2004, resulting in 5.7 million deaths (~10% of the total).[3] Approximately 9 million people had a stroke in 2008, and 30 million people have previously had a stroke and are still alive.[118] It is ranked after heart disease and before cancer.[2] Geographic disparities in stroke incidence have been observed, including the existence of a "stroke belt" in the southeastern United States, but the causes of these disparities have not been explained. The incidence of stroke increases exponentially from 30 years of age, and etiology varies by age.[119] Advanced age is one of the most significant stroke risk factors: 95% of strokes occur in people age 45 and older, and two-thirds of strokes occur in those over the age of 65.[113][22] A person's risk of dying from a stroke also increases with age. However, stroke can occur at any age, including in childhood. Family members may have a genetic tendency for stroke or share a lifestyle that contributes to stroke. Higher levels of von Willebrand factor are more common among people who have had ischemic stroke for the first time;[120] in that study, the only significant genetic factor was the person's blood type. Having had a stroke in the past greatly increases one's risk of future strokes. Men are 25% more likely to suffer strokes than women,[22] yet 60% of deaths from stroke occur in women.[114] Since women live longer, they are older on average when they have their strokes and are thus more often killed (NIMH 2002).[22] Some risk factors for stroke apply only to women. Primary among these are pregnancy, childbirth, menopause, and the treatment thereof (HRT).
History
Hippocrates first described the sudden paralysis that is often associated with stroke.
Episodes of stroke and familial stroke have been reported from the 2nd millennium BC onward in ancient Mesopotamia and Persia.[121] Hippocrates (460 to 370 BC) was first to describe the phenomenon of sudden paralysis that is often associated with ischemia. Apoplexy, from the Greek word meaning "struck down with violence," first appeared in Hippocratic writings to describe this phenomenon.[122][123] The word stroke was used as a synonym for apoplectic seizure as early as 1599,[124] and is a fairly literal translation of the Greek term. In 1658, in his Apoplexia, Johann Jacob Wepfer (1620–1695) identified the cause of hemorrhagic stroke when he suggested that people who had died of apoplexy had bleeding in their brains.[122][22] Wepfer also identified the main arteries supplying the brain, the vertebral and carotid arteries, and identified the cause of ischemic stroke (also known as cerebral infarction) when he suggested that apoplexy might be caused by a blockage of those vessels.[22] Rudolf Virchow first described the mechanism of thromboembolism as a major factor.[125]
Research
Angioplasty and stenting
Angioplasty and stenting have begun to be examined as possible options in the treatment of acute ischemic stroke. In intracranial stenting for symptomatic intracranial arterial stenosis, the rate of technical success (reduction of stenosis to <50%) ranged from 90-98%, and the rate of major peri-procedural complications ranged from 4-10%. The rates of restenosis and/or stroke following the treatment were also favorable.[126] These data suggest that a randomized controlled trial is needed to more completely evaluate the possible therapeutic advantage of this preventive measure.
Neuroprotection
Brain tissue survival can be improved to some extent if one or more of these processes is inhibited. Drugs that scavenge reactive oxygen species, inhibit apoptosis, or inhibit excitatory neurotransmitters, for example, have been shown experimentally to reduce tissue injury caused by ischemia. Agents that work in this way are referred to as neuroprotective. Until recently, human clinical trials with neuroprotective agents have failed, with the probable exception of deep barbiturate coma. More recently, however, NXY-059, the disulfonyl derivative of the radical-scavenging phenylbutylnitrone, was reported to be neuroprotective in stroke.[127] This agent appears to work at the level of the blood vessel lining, or endothelium. Unfortunately, after producing favorable results in one large-scale clinical trial, a second trial failed to show favorable results.[22] The benefit of NXY-059 therefore remains questionable.[128]
Alzheimer's disease
Alzheimer's disease: Classification and external resources
Comparison of a normal aged brain (left) and the brain of a person with Alzheimer's (right). Differential characteristics are pointed out.
ICD-10: G30, F00
ICD-9: 331.0, 290.1
OMIM: 104300
DiseasesDB: 490
MedlinePlus: 000760
eMedicine: neuro/13
MeSH: D000544
GeneReviews: NBK1161
Alzheimer's disease (AD), also known in the medical literature as Alzheimer disease, is the most common form of dementia. There is no cure for the disease, which worsens as it progresses and eventually leads to death. It was first described by German psychiatrist and neuropathologist Alois Alzheimer in 1906 and was named after him.[1] Most often, AD is diagnosed in people over 65 years of age,[2] although the less-prevalent early-onset Alzheimer's can occur much earlier. In 2006, there were 26.6 million sufferers worldwide. Alzheimer's is predicted to affect 1 in 85 people globally by 2050.[3] Although Alzheimer's disease develops differently for every individual, there are many common symptoms.[4] Early symptoms are often mistakenly thought to be 'age-related' concerns or manifestations of stress.[5] In the early stages, the most common symptom is difficulty in remembering recent events. When AD is suspected, the diagnosis is usually confirmed with tests that evaluate behaviour and thinking abilities, often followed by a brain scan if available.[6] As
the disease advances, symptoms can include confusion, irritability and aggression, mood swings, trouble with language, and long-term memory loss. As the sufferer declines they often withdraw from family and society.[5][7] Gradually, bodily functions are lost, ultimately leading to death.[8] Since the disease is different for each individual, predicting how it will affect the person is difficult. AD develops for an unknown and variable amount of time before becoming fully apparent, and it can progress undiagnosed for years. On average, the life expectancy following diagnosis is approximately seven years.[9] Fewer than three percent of individuals live more than fourteen years after diagnosis.[10] The cause and progression of Alzheimer's disease are not well understood. Research indicates that the disease is associated with plaques and tangles in the brain.[11] Current treatments only help with the symptoms of the disease. There are no available treatments that stop or reverse the progression of the disease. As of 2012, more than 1000 clinical trials have been or are being conducted to find ways to treat the disease, but it is unknown if any of the tested treatments will work.[12] Mental stimulation, exercise, and a balanced diet have been suggested as ways to delay cognitive symptoms (though not brain pathology) in healthy older individuals, but there is no conclusive evidence supporting an effect.[13] Because AD cannot be cured and is degenerative, the sufferer relies on others for assistance. The role of the main caregiver is often taken by the spouse or a close relative.[14] Alzheimer's disease is known for placing a great burden on caregivers; the pressures can be wide-ranging, involving social, psychological, physical, and economic elements of the caregiver's life.[15][16][17] In developed countries, AD is one of the most costly diseases to society.[18][19]
Characteristics The disease course is divided into four stages, with progressive patterns of cognitive and functional impairments. Pre-dementia
The first symptoms are often mistakenly attributed to ageing or stress.[5] Detailed neuropsychological testing can reveal mild cognitive difficulties up to eight years before a person fulfils the clinical criteria for diagnosis of AD.[20] These early symptoms can affect the most complex daily living activities.[21] The most noticeable deficit is memory loss, which shows up as difficulty in remembering recently learned facts and inability to acquire new information.[20][22] Subtle problems with the executive functions of attentiveness, planning, flexibility, and abstract thinking, or impairments in semantic memory (memory of meanings, and concept relationships) can also be symptomatic of the early stages of AD.[20] Apathy can be observed at this stage, and remains the most persistent neuropsychiatric symptom throughout the course of the disease.[23] The preclinical stage of the disease has also been termed mild cognitive impairment,[22] but
whether this term corresponds to a different diagnostic stage or identifies the first step of AD is a matter of dispute.[24] Early
In people with AD the increasing impairment of learning and memory eventually leads to a definitive diagnosis. In a small portion of them, difficulties with language, executive functions, perception (agnosia), or execution of movements (apraxia) are more prominent than memory problems.[25] AD does not affect all memory capacities equally. Older memories of the person's life (episodic memory), facts learned (semantic memory), and implicit memory (the memory of the body on how to do things, such as using a fork to eat) are affected to a lesser degree than new facts or memories.[26][27] Language problems are mainly characterised by a shrinking vocabulary and decreased word fluency, which lead to a general impoverishment of oral and written language.[25][28] In this stage, the person with Alzheimer's is usually capable of communicating basic ideas adequately.[25][28][29] While performing fine motor tasks such as writing, drawing or dressing, certain movement coordination and planning difficulties (apraxia) may be present but they are commonly unnoticed.[25] As the disease progresses, people with AD can often continue to perform many tasks independently, but may need assistance or supervision with the most cognitively demanding activities.[25] Moderate
Progressive deterioration eventually hinders independence, with subjects being unable to perform most common activities of daily living.[25] Speech difficulties become evident due to an inability to recall vocabulary, which leads to frequent incorrect word substitutions (paraphasias). Reading and writing skills are also progressively lost.[25][29] Complex motor sequences become less coordinated as time passes and AD progresses, so the risk of falling increases.[25] During this phase, memory problems worsen, and the person may fail to recognise close relatives.[25] Long-term memory, which was previously intact, becomes impaired.[25] Behavioural and neuropsychiatric changes become more prevalent. Common manifestations are wandering, irritability and labile affect, leading to crying, outbursts of unpremeditated aggression, or resistance to caregiving.[25] Sundowning can also appear.[30] Approximately 30% of people with AD develop illusionary misidentifications and other delusional symptoms.[25] Subjects also lose insight into their disease process and limitations (anosognosia).[25] Urinary incontinence can develop.[25] These symptoms create stress for relatives and caretakers, which can be reduced by moving the person from home care to other long-term care facilities.[25][31] Advanced
During the final stage of AD, the person is completely dependent upon caregivers.[25] Language is reduced to simple phrases or even single words, eventually leading to complete loss of speech.[25][29] Despite the loss of verbal language abilities, people can often understand and return emotional signals.[25] Although aggressiveness can still be present, extreme apathy and
exhaustion are much more common results.[25] People with AD will ultimately not be able to perform even the simplest tasks without assistance.[25] Muscle mass and mobility deteriorate to the point where they are bedridden, and they lose the ability to feed themselves.[25] AD is a terminal illness, with the cause of death typically being an external factor, such as infection of pressure ulcers or pneumonia, not the disease itself.[25]
Cause
Microscopic image of a neurofibrillary tangle, composed of hyperphosphorylated tau protein
The cause of most Alzheimer's cases is still essentially unknown[32][33] (except for the 1% to 5% of cases where genetic differences have been identified). Several competing hypotheses attempt to explain the cause of the disease: Cholinergic hypothesis
The oldest, on which most currently available drug therapies are based, is the cholinergic hypothesis,[34] which proposes that AD is caused by reduced synthesis of the neurotransmitter acetylcholine. The cholinergic hypothesis has not maintained widespread support, largely because medications intended to treat acetylcholine deficiency have not been very effective. Other cholinergic effects have also been proposed, for example, initiation of large-scale aggregation of amyloid,[35] leading to generalised neuroinflammation.[36] Amyloid hypothesis
In 1991, the amyloid hypothesis postulated that beta-amyloid (βA) deposits are the fundamental cause of the disease.[37][38] Support for this postulate comes from the location of the gene for the amyloid precursor protein (APP) on chromosome 21, together with the fact that people with trisomy 21 (Down syndrome) who have an extra gene copy almost universally exhibit AD by 40 years of age.[39][40] Also, APOE4, the major genetic risk factor for AD, leads to excess amyloid buildup in the brain.[41] Further evidence comes from the finding that transgenic mice that
express a mutant form of the human APP gene develop fibrillar amyloid plaques and Alzheimer's-like brain pathology with spatial learning deficits.[42] An experimental vaccine was found to clear the amyloid plaques in early human trials, but it did not have any significant effect on dementia.[43] Researchers have been led to suspect non-plaque βA oligomers (aggregates of many monomers) as the primary pathogenic form of βA. These toxic oligomers, also referred to as amyloid-derived diffusible ligands (ADDLs), bind to a surface receptor on neurons and change the structure of the synapse, thereby disrupting neuronal communication.[44] One receptor for βA oligomers may be the prion protein, the same protein that has been linked to mad cow disease and the related human condition, Creutzfeldt-Jakob disease, thus potentially linking the underlying mechanism of these neurodegenerative disorders with that of Alzheimer's disease.[45] In 2009, this theory was updated, suggesting that a close relative of the beta-amyloid protein, and not necessarily the beta-amyloid itself, may be a major culprit in the disease. The theory holds that an amyloid-related mechanism that prunes neuronal connections in the brain in the fast-growth phase of early life may be triggered by ageing-related processes in later life to cause the neuronal withering of Alzheimer's disease.[46] N-APP, a fragment of APP from the peptide's N-terminus, is adjacent to beta-amyloid and is cleaved from APP by one of the same enzymes. N-APP triggers the self-destruct pathway by binding to a neuronal receptor called death receptor 6 (DR6, also known as TNFRSF21).[46] DR6 is highly expressed in the human brain regions most affected by Alzheimer's, so it is possible that the N-APP/DR6 pathway might be hijacked in the ageing brain to cause damage. In this model, beta-amyloid plays a complementary role, by depressing synaptic function. Tau hypothesis
The tau hypothesis is the idea that tau protein abnormalities initiate the disease cascade.[38] In this model, hyperphosphorylated tau begins to pair with other threads of tau. Eventually, they form neurofibrillary tangles inside nerve cell bodies.[47] When this occurs, the microtubules disintegrate, collapsing the neuron's transport system.[48] This may result first in malfunctions in biochemical communication between neurons and later in the death of the cells.[49] Other hypotheses
Herpes simplex virus type 1 has also been proposed to play a causative role in people carrying the susceptible versions of the apoE gene.[50] Another hypothesis asserts that the disease may be caused by age-related myelin breakdown in the brain. Iron released during myelin breakdown is hypothesised to cause further damage. Homeostatic myelin repair processes contribute to the development of proteinaceous deposits such as beta-amyloid and tau.[51][52][53] Oxidative stress and dyshomeostasis of biometal metabolism may be significant in the formation of the pathology.[54][55]
AD individuals show 70% loss of locus coeruleus cells that provide norepinephrine (in addition to its neurotransmitter role) that locally diffuses from "varicosities" as an endogenous anti-inflammatory agent in the microenvironment around the neurons, glial cells, and blood vessels in the neocortex and hippocampus.[56] It has been shown that norepinephrine stimulates mouse microglia to suppress βA-induced production of cytokines and their phagocytosis of βA.[56] This suggests that degeneration of the locus coeruleus might be responsible for increased βA deposition in AD brains.[56]
Pathophysiology Main article: Biochemistry of Alzheimer's disease
Histopathologic image of senile plaques seen in the cerebral cortex of a person with Alzheimer's disease of presenile onset. Silver impregnation. Neuropathology
Alzheimer's disease is characterised by loss of neurons and synapses in the cerebral cortex and certain subcortical regions. This loss results in gross atrophy of the affected regions, including degeneration in the temporal lobe and parietal lobe, and parts of the frontal cortex and cingulate gyrus.[36] Studies using MRI and PET have documented reductions in the size of specific brain regions in people with AD as they progressed from mild cognitive impairment to Alzheimer's disease, and in comparison with similar images from healthy older adults.[57][58] Both amyloid plaques and neurofibrillary tangles are clearly visible by microscopy in brains of those afflicted by AD.[11] Plaques are dense, mostly insoluble deposits of beta-amyloid peptide and cellular material outside and around neurons. Tangles (neurofibrillary tangles) are aggregates of the microtubule-associated protein tau which has become hyperphosphorylated and accumulates inside the cells themselves. Although many older individuals develop some plaques and tangles as a consequence of ageing, the brains of people with AD have a greater number of them in specific brain regions such as the temporal lobe.[59] Lewy bodies are not rare in the brains of people with AD.[60]
Biochemistry
Enzymes act on the APP (amyloid precursor protein) and cut it into fragments. The beta-amyloid fragment is crucial in the formation of senile plaques in AD.
Alzheimer's disease has been identified as a protein misfolding disease (proteopathy), caused by accumulation of abnormally folded amyloid beta and amyloid tau proteins in the brain.[61] Plaques are made up of small peptides, 39–43 amino acids in length, called beta-amyloid (Aβ). Beta-amyloid is a fragment from a larger protein called amyloid precursor protein (APP), a transmembrane protein that penetrates through the neuron's membrane. APP is critical to neuron growth, survival and post-injury repair.[62][63] In Alzheimer's disease, an unknown process causes APP to be divided into smaller fragments by enzymes through proteolysis.[64] One of these fragments gives rise to fibrils of beta-amyloid, which form clumps that deposit outside neurons in dense formations known as senile plaques.[11][65]
In Alzheimer's disease, changes in tau protein lead to the disintegration of microtubules in brain cells.
AD is also considered a tauopathy due to abnormal aggregation of the tau protein. Every neuron has a cytoskeleton, an internal structure partly made up of structures called microtubules. These microtubules act like tracks, guiding nutrients and molecules from the body of the cell to the ends of the axon and back. A protein called tau stabilises the microtubules when phosphorylated, and is therefore called a microtubule-associated protein. In AD, tau undergoes chemical changes, becoming hyperphosphorylated; it then begins to pair with other threads, creating neurofibrillary tangles and disintegrating the neuron's transport system.[66]
Disease mechanism
Exactly how disturbances of production and aggregation of the beta-amyloid peptide give rise to the pathology of AD is not known.[67] The amyloid hypothesis traditionally points to the accumulation of beta-amyloid peptides as the central event triggering neuron degeneration. Accumulation of aggregated amyloid fibrils, which are believed to be the toxic form of the protein responsible for disrupting the cell's calcium ion homeostasis, induces programmed cell death (apoptosis).[68] It is also known that βA selectively builds up in the mitochondria in the cells of Alzheimer's-affected brains, and it also inhibits certain enzyme functions and the utilisation of glucose by neurons.[69] Various inflammatory processes and cytokines may also have a role in the pathology of Alzheimer's disease. Inflammation is a general marker of tissue damage in any disease, and may be either secondary to tissue damage in AD or a marker of an immunological response.[70] Alterations in the distribution of different neurotrophic factors and in the expression of their receptors such as brain-derived neurotrophic factor (BDNF) have been described in AD.[71][72] Genetics
The vast majority of cases of Alzheimer's disease are sporadic, meaning that they are not genetically inherited, although some genes may act as risk factors. On the other hand, around 0.1% of the cases are familial forms of autosomal dominant (not sex-linked) inheritance, which usually have an onset before age 65.[73] This form of the disease is known as early onset familial Alzheimer's disease. Most autosomal dominant familial AD can be attributed to mutations in one of three genes: amyloid precursor protein (APP) and presenilins 1 and 2.[74] Most mutations in the APP and presenilin genes increase the production of a small protein called βA42, which is the main component of senile plaques.[75] Some of the mutations merely alter the ratio between βA42 and the other major forms—e.g., βA40—without increasing βA42 levels.[75][76] This suggests that presenilin mutations can cause disease even if they lower the total amount of βA produced, and may point to other roles of presenilin or a role for alterations in the function of APP and/or its fragments other than βA. Most cases of Alzheimer's disease do not exhibit autosomal-dominant inheritance and are termed sporadic AD. Nevertheless, genetic differences may act as risk factors. The best known genetic risk factor is the inheritance of the ε4 allele of apolipoprotein E (APOE).[77][78] Between 40 and 80% of people with AD possess at least one APOEε4 allele.[78] The APOEε4 allele increases the risk of the disease by three times in heterozygotes and by 15 times in homozygotes.[73] However, this "genetic" effect is not necessarily purely genetic.
For example, certain Nigerian populations have no relationship between presence or dose of APOEε4 and incidence or age-of-onset for Alzheimer's disease.[79][80] Geneticists agree that numerous other genes also act as risk factors or have protective effects that influence the development of late onset Alzheimer's disease,[74] but results such as the Nigerian studies and the incomplete penetrance for all genetic risk factors associated with sporadic Alzheimer's indicate a strong role for environmental effects.
Over 400 genes have been tested for association with late-onset sporadic AD,[74] most with null results.[73] Mutations in the TREM2 gene have been associated with a 3 to 5 times higher risk of developing Alzheimer's disease.[81][82] A suggested mechanism of action is that when TREM2 is mutated, white blood cells in the brain are no longer able to control the amount of beta amyloid present.
Diagnosis
PET scan of the brain of a person with AD showing a loss of function in the temporal lobe
Alzheimer's disease is usually diagnosed clinically from the patient history, collateral history from relatives, and clinical observations, based on the presence of characteristic neurological and neuropsychological features and the absence of alternative conditions.[83][84] Advanced medical imaging with computed tomography (CT) or magnetic resonance imaging (MRI), and with single photon emission computed tomography (SPECT) or positron emission tomography (PET) can be used to help exclude other cerebral pathology or subtypes of dementia.[85] Moreover, it may predict conversion from prodromal stages (mild cognitive impairment) to Alzheimer's disease.[86] Assessment of intellectual functioning including memory testing can further characterise the state of the disease.[5] Medical organisations have created diagnostic criteria to ease and standardise the diagnostic process for practicing physicians. The diagnosis can be confirmed with very high accuracy post-mortem when brain material is available and can be examined histologically.[87] Criteria
The National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimer's Disease and Related Disorders Association (ADRDA, now known as the Alzheimer's Association) established the most commonly used NINCDS-ADRDA Alzheimer's Criteria for diagnosis in 1984,[87] extensively updated in 2007.[88] These criteria require that the presence of cognitive impairment, and a suspected dementia syndrome, be confirmed by
neuropsychological testing for a clinical diagnosis of possible or probable AD. A histopathologic confirmation including a microscopic examination of brain tissue is required for a definitive diagnosis. Good statistical reliability and validity have been shown between the diagnostic criteria and definitive histopathological confirmation.[89] Eight cognitive domains are most commonly impaired in AD—memory, language, perceptual skills, attention, constructive abilities, orientation, problem solving and functional abilities. These domains are equivalent to the NINCDS-ADRDA Alzheimer's Criteria as listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR) published by the American Psychiatric Association.[90][91] Techniques
Neuropsychological screening tests can help in the diagnosis of AD. In the tests, people are instructed to copy drawings similar to the one shown in the picture, remember words, read, and subtract serial numbers.
Neuropsychological tests such as the mini-mental state examination (MMSE) are widely used to evaluate the cognitive impairments needed for diagnosis. More comprehensive test arrays are necessary for high reliability of results, particularly in the earliest stages of the disease.[92][93] Neurological examination in early AD will usually provide normal results, except for obvious cognitive impairment, which may not differ from that resulting from other disease processes, including other causes of dementia. Further neurological examinations are crucial in the differential diagnosis of AD and other diseases.[5] Interviews with family members are also utilised in the assessment of the disease. Caregivers can supply important information on the daily living abilities, as well as on the decrease, over time, of the person's mental function.[86] A caregiver's viewpoint is particularly important, since a person with AD is commonly unaware of his own deficits.[94] Many times, families also have difficulties in the detection of initial dementia symptoms and may not communicate accurate information to a physician.[95] Another recent objective marker of the disease is the analysis of cerebrospinal fluid for beta-amyloid or tau proteins,[96] both total tau protein and phosphorylated tau181P protein concentrations.[97] Searching for these proteins using a spinal tap can predict the onset of Alzheimer's with a sensitivity of between 94% and 100%.[97] When used in conjunction with existing neuroimaging techniques, doctors can identify people with significant memory loss who are already developing the disease.[97] Spinal fluid tests are commercially available, unlike the latest neuroimaging technology.[98] Alzheimer's was diagnosed in one-third of the people who
did not have any symptoms in a 2010 study, meaning that disease progression occurs well before symptoms appear.[99] Supplemental testing provides extra information on some features of the disease or is used to rule out other diagnoses. Blood tests can identify other causes for dementia than AD[5]—causes which may, in rare cases, be reversible.[100] It is common to perform thyroid function tests, assess B12, rule out syphilis, rule out metabolic problems (including tests for kidney function, electrolyte levels and for diabetes), assess levels of heavy metals (e.g. lead, mercury) and anaemia (see the differential diagnosis for dementia); it is also necessary to rule out delirium. Psychological tests for depression are employed, since depression can either be concurrent with AD (see Depression of Alzheimer disease), an early sign of cognitive impairment,[101] or even the cause.[102][103] Imaging
When available as a diagnostic tool, single photon emission computed tomography (SPECT) and positron emission tomography (PET) neuroimaging are used to confirm a diagnosis of Alzheimer's in conjunction with evaluations involving mental status examination.[104] In a person already having dementia, SPECT appears to be superior in differentiating Alzheimer's disease from other possible causes, compared with the usual attempts employing mental testing and medical history analysis.[105] Advances have led to the proposal of new diagnostic criteria.[5][88] A new technique known as PiB PET has been developed for directly and clearly imaging beta-amyloid deposits in vivo using a tracer that binds selectively to the A-beta deposits.[106] The PiB-PET compound uses carbon-11 PET scanning. Recent studies suggest that PiB-PET is 86% accurate in predicting which people with mild cognitive impairment will develop Alzheimer's disease within two years, and 92% accurate in ruling out the likelihood of developing Alzheimer's.[107] PiB PET remains investigational; however, a similar PET scanning radiopharmaceutical called florbetapir, containing the longer-lasting radionuclide fluorine-18, has recently been tested as a diagnostic tool in Alzheimer's disease, and given FDA approval for this use.[108][109][110][111] Florbetapir, like PiB, binds to beta-amyloid, but due to its use of fluorine-18 has a half-life of 110 minutes, in contrast to PiB's radioactive half-life of 20 minutes. Wong et al. found that the longer half-life allowed the tracer to accumulate significantly more in the brains of people with AD, particularly in the regions known to be associated with beta-amyloid deposits.[111] One review predicted that amyloid imaging is likely to be used in conjunction with other markers rather than as an alternative.[112] Volumetric MRI can detect changes in the size of brain regions.
Measuring those regions that atrophy during the progress of Alzheimer's disease is showing promise as a diagnostic indicator. It may prove less expensive than other imaging methods currently under study.[113]
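The practical consequence of the two tracers' differing half-lives follows from the exponential decay law. The 90-minute interval below is an illustrative figure chosen for the arithmetic, not one taken from the cited studies:

```latex
% Fraction of radiotracer remaining after time t, half-life T_{1/2}:
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t / T_{1/2}}
% PiB (carbon-11, T_{1/2} = 20 min), after 90 min:
%   (1/2)^{90/20} \approx 0.044  (about 4% of initial activity)
% Florbetapir (fluorine-18, T_{1/2} = 110 min), after 90 min:
%   (1/2)^{90/110} \approx 0.57  (about 57% of initial activity)
```

This order-of-magnitude difference in surviving activity is why the longer-lived fluorine-18 tracer has far more time to accumulate in amyloid-rich tissue before decaying.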
Non-imaging biomarkers
Recent studies have shown that people with AD had decreased glutamate (Glu) as well as decreased Glu/creatine (Cr), Glu/myo-inositol (mI), Glu/N-acetylaspartate (NAA), and NAA/Cr ratios compared to normal people. Both decreased NAA/Cr and decreased hippocampal glutamate may be an early indicator of AD.[114] Early research in mouse models may have identified markers for AD. The applicability of these markers is unknown.[115] A small human study in 2011 found that monitoring blood dehydroepiandrosterone (DHEA) variations in response to an oxidative stress could be a useful proxy test: the subjects with MCI did not have a DHEA variation, while the healthy controls did.[116]
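Biomarker studies such as these are typically reported in terms of sensitivity. As a minimal sketch of the arithmetic behind a figure like the 94–100% sensitivity quoted earlier for cerebrospinal fluid testing, consider the following; the patient counts are hypothetical, chosen only for illustration:

```python
# Sensitivity = true positives / (true positives + false negatives):
# the fraction of people who truly have (or will develop) the disease
# that the test correctly flags. All counts below are hypothetical.

def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

# Suppose a CSF tau/beta-amyloid assay flags 47 of 50 people
# who later develop Alzheimer's (3 missed):
print(sensitivity(47, 3))  # → 0.94, i.e. 94% sensitivity
```

Note that sensitivity says nothing about false alarms among healthy people; that is measured separately as specificity, which is why biomarker tests are usually reported with both figures.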
Prevention
Intellectual activities such as playing chess or regular social interaction have been linked to a reduced risk of AD in epidemiological studies, although no causal relationship has been found.
At present, there is no definitive evidence that any particular measure is effective in preventing AD.[117] Global studies of measures to prevent or delay the onset of AD have often produced inconsistent results. However, epidemiological studies have proposed relationships between certain modifiable factors, such as diet, cardiovascular risk, pharmaceutical products, or intellectual activities, among others, and a population's likelihood of developing AD. Only further research, including clinical trials, will reveal whether these factors can help to prevent AD.[118] Although cardiovascular risk factors, such as hypercholesterolaemia, hypertension, diabetes, and smoking, are associated with a higher risk of onset and course of AD,[119][120] statins, which are cholesterol-lowering drugs, have not been effective in preventing or improving the course of the disease.[121][122] The components of a Mediterranean diet, which include fruit and vegetables, bread, wheat and other cereals, olive oil, fish, and red wine, may all individually or together reduce the risk and course of Alzheimer's disease.[123] The diet's beneficial cardiovascular effect has been proposed as the mechanism of action.[123] There is limited evidence that light to moderate use of alcohol, particularly red wine, is associated with lower risk of AD.[124]
Reviews on the use of vitamins have not found enough evidence of efficacy to recommend vitamin C,[125] E,[125][126] or folic acid with or without vitamin B12,[127] as preventive or treatment agents in AD. Additionally, vitamin E is associated with important health risks.[125] Trials examining folic acid (B9) and other B vitamins failed to show any significant association with cognitive decline.[128] Docosahexaenoic acid, an Omega 3 fatty acid, has not been found to slow decline.[129] Long-term usage of non-steroidal anti-inflammatory drugs (NSAIDs) is associated with a reduced likelihood of developing AD.[130] Human postmortem studies, animal models, and in vitro investigations also support the notion that NSAIDs can reduce inflammation related to amyloid plaques.[130] However, trials investigating their use as palliative treatment have failed to show positive results, while no prevention trial has been completed.[130] Curcumin from the curry spice turmeric has shown some effectiveness in preventing brain damage in mouse models, due to its anti-inflammatory properties.[131][132] Hormone replacement therapy, although previously used, is no longer thought to prevent dementia and in some cases may even be related to it.[133][134] There is inconsistent and unconvincing evidence that ginkgo has any positive effect on cognitive impairment and dementia,[135] and a recent study concludes that it has no effect in reducing the rate of AD incidence.[136] A 21-year study found that coffee drinkers of 3–5 cups per day at midlife had a 65% reduction in risk of dementia in late-life.[137] People who engage in intellectual activities such as reading, playing board games, completing crossword puzzles, playing musical instruments, or regular social interaction show a reduced risk for Alzheimer's disease.[138] This is compatible with the cognitive reserve theory, which states that some life experiences result in more efficient neural functioning, providing the individual a cognitive reserve that
delays the onset of dementia manifestations.[138] Education delays the onset of AD syndrome, but is not related to earlier death after diagnosis.[139] Learning a second language even later in life seems to delay the onset of Alzheimer disease.[140] Physical activity is also associated with a reduced risk of AD.[139] Two studies have shown that medical marijuana may be effective in inhibiting the progress of AD. The active ingredient in marijuana, THC, may prevent the formation of deposits in the brain associated with Alzheimer's disease. THC was found to inhibit acetylcholinesterase more effectively than commercially marketed drugs.[141][142] A recent review of the clinical research has found no evidence that cannabinoids are effective in the improvement of disturbed behaviour or in the treatment of other symptoms of AD or dementia.[143] Some studies have shown an increased risk of developing AD with environmental factors such as the intake of metals, particularly aluminium,[144][145] or exposure to solvents.[146] The quality of some of these studies has been criticised,[147] and other studies have concluded that there is no relationship between these environmental factors and the development of AD.[148][149][150][151] While some studies suggest that extremely low frequency electromagnetic fields may increase the risk for Alzheimer's disease, reviewers found that further epidemiological and laboratory investigations of this hypothesis are needed.[152] Smoking is a significant AD risk factor.[153] Systemic markers of the innate immune system are risk factors for late-onset AD.[154]
Management There is no cure for Alzheimer's disease; available treatments offer relatively small symptomatic benefit but remain palliative in nature. Current treatments can be divided into pharmaceutical, psychosocial and caregiving. Pharmaceutical
Three-dimensional molecular model of donepezil, an acetylcholinesterase inhibitor used in the treatment of AD symptoms
Molecular structure of memantine, a medication approved for advanced AD symptoms
Five medications are currently used to treat the cognitive manifestations of AD: four are acetylcholinesterase inhibitors (tacrine, rivastigmine, galantamine and donepezil) and the other (memantine) is an NMDA receptor antagonist.[155] No drug has an indication for delaying or halting the progression of the disease. Reduction in the activity of the cholinergic neurons is a well-known feature of Alzheimer's disease.[156] Acetylcholinesterase inhibitors are employed to reduce the rate at which acetylcholine (ACh) is broken down, thereby increasing the concentration of ACh in the brain and combating the loss of ACh caused by the death of cholinergic neurons.[157] Cholinesterase inhibitors approved for the management of AD symptoms are donepezil (brand name
Aricept),[158] galantamine (Razadyne),[159] and rivastigmine (branded as Exelon[160]). There is evidence for the efficacy of these medications in mild to moderate Alzheimer's disease,[161][162] and some evidence for their use in the advanced stage. Only donepezil is approved for treatment of advanced AD dementia.[163] The use of these drugs in mild cognitive impairment has not shown any effect in delaying the onset of AD.[164] The most common side effects are nausea and vomiting, both of which are linked to cholinergic excess. These side effects arise in approximately 10–20% of users and are mild to moderate in severity. Less common secondary effects include muscle cramps, decreased heart rate (bradycardia), decreased appetite and weight, and increased gastric acid production.[165] Glutamate is a useful excitatory neurotransmitter of the nervous system, although excessive amounts in the brain can lead to cell death through a process called excitotoxicity, which consists of the overstimulation of glutamate receptors. Excitotoxicity occurs not only in Alzheimer's disease, but also in other neurological diseases such as Parkinson's disease and multiple sclerosis.[166] Memantine (brand name Akatinol)[167] is a noncompetitive NMDA receptor antagonist first used as an anti-influenza agent. It acts on the glutamatergic system by blocking NMDA receptors and inhibiting their overstimulation by glutamate.[166] Memantine has been shown to be moderately efficacious in the treatment of moderate to severe Alzheimer's disease.
Its effects in the initial stages of AD are unknown.[168] Reported adverse events with memantine are infrequent and mild, including hallucinations, confusion, dizziness, headache and fatigue.[169] The combination of memantine and donepezil has been shown to be "of statistically significant but clinically marginal effectiveness".[170] Antipsychotic drugs are modestly useful in reducing aggression and psychosis in Alzheimer's disease with behavioural problems, but are associated with serious adverse effects, such as cerebrovascular events, movement difficulties or cognitive decline, that do not permit their routine use.[171][172] When used in the long term, they have been shown to be associated with increased mortality.[172] Huperzine A, while promising, requires further evidence before its use can be recommended.[173]
Psychosocial intervention
See also: Music therapy for Alzheimer's disease
A specifically designed room for sensory integration therapy, also called snoezelen; an emotion-oriented psychosocial intervention for people with dementia
Psychosocial interventions are used as an adjunct to pharmaceutical treatment and can be classified within behaviour-, emotion-, cognition- or stimulation-oriented approaches. Research on efficacy is unavailable and rarely specific to AD, focusing instead on dementia in general.[174] Behavioural interventions attempt to identify and reduce the antecedents and consequences of problem behaviours. This approach has not shown success in improving overall functioning,[175] but can help to reduce some specific problem behaviours, such as incontinence.[176] There is a lack of high-quality data on the effectiveness of these techniques in other behaviour problems such as wandering.[177][178] Emotion-oriented interventions include reminiscence therapy, validation therapy, supportive psychotherapy, sensory integration (also called snoezelen), and simulated presence therapy. Supportive psychotherapy has received little or no formal scientific study, but some clinicians find it useful in helping mildly impaired people adjust to their illness.[174] Reminiscence therapy (RT) involves the discussion of past experiences individually or in group, many times with the aid of photographs, household items, music and sound recordings, or other familiar items from the past. Although there are few quality studies on the effectiveness of RT, it may be beneficial for cognition and mood.[179] Simulated presence therapy (SPT) is based on attachment theories and involves playing a recording with voices of the closest relatives of the person with Alzheimer's disease. There is partial evidence indicating that SPT may reduce challenging behaviours.[180] Finally, validation therapy is based on acceptance of the reality and personal truth of another's experience, while sensory integration is based on exercises aimed to stimulate the senses. There is little evidence to support the usefulness of these therapies.[181][182]
The aim of cognition-oriented treatments, which include reality orientation and cognitive retraining, is the reduction of cognitive deficits. Reality orientation consists in the presentation of information about time, place or person to ease the person's understanding of their surroundings and their place in them. Cognitive retraining, on the other hand, tries to improve impaired capacities by exercising mental abilities. Both have shown some efficacy in improving cognitive capacities,[183][184] although in some studies these effects were transient and negative effects, such as frustration, have also been reported.[174] Stimulation-oriented treatments include art, music and pet therapies, exercise, and any other kind of recreational activity. Stimulation has modest support for improving behaviour, mood, and, to a lesser extent, function. Nevertheless, as important as these effects are, the main support for the use of stimulation therapies is the change in the person's routine.[174]
Caregiving
Further information: Caregiving and dementia
Since Alzheimer's has no cure and it gradually renders people incapable of tending to their own needs, caregiving essentially is the treatment and must be carefully managed over the course of the disease. During the early and moderate stages, modifications to the living environment and lifestyle can increase patient safety and reduce caretaker burden.[185][186] Examples of such modifications are the adherence to simplified routines, the placing of safety locks, the labelling of household items to cue the person with the disease, or the use of modified daily life objects.[174][187][188] The patient may also become incapable of feeding themselves, so they require food in smaller pieces or pureed.[189] When swallowing difficulties arise, the use of feeding tubes may be required. In such cases, the medical efficacy and ethics of continuing feeding is an important consideration of the caregivers and family members.[190][191] The use of physical restraints is rarely indicated in any stage of the disease, although there are situations when they are necessary to prevent harm to the person with AD or their caregivers.[174] As the disease progresses, different medical issues can appear, such as oral and dental disease, pressure ulcers, malnutrition, hygiene problems, or respiratory, skin, or eye infections. Careful management can prevent them, while professional treatment is needed when they do arise.[192][193] During the final stages of the disease, treatment is centred on relieving discomfort until death.[194] A small recent study in the US concluded that people whose caregivers had a realistic understanding of the prognosis and clinical complications of late dementia were less likely to receive aggressive treatment near the end of life.[195]
Feeding tubes
There is strong evidence that feeding tubes do not help people with advanced Alzheimer's dementia gain weight, regain strength or function, prevent aspiration pneumonias, or improve quality of life.[196][197][198][199]
Prognosis
Disability-adjusted life years for Alzheimer and other dementias per 100,000 inhabitants in 2004 (world map; legend ranges from "no data" and ≤ 50, in steps of 20, up to ≥ 250).
The early stages of Alzheimer's disease are difficult to diagnose. A definitive diagnosis is usually made once cognitive impairment compromises daily living activities, although the person may still be living independently. The symptoms will progress from mild cognitive problems, such as memory loss, through increasing stages of cognitive and non-cognitive disturbances, eliminating any possibility of independent living, especially in the late stages of the disease.[25] Life expectancy of the population with the disease is reduced.[9][200][201] The mean life expectancy following diagnosis is approximately seven years.[9] Fewer than 3% of people live more than fourteen years.[10] Disease features significantly associated with reduced survival are an increased severity of cognitive impairment, decreased functional level, history of falls, and disturbances in the neurological examination. Other coincident diseases such as heart problems, diabetes or history of alcohol abuse are also related with shortened survival.[200][202][203] While the earlier the age at onset, the higher the total survival years, life expectancy is particularly reduced when compared to the healthy population among those who are younger.[201] Men have a less favourable survival prognosis than women.[10][204] The disease is the underlying cause of death in 70% of all cases.[9] Pneumonia and dehydration are the most frequent immediate causes of death, while cancer is a less frequent cause of death than in the general population.[9][204]
Epidemiology
Two main measures are used in epidemiological studies: incidence and prevalence. Incidence is the number of new cases per unit of person–time at risk (usually the number of new cases per thousand person–years), while prevalence is the total number of cases of the disease in the population at any given time.

Incidence rates after age 65[205]
Age    New affected per thousand person–years
65–69  3
70–74  6
75–79  9
80–84  23
85–89  40
90–    69

Regarding incidence, cohort longitudinal studies (studies where a disease-free population is followed over the years) provide rates between 10 and 15 per thousand person–years for all dementias and 5–8 for AD,[205][206] which means that half of new dementia cases each year are AD. Advancing age is a primary risk factor for the disease and incidence rates are not equal for all ages: every five years after the age of 65, the risk of acquiring the disease approximately doubles, increasing from 3 to as much as 69 per thousand person–years.[205][206] There are also sex differences in the incidence rates, women having a higher risk of developing AD, particularly in the population older than 85.[206][207]
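The per-thousand-person-years convention and the "doubling every five years" pattern can be made concrete with a short arithmetic sketch (the cohort numbers below are hypothetical, chosen only to illustrate the calculation, not taken from the cited studies):

```python
# Incidence = new cases / person-years at risk, conventionally reported
# per 1,000 person-years. Cohort numbers here are hypothetical.

def incidence_per_thousand(new_cases: int, person_years: float) -> float:
    return 1000 * new_cases / person_years

# Hypothetical cohort: 45 new AD cases over 15,000 person-years of follow-up.
rate = incidence_per_thousand(45, 15_000)
print(rate)  # 3.0 per thousand person-years

# If risk roughly doubles every five years after 65, a rate of 3 at ages
# 65-69 grows through five doublings to 3 * 2**5 = 96 by the early 90s --
# the same order of magnitude as the rise from 3 to 69 in the table above.
doublings = [3 * 2**k for k in range(6)]
print(doublings)  # [3, 6, 12, 24, 48, 96]
```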
Prevalence of AD in populations is dependent upon different factors including incidence and survival. Since the incidence of AD increases with age, it is particularly important to include the mean age of the population of interest. In the United States, Alzheimer prevalence was estimated to be 1.6% in 2000, both overall and in the 65–74 age group, with the rate increasing to 19% in the 75–84 group and to 42% in the greater than 84 group.[208] Prevalence rates in less developed regions are lower.[209] The World Health Organization estimated that in 2005, 0.379% of people worldwide had dementia, and that the prevalence would increase to 0.441% in 2015 and to 0.556% in 2030.[210] Other studies have reached similar conclusions.[209] Another study estimated that in 2006, 0.40% of the world population (range 0.17–0.89%; absolute number 26.6 million, range 11.4–59.4 million) were afflicted by AD, and that the prevalence rate would triple and the absolute number would quadruple by 2050.[3]
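Converting between a prevalence fraction and an absolute case count is simple arithmetic; a brief sketch (the world-population figure is an assumed round value, not from the cited study):

```python
# Prevalence = existing cases / population at a point in time.
# The 0.40% figure is from the text; the 2006 world population is an
# assumed round number, used only to illustrate the conversion.

world_population_2006 = 6.6e9   # assumption: ~6.6 billion people
prevalence_fraction = 0.0040    # 0.40% of the world population

cases = prevalence_fraction * world_population_2006
print(round(cases / 1e6, 1))    # 26.4 (million), close to the cited 26.6 million

# The study projects roughly a quadrupling of the absolute number by 2050:
cases_2050 = 4 * 26.6e6
print(cases_2050 / 1e6)         # 106.4 (million)
```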
History
Alois Alzheimer's patient Auguste Deter in 1902. Hers was the first described case of what became known as Alzheimer's disease.
The ancient Greek and Roman philosophers and physicians associated old age with increasing dementia.[1] It was not until 1901 that German psychiatrist Alois Alzheimer identified the first case of what became known as Alzheimer's disease in a fifty-year-old woman he called Auguste D. He followed her case until she died in 1906, when he first reported publicly on it.[211] During the next five years, eleven similar cases were reported in the medical literature, some of them already using the term Alzheimer's disease.[1] The disease was first described as a distinctive disease by Emil Kraepelin after suppressing some of the clinical (delusions and hallucinations) and pathological features (arteriosclerotic changes) contained in the original report of Auguste D.[212] He included Alzheimer's disease, also named presenile dementia by Kraepelin, as a subtype of senile dementia in the eighth edition of his Textbook of Psychiatry, published on July 15, 1910.[213] For most of the 20th century, the diagnosis of Alzheimer's disease was reserved for individuals between the ages of 45 and 65 who developed symptoms of dementia. The terminology changed after 1977 when a conference on AD concluded that the clinical and pathological manifestations of presenile and senile dementia were almost identical, although the authors also added that this did not rule out the possibility that they had different causes.[214] This eventually led to the diagnosis of Alzheimer's disease independently of age.[215] The term senile dementia of the Alzheimer type (SDAT) was used for a time to describe the condition in those over 65, with classical Alzheimer's disease being used for those younger. Eventually, the term Alzheimer's disease was formally adopted in medical nomenclature to describe individuals of all ages with a characteristic common symptom pattern, disease course, and neuropathology.[216]
Society and culture
Social costs
Dementia, and specifically Alzheimer's disease, may be among the most costly diseases for society in Europe and the United States,[18][19] while their cost in other countries such as Argentina[217] or South Korea[218] is also high and rising. These costs will probably increase with the ageing of society, becoming an important social problem. AD-associated costs include direct medical costs such as nursing home care, direct nonmedical costs such as in-home day care, and indirect costs such as lost productivity of both patient and caregiver.[19] Numbers vary between studies, but dementia costs worldwide have been calculated at around $160 billion,[219] while costs of Alzheimer's in the United States may be $100 billion each year.[19] The greatest source of costs for society is long-term care by health care professionals, and particularly institutionalisation, which corresponds to two-thirds of the total costs for society.[18] The cost of living at home is also very high,[18] especially when informal costs for the family, such as caregiving time and the caregiver's lost earnings, are taken into account.[220] Costs increase with dementia severity and the presence of behavioural disturbances,[221] and are related to the increased caregiving time required for the provision of physical care.[220] Therefore,
any treatment that slows cognitive decline, delays institutionalisation or reduces caregivers' hours will have economic benefits. Economic evaluations of current treatments have shown positive results.[19] Caregiving burden Further information: Caregiving and dementia
The role of the main caregiver is often taken by the spouse or a close relative.[14] Alzheimer's disease is known for placing a great burden on caregivers, which includes social, psychological, physical or economic aspects.[15][16][17] Home care is usually preferred by people with AD and their families.[222] This option also delays or eliminates the need for more professional and costly levels of care.[222][223] Nevertheless, two-thirds of nursing home residents have dementias.[174] Dementia caregivers are subject to high rates of physical and mental disorders.[224] Factors associated with greater psychosocial problems of the primary caregivers include having an affected person at home, the carer being a spouse, demanding behaviours of the cared-for person such as depression, behavioural disturbances, hallucinations, sleep problems or walking disruptions, and social isolation.[225][226] Regarding economic problems, family caregivers often give up time from work to spend 47 hours per week on average with the person with AD, while the costs of caring for them are high. Direct and indirect costs of caring for an Alzheimer's patient average between $18,000 and $77,500 per year in the United States, depending on the study.[14][220] Cognitive behavioural therapy and the teaching of coping strategies, either individually or in group, have demonstrated their efficacy in improving caregivers' psychological health.[15][227]
Notable cases
Further information: Alzheimer's in the media
Charlton Heston and Ronald Reagan at a meeting in the White House. Both of them would later be diagnosed with Alzheimer's disease.
As Alzheimer's disease is highly prevalent, many notable people have developed it. Well-known examples are former United States President Ronald Reagan and Irish writer Iris Murdoch, both of whom were the subjects of scientific articles examining how their cognitive capacities
deteriorated with the disease.[228][229][230] Other cases include the retired footballer Ferenc Puskás,[231] the former Prime Ministers Harold Wilson (United Kingdom) and Adolfo Suárez (Spain),[232][233] the actress Rita Hayworth,[234] the actor Charlton Heston,[235] the novelist Terry Pratchett,[236] Indian politician George Fernandes,[237] and the 2009 Nobel Prize in Physics recipient Charles K. Kao.[238] AD has also been portrayed in films such as: Iris (2001), based on John Bayley's memoir of his wife Iris Murdoch;[239] The Notebook (2004), based on Nicholas Sparks' 1996 novel of the same name;[240] A Moment to Remember (2004); Thanmathra (2005);[241] Memories of Tomorrow (Ashita no Kioku) (2006), based on Hiroshi Ogiwara's novel of the same name;[242] and Away from Her (2006), based on Alice Munro's short story "The Bear Came over the Mountain".[243] Documentaries on Alzheimer's disease include Malcolm and Barbara: A Love Story (1999) and Malcolm and Barbara: Love's Farewell (2007), both featuring Malcolm Pointon.[244]
Research directions
Main article: Alzheimer's disease clinical research
As of 2012, the safety and efficacy of more than 400 pharmaceutical treatments had been or were being investigated in 1012 clinical trials worldwide, and approximately a quarter of these compounds were in Phase III trials, the last step prior to review by regulatory agencies.[12] One area of clinical research is focused on treating the underlying disease pathology. Reduction of beta-amyloid levels is a common target of compounds[245] (such as apomorphine) under investigation. Immunotherapy or vaccination for the amyloid protein is one treatment modality under study.[246] Unlike preventative vaccination, the putative therapy would be used to treat people already diagnosed. It is based upon the concept of training the immune system to recognise, attack, and reverse deposition of amyloid, thereby altering the course of the disease.[247] An example of such a vaccine under investigation was ACC-001,[248][249] although the trials were suspended in 2008.[250] Another similar agent is bapineuzumab, an antibody designed as identical to the naturally induced anti-amyloid antibody.[251] Other approaches are neuroprotective agents, such as AL-108,[252] and metal-protein interaction attenuation agents, such as PBT2.[253] A TNFα receptor fusion protein, etanercept, has shown encouraging results.[254] In 2008, two separate clinical trials showed positive results in modifying the course of disease in mild to moderate AD with methylthioninium chloride (trade name rember), a drug that inhibits tau aggregation,[255][256] and dimebon, an antihistamine.[257] The subsequent phase-III trial of dimebon failed to show positive effects in the primary and secondary endpoints.[258][259][260] The common herpes simplex virus HSV-1 has been found to colocate with amyloid plaques.[261] This suggested the possibility that AD could be treated or prevented with antiviral medication.[261][262] Preliminary research on the effects of meditation on retrieving memory and cognitive functions has been
encouraging. Limitations of this research can be addressed in future studies with more detailed analyses.[263]
An FDA advisory committee voted unanimously to recommend approval of florbetapir, which is currently used in an investigational study. The imaging agent can help to detect Alzheimer's brain plaques, but will require additional clinical research before it can be made available commercially.[264]
Dementia
Classification and external resources
ICD-10: F00–F07
ICD-9: 290–294
DiseasesDB: 29283
MedlinePlus: 000739
MeSH: D003704
Dementia (from Latin, originally meaning "madness", from de- "without" + ment-, the root of mens "mind") is a serious loss of global cognitive ability in a previously unimpaired person, beyond what might be expected from normal ageing. It may be static, the result of a unique global brain injury, or progressive, resulting in long-term decline due to damage or disease in the body. Although dementia is far more common in the geriatric population, it can occur before the age of 65, in which case it is termed "early onset dementia".[1] Dementia is not a single disease, but a non-specific illness syndrome (i.e., a set of signs and symptoms). Affected cognitive areas can be memory, attention, language, and problem solving. Normally, symptoms must be present for at least six months to support a diagnosis.[2] Cognitive dysfunction of shorter duration is called delirium. In all types of general cognitive dysfunction, higher mental functions are affected first in the process. Especially in later stages of the condition, subjects may be disoriented in time (not knowing the day, week, or even year), in place (not knowing where they are), and in person (not knowing who they, or others around them, are). Dementia, though often treatable to some degree, is usually due to causes that are progressive and incurable, as observed in primary progressive aphasia (PPA).[3][4][5] Symptoms of dementia can be classified as either reversible or irreversible, depending upon the etiology of the disease. Fewer than 10% of cases of dementia are due to causes that may presently be reversed with treatment. Causes include many different specific disease processes,
in the same way that symptoms of organ dysfunction such as shortness of breath, jaundice, or pain are attributable to many etiologies. Delirium can be easily confused with dementia due to similar symptoms. Delirium is characterized by a sudden onset, fluctuating course, and a short duration (often lasting from hours to weeks), and is primarily related to a somatic (or medical) disturbance. In comparison, dementia typically has an insidious onset (except in the cases of a stroke or trauma), slow decline of mental functioning, as well as a longer duration (from months to years).[6] Some mental illnesses, including depression and psychosis, may produce symptoms that must be differentiated from both delirium and dementia.[7] There are many specific types (causes) of dementia, often showing slightly different symptoms. However, the symptom overlap is such that it is impossible to diagnose the type of dementia by symptomatology alone, and in only a few cases are symptoms enough to give a high probability of some specific cause. Diagnosis is therefore aided by nuclear medicine brain scanning techniques. Certainty cannot be attained except with brain biopsy during life, or at autopsy after death. Some of the most common forms of dementia are: Alzheimer's disease, vascular dementia, frontotemporal dementia, semantic dementia and dementia with Lewy bodies. It is possible for a patient to exhibit two or more dementing processes at the same time, as none of the known types of dementia protects against the others. Indeed, about ten per cent of people with dementia have what is known as mixed dementia, which may be a combination of Alzheimer's disease and multi-infarct dementia.[8][9]
Signs and symptoms
Comorbidities
Dementia is not merely a problem of memory. It reduces the ability to learn, reason, and retain or recall past experience, and there is also loss of patterns of thoughts, feelings and activities (Gelder et al. 2005). Additional mental and behavioral problems often affect people who have dementia, and may influence quality of life, caregivers, and the need for institutionalization. As dementia worsens, individuals may neglect themselves, become disinhibited, and become incontinent (Gelder et al. 2005). Depression affects 20–30% of people who have dementia, and about 20% have anxiety.[10] Psychosis (often delusions of persecution) and agitation/aggression also often accompany dementia. Each of these must be assessed and treated independently of the underlying dementia.[11]
Causes
Fixed cognitive impairment
Various types of brain injury, occurring as a single event, may cause irreversible but fixed cognitive impairment. Traumatic brain injury may cause generalized damage to the white matter of the brain (diffuse axonal injury), or more localized damage (as also may neurosurgery). A temporary reduction in the brain's supply of blood or oxygen may lead to hypoxic-ischemic injury. Strokes (ischemic stroke, or intracerebral, subarachnoid, subdural or extradural hemorrhage) or infections (meningitis and/or encephalitis) affecting the brain, prolonged epileptic seizures and acute hydrocephalus may also have long-term effects on cognition. Excessive alcohol use may cause alcohol dementia, Wernicke's encephalopathy and/or Korsakoff's psychosis. Slowly progressive dementia
Dementia that begins gradually and worsens progressively over several years is usually caused by neurodegenerative disease, that is, by conditions that affect only or primarily the neurons of the brain and cause gradual but irreversible loss of function of these cells. Less commonly, a non-degenerative condition may have secondary effects on brain cells, which may or may not be reversible if the condition is treated. Causes of dementia depend on the age at which symptoms begin. In the elderly population (usually defined in this context as over 65 years of age), a large majority of cases of dementia are caused by Alzheimer's disease, vascular dementia, or both. Dementia with Lewy bodies is another common cause, which again may occur alongside either or both of the other causes.[12][13][14] Hypothyroidism sometimes causes slowly progressive cognitive impairment as the main symptom, and this may be fully reversible with treatment. Normal pressure hydrocephalus, though relatively rare, is important to recognize since treatment may prevent progression and improve other symptoms of the condition. However, significant cognitive improvement is unusual. Dementia is much less common under 65 years of age. Alzheimer's disease is still the most frequent cause, but inherited forms of the disease account for a higher proportion of cases in this age group. Frontotemporal lobar degeneration and Huntington's disease account for most of the remaining cases.[15] Vascular dementia also occurs, but this in turn may be due to underlying conditions (including antiphospholipid syndrome, CADASIL, MELAS, homocystinuria, moyamoya and Binswanger's disease). People who receive frequent head trauma, such as boxers or football players, are at risk of chronic traumatic encephalopathy[16] (also called dementia pugilistica in boxers).
In young adults (up to 40 years of age) who were previously of normal intelligence, it is very rare to develop dementia without other features of neurological disease, or without features of disease elsewhere in the body. Most cases of progressive cognitive disturbance in this age group are caused by psychiatric illness, alcohol or other drugs, or metabolic disturbance. However, certain genetic disorders can cause true neurodegenerative dementia at this age. These include
familial Alzheimer's disease, SCA17 (dominant inheritance); adrenoleukodystrophy (X-linked); Gaucher's disease type 3, metachromatic leukodystrophy, Niemann-Pick disease type C, pantothenate kinase-associated neurodegeneration, Tay-Sachs disease and Wilson's disease (all recessive). Wilson's disease is particularly important since cognition can improve with treatment. At all ages, a substantial proportion of patients who complain of memory difficulty or other cognitive symptoms are suffering from depression rather than a neurodegenerative disease. Vitamin deficiencies and chronic infections may also occur at any age; they usually cause other symptoms before dementia occurs, but occasionally mimic degenerative dementia. These include deficiencies of vitamin B12, folate or niacin, and infective causes including cryptococcal meningitis, HIV, Lyme disease, progressive multifocal leukoencephalopathy, subacute sclerosing panencephalitis, syphilis and Whipple's disease. Rapidly progressive dementia
Creutzfeldt-Jakob disease typically causes a dementia that worsens over weeks to months, being caused by prions. The common causes of slowly progressive dementia also sometimes present with rapid progression: Alzheimer's disease, dementia with Lewy bodies, frontotemporal lobar degeneration (including corticobasal degeneration and progressive supranuclear palsy). On the other hand, encephalopathy or delirium may develop relatively slowly and resemble dementia. Possible causes include brain infection (viral encephalitis, subacute sclerosing panencephalitis, Whipple's disease) or inflammation (limbic encephalitis, Hashimoto's encephalopathy, cerebral vasculitis); tumors such as lymphoma or glioma; drug toxicity (e.g. anticonvulsant drugs); metabolic causes such as liver failure or kidney failure; and chronic subdural hematoma. As a feature of other conditions
There are many other medical and neurological conditions in which dementia only occurs late in the illness, or as a minor feature. For example, a proportion of patients with Parkinson's disease develop dementia, though widely varying figures are quoted for this proportion.[citation needed] When dementia occurs in Parkinson's disease, the underlying cause may be dementia with Lewy bodies or Alzheimer's disease, or both.[17] Cognitive impairment also occurs in the Parkinson-plus syndromes of progressive supranuclear palsy and corticobasal degeneration (and the same underlying pathology may cause the clinical syndromes of frontotemporal lobar degeneration). Chronic inflammatory conditions of the brain may affect cognition in the long term, including Behçet's disease, multiple sclerosis, sarcoidosis, Sjögren's syndrome and systemic lupus erythematosus. Although the acute porphyrias may cause episodes of confusion and psychiatric disturbance, dementia is a rare feature of these rare diseases. Aside from those mentioned above, inherited conditions that can cause dementia (alongside other symptoms) include:[18]
Alexander disease Canavan disease
Cerebrotendinous xanthomatosis Dentatorubral-pallidoluysian atrophy Fatal familial insomnia Fragile X-associated tremor/ataxia syndrome Glutaric aciduria type 1 Krabbe's disease Maple syrup urine disease
Niemann Pick disease type C Neuronal ceroid lipofuscinosis Neuroacanthocytosis Organic acidemias Pelizaeus-Merzbacher disease Urea cycle disorders Sanfilippo syndrome type B Spinocerebellar ataxia type 2
Diagnosis
Proper differential diagnosis between the types of dementia (cortical and subcortical) requires, at the least, referral to a specialist, e.g., a geriatric internist, geriatric psychiatrist, neurologist, neuropsychologist, or geropsychologist.[citation needed] Symptoms must have been evident for at least six months before a diagnosis of dementia or organic brain syndrome can be made (ICD-10).
Cognitive testing
Sensitivity and specificity of common tests for dementia

Test   Sensitivity   Specificity   Reference
MMSE   71%–92%       56%–96%       [19]
3MS    83%–93.5%     85%–90%       [20]
AMTS   73%–100%      71%–100%      [21]

There exist some brief tests (5–15 minutes) that have reasonable reliability and can be used in the office or other setting to screen cognitive status. Examples of such tests include the abbreviated mental test score (AMTS), the mini mental state examination (MMSE), the Modified Mini-Mental State Examination (3MS), the Cognitive Abilities Screening Instrument (CASI),[22] the Trail-making test,[23] and the clock drawing test.[24] Scores must be interpreted in the context of the person's educational and other background, and the particular circumstances. For example, a person who is highly depressed or in great pain is not expected to do well on many tests of mental ability. While many tests have been studied,[25][26][27] and some may emerge as better alternatives to the MMSE, presently the MMSE is the best studied and most commonly used. Another approach to screening for dementia is to ask an informant (relative or other supporter) to fill out a questionnaire about the person's everyday cognitive functioning. Informant questionnaires provide complementary information to brief cognitive tests. Probably the best known questionnaire of this sort is the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE).[28] The General Practitioner Assessment of Cognition, on the other hand, combines a patient assessment with an informant interview; it was specifically designed for use in the primary care setting and is also available as a web-based test. Further evaluation includes retesting at another date and administration of other tests of mental function. Increasingly, clinical neuropsychologists provide diagnostic consultation following administration of a full battery of cognitive tests, often lasting several hours, to determine functional patterns of decline associated with varying types of dementia. Tests of memory, executive function, processing speed, attention, and language skills are relevant, as are tests of emotional and psychological adjustment. These tests assist with ruling out other etiologies and determining relative cognitive decline over time or from estimates of prior cognitive abilities.
Laboratory tests
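The sensitivity and specificity figures in the table above have the standard screening-test definitions. As a minimal illustrative sketch of how such figures are computed (all counts here are hypothetical, not drawn from the cited studies):

```python
def sensitivity(true_positives, false_negatives):
    # Proportion of people who truly have dementia that the screen flags.
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives, false_positives):
    # Proportion of people without dementia that the screen correctly clears.
    return true_negatives / (true_negatives + false_positives)

# Hypothetical example: a screen flags 80 of 100 patients with dementia
# and clears 90 of 100 patients without it.
print(sensitivity(80, 20))  # 0.8
print(specificity(90, 10))  # 0.9
```

A test with high sensitivity misses few true cases, while high specificity produces few false alarms; the wide ranges in the table reflect how strongly both depend on the population studied and the cut-off score used.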
Routine blood tests are also usually performed to rule out treatable causes. These tests include vitamin B12, folic acid, thyroid-stimulating hormone (TSH), C-reactive protein, full blood count, electrolytes, calcium, renal function, and liver enzymes. Abnormalities may suggest vitamin deficiency, infection or other problems that commonly cause confusion or disorientation in the elderly. The problem is complicated by the fact that these cause confusion more often in persons who have early dementia, so that "reversal" of such problems may ultimately only be temporary.[citation needed] Testing for alcohol and other known dementia-inducing drugs may be indicated. Imaging
A CT scan or magnetic resonance imaging (MRI scan) is commonly performed, although these modalities do not have optimal sensitivity for the diffuse metabolic changes associated with dementia in a patient that shows no gross neurological problems (such as paralysis or weakness) on neurological exam. CT or MRI may suggest normal pressure hydrocephalus, a potentially reversible cause of dementia, and can yield information relevant to other types of dementia, such as infarction (stroke) that would point at a vascular type of dementia. The functional neuroimaging modalities of SPECT and PET are more useful in assessing longstanding cognitive dysfunction, since they have shown similar ability to diagnose dementia as a clinical exam and cognitive testing.[29] The ability of SPECT to differentiate the vascular cause (i.e., multi-infarct dementia) from Alzheimer's disease dementias, appears superior to differentiation by clinical exam.[30] Recent research has established the value of PET imaging using carbon-11 Pittsburgh Compound B as a radiotracer (PIB-PET) in predictive diagnosis of various kinds of dementia, in particular
Alzheimer's disease. Studies from Australia have found PIB-PET 86% accurate in predicting which patients with mild cognitive impairment would develop Alzheimer's disease within two years. In another study, carried out using 66 patients seen at the University of Michigan, PET studies using either PIB or another radiotracer, carbon-11 dihydrotetrabenazine (DTBZ), led to more accurate diagnosis for more than one-fourth of patients with mild cognitive impairment or mild dementia.[31]
Prevention Main article: Prevention of dementia
A study done at the University of Bari in Italy found that a group drinking alcoholic beverages moderately had a slower progression to dementia. In a group of 1,566 elderly Italians, of whom 1,445 had no cognitive impairment and 121 had mild cognitive impairment, the study found that over a duration of 3.5 years the people with MCI who drank less than one alcoholic beverage a day progressed to dementia at a rate 85% slower than those who drank no alcoholic beverages. However, the authors of the study commented that since it was epidemiologic, the findings might only be a marker of lifestyle, showing that "moderate lifestyle" in general is associated with slower dementia progression.[32] A study failed to show a conclusive link between high blood pressure and developing dementia. The study, published in the Lancet Neurology journal in July 2008, found that blood pressure lowering medication did not reduce dementia, but meta-analysis of the study data combined with other data suggested that further study could be warranted.[33] Brain-derived neurotrophic factor (BDNF) expression is associated with prevention of some dementia types.[34][35][36] A Canadian study found that a lifetime of bilingualism delays the onset of dementia by an average of four years when compared to monolingual patients.[37][38][39]
Management
Except for the treatable types listed above, there is no cure for this illness. Cholinesterase inhibitors are often used early in the disease course. Cognitive and behavioral interventions may also be appropriate. Educating and providing emotional support to the caregiver (or carer) is of importance as well (see also elderly care). It is important to recognize that since dementia impairs normal communication due to changes in receptive and expressive language, as well as the ability to plan and problem solve, agitated behaviour is often a form of communication for the person with dementia, and actively searching for a potential cause, such as pain, physical illness, or overstimulation, can be helpful in reducing agitation.[40] Additionally, using an "ABC analysis of behaviour" can be a useful tool for understanding behavior in patients with dementia. It involves looking at the antecedents (A), behavior (B), and consequences (C) associated with an event to help define the problem and prevent further incidents that may arise if the person's needs are misunderstood.[41]
Medications
Currently, no medications are clinically proven to prevent or cure dementia.[42] Although some medications are approved for use in the treatment of dementia, these treat the behavioural and cognitive symptoms of dementia, but have no effect on the underlying pathophysiology.[43]
Acetylcholinesterase inhibitors: Tacrine (Cognex), donepezil (Aricept), galantamine (Razadyne), and rivastigmine (Exelon) are approved by the United States Food and Drug Administration (FDA) for treatment of dementia induced by Alzheimer's disease. They may be useful for other similar diseases causing dementia, such as Parkinson's disease or vascular dementia.[43] Acetylcholinesterase inhibitors aim to increase the amount of the neurotransmitter acetylcholine, which is deficient in people with dementia.[44] This is done by inhibiting the action of the enzyme acetylcholinesterase, which breaks down acetylcholine as part of normal brain function.[45] Though these medications are commonly prescribed, in a minority of patients these drugs can cause side effects including bradycardia and syncope.[46] N-methyl-D-aspartate (NMDA) receptor blockers: Memantine is marketed under several names by different pharmaceutical companies, including Abixa, Akatinol, Axura, Ebixa, Memox and Namenda.[47] In dementia, NMDA receptors are over-stimulated by glutamate, which creates problems for neurotransmission (and thus cognition) and also leads to damage to neurons through excitotoxicity. Memantine is thought to work by improving the "signal-to-noise" ratio and preventing excitotoxic damage.[48] Hence, due to their differing mechanisms of action, memantine and acetylcholinesterase inhibitors can be used in combination with each other.[49][50]
Off label
"Off label" use of a drug means use that the FDA has not formally approved for the drug, but is still legal at a doctor's discretion. Due to lack of formal FDA approval studies in given patient populations, off label use of drugs is common in medical practice. In treating children, the mentally ill, and dementia patients, off label drug use is even more common—since lack of informed consent for the treatment group in studies makes these more expensive and difficult (since it must be done by proxy). For off-patent pharmaceuticals, treatment studies are less often done, due to the greater expense. Drugs sometimes used off-label to treat underlying causes of dementia, or symptoms of dementia, include:
Antidepressant drugs: Depression is frequently associated with dementia and generally worsens the degree of cognitive and behavioral impairment. Antidepressants effectively treat the cognitive and behavioral symptoms of depression in patients with Alzheimer's disease,[51] but evidence for their use in other forms of dementia is weak.[52] Anxiolytic drugs: Many patients with dementia experience anxiety symptoms. Although benzodiazepines like diazepam (Valium) have been used for treating anxiety in other situations, they are often avoided because they may increase agitation in persons with dementia and are likely to worsen cognitive problems or are too sedating. Buspirone (Buspar) is often initially tried for mild-to-moderate anxiety.[citation needed] There is little evidence for the effectiveness of
benzodiazepines in dementia, whereas there is evidence for the effectiveness of antipsychotics (at low doses).[53] Selegiline, a drug used primarily in the treatment of Parkinson's disease, appears to slow the development of dementia. Selegiline is thought to act as an antioxidant, preventing free radical damage. However, it also acts as a stimulant, making it difficult to determine whether the delay in onset of dementia symptoms is due to protection from free radicals or to the general elevation of brain activity from the stimulant effect.[54] Antipsychotic drugs: Both typical antipsychotics (such as haloperidol) and atypical antipsychotics (such as risperidone) increase the risk of death in dementia-associated psychosis.[55] This means that any use of antipsychotic medication for dementia-associated psychosis is off-label and should only be considered after discussing the risks and benefits of treatment with these drugs, and after other treatment modalities have failed. In the UK around 144,000 people with dementia are unnecessarily prescribed antipsychotic drugs; around 2,000 patients die each year as a result of taking them.[56]
Pain See also: Assessment in nonverbal patients
As people age, they experience more health problems, and most health problems associated with aging carry a substantial burden of pain; so, between 25% and 50% of older adults experience persistent pain. Seniors with dementia experience the same prevalence of conditions likely to cause pain as seniors without dementia.[57] Pain is often overlooked in older adults and, when screened for, often poorly assessed, especially among those with dementia since they become incapable of informing others that they're in pain.[57][58] Beyond the issue of humane care, unrelieved pain has functional implications. Persistent pain can lead to decreased ambulation, depressed mood, sleep disturbances, impaired appetite and exacerbation of cognitive impairment,[58] and pain-related interference with activity is a factor contributing to falls in the elderly.[57][59] Although persistent pain in the person with dementia is difficult to communicate, diagnose and treat, failure to address persistent pain has profound functional, psychosocial and quality of life implications for this vulnerable population. Health professionals often lack the skills and usually lack the time needed to recognize, accurately assess and adequately monitor pain in people with dementia.[57][60] Family and friends can make a valuable contribution to the care of a person with dementia by learning to recognize and assess their pain. Educational resources (such as the Understand Pain and Dementia tutorial) and observational assessment tools are available.[57][61][62] Services
Adult daycare centers as well as special care units in nursing homes often provide specialized care for dementia patients. Adult daycare centers offer supervision, recreation, meals, and limited health care to participants, as well as providing respite for caregivers.
In addition, home care can provide one-on-one support and care in the home, allowing for the more individualized attention that is needed as the disease progresses. While some preliminary studies have found that music therapy may be useful in helping patients with dementia, their quality has been low and no reliable conclusions can be drawn from them.[63] Psychiatric nurses can make a distinctive contribution to people's mental health. Psychiatric nursing is based on four main premises:
Nursing support is an interactive, developmental human activity that is more concerned with the future development of the person than with its origins. The experience of mental distress related to the psychiatric disorder is represented through disturbances or reports of private events that are known only to the person concerned. Nurses and the people in care are engaged in a relationship based on mutual influence. The experience of psychiatric disorder is translated into problems of everyday living, and the nurse notes the human responses to the psychiatric distress, not the disorder.[64]
Feeding tubes
In advanced dementia, patients may lose the ability to swallow effectively, leading to consideration of feeding tubes as a way to provide nutrition. The risks associated with the use of tubes are not well known.[65] However, they include agitation, the patient pulling out the feeding tube, and tubes becoming dislodged, clogged, or malpositioned. There is about a 1% fatality rate directly related to the procedure,[66] with a 3% major complication rate.[67]
Society and culture
Driving with dementia can lead to severe injury or even death to self and others. Doctors should advise on appropriate testing and on when to quit driving.[68] In the United States, Florida's Baker Act allows law-enforcement authorities and the judiciary to force mental evaluation of those suspected of suffering from dementia or other mental incapacities.[citation needed] In the United Kingdom, as with all mental disorders, where a person with dementia could potentially be a danger to themselves or others, they can be detained under the Mental Health Act 1983 for the purposes of assessment, care and treatment. This is a last resort, and usually avoided if the patient has family or friends who can ensure care. The United Kingdom DVLA (Driver and Vehicle Licensing Agency) states that people with dementia who specifically suffer from poor short-term memory, disorientation, or lack of insight or judgment are almost certainly not fit to drive, and in these instances the DVLA must be informed so that the licence can be revoked. They do, however, acknowledge low-severity cases and
those with an early diagnosis, and those drivers may be permitted to drive pending medical reports. Behaviour may be disorganized, restless or inappropriate. Some people become restless or wander about by day and sometimes at night. When people suffering from dementia are put in circumstances beyond their abilities, there may be a sudden change to tears or anger (a "catastrophic reaction").[69] Many countries consider the care of people living with dementia to be a national priority, and invest in resources and education to better inform health and social service workers, unpaid carers, relatives and members of the wider community. Countries including the Netherlands, New Zealand, and all nations within the United Kingdom (Scotland, England, Wales and Northern Ireland) have published national plans or strategies.[70] In these national plans, there is recognition that people can live well with dementia for a number of years, as long as there is the right support and timely access to a diagnosis. David Cameron has described dementia as a "national crisis", affecting 800,000 people in the United Kingdom.[71] There are many support networks available to those who have a diagnosis of dementia, and to their families and carers. There are also charitable organisations which aim to raise awareness and campaign for the rights of people living with dementia. It is often useful to contact these organisations if you are worried about dementia. Some examples are: the Alzheimer's Society (UK based), Alzheimer Scotland (full name Alzheimer Scotland: Action on Dementia) and the Alzheimer's Association (USA). Whilst it might appear that there is only support for Alzheimer's disease, many of these charitable organisations in fact aim to increase the quality of life and uphold the rights of anyone with a diagnosis of any dementia. This includes vascular dementia, dementia with Lewy bodies and other rarer forms.
A competition by the Design Council found that the smell of a bakewell tart, wrist bands that could help and guide dogs for the mind[clarification needed] were all suggestions for ideas to help people with dementia.[72] German nursing homes have installed fake bus stops, so patients with dementia "wait" for a bus there instead of wandering farther away.[73]
Epidemiology
Disability-adjusted life years for Alzheimer's and other dementias per 100,000 inhabitants in 2002 (map legend ranges from <100 to >300).
The number of cases of dementia worldwide in 2010 was estimated at 35.6 million.[74] Rates increase significantly with age, with dementia affecting 5% of the population older than 65 and 20–40% of those older than 85.[75] Around two thirds of individuals with dementia live in low- and middle-income countries, where the sharpest increases in numbers are predicted.[76] Rates are slightly higher in women than men at ages 65 and greater.[75]
History Main articles: Dementia praecox and Alzheimer's disease
Until the end of the 19th century, dementia was a much broader clinical concept. It included mental illness and any type of psychosocial incapacity, including conditions that could be reversed.[77] Dementia at this time simply referred to anyone who had lost the ability to reason, and was applied equally to psychosis of mental illness, to "organic" diseases like syphilis that destroy the brain, and to the dementia associated with old age, which was attributed to "hardening of the arteries." Dementia in the elderly was called senile dementia or senility, and viewed as a normal and somewhat inevitable aspect of growing old, rather than as being caused by any specific diseases. At the same time, in 1907, a specific organic dementing process of early onset, called Alzheimer's disease, had been described. This was associated with particular microscopic changes in the brain, but was seen as a rare disease of middle age. Much like other diseases associated with aging, dementia was rare before the 20th century, although by no means unknown, due to the fact that it is most prevalent in people over 80, and such lifespans were uncommon in preindustrial times. Conversely, syphilitic dementia was widespread in the developed world until largely being eradicated by the use of penicillin after WWII. By the period of 1913–20, schizophrenia had been well-defined in a way similar to today, and the term dementia praecox had also been used to suggest the development of senile-type dementia at a younger age. Eventually the two fused, so that until 1952 physicians used the terms dementia praecox (precocious dementia) and schizophrenia interchangeably. The term precocious dementia for a mental illness suggested that a type of mental illness like schizophrenia (including paranoia and decreased cognitive capacity) could be expected to arrive normally in all persons with greater age (see paraphrenia).
After about 1920, the beginning use of dementia for what we now understand as schizophrenia and senile dementia helped limit the word's meaning to "permanent, irreversible mental deterioration." This began the change to the more recognizable use of the term today.
In 1976, neurologist Robert Katzmann suggested a link between senile dementia and Alzheimer's disease.[78] Katzmann suggested that much of the senile dementia occurring (by definition) after the age of 65, was pathologically identical with Alzheimer's disease occurring before age 65 and therefore should not be treated differently. He noted that the fact that "senile dementia" was not considered a disease, but rather part of aging, was keeping millions of aged patients experiencing what otherwise was identical with Alzheimer's disease from being diagnosed as having a disease process, rather than simply considered as aging normally.[79] Katzmann thus suggested that Alzheimer's disease, if taken to occur over age 65, is actually common, not rare, and was the 4th or 5th leading cause of death, even though rarely reported on death certificates in 1976. This suggestion opened the view that dementia is never normal, and must always be the result of a particular disease process, and is not part of the normal healthy aging process, per se. The ensuing debate led for a time to the proposed disease diagnosis of "senile dementia of the Alzheimer's type" (SDAT) in persons over the age of 65, with "Alzheimer's disease" diagnosed in persons younger than 65 who had the same pathology. Eventually, however, it was agreed that the age limit was artificial, and that Alzheimer's disease was the appropriate term for persons with the particular brain pathology seen in this disease, regardless of the age of the person with the diagnosis. A helpful finding was that although the incidence of Alzheimer's disease increased with age (from 5-10% of 75-year-olds to as many as 40-50% of 90-year-olds), there was no age at which all persons developed it, so it was not an inevitable consequence of aging, no matter how great an age a person attained. Evidence of this is shown by numerous documented supercentenarians (people living to 110+) that experienced no serious cognitive impairment. 
Also, after 1952, mental illnesses like schizophrenia were removed from the category of organic brain syndromes, and thus (by definition) removed from possible causes of "dementing illnesses" (dementias). At the same time, however, the traditional cause of senile dementia, "hardening of the arteries", returned as a set of dementias of vascular cause (small strokes). These were now termed multi-infarct dementias or vascular dementias. In the 21st century, a number of other types of dementia have been differentiated from Alzheimer's disease and vascular dementias (these two being the most common types). This differentiation is on the basis of pathological examination of brain tissues, symptomatology, and different patterns of brain metabolic activity in nuclear medical imaging tests such as SPECT and PET scans of the brain. The various forms of dementia have differing prognoses (expected outcome of illness), and also differing sets of epidemiologic risk factors. The causal etiology of many of them, including Alzheimer's disease, remains unknown, although many theories exist, such as accumulation of protein plaques as part of normal aging, inflammation, inadequate blood sugar, and traumatic brain injury.
Huntington's disease
Classification and external resources
A microscope image of medium spiny neurons (yellow) with nuclear inclusions (orange), which occur as part of the disease process; image width 360 µm
ICD-10: G10, F02.2
ICD-9: 333.4, 294.1
OMIM: 143100
DiseasesDB: 6060
MedlinePlus: 000770
eMedicine: article/1150165, article/792600, article/289706
MeSH: D006816
Huntington's disease (HD) is a neurodegenerative genetic disorder that affects muscle coordination and leads to cognitive decline and psychiatric problems. It typically becomes noticeable in mid-adult life. HD is the most common genetic cause of abnormal involuntary writhing movements called chorea, which is why the disease used to be called Huntington's chorea. It is much more common in people of Western European descent than in those of Asian or African ancestry. The disease is caused by an autosomal dominant mutation in either of an individual's two copies of a gene called Huntingtin, which means any child of an affected person
typically has a 50% chance of inheriting the disease. Physical symptoms of Huntington's disease can begin at any age from infancy to old age, but usually begin between 35 and 44 years of age. Through genetic anticipation, the disease may develop earlier in life in each successive generation. About 6% of cases start before the age of 21 years with an akinetic-rigid syndrome; they progress faster and vary slightly. The variant is classified as juvenile, akinetic-rigid or Westphal variant HD. The Huntingtin gene provides the genetic information for a protein that is also called "huntingtin". Expansion of a CAG triplet repeat stretch within the Huntingtin gene results in a different (mutant) form of the protein, which gradually damages cells in the brain, through mechanisms that are not fully understood. The genetic basis of HD was discovered in 1993 by an international collaborative effort spearheaded by the Hereditary Disease Foundation. Genetic testing can be performed at any stage of development, even before the onset of symptoms. This fact raises several ethical debates: the age at which an individual is considered mature enough to choose testing; whether parents have the right to have their children tested; and managing confidentiality and disclosure of test results. Genetic counseling has developed to inform and aid individuals considering genetic testing and has become a model for other genetically dominant diseases. Symptoms of the disease can vary between individuals and even among affected members of the same family, but usually progress predictably. The earliest symptoms are often subtle problems with mood or cognition. A general lack of coordination and an unsteady gait often follow. As the disease advances, uncoordinated, jerky body movements become more apparent, along with a decline in mental abilities and behavioral and psychiatric problems. Physical abilities are gradually impeded until coordinated movement becomes very difficult.
Mental abilities generally decline into dementia. Complications such as pneumonia, heart disease, and physical injury from falls reduce life expectancy to around twenty years after symptoms begin. There is no cure for HD, and full-time care is required in the later stages of the disease. Existing pharmaceutical and non-drug treatments can relieve many of its symptoms. Research and support organizations, first founded in the 1960s and increasing in number, work to increase public awareness, to provide support for individuals and their families, and to promote and facilitate research. Many new research discoveries have been made and understanding of the disease is improving. Current research directions include determining the exact mechanism of the disease, improving animal models to expedite research, clinical trials of pharmaceuticals to treat symptoms or slow the progression of the disease, and studying procedures such as stem cell therapy with the goal of repairing damage caused by the disease.
Signs and symptoms Symptoms of Huntington's disease commonly become noticeable between the ages of 35 and 44 years, but they can begin at any age from infancy to old age.[1][2] In the early stages, there are
subtle changes in personality, cognition, and physical skills.[1] The physical symptoms are usually the first to be noticed, as cognitive and psychiatric symptoms are generally not severe enough to be recognized on their own at the earlier stages.[1] Almost everyone with Huntington's disease eventually exhibits similar physical symptoms, but the onset, progression and extent of cognitive and psychiatric symptoms vary significantly between individuals.[3][4] The most characteristic initial physical symptoms are jerky, random, and uncontrollable movements called chorea.[1] Chorea may be initially exhibited as general restlessness, small unintentionally initiated or uncompleted motions, lack of coordination, or slowed saccadic eye movements.[1] These minor motor abnormalities usually precede more obvious signs of motor dysfunction by at least three years.[3] Symptoms such as rigidity, writhing motions or abnormal posturing appear more clearly as the disorder progresses.[5] These are signs that the system in the brain that is responsible for movement has been affected.[6] Psychomotor functions become increasingly impaired, such that any action that requires muscle control is affected. Common consequences are physical instability, abnormal facial expression, and difficulties chewing, swallowing and speaking.[5] Eating difficulties commonly cause weight loss and may lead to malnutrition.[7][8] Sleep disturbances are also associated symptoms.[9] Juvenile HD differs from these symptoms in that it generally progresses faster and chorea is exhibited briefly, if at all, with rigidity being the dominant symptom. Seizures are also a common symptom of this form of HD.[5]

Reported prevalences of behavioral and psychiatric symptoms in Huntington's disease[10]
Irritability              38–73%
Apathy                    34–76%
Anxiety                   34–61%
Depressed mood            33–69%
Obsessive and compulsive  10–52%
Psychotic                 3–11%

Cognitive abilities are impaired progressively.[6] Especially affected are executive functions, which include planning, cognitive flexibility, abstract thinking, rule acquisition, initiating appropriate actions and inhibiting inappropriate actions.[6] As the disease progresses, memory deficits tend to appear. Reported impairments range from short-term memory deficits to long-term memory difficulties, including deficits in episodic (memory of one's life), procedural (memory of the body of how to perform an activity) and working memory.[6] Cognitive problems tend to worsen over time, ultimately leading to dementia.[6] This pattern of deficits has been called a subcortical dementia syndrome to distinguish it from the typical effects of cortical dementias, e.g. Alzheimer's disease.[6]
Reported neuropsychiatric manifestations are anxiety, depression, a reduced display of emotions (blunted affect), egocentrism, aggression, and compulsive behavior, the latter of which can cause or worsen addictions, including alcoholism, gambling, and hypersexuality.[10] Difficulties in recognizing other people's negative expressions have also been observed.[6] The prevalence of these symptoms is highly variable between studies, with estimated rates for lifetime prevalence of psychiatric disorders between 33% and 76%.[10] For many sufferers and their families, these symptoms are among the most distressing aspects of the disease, often affecting daily functioning and constituting reason for institutionalization.[10] Suicidal thoughts and suicide attempts are more common than in the general population.[1] Mutant Huntingtin is expressed throughout the body and associated with abnormalities in peripheral tissues that are directly caused by such expression outside the brain. These abnormalities include muscle atrophy, cardiac failure, impaired glucose tolerance, weight loss, osteoporosis and testicular atrophy.[11]
Genetics All humans have two copies of the Huntingtin gene (HTT), which codes for the protein Huntingtin (Htt). The gene is also called HD and IT15, which stands for 'interesting transcript 15'. Part of this gene is a repeated section called a trinucleotide repeat, which varies in length between individuals and may change length between generations. When the length of this repeated section reaches a certain threshold, it produces an altered form of the protein, called mutant Huntingtin protein (mHtt). The differing functions of these proteins are the cause of pathological changes which in turn cause the disease symptoms. The Huntington's disease mutation is genetically dominant and almost fully penetrant: mutation of either of a person's HTT genes causes the disease. It is not inherited according to sex, but the length of the repeated section of the gene, and hence its severity can be influenced by the sex of the affected parent.[12] Genetic mutation
HD is one of several trinucleotide repeat disorders which are caused by the length of a repeated section of a gene exceeding a normal range.[13] The HTT gene is located on the short arm of chromosome 4[13] at 4p16.3. HTT contains a sequence of three DNA bases—cytosine-adenine-guanine (CAG)—repeated multiple times (i.e. ... CAGCAGCAG ...), known as a trinucleotide repeat.[13] CAG is the genetic code for the amino acid glutamine, so a series of them results in the production of a chain of glutamine known as a polyglutamine tract (or polyQ tract), and the repeated part of the gene, the PolyQ region.[14] Classification of the trinucleotide repeat, and resulting disease status, depends on the number of CAG repeats:[13]

Repeat count   Classification       Disease status               Risk to offspring
<26            Normal               Will not be affected         None
27–35          Intermediate         Will not be affected         Elevated but <<50%
36–39          Reduced penetrance   May or may not be affected   50%
40+            Full penetrance      Will be affected             50%
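The threshold logic in the repeat-count ranges above can be expressed as a simple classifier. The sketch below is illustrative only; the function name is hypothetical, and the treatment of the boundary value 26 (which the ranges "<26" and "27–35" leave implicit) is an assumption, not a clinical standard:

```python
def classify_cag_repeats(n: int) -> str:
    """Classify an HTT CAG repeat count using the ranges given above.

    Note: the source ranges ("<26" and "27-35") leave the value 26
    unspecified; it is treated as Normal here by assumption.
    """
    if n <= 26:
        return "Normal"              # will not be affected
    if n <= 35:
        return "Intermediate"        # not affected; offspring risk elevated but <<50%
    if n <= 39:
        return "Reduced penetrance"  # may or may not be affected
    return "Full penetrance"         # will be affected
```

For example, a count of 42 falls into the fully penetrant range, while 30 is intermediate.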
Generally, people have fewer than 36 repeated glutamines in the polyQ region which results in production of the cytoplasmic protein Huntingtin.[13] However, a sequence of 36 or more glutamines results in the production of a protein which has different characteristics.[13] This altered form, called mHtt (mutant Htt), increases the decay rate of certain types of neurons. Regions of the brain have differing amounts and reliance on these types of neurons, and are affected accordingly.[5] Generally, the number of CAG repeats is related to how much this process is affected, and accounts for about 60% of the variation of the age of the onset of symptoms. The remaining variation is attributed to environment and other genes that modify the mechanism of HD.[13] 36–39 repeats result in a reduced-penetrance form of the disease, with a much later onset and slower progression of symptoms. In some cases the onset may be so late that symptoms are never noticed.[15] With very large repeat counts, HD has full penetrance and can occur under the age of 20, when it is then referred to as juvenile HD, akinetic-rigid, or Westphal variant HD. This accounts for about 7% of HD carriers.[16] Inheritance
Huntington's disease is inherited in an autosomal dominant fashion. The probability of each offspring inheriting an affected gene is 50%. Inheritance is independent of gender, and the phenotype does not skip generations.
Huntington's disease has autosomal dominant inheritance, meaning that an affected individual typically inherits one copy of the gene with an expanded trinucleotide repeat (the mutant allele) from an affected parent.[1] Since penetrance of the mutation is very high, those who have a mutated copy of the gene will have the disease. In this type of inheritance pattern, each offspring
of an affected individual has a 50% risk of inheriting the mutant allele and therefore being affected with the disorder (see figure). This probability is sex-independent.[17] Trinucleotide CAG repeats over 28 are unstable during replication and this instability increases with the number of repeats present.[15] This usually leads to new expansions as generations pass (dynamic mutations) instead of reproducing an exact copy of the trinucleotide repeat.[13] This causes the number of repeats to change in successive generations, such that an unaffected parent with an "intermediate" number of repeats (28–35), or "reduced penetrance" (36–40), may pass on a copy of the gene with an increase in the number of repeats that produces fully penetrant HD.[13] Such increases in the number of repeats (and hence earlier age of onset and severity of disease) in successive generations is known as genetic anticipation.[13] Instability is greater in spermatogenesis than oogenesis;[13] maternally inherited alleles are usually of a similar repeat length, whereas paternally inherited ones have a higher chance of increasing in length.[13][18] It is rare for Huntington's disease to be caused by a new mutation, where neither parent has over 36 CAG repeats.[19] In the rare situations where both parents have an expanded HD gene, the risk increases to 75%, and when either parent has two expanded copies, the risk is 100% (all children will be affected). Individuals with both genes affected are rare. For some time HD was thought to be the only disease for which possession of a second mutated gene did not affect symptoms and progression,[20] but it has since been found that it can affect the phenotype and the rate of progression.[13][21]
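The 50%, 75% and 100% risk figures above follow directly from Mendelian probability: each parent transmits one of their two HTT alleles at random. A short illustrative calculation (the function name is hypothetical, introduced only for this sketch):

```python
def offspring_risk(expanded_copies_parent1: int, expanded_copies_parent2: int) -> float:
    """Probability that a child inherits at least one expanded HTT allele,
    given how many expanded copies (0, 1 or 2) each parent carries."""
    p1 = expanded_copies_parent1 / 2  # chance parent 1 transmits an expanded allele
    p2 = expanded_copies_parent2 / 2  # chance parent 2 transmits an expanded allele
    return 1 - (1 - p1) * (1 - p2)    # at least one expanded allele inherited

print(offspring_risk(1, 0))  # 0.5, one heterozygous affected parent
print(offspring_risk(1, 1))  # 0.75, both parents heterozygous
print(offspring_risk(2, 0))  # 1.0, one parent carries two expanded copies
```

This reproduces the figures quoted in the text: 50% with one affected heterozygous parent, 75% when both parents carry an expanded allele, and 100% when a parent is homozygous for the expansion.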
Mechanism The Htt protein interacts with over 100 other proteins, and appears to have multiple biological functions.[22] The behavior of mutant huntingtin protein is not completely understood, but it is toxic to certain types of cells, particularly in the brain. Early damage is most evident in the striatum, but as the disease progresses, other areas of the brain are also more conspicuously affected. Early symptoms are attributable to functions of the striatum and its cortical connections - namely control over movement, mood and higher cognitive function.[12] Htt function See also: Huntingtin
Htt is expressed in all mammalian cells. The highest concentrations are found in the brain and testes, with moderate amounts in the liver, heart, and lungs.[12] The function of Htt in humans is unclear. It interacts with proteins which are involved in transcription, cell signaling and intracellular transporting.[12][23] In animals genetically modified to exhibit HD, several functions of Htt have been found.[24] In these animals, Htt is important for embryonic development, as its absence is related to embryonic death. It also acts as an anti-apoptotic agent preventing programmed cell death and controls the production of brain-derived neurotrophic factor, a protein which protects neurons and regulates their creation during neurogenesis. Htt also facilitates vesicular transport and synaptic transmission and controls neuronal gene transcription.[24] If the expression of Htt is increased and more Htt produced, brain cell survival is improved and the effects of mHtt are reduced, whereas when the expression of Htt is reduced,
the resulting characteristics are more typical of the presence of mHtt.[24] In humans the disruption of the normal gene does not cause the disease.[12] It is thought that the disease is not caused by inadequate production of Htt, but by a gain of toxic function of mHtt.[12] Cellular changes due to mHtt
A microscope image of a neuron with inclusion (stained orange) caused by HD, image width 250 µm
There are multiple cellular changes through which the toxic function of mHtt may manifest and produce the HD pathology.[25][26] During the biological process of posttranslational modification of mHtt, cleavage of the protein can leave behind shorter fragments constituted of parts of the polyglutamine expansion.[25] The polar nature of glutamine causes interactions with other proteins when it is overabundant in Htt proteins. Thus, the mHtt molecule strands will form hydrogen bonds with one another, forming a protein aggregate rather than folding into functional proteins.[27] Over time, the aggregates accumulate, ultimately interfering with neuron function because these fragments can then misfold and coalesce, in a process called protein aggregation, to form inclusion bodies within cells.[25][27] Neuronal inclusions interfere with cell function indirectly: the excess protein aggregates clump together at axons and dendrites in neurons, which mechanically stops the transmission of neurotransmitters because vesicles (filled with neurotransmitters) can no longer move through the cytoskeleton. Ultimately, over time, fewer and fewer neurotransmitters are available for release in signaling other neurons as the neuronal inclusions grow.[27] Inclusion bodies have been found in both the cell nucleus and cytoplasm.[25] Inclusion bodies in cells of the brain are one of the earliest pathological changes, and some experiments have found that they can be toxic for the cell, but other experiments have shown that they may form as part of the body's defense mechanism and help protect cells.[25] Several pathways by which mHtt may cause cell death have been identified. These include: effects on chaperone proteins, which help fold proteins and remove misfolded ones; interactions with caspases, which play a role in the process of removing cells; the toxic effects of glutamine on nerve cells; impairment of energy production within cells; and effects on the expression of genes.
The cytotoxic effects of mHtt are strongly enhanced by interactions with a protein called Rhes, which is expressed mainly in the striatum.[28] Rhes was found to induce sumoylation of mHtt, which causes the protein clumps to disaggregate—studies in cell culture showed that the clumps were much less toxic than the disaggregated form.[28]
An additional theory that explains another way cell function may be disrupted by HD proposes that damage to mitochondria in striatal cells (numerous accounts of mitochondrial metabolism deficiency have been found) and the interactions of the altered huntingtin protein with numerous proteins in neurons leads to an increased vulnerability to glutamine, which, in large amounts, has been found to be an excitotoxin. Excitotoxins may cause damage to numerous cellular structures. Although glutamine is not found in excessively high amounts, it has been postulated that because of the increased vulnerability, even normal amounts of glutamine can cause excitotoxins to be expressed.[29][30] Macroscopic changes due to mHtt
Area of the brain most damaged in early Huntington's disease – striatum (shown in purple)
HD affects the whole brain, but certain areas are more vulnerable than others. The most prominent early effects are in a part of the basal ganglia called the neostriatum, which is composed of the caudate nucleus and putamen.[12] Other areas affected include the substantia nigra, layers 3, 5 and 6 of the cerebral cortex, the hippocampus, Purkinje cells in the cerebellum, lateral tuberal nuclei of the hypothalamus and parts of the thalamus.[13] These areas are affected according to their structure and the types of neurons they contain, reducing in size as they lose cells.[13] Striatal spiny neurons are the most vulnerable, particularly ones with projections towards the external globus pallidus, with interneurons and spiny cells projecting to the internal pallidum being less affected.[13][31] HD also causes an abnormal increase in astrocytes and activation of the brain's immune cells, microglia.[32] The basal ganglia—the part of the brain most prominently affected in early HD—play a key role in movement and behavior control. Their functions are not fully understood, but current theories propose that they are part of the cognitive executive system[6] and the motor circuit.[33] The basal ganglia ordinarily inhibit a large number of circuits that generate specific movements. To initiate a particular movement, the cerebral cortex sends a signal to the basal ganglia that causes the inhibition to be released. Damage to the basal ganglia can cause the release or reinstatement of the inhibitions to be erratic and uncontrolled, which results in an awkward start to motion or motions to be unintentionally initiated, or a motion to be halted before, or beyond, its intended completion. The accumulating damage to this area causes the characteristic erratic movements associated with HD.[33]
Transcriptional dysregulation
CREB-binding protein (CBP), a transcription factor, is essential for cell function because, as a coactivator at a significant number of promoters, it activates the transcription of genes for survival pathways.[30] Furthermore, the amino acids that form CBP include a strip of 18 glutamines. Thus, the glutamines on CBP interact directly with the increased numbers of glutamine on the Htt chain and CBP gets pulled away from its typical location next to the nucleus.[34] Specifically, CBP contains an acetyltransferase domain; in an experiment performed by Steffan and colleagues, a Htt exon 1 fragment with 51 glutamines was shown to bind to this domain in CBP.[30] Autopsied brains of those who had Huntington's disease have also been found to have markedly reduced amounts of CBP.[34] In addition, when CBP is overexpressed, polyglutamine-induced death is diminished, further demonstrating that CBP plays an important role in Huntington's disease and neurons in general.[30]
Diagnosis Medical diagnosis of the onset of HD can be made following the appearance of physical symptoms specific to the disease.[1] Genetic testing can be used to confirm a physical diagnosis if there is no family history of HD. Even before the onset of symptoms, genetic testing can confirm if an individual or embryo carries an expanded copy of the trinucleotide repeat in the HTT gene that causes the disease. Genetic counseling is available to provide advice and guidance throughout the testing procedure, and on the implications of a confirmed diagnosis. These implications include the impact on an individual's psychology, career, family planning decisions, relatives and relationships. Despite the availability of pre-symptomatic testing, only 5% of those at risk of inheriting HD choose to do so.[12] Clinical
Coronal section from a MR brain scan of a patient with HD showing atrophy of the heads of the caudate nuclei, enlargement of the frontal horns of the lateral ventricles (hydrocephalus ex vacuo), and generalized cortical atrophy.[35]
A physical examination, sometimes combined with a psychological examination, can determine whether the onset of the disease has begun.[1] Excessive unintentional movements of any part of the body are often the reason for seeking medical consultation. If these are abrupt and have random timing and distribution, they suggest a diagnosis of HD. Cognitive or psychiatric symptoms are rarely the first symptoms diagnosed; they are usually only recognized in hindsight or when they develop further. How far the disease has progressed can be measured using the Unified Huntington's Disease Rating Scale, which provides an overall rating system based on motor, behavioral, cognitive, and functional assessments.[36][37] Medical imaging, such as computerized tomography (CT) and magnetic resonance imaging (MRI), can show atrophy of the caudate nuclei early in the disease, as seen in the illustration, but these changes are not diagnostic of HD. Cerebral atrophy can be seen in the advanced stages of the disease. Functional neuroimaging techniques such as fMRI and PET can show changes in brain activity before the onset of physical symptoms but are experimental tools, and not used clinically.[13] Genetic See also: Genetic testing
Because HD follows an autosomal dominant pattern of inheritance, there is a strong motivation for individuals who are at risk of inheriting it to seek a diagnosis. The genetic test for HD consists of a blood test which counts the numbers of CAG repeats in each of the HTT alleles.[38] A positive result is not considered a diagnosis, since it may be obtained decades before the symptoms begin. However, a negative test means that the individual does not carry the expanded copy of the gene and will not develop HD.[13] A pre-symptomatic test is a life-changing event and a very personal decision.[13] The main reason given for choosing testing for HD is to aid in career and family decisions.[13] Over 95% of individuals at risk of inheriting HD do not proceed with testing, mostly because there is no treatment.[13] A key issue is the anxiety an individual experiences about not knowing whether they will eventually develop HD, compared to the impact of a positive result.[12] Irrespective of the result, stress levels have been found to be lower two years after being tested, but the risk of suicide is increased after a positive test result.[12] Individuals found to have not inherited the disorder may experience survivor guilt with regard to family members who are affected.[12] Other factors taken into account when considering testing include the possibility of discrimination and the implications of a positive result, which usually means a parent has an affected gene and that the individual's siblings will be at risk of inheriting it.[12] Genetic counseling in HD can provide information, advice and support for initial decision-making, and then, if chosen, throughout all stages of the testing process.[39] Counseling and guidelines on the use of genetic testing for HD have become models for other genetic disorders, such as autosomal dominant cerebellar ataxias.[12][40][41] Presymptomatic testing for HD has also influenced testing for other illnesses with genetic variants such as polycystic kidney disease,
familial Alzheimer's disease and breast cancer.[40] The European
Molecular Genetics Quality Network has published a yearly external quality assessment scheme for molecular genetic testing for this disease and has developed best practice guidelines for genetic testing for HD to assist in testing and reporting of results.[42] Preimplantation genetic diagnosis
Embryos produced using in vitro fertilization may be genetically tested for HD using preimplantation genetic diagnosis (PGD). This technique, where one or two cells are extracted from a typically 4 to 8 cell embryo and then tested for the genetic abnormality, can then be used to ensure embryos affected with HD genes are not implanted, and therefore any offspring will not inherit the disease. Some forms of preimplantation genetic diagnosis — non-disclosure or exclusion testing — allow at-risk people to have HD-free offspring without revealing their own parental genotype, giving no information about whether they themselves are destined to develop HD. In exclusion testing, the embryos' DNA is compared with that of the parents and grandparents to avoid inheritance of the chromosomal region containing the HD gene from the affected grandparent. In non-disclosure testing, only disease-free embryos are replaced in the uterus while the parental genotype and hence parental risk for HD are never disclosed.[43][44] Prenatal testing
It is also possible to obtain a prenatal diagnosis for an embryo or fetus in the womb, using fetal genetic material acquired through chorionic villus sampling. This, too, can be paired with exclusion testing to avoid disclosure of parental genotype. Prenatal testing is performed on the understanding that if the fetus is found to carry an expanded HTT gene (or, in exclusion testing, found to be at 'high risk'), the pregnancy will be terminated.[45] Differential diagnosis
About 99% of HD diagnoses based on the typical symptoms and a family history of the disease are confirmed by genetic testing to have the expanded trinucleotide repeat that causes HD. Most of the remaining disorders are collectively labelled HD-like (HDL).[5][46] The cause of most HDL diseases is unknown, but those with known causes are due to mutations in the prion protein gene (HDL1), the junctophilin 3 gene (HDL2), a recessively inherited HTT gene (HDL3—only found in one family and poorly understood), and the gene encoding the TATA box-binding protein (HDL4/SCA17).[46] Other autosomal dominant diseases that can be misdiagnosed as HD are dentatorubral-pallidoluysian atrophy and neuroferritinopathy.[46] There are also autosomal recessive disorders that resemble sporadic cases of HD. Main examples are chorea acanthocytosis, pantothenate kinase-associated neurodegeneration and X-linked McLeod syndrome.[46]
Management
Chemical structure of tetrabenazine, an approved compound for the management of chorea in HD
There is no cure for HD, but there are treatments available to reduce the severity of some of its symptoms.[47] For many of these treatments, comprehensive clinical trials to confirm their effectiveness in treating symptoms of HD specifically are incomplete.[48][49] As the disease progresses the ability to care for oneself declines and carefully managed multidisciplinary caregiving becomes increasingly necessary.[48] Although there have been relatively few studies of exercises and therapies that help rehabilitate cognitive symptoms of HD, there is some evidence for the usefulness of physical therapy, occupational therapy, and speech therapy.[1] Tetrabenazine was approved in 2008 for treatment of chorea in Huntington's disease in the US.[50] Other drugs that help to reduce chorea include neuroleptics and benzodiazepines.[2] Compounds such as amantadine or remacemide are still under investigation but have shown preliminary positive results.[51] Hypokinesia and rigidity, especially in juvenile cases, can be treated with antiparkinsonian drugs, and myoclonic hyperkinesia can be treated with valproic acid.[2] Psychiatric symptoms can be treated with medications similar to those used in the general population.[48][49] Selective serotonin reuptake inhibitors and mirtazapine have been recommended for depression, while atypical antipsychotic drugs are recommended for psychosis and behavioral problems.[49] Specialist neuropsychiatric input is recommended as patients may require long-term treatment with multiple medications in combination.[1] Weight loss and eating difficulties due to dysphagia and other muscle discoordination are common, making nutrition management increasingly important as the disease advances.[48] Thickening agents can be added to liquids as thicker fluids are easier and safer to swallow.[48] Reminding the patient to eat slowly and to take smaller pieces of food into the mouth may also be of use to prevent choking.[48] If eating becomes too hazardous or 
uncomfortable, the option of using a percutaneous endoscopic gastrostomy is available. This is a feeding tube, permanently attached through the abdomen into the stomach, which reduces the risk of aspirating food and provides better nutritional management.[52] Assessment and management by speech and language therapists with experience in Huntington's disease is recommended.[1] Patients with Huntington's disease may see a physical therapist for non-invasive and nonmedication-based ways of managing the physical symptoms. Physical therapists may implement
fall risk assessment and prevention, as well as strengthening, stretching, and cardiovascular exercises. Walking aids may be prescribed as appropriate. Physical therapists also prescribe breathing exercises and airway clearance techniques with the development of respiratory problems.[53] Consensus guidelines on physiotherapy in Huntington's disease have been produced by the European HD Network.[53] Goals of early rehabilitation interventions are prevention of loss of function. Participation in rehabilitation programs during the early to middle stages of the disease may be beneficial as it translates into long-term maintenance of motor and functional performance. Rehabilitation during the late stage aims to compensate for motor and functional losses.[54] For long-term independent management, the therapist may develop home exercise programs for appropriate patients.[55] The families of individuals who have inherited or are at risk of inheriting HD have generations of experience of HD, but this knowledge may be outdated and lack awareness of recent breakthroughs and improvements in genetic testing, family planning choices, care management, and other considerations. Genetic counseling benefits these individuals by updating their knowledge, dispelling any myths they may have and helping them consider their future options and plans.[12][56]
Prognosis The length of the trinucleotide repeat accounts for 60% of the variation in the age at which symptoms appear and the rate at which they progress. A longer repeat results in an earlier age of onset and a faster progression of symptoms.[13][57] Individuals with more than sixty repeats often develop the disease before age 20, while those with fewer than 40 repeats may never develop noticeable symptoms.[58] The remaining variation is due to environmental factors and other genes that influence the mechanism of the disease.[13] Life expectancy in HD is generally around 20 years following the onset of visible symptoms.[5] Most life-threatening complications result from muscle coordination and, to a lesser extent, behavioral changes induced by declining cognitive function. The largest risk is pneumonia, which causes death in one third of those with HD. As the ability to synchronize movements deteriorates, difficulty clearing the lungs and an increased risk of aspirating food or drink both increase the risk of contracting pneumonia. The second greatest risk is heart disease, which causes almost a quarter of fatalities of those with HD.[5] Suicide is the next greatest cause of fatalities, with 7.3% of those with HD taking their own lives and up to 27% attempting to do so. It is unclear to what extent suicidal thoughts are influenced by psychiatric symptoms, as they may signify sufferers' desires to avoid the later stages of the disease.[59][60][61] Other associated risks include choking, physical injury from falls, and malnutrition.[5]
Epidemiology The late onset of Huntington's disease means it does not usually affect reproduction.[12] The worldwide prevalence of HD is 5–10 cases per 100,000 persons,[62][63] but varies greatly geographically as a result of ethnicity, local migration and past immigration patterns.[12] Prevalence is similar for men and women. The rate of occurrence is highest in peoples of Western European descent, averaging around seventy per million people, and is lower in the rest
of the world, e.g. one per million people of Asian and African descent.[12] Additionally, some localized areas have a much higher prevalence than their regional average.[12] One of the highest prevalences is in the isolated populations of the Lake Maracaibo region of Venezuela, where HD affects up to seven thousand per million people.[12][64] Other areas of high localization have been found in Tasmania and specific regions of Scotland, Wales and Sweden.[61] Increased prevalence in some cases occurs due to a local founder effect, a historical migration of carriers into an area of geographic isolation.[61][65] Some of these carriers have been traced back hundreds of years using genealogical studies.[61] Genetic haplotypes can also give clues for the geographic variations of prevalence.[61][66] Until the discovery of a genetic test, statistics could only include clinical diagnosis based on physical symptoms and a family history of HD, excluding those who died of other causes before diagnosis. These cases can now be included in statistics and as the test becomes more widely available, estimates of the prevalence and incidence of the disorder are likely to increase.[61][67] Indeed, in 2010 evidence emerged from the UK that the prevalence of HD may be as much as twice that previously estimated.[68]
History
In 1872 George Huntington described the disorder in his first paper "On Chorea" at the age of 22.[69]
Although Huntington's has been recognized as a disorder since at least the Middle Ages, the cause was unknown until fairly recently. Huntington's was given different names throughout its history as understanding of the disease changed. Originally called simply 'chorea' for the jerky dancelike movements associated with the disease, HD has also been called "hereditary chorea" and "chronic progressive chorea".[70] The first definite mention of HD was in a letter by Charles Oscar Waters, published in the first edition of Robley Dunglison's Practice of Medicine in 1842. Waters described "a form of chorea, vulgarly called magrums", including accurate descriptions of the chorea, its progression, and the strong heredity of the disease.[71] In 1846 Charles Gorman observed how higher prevalence seemed to occur in localized regions.[71] Independently of Gorman and Waters, both students of Dunglison at Jefferson Medical College in Philadelphia,[72] Johan Christian Lund also produced an early description in 1860.[71] He specifically noted that in Setesdalen, a secluded mountain valley in Norway, there was a high prevalence of dementia associated with a pattern of jerking movement disorders that ran in families.[73] The first thorough description of the disease was by George Huntington in 1872. Examining the combined medical history of several generations of a family exhibiting similar symptoms, he realized their conditions must be linked; he presented his detailed and accurate definition of the
disease as his first paper. Huntington described the exact pattern of inheritance of autosomal dominant disease years before the rediscovery by scientists of Mendelian inheritance. "Of its hereditary nature. When either or both the parents have shown manifestations of the disease ..., one or more of the offspring almost invariably suffer from the disease ... But if by any chance these children go through life without it, the thread is broken and the grandchildren and great-grandchildren of the original shakers may rest assured that they are free from the disease.".[69][74] Sir William Osler was interested in the disorder and chorea in general, and was impressed with Huntington's paper, stating that "In the history of medicine, there are few instances in which a disease has been more accurately, more graphically or more briefly described."[71][75] Osler's continued interest in HD, combined with his influence in the field of medicine, helped to rapidly spread awareness and knowledge of the disorder throughout the medical community.[71] Great interest was shown by scientists in Europe, including Louis Théophile Joseph Landouzy, Désiré-Magloire Bourneville, Camillo Golgi, and Joseph Jules Dejerine, and until the end of the century, much of the research into HD was European in origin.[71] By the end of the 19th century, research and reports on HD had been published in many countries and the disease was recognized as a worldwide condition.[71] During the rediscovery of Mendelian inheritance at the turn of the 20th century, HD was used tentatively as an example of autosomal dominant inheritance.[71] The English biologist William Bateson used the pedigrees of affected families to establish that HD had an autosomal dominant inheritance pattern.[72] The strong inheritance pattern prompted several researchers, including Smith Ely Jelliffe, to attempt to trace and connect family members of previous studies.[71] Jelliffe collected information from across New York State and published
several articles regarding the genealogy of HD in New England.[76] Jelliffe's research roused the interest of his college friend, Charles Davenport, who commissioned Elizabeth Muncey to produce the first field study on the East Coast of the United States of families with HD and to construct their pedigrees.[77] Davenport used this information to document the variable age of onset and range of symptoms of HD; he claimed that most cases of HD in the USA could be traced back to a handful of individuals.[77] This research was further embellished in 1932 by P. R. Vessie, who popularized the idea that three brothers who left England in 1630 bound for Boston were the progenitors of HD in the USA.[78] The claim that the earliest progenitors had been established and the eugenic bias of Muncey's, Davenport's, and Vessie's work contributed to misunderstandings and prejudice about HD.[72] Muncey and Davenport also popularized the idea that in the past some HD sufferers may have been thought to be possessed by spirits or victims of witchcraft, and were sometimes shunned or exiled by society.[79][80] This idea has not been proven. Researchers have found contrary evidence; for instance, the community of the family studied by George Huntington openly accommodated those who exhibited symptoms of HD.[72][79] The search for the cause of this condition was enhanced considerably in 1968 when the Hereditary Disease Foundation (HDF) was created by Milton Wexler, a psychoanalyst based in Los Angeles, California, whose wife Leonore Sabin had been diagnosed earlier that year with Huntington's disease.[81] The three brothers of Wexler's wife also suffered from this disease. The
foundation was involved in the recruitment of over 100 scientists in the Huntington's Disease Collaborative Research Project, who over a 10-year period worked to locate the responsible gene. Thanks to the HDF, the US-Venezuela Huntington's Disease Collaborative Research Project was started in 1979, and reported a major breakthrough in 1983 with the discovery of the approximate location of a causal gene.[65] This was the result of an extensive study focusing on the populations of two isolated Venezuelan villages, Barranquitas and Lagunetas, where there was an unusually high prevalence of the disease. It involved over 18,000 people, mostly from a single extended family. Among other innovations, the project developed DNA-marking methods which were an important step in making the Human Genome Project possible.[82] In 1993, the research group isolated the precise causal gene at 4p16.3,[83] making this the first autosomal disease locus found using genetic linkage analysis.[83][84] In the same time frame, key discoveries concerning the mechanisms of the disorder were being made, including the findings by Anita Harding's research group on the effects of the gene's length.[85] Modelling the disease in various types of animals, such as the transgenic mouse developed in 1996, enabled larger-scale experiments. As these animals have faster metabolisms and much shorter lifespans than humans, results from experiments are received sooner, speeding research. The 1997 discovery that mHtt fragments misfold led to the discovery of the nuclear inclusions they cause. These advances have led to increasingly extensive research into the proteins involved with the disease, potential drug treatments, care methods, and the gene itself.[71][86] The condition was formerly called 'Huntington's chorea', but this term has been replaced by 'Huntington's disease' because not all patients develop chorea and because of the importance of cognitive and behavioral problems.[87]
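Genetic linkage analysis of the kind used to localize the HD gene rests on likelihood-ratio (LOD score) statistics: the log10 odds that a marker and the disease locus are linked at some recombination fraction versus unlinked. The toy computation below shows the idea for a single hypothetical marker in a phase-known, fully informative pedigree; the numbers are invented for illustration and have no connection to the Venezuelan study's actual data:

```python
import math

def lod(n_meioses, n_recomb, theta):
    """Two-point LOD score: log10 likelihood ratio of linkage at
    recombination fraction theta versus free recombination (0.5)."""
    linked = theta**n_recomb * (1 - theta)**(n_meioses - n_recomb)
    unlinked = 0.5**n_meioses
    return math.log10(linked / unlinked)

# Hypothetical data: 2 recombinants observed in 20 informative meioses
scores = {t: lod(20, 2, t) for t in (0.05, 0.10, 0.20, 0.30)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 2))

# By convention, a maximum LOD score above 3 (odds of roughly 1000:1
# in favor of linkage) is taken as evidence of linkage.
```

Here the maximum falls at theta = 0.10, matching the observed recombination fraction of 2/20, with a LOD score just above the conventional threshold of 3.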
Society and culture See also: List of Huntington's disease media depictions Ethics See also: In vitro fertilisation#Ethics and Stem cell controversy
Huntington's disease, particularly the application of the genetic test for the disease, has raised several ethical issues. The issues for genetic testing include defining how mature an individual should be before being considered eligible for testing, ensuring the confidentiality of results, and whether companies should be allowed to use test results for decisions on employment, life insurance or other financial matters. There was controversy when Charles Davenport proposed in 1910 that compulsory sterilization and immigration control be used for people with certain diseases, including HD, as part of the eugenics movement.[88] In vitro fertilization has some issues regarding its use of embryos. Some HD research has ethical issues due to its use of animal testing and embryonic stem cells.[89][90]
The development of an accurate diagnostic test for Huntington's disease has caused social, legal, and ethical concerns over access to and use of a person's results.[91][92] Many guidelines and testing protocols include strict procedures for disclosure and confidentiality, to allow individuals to decide when and how to receive their results and to whom the results are made available.[12] Financial institutions and businesses are faced with the question of whether to use genetic test results when assessing an individual, such as for life insurance or employment. Although the United Kingdom's insurance companies have agreed that until 2014 they will not use genetic information when writing most insurance policies,[93] Huntington's is explicitly excluded from this agreement.[94] As with other untreatable genetic conditions with a later onset, it is ethically questionable to perform pre-symptomatic testing on a child or adolescent, as there would be no medical benefit for that individual. There is consensus for testing only individuals who are considered cognitively mature, although there is a counter-argument that parents have a right to make the decision on their child's behalf. With the lack of an effective treatment, testing a person under legal age who is not judged to be competent is considered unethical in most cases.[26][95][96] There are ethical concerns related to prenatal genetic testing or preimplantation genetic diagnosis to ensure a child is not born with a given disease.[97] For example, prenatal testing raises the issue of selective abortion, a choice considered unacceptable by some.[97] As it is a dominant disease, there are difficulties in situations in which a parent does not want to know his or her own diagnosis. This would require parts of the process to be kept secret from the parent.[97]
Support organizations
The death of Woody Guthrie led to the foundation of the Committee to Combat Huntington's Disease.
Poster of "Recent studies of Huntington's disease", Marjorie Guthrie lecture in genetics, 1985
In 1968, after experiencing HD in his wife's family, Dr. Milton Wexler was inspired to start the Hereditary Disease Foundation (HDF), with the aim of curing genetic illnesses by coordinating and supporting research.[98] The foundation and Dr. Wexler's daughter, Nancy Wexler, were key parts of the research team in Venezuela which discovered the HD gene.[98] At roughly the same time as the HDF formed, Marjorie Guthrie helped to found the Committee to Combat Huntington's Disease (now the Huntington's Disease Society of America), after her husband Woody Guthrie died from complications of HD.[99] Since then, support and research organizations have formed in many countries around the world and have helped to increase public awareness of HD. A number of these collaborate in umbrella organizations, like the International Huntington Association and the European HD network.[100] Many organizations hold an annual HD awareness event, some of which have been endorsed by their respective governments. For example, June 6 is designated "National Huntington's Disease Awareness Day" by the US Senate.[101] The largest funder of Huntington's disease research globally, in terms of financial expenditure,[102] is the CHDI Foundation, a US non-profit biomedical foundation that aims to "rapidly discover and develop drugs that delay or slow Huntington's disease".[103] CHDI was formerly known as the High Q Foundation. In 2006, it spent $50 million on Huntington's disease research.[102] CHDI collaborates with many academic and commercial laboratories globally and engages in oversight and management of research projects as well as funding.[104]
Many organizations exist to support and inform those affected by HD: see the External Links section below.
Research directions See also: Huntington's disease clinical research
Research into the mechanism of HD has focused on identifying the functioning of Htt, how mHtt differs from or interferes with it, and the brain pathology that the disease produces. Research is conducted using in vitro methods, animal models and human volunteers. Animal models are critical for understanding the fundamental mechanisms causing the disease and for supporting the early stages of drug development.[86] Animals with chemically induced brain injury exhibit HD-like symptoms and were initially used, but they did not mimic the progressive features of the disease.[105] The identification of the causative gene has enabled the development of many transgenic animal models, including nematode worms, Drosophila fruit flies, mice, rats, sheep, pigs and monkeys, that express mutant huntingtin and develop progressive neurodegeneration and HD-like symptoms.[86] Three broad approaches are under study to attempt to slow the progression of Huntington's disease: reducing production of the mutant protein, improving cells' ability to survive its diverse harmful effects, and replacing lost neurons.[106] Reducing huntingtin production
Gene silencing aims to reduce the production of the mutant protein, since HD is caused by a single dominant gene encoding a toxic protein. Gene silencing experiments in mouse models have shown that when the expression of mHtt is reduced, symptoms improve.[106] Safety of gene silencing has now been demonstrated in the large, human-like brains of primates.[107] Improving cell survival
Among the approaches aimed at improving cell survival in the presence of mutant huntingtin are correction of transcriptional regulation using histone deacetylase inhibitors, modulating aggregation of huntingtin, improving metabolism and mitochondrial function, and restoring the function of synapses.[106] Neuronal replacement
Stem cell therapy is the replacement of damaged neurons by transplantation of stem cells into affected regions of the brain. Experiments have yielded mixed results using this technique in animal models and preliminary human clinical trials.[108] Whatever their future therapeutic potential, stem cells are already a valuable tool for studying HD in the laboratory.[109] Clinical trials
Numerous drugs have been reported to produce benefits in animals, including creatine, coenzyme Q10 and the antibiotic minocycline. Some of these have then been tested in humans in clinical trials, with more underway, but as yet none has proven effective.[51] In 2010, minocycline was found to be ineffective for humans in a multi-center trial.[110] Large observational studies involving human volunteers have revealed insights into the pathobiology of HD and supplied outcome measures for future clinical trials.[111]
Multiple sclerosis
Classification and external resources
Demyelination by MS. The CD68-colored tissue shows several macrophages in the area of the lesion. Original scale 1:100.
ICD-10: G35; ICD-9: 340; OMIM: 126200; DiseasesDB: 8412; MedlinePlus: 000737; eMedicine: neuro/228, oph/179, emerg/321, pmr/82, radio/461; MeSH: D009103; GeneReviews: Multiple Sclerosis Overview
Multiple sclerosis (MS), also known as "disseminated sclerosis" or "encephalomyelitis disseminata", is an inflammatory disease in which the fatty myelin sheaths around the axons of the brain and spinal cord are damaged, leading to demyelination and scarring as well as a broad spectrum of signs and symptoms.[1] Disease onset usually occurs in young adults, and it is more common in women.[1] It has a prevalence that ranges between 2 and 150 per 100,000.[2] MS was first described in 1868 by Jean-Martin Charcot.[3] MS affects the ability of nerve cells in the brain and spinal cord to communicate with each other effectively. Nerve cells communicate by sending electrical signals called action potentials down long fibers called axons, which are wrapped in an insulating substance called myelin. In MS, the body's own immune system attacks and damages the myelin. When myelin is lost, the axons can no longer effectively conduct signals.[4] The name multiple sclerosis refers to scars (scleroses, better known as plaques or lesions) particularly in the white matter of the brain and spinal cord, which is mainly composed of myelin.[3] Although much is known about the mechanisms involved in the disease process, the cause remains unknown. Theories include genetics or infections. Different environmental risk factors have also been found.[4][5] Almost any neurological symptom can appear with the disease, which often progresses to physical and cognitive disability.[4] MS takes several forms, with new symptoms occurring either in discrete attacks (relapsing forms) or slowly accumulating over time (progressive forms).[6] Between attacks, symptoms may go away completely, but permanent neurological problems often occur, especially as the disease advances.[6] There is no known cure for multiple sclerosis. 
Treatments attempt to return function after an attack, prevent new attacks, and prevent disability.[4] MS medications can have adverse effects or be poorly tolerated, and many people pursue alternative treatments, despite the lack of supporting scientific study. The prognosis is difficult to predict; it depends on the subtype of the disease, the individual's disease characteristics, the initial symptoms and the degree of disability the person experiences as time advances.[7] Life expectancy of people with MS is 5 to 10 years lower than that of the unaffected population.[1]
Signs and symptoms Main article: Multiple sclerosis signs and symptoms
Main symptoms of multiple sclerosis
A person with MS can suffer almost any neurological symptom or sign, including changes in sensation such as loss of sensitivity or tingling, pricking or numbness (hypoesthesia and paresthesia), muscle weakness, clonus, muscle spasms, or difficulty in moving; difficulties with coordination and balance (ataxia); problems in speech (dysarthria) or swallowing (dysphagia), visual problems (nystagmus, optic neuritis including phosphenes,[8][9] or diplopia), fatigue, acute or chronic pain, and bladder and bowel difficulties.[1] Cognitive impairment of varying degrees and emotional symptoms of depression or unstable mood are also common.[1] Uhthoff's phenomenon, an exacerbation of extant symptoms due to an exposure to higher than usual ambient temperatures, and Lhermitte's sign, an electrical sensation that runs down the back when bending the neck, are particularly characteristic of MS although not specific.[1] The main clinical measure of disability progression and symptom severity is the Expanded Disability Status Scale or EDSS.[10] Symptoms of MS usually appear in episodic acute periods of worsening (called relapses, exacerbations, bouts, attacks, or "flare-ups"), in a gradually progressive deterioration of neurologic function, or in a combination of both.[6] Multiple sclerosis relapses are often unpredictable, occurring without warning and without obvious inciting factors with a rate rarely above one and a half per year.[1] Some attacks, however, are preceded by common triggers. Relapses occur more frequently during spring and summer.[11] Viral infections such as the common cold, influenza, or gastroenteritis increase the risk of relapse.[1] Stress may also trigger an attack.[12] Pregnancy affects the susceptibility to relapse, with a lower relapse rate at each trimester of gestation. During the first few months after delivery, however, the risk of relapse is increased.[1] Overall, pregnancy does not seem to influence long-term disability. Many potential
triggers have been examined and found not to influence MS relapse rates. There is no evidence that vaccination, breastfeeding,[1] physical trauma,[13] or Uhthoff's phenomenon[11] triggers relapses.
Causes Most likely MS occurs as a result of some combination of genetic, environmental and infectious factors,[1] and possibly other factors like vascular problems.[14] Epidemiological studies of MS have provided hints on possible causes for the disease. Theories try to combine the known data into plausible explanations, but none has proved definitive. Genetics
HLA region of Chromosome 6. Changes in this area increase the probability of suffering MS.
MS is not considered a hereditary disease. However, a number of genetic variations have been shown to increase the risk of developing the disease.[15] The risk of acquiring MS is higher in relatives of a person with the disease than in the general population, especially in the case of siblings, parents, and children.[4] The disease has an overall familial recurrence rate of 20%.[1] In the case of monozygotic twins, concordance occurs only in about 35% of cases, while it goes down to around 5% in the case of siblings and even lower in
half-siblings. This indicates susceptibility is partly polygenically driven.[1][4] It seems to be more common in some ethnic groups than others.[16] Apart from familial studies, specific genes have been linked with MS. Differences in the human leukocyte antigen (HLA) system—a group of genes in chromosome 6 that serves as the major histocompatibility complex (MHC) in humans—increase the probability of suffering MS.[1] The most consistent finding is the association between multiple sclerosis and alleles of the MHC defined as DR15 and DQ6.[1] Other loci have shown a protective effect, such as HLA-C554 and HLA-DRB1*11.[1] Environmental factors
Different environmental factors, both of infectious and non-infectious origin have been proposed as risk factors for MS. Although some are partly modifiable, only further research—especially clinical trials—will reveal whether their elimination can help prevent MS.[17] MS is more common in people who live farther from the equator, although many exceptions exist.[1] Decreased sunlight exposure has been linked with a higher risk of MS.[17] Decreased vitamin D production and intake has been the main biological mechanism used to explain the higher risk among those less exposed to sun.[17][18][19] Severe stress may be a risk factor although evidence is weak.[17] Smoking has also been shown to be an independent risk factor for developing MS.[18] Association with occupational exposures and toxins—mainly solvents—has been evaluated, but no clear conclusions have been reached.[17] Vaccinations were investigated as causal factors for the disease; however, most studies show no association between MS and vaccines.[17] Several other possible risk factors, such as diet[20] and hormone intake, have been investigated; however, evidence on their relation with the disease is "sparse and unpersuasive".[18] Gout occurs less than would statistically be expected in people with MS, and low levels of uric acid have been found in people with MS as compared to normal individuals. This led to the theory that uric acid protects against MS, although its exact importance remains unknown.[21] Infections
Many microbes have been proposed as potential infectious triggers of MS, but none have been substantiated.[4] Moving at an early age from one location in the world to another alters a person's subsequent risk of MS.[5] An explanation for this could be that some kind of infection, produced by a widespread microbe rather than a rare pathogen, is the origin of the disease.[5] There are a number of proposed mechanisms, including the hygiene hypothesis and the prevalence hypothesis. The hygiene hypothesis proposes that exposure to several infectious agents early in life is protective against MS, the disease being a response to a later encounter with such agents.[1] The prevalence hypothesis proposes that the disease is due to a pathogen more common in regions of high MS prevalence where in most individuals it causes an asymptomatic persistent infection. Only in a few cases and after many years does it cause
demyelination.[5][22] The hygiene hypothesis has received more support than the prevalence hypothesis.[5] Evidence for viruses as a cause includes the presence of oligoclonal bands in the brain and cerebrospinal fluid of most people with MS, the association of several viruses with human demyelinating encephalomyelitis, and the induction of demyelination in animals through viral infection.[23] Human herpes viruses are a candidate group of viruses linked to MS. Individuals who have never been infected by the Epstein-Barr virus have a reduced risk of having the disease, and those infected as young adults have a greater risk than those who had it at a younger age.[1][5] Although some consider that this goes against the hygiene hypothesis, since the noninfected have probably experienced a more hygienic upbringing,[5] others believe that there is no contradiction, since it is a first encounter at a later moment with the causative virus that triggers the disease.[1] Other diseases that have also been related with MS are measles, mumps, and rubella.[1]
Pathophysiology Main article: Pathophysiology of multiple sclerosis
Demyelination in MS. On Klüver-Barrera myelin staining, decoloration in the area of the lesion can be appreciated (Original scale 1:100). Autoimmunology
MS is believed to be an immune-mediated disorder mediated by a complex interaction of the individual's genetics and as yet unidentified environmental insults.[4] Damage is believed to be caused by the person's own immune system attacking the nervous system, possibly as a result of exposure to a molecule with a similar structure to one of its own.[4] Lesions
The name multiple sclerosis refers to the scars (scleroses – better known as plaques or lesions) that form in the nervous system. MS lesions most commonly involve white matter areas close to the ventricles of the cerebellum, brain stem, basal ganglia and spinal cord; and the optic nerve. The function of white matter cells is to carry signals between grey matter areas, where the processing is done, and the rest of the body. The peripheral nervous system is rarely involved.[4]
More specifically, MS destroys oligodendrocytes, the cells responsible for creating and maintaining a fatty layer—known as the myelin sheath—which helps the neurons carry electrical signals (action potentials).[4] MS results in a thinning or complete loss of myelin and, as the disease advances, the cutting (transection) of the neuron's axons. When the myelin is lost, a neuron can no longer effectively conduct electrical signals.[4] A repair process, called remyelination, takes place in early phases of the disease, but the oligodendrocytes cannot completely rebuild the cell's myelin sheath.[24] Repeated attacks lead to successively fewer effective remyelinations, until a scar-like plaque is built up around the damaged axons.[24] Different lesion patterns have been described.[25] Inflammation
Apart from demyelination, the other pathologic hallmark of the disease is inflammation. According to a strictly immunological explanation of MS, the inflammatory process is caused by T cells, a kind of lymphocyte. Lymphocytes are cells that play an important role in the body's defenses.[4] In MS, T cells gain entry into the brain via disruptions in the blood–brain barrier. Evidence from animal models also point to a role of B cells in addition to T cells in development of the disease.[26] The T cells recognize myelin as foreign and attack it as if it were an invading virus. This triggers inflammatory processes, stimulating other immune cells and soluble factors like cytokines and antibodies. Further leaks form in the blood–brain barrier, which in turn cause a number of other damaging effects such as swelling, activation of macrophages, and more activation of cytokines and other destructive proteins.[4] Blood–brain barrier breakdown
The blood–brain barrier is a capillary system that normally prevents entry of T cells into the central nervous system.[4] However, it may become permeable to these types of cells because of an infection or a virus.[4] When the blood–brain barrier regains its integrity, typically after the infection or virus has cleared, the T cells are trapped inside the brain.[4]
Diagnosis Multiple sclerosis can be difficult to diagnose since its signs and symptoms may be similar to other medical problems.[1][27] Medical organizations have created diagnostic criteria to ease and standardize the diagnostic process especially in the first stages of the disease.[1] Historically, the Schumacher and Poser criteria were both popular.[28] Currently, the McDonald criteria focus on a demonstration with clinical, laboratory and radiologic data of the dissemination of MS lesions in time and space for non-invasive MS diagnosis, though some have stated that the only proved diagnosis of MS is autopsy, or occasionally biopsy, where lesions typical of MS can be detected through histopathological techniques.[1][29][30]
Clinical data alone may be sufficient for a diagnosis of MS if an individual has suffered separate episodes of neurologic symptoms characteristic of MS.[29] Since some people seek medical attention after only one attack, other testing may hasten and ease the diagnosis. The most commonly used diagnostic tools are neuroimaging, analysis of cerebrospinal fluid and evoked potentials. Magnetic resonance imaging of the brain and spine shows areas of demyelination (lesions or plaques). Gadolinium can be administered intravenously as a contrast agent to highlight active plaques and, by elimination, demonstrate the existence of historical lesions not associated with symptoms at the moment of the evaluation.[29][31] Testing of cerebrospinal fluid obtained from a lumbar puncture can provide evidence of chronic inflammation of the central nervous system. The cerebrospinal fluid is tested for oligoclonal bands of IgG on electrophoresis, which are inflammation markers found in 75–85% of people with MS.[29][32] The nervous system of a person with MS responds less actively to stimulation of the optic nerve and sensory nerves due to demyelination of such pathways. These brain responses can be examined using visual and sensory evoked potentials.[33] Clinical courses
Progression of MS subtypes
Several subtypes, or patterns of progression, have been described. Subtypes use the past course of the disease in an attempt to predict the future course. They are important not only for prognosis but also for therapeutic decisions. In 1996 the United States National Multiple Sclerosis Society standardized four clinical courses:[6]
1. relapsing remitting,
2. secondary progressive,
3. primary progressive, and
4. progressive relapsing.
The relapsing-remitting subtype is characterized by unpredictable relapses followed by periods of months to years of relative quiet (remission) with no new signs of disease activity. Deficits suffered during attacks may either resolve or leave sequelae, the latter being more common as a function of time.[1] This describes the initial course of 80% of individuals with MS.[1] When deficits always resolve between attacks, this is sometimes referred to as benign MS,[34] although people will still accrue some degree of disability in the long term.[1] The relapsing-remitting subtype usually begins with a clinically isolated syndrome (CIS). In CIS, a person has an attack suggestive of demyelination but does not fulfill the criteria for multiple sclerosis.[1][35] However, only 30 to 70% of persons experiencing CIS later develop MS.[35]
Nerve axon with myelin sheath
Secondary progressive MS (sometimes called "galloping MS") describes around 65% of those with an initial relapsing-remitting MS, who then begin to have progressive neurologic decline between acute attacks without any definite periods of remission.[1][6] Occasional relapses and minor remissions may appear.[6] The median time between disease onset and conversion from relapsing-remitting to secondary progressive MS is 19 years.[36] The primary progressive subtype describes the approximately 10–15% of individuals who never have remission after their initial MS symptoms.[37] It is characterized by progression of disability from onset, with no, or only occasional and minor, remissions and improvements.[6] The age of onset for the primary progressive subtype is later than for the relapsing-remitting subtype, but similar to the mean age of conversion from relapsing-remitting to secondary progressive MS. In both cases it is around 40 years of age.[1] Progressive relapsing MS describes those individuals who, from onset, have a steady neurologic decline but also suffer clear superimposed attacks. This is the least common of all subtypes.[6]
Atypical variants of MS with non-standard behavior have been described; these include Devic's disease, Balo concentric sclerosis, Schilder's diffuse sclerosis and Marburg multiple sclerosis. There is debate on whether they are MS variants or different diseases.[38] Multiple sclerosis also behaves differently in children, taking more time to reach the progressive stage.[1] Nevertheless they still reach it at a lower mean age than adults.[1]
Management Main article: Treatment of multiple sclerosis
Although there is no known cure for multiple sclerosis, several therapies have proven helpful. The primary aims of therapy are returning function after an attack, preventing new attacks, and preventing disability. As with any medical treatment, medications used in the management of MS have several adverse effects. Alternative treatments are pursued by some people, despite the shortage of supporting, comparable, replicated scientific study. Acute attacks
During symptomatic attacks, administration of high doses of intravenous corticosteroids, such as methylprednisolone, is the routine therapy for acute relapses.[1] Although generally effective in the short term for relieving symptoms, corticosteroid treatments do not appear to have a significant impact on long-term recovery.[39] Oral and intravenous administration seem to have similar efficacy.[40] Consequences of severe attacks which do not respond to corticosteroids might be treated by plasmapheresis.[1] Disease-modifying treatments Main article: Treatment of multiple sclerosis § Disease-modifying treatments
Disease-modifying treatments are expensive, and most require frequent (up to daily) injections. Others require IV infusions at 1–3 month intervals.
As of September 2012, seven disease-modifying treatments have been approved by regulatory agencies of different countries, including the U.S. Food and Drug istration (FDA), the European Medicines Agency (EMA) and the Japanese PMDA.
The seven approved drugs are interferon beta-1a, interferon beta-1b, glatiramer acetate, mitoxantrone (an immunosuppressant also used in cancer chemotherapy), natalizumab (a humanized monoclonal antibody immunomodulator[1]), and fingolimod and teriflunomide, the first and second oral drugs, respectively, to become available.[41] Most of these drugs are approved only for the relapsing-remitting course. The interferons and glatiramer acetate are delivered by frequent injections, varying from once per day for glatiramer acetate to once per week (but intramuscular) for interferon beta-1a. Natalizumab and mitoxantrone are given by IV infusion at monthly intervals. All seven medications are modestly effective at decreasing the number of attacks in relapsing-remitting MS (RRMS), while the capacity of interferons and glatiramer acetate is more controversial. Studies of their long-term effects are still lacking.[1][42] Comparisons between immunomodulators (all but mitoxantrone) show that the most effective is natalizumab, both in terms of relapse rate reduction and halting disability progression.[43] Mitoxantrone may be the most effective of them all; however, it is generally not considered a long-term therapy, as its use is limited by severe secondary effects.[1][42] The earliest clinical presentation of RRMS is the clinically isolated syndrome (CIS). Treatment with interferons during an initial attack can decrease the chance that a person will develop clinical MS.[1] Treatment of progressive MS is more difficult than that of relapsing-remitting MS. Mitoxantrone has shown positive effects in those with secondary progressive and progressive relapsing courses. It is moderately effective in reducing the progression of the disease and the frequency of relapses in short-term follow-up.[44] No treatment has been proven to modify the course of primary progressive MS.[45] As with many medical treatments, these treatments have several adverse effects. 
One of the most common is irritation at the injection site for glatiramer acetate and the interferon treatments. Over time, a visible dent at the injection site, due to the local destruction of fat tissue, known as lipoatrophy, may develop. Interferons produce symptoms similar to influenza;[46] some people taking glatiramer experience a post-injection reaction manifested by flushing, chest tightness, heart palpitations, breathlessness, and anxiety, which usually lasts less than thirty minutes.[47] More dangerous but much less common are liver damage from interferons,[48] severe cardiotoxicity, infertility, and acute myeloid leukemia of mitoxantrone,[1][42] and the putative link between natalizumab and some cases of progressive multifocal leukoencephalopathy.[1] Management of the effects of MS
Disease-modifying treatments reduce the progression rate of the disease, but do not stop it. As multiple sclerosis progresses, the symptomatology tends to increase. The disease is associated with a variety of symptoms and functional deficits that result in a range of progressive impairments and disability. Management of these deficits is therefore very important. Both drug therapy and neurorehabilitation have been shown to ease the burden of some symptoms, though neither influences disease progression.[1][49] Some symptoms have a good response to medication, such as unstable bladder and spasticity, while management of many others is much more complicated.[1] As for any person with neurologic deficits, a multidisciplinary approach is key to improving quality of life; however, there are particular difficulties in specifying a 'core
team' because people with MS may need help from almost any health profession or service at some point.[1] Multidisciplinary rehabilitation programs increase activity and participation of people with MS but do not influence impairment level.[50] Historically, individuals with MS were advised against participation in physical activity due to worsening symptoms.[51] However, under the direction of a physiotherapist, participation in physical activity can be safe and has been proven beneficial for persons with MS.[52] Research has supported the rehabilitative role of physical activity in improving muscle power,[53] mobility,[53] mood,[54] bowel health,[55] general conditioning and quality of life.[53] Care should be taken not to overheat a person with MS during the course of exercise. Physiotherapists have the expertise needed to adequately prescribe exercise programs that are suitable for the individual. The FITT equation (frequency of exercise, intensity of exercise, type of exercise and time/duration of exercise) is typically used to prescribe exercises.[52] Depending on the person, activities may include resistance training,[56] walking, swimming, yoga, tai chi, and others.[55] Determining an appropriate and safe exercise program is challenging and must be carefully individualized to each person, being sure to account for all contraindications and precautions.[52] There is some evidence that cooling measures are effective in allowing a greater degree of exercise.[57] Alternative treatments
Many people with MS use complementary and alternative medicine. Depending on the treatments, the evidence is weak or absent.[58] Examples are a dietary regimen,[59] herbal medicine (including the use of medical cannabis),[60] hyperbaric oxygenation[61] and self-infection with hookworm (known generally as helminthic therapy).[62] The helminthic therapy of infection with Trichuris suis ova is under investigation as of 2012. Epidemiological and experimental evidence suggests parasitic infection may protect against MS.[63]
Prognosis
Disability-adjusted life years for multiple sclerosis per 100,000 inhabitants in 2004 (map legend: no data; <13; 13–16; 16–19; 19–22; 22–25; 25–28; 28–31; 31–34; 34–37; 37–40; 40–43; >43)
The prognosis (the expected future course of the disease) for a person with multiple sclerosis depends on the subtype of the disease; the individual's sex, age, and initial symptoms; and the degree of disability the person experiences.[7] The disease evolves and advances over decades, with a mean of 30 years from onset to death.[1] Female sex, relapsing-remitting subtype, optic neuritis or sensory symptoms at onset, few attacks in the initial years and especially early age at onset are associated with a better course.[7][64] The life expectancy of people with MS is 5 to 10 years lower than that of unaffected people.[1] Almost 40% of people with MS reach the seventh decade of life.[64] Nevertheless, two-thirds of the deaths in people with MS are directly related to the consequences of the disease.[1] Suicide also has a higher prevalence than in the healthy population, while infections and complications are especially hazardous for the more disabled.[1] Although most people lose the ability to walk before death, 90% are still capable of independent walking at 10 years from onset, and 75% at 15 years.[64][65]
Epidemiology Two main measures are used in epidemiological studies: incidence and prevalence. Incidence is the number of new cases per unit of person–time at risk (usually number of new cases per thousand person–years); while prevalence is the total number of cases of the disease in the population at a given time. Prevalence is known to depend not only on incidence, but also on survival rate and migrations of affected people. MS has a prevalence that ranges between 2 and 150 per 100,000 depending on the country or specific population.[2] Studies of population and geographical patterns of epidemiological measures have been very common in MS,[22] and have led to the proposal of different etiological (causal) theories.[5][17][18][22] MS usually appears in adults in their thirties but it can also appear in children.[1] The primary progressive subtype is more common in people in their fifties.[37] As with many autoimmune disorders, the disease is more common in women, and the trend may be increasing.[1][66] In children, the sex ratio difference is higher,[1] while in people over fifty, MS affects males and females almost equally.[37] There is a north-to-south gradient in the northern hemisphere and a south-to-north gradient in the southern hemisphere, with MS being much less common in people living near the equator.[1][66] Climate, sunlight and intake of vitamin D have been investigated as possible causes of the disease that could explain this latitude gradient.[18] However, there are important exceptions to the north–south pattern and changes in prevalence rates over time;[1] in general, this trend might be disappearing.[66] This indicates that other factors such as environment or genetics have to be
taken into account to explain the origin of MS.[1] MS is also more common in regions with northern European populations.[1] But even in regions where MS is common, some ethnic groups are at low risk of developing the disease, including the Samis, Turkmen, Amerindians, Canadian Hutterites, Africans, and New Zealand Māori.[67] Environmental factors during childhood may play an important role in the development of MS later in life. Several studies of migrants show that if migration occurs before the age of 15, the migrant acquires the new region's susceptibility to MS. If migration takes place after age 15, the migrant retains the susceptibility of his home country.[1][17] However, the age–geographical risk for developing multiple sclerosis may span a larger timescale.[1] A relationship between season of birth and MS has also been found, which lends support to an association with sunlight and vitamin D. For example, fewer people with MS are born in November as compared to May.[68]
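The distinction between the two epidemiological measures defined above is easy to blur, so a minimal arithmetic sketch may help. The counts below are made-up illustrative numbers; only the 2–150 per 100,000 prevalence range is taken from the text:

```python
# Incidence: new cases per unit of person-time at risk.
# (Hypothetical numbers for illustration only.)
new_cases = 12
person_years = 100_000  # total follow-up time contributed by the at-risk population
incidence = new_cases / person_years * 100_000  # per 100,000 person-years

# Prevalence: total existing cases in the population at one point in time.
existing_cases = 90
population = 100_000
prevalence = existing_cases / population * 100_000  # per 100,000 inhabitants

print(incidence)   # 12.0 new cases per 100,000 person-years
print(prevalence)  # 90.0 cases per 100,000, within the 2-150 range cited above
```

Note that prevalence can stay high even when incidence falls, because it also reflects how long affected people survive and whether they migrate, as the text points out.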
History Medical discovery
Detail of drawing from Carswell book depicting MS lesions in the brain stem and spinal cord (1838)
The French neurologist Jean-Martin Charcot (1825–1893) was the first person to recognize multiple sclerosis as a distinct disease in 1868.[69] Summarizing previous reports and adding his own clinical and pathological observations, Charcot called the disease sclérose en plaques. The three signs of MS now known as Charcot's triad are nystagmus, intention tremor, and telegraphic speech (scanning speech), though these are not unique to MS. Charcot also observed cognition changes, describing his patients as having a "marked enfeeblement of the memory" and "conceptions that formed slowly".[3]
Before Charcot, Robert Carswell (1793–1857), a British professor of pathology, and Jean Cruveilhier (1791–1873), a French professor of pathologic anatomy, had described and illustrated many of the disease's clinical details, but did not identify it as a separate disease.[69] Specifically, Carswell described the injuries he found as "a remarkable lesion of the spinal cord accompanied with atrophy".[1] Under the microscope, Swiss pathologist Georg Eduard Rindfleisch (1836–1908) noted in 1863 that the inflammation-associated lesions were distributed around blood vessels.[70][71] After Charcot's description, Eugène Devic (1858–1930), Jozsef Balo (1895–1979), Paul Ferdinand Schilder (1886–1940), and Otto Marburg (1874–1948) described special cases of the disease. Throughout the 20th century there was important development of theories about the cause and pathogenesis of MS, while efficacious treatments began to appear in 1990.[1] Historical cases
There are several historical accounts of people who lived before or shortly after the disease was described by Charcot and probably had MS. A young woman called Halldora who lived in Iceland around 1200 suddenly lost her vision and mobility but, after praying to the saints, recovered them seven days later. Saint Lidwina of Schiedam (1380–1433), a Dutch nun, may be one of the first clearly identifiable MS patients. From the age of 16 until her death at 53, she suffered intermittent pain, weakness of the legs, and vision loss—symptoms typical of MS.[72] Both cases have led to the proposal of a 'Viking gene' hypothesis for the dissemination of the disease.[73] Augustus Frederick d'Este (1794–1848), son of Prince Augustus Frederick, Duke of Sussex and Lady Augusta Murray and the grandson of George III of the United Kingdom, almost certainly suffered from MS. D'Este left a detailed diary describing his 22 years living with the disease. His diary began in 1822 and ended in 1846, although it remained unknown until 1948. His symptoms began at age 28 with a sudden transient visual loss (amaurosis fugax) after the funeral of a friend. During the course of his disease, he developed weakness of the legs, clumsiness of the hands, numbness, dizziness, bladder disturbances, and erectile dysfunction. In 1844, he began to use a wheelchair. Despite his illness, he kept an optimistic view of life.[74][75] Another early account of MS was kept by the British diarist W. N. P. Barbellion, nom-de-plume of Bruce Frederick Cummings (1889–1919), who maintained a detailed log of his diagnosis and struggle with MS.[75] His diary was published in 1919 as The Journal of a Disappointed Man.[76]
Research Main article: Therapies under investigation for multiple sclerosis
Therapies
Chemical structure of alemtuzumab
Research directions on MS treatments include investigations of MS pathogenesis and heterogeneity; research of more effective, convenient, or tolerable new treatments for RRMS; creation of therapies for the progressive subtypes; neuroprotection strategies; and the search for effective symptomatic treatments.[77] A number of treatments that may curtail attacks or improve function are under investigation. Emerging agents for RRMS that have shown promise in phase 2 trials include alemtuzumab (trade name Campath), daclizumab (trade name Zenapax), rituximab, dirucotide, BHT-3009, cladribine, dimethyl fumarate, estriol, laquinimod, PEGylated interferon beta-1a,[78] minocycline, statins, temsirolimus and teriflunomide.[77] In 2010, an FDA committee recommended approving fingolimod for the treatment of MS attacks,[79] and on September 22, 2010, fingolimod (trade name Gilenya) became the first oral drug approved by the Food and Drug Administration to reduce relapses and delay disability progression in people with relapsing forms of multiple sclerosis.[80] Clinical trials of fingolimod have demonstrated side effects, including cardiovascular conditions, macular edema, infections, liver toxicity and malignancies.[81][82] Much interest has been focused on the prospect of utilizing vitamin D analogs in the prevention and management of CIS and MS, especially given its possible role in the pathogenesis of the disease. While there is anecdotal evidence of benefit for low dose naltrexone,[83] only results from a pilot study in primary progressive MS have been published.[84]
Disease biomarkers
The variable clinical presentation of MS and the lack of diagnostic laboratory tests lead to delays in diagnosis and the impossibility of predicting prognosis. New diagnostic methods are being investigated. These include work with anti-myelin antibodies, analysis of microarray gene expression and studies with serum and cerebrospinal fluid, but none of them has yielded reliable positive results.[85] Currently there are no clinically established laboratory investigations available that can predict prognosis. However, several promising approaches have been proposed. Investigations on the prediction of evolution have centered on monitoring disease activity. Disease activation biomarkers include interleukin-6, nitric oxide and nitric oxide synthase, osteopontin, and fetuin-A.[85] On the other hand, since disease progression is the result of neurodegeneration, the roles of proteins indicative of neuronal, axonal, and glial loss, such as neurofilaments, tau and N-acetylaspartate, are under investigation.[85] A final investigative field is work with biomarkers that distinguish between medication responders and nonresponders.[85] Chronic cerebrospinal venous insufficiency Main article: Chronic cerebrospinal venous insufficiency
In 2008, Italian vascular surgeon Paolo Zamboni reported research suggesting that MS involves a vascular disease process he referred to as chronic cerebrospinal venous insufficiency (CCSVI, CCVI), in which veins from the brain are constricted. He found CCSVI in the majority of people with MS, performed a surgical procedure to correct it and claimed that 73% of people improved.[86] Concern has been raised with Zamboni's research as it was neither blinded nor controlled,[87] and further studies have had variable results.[88] This has raised serious objections to the hypothesis that CCSVI causes multiple sclerosis.[89] The neurology community currently recommends not using the proposed treatment unless its effectiveness is confirmed by controlled studies, the need for which has been recognized by the scientific bodies engaged in MS research.[90]
Amyotrophic lateral sclerosis "ALS" redirects here. For other uses, see ALS (disambiguation).
Amyotrophic lateral sclerosis Classification and external resources
This MRI (parasagittal FLAIR) demonstrates increased T2 signal within the posterior part of the internal capsule and can be tracked to the subcortical white matter of the motor cortex, outlining the corticospinal tract, consistent with the clinical diagnosis of ALS. However, typically MRI imaging is unremarkable in a patient with ALS. ICD-10
G12.2
ICD-9
335.20
OMIM
105400
DiseasesDB
29148
MedlinePlus
000688
eMedicine
neuro/14 emerg/24 pmr/10
MeSH
D000690
Amyotrophic lateral sclerosis (ALS) – also referred to as motor neurone disease in some British Commonwealth countries and as Lou Gehrig's disease in the United States – is a debilitating disease with varied etiology characterized by rapidly progressive weakness, muscle atrophy and fasciculations, muscle spasticity, difficulty speaking (dysarthria), difficulty swallowing (dysphagia), and difficulty breathing (dyspnea). ALS is the most common of the five motor neuron diseases.
Signs and symptoms The disorder causes muscle weakness and atrophy throughout the body caused by the degeneration of the upper and lower motor neurons. Unable to function, the muscles weaken and atrophy. Individuals affected by the disorder may ultimately lose the ability to initiate and control all voluntary movement, although bladder and bowel sphincters and the muscles responsible for eye movement are usually, though not always, spared until the terminal stages of the disease.[1] Cognitive function is generally spared for most patients, although some (about 5%) also have frontotemporal dementia.[2] A higher proportion of patients (30–50%) also have more subtle cognitive changes which may go unnoticed, but are revealed by detailed neuropsychological testing. Sensory nerves and the autonomic nervous system are generally unaffected, meaning the majority of people with ALS will maintain hearing, sight, touch, smell, and taste. Initial symptoms
The earliest symptoms of ALS are typically obvious weakness and/or muscle atrophy. Other presenting symptoms include muscle fasciculation (twitching), cramping, or stiffness of affected muscles; muscle weakness affecting an arm or a leg; and/or slurred and nasal speech. The parts of the body affected by early symptoms of ALS depend on which motor neurons in the body are damaged first. About 75% of people contracting the disease experience "limb onset" ALS, i.e., first symptoms in the arms or legs. Patients with the leg onset form may experience awkwardness when walking or running or notice that they are tripping or stumbling, often with a "dropped foot" which drags gently along the ground. Arm-onset patients may experience difficulty with tasks requiring manual dexterity such as buttoning a shirt, writing, or turning a key in a lock. Occasionally, the symptoms remain confined to one limb for a long period of time or for the whole length of the illness; this is known as monomelic amyotrophy. About 25% of cases are "bulbar onset" ALS. These patients first notice difficulty speaking clearly or swallowing. Speech may become slurred, nasal in character, or quieter. Other symptoms include difficulty swallowing and loss of tongue mobility. A smaller proportion of patients experience "respiratory onset" ALS, where the intercostal muscles that support breathing are affected first. A small proportion of patients may also present with what appears to be frontotemporal dementia, but later progresses to include more typical ALS symptoms. Over time, patients experience increasing difficulty moving, swallowing (dysphagia), and speaking or forming words (dysarthria). Symptoms of upper motor neuron involvement include tight and stiff muscles (spasticity) and exaggerated reflexes (hyperreflexia) including an overactive gag reflex. An abnormal reflex commonly called Babinski's sign also indicates upper motor neuron damage.
Symptoms of lower motor neuron degeneration include muscle weakness and atrophy, muscle cramps, and fleeting twitches of muscles that can be seen under the skin (fasciculations). Around 15–45% of patients experience pseudobulbar affect, also known as "emotional lability", which consists of uncontrollable laughter, crying or smiling, attributable to degeneration of bulbar upper motor neurons resulting in exaggeration of motor expressions of
emotion. To be diagnosed with ALS, patients must have signs and symptoms of both upper and lower motor neuron damage that cannot be attributed to other causes. Disease progression and spread
Although the order and rate of symptoms varies from person to person, eventually most patients are not able to walk, get out of bed on their own, or use their hands and arms. The rate of progression can be measured using an outcome measure called the "ALS Functional Rating Scale (Revised)", a 12-item instrument administered as a clinical interview or patient-reported questionnaire that produces a score between 48 (normal function) and 0 (severe disability). Though there is a high degree of variability and a small percentage of patients have much slower disease, on average, patients lose about 1 FRS point per month. Regardless of the part of the body first affected by the disease, muscle weakness and atrophy spread to other parts of the body as the disease progresses. In limb-onset ALS, symptoms usually spread from the affected limb to the opposite limb before affecting a new body region, whereas in bulbar-onset ALS symptoms typically spread to the arms before the legs. Disease progression tends to be slower in patients who are younger than 40 at onset,[3] have disease restricted primarily to one limb, and those with primarily upper motor neuron symptoms.[4] Conversely, progression is faster and prognosis poorer in patients with bulbar-onset disease, respiratory-onset disease, and frontotemporal dementia.[4] Late stage disease symptoms
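The average decline rate quoted above (about 1 ALSFRS-R point per month) implies a simple linear projection, which can be sketched as follows. This is an illustrative calculation only; as the text stresses, individual progression rates vary widely, and the function name and example baseline are hypothetical:

```python
# Hypothetical sketch of a linear ALSFRS-R projection using the ~1 point/month
# average decline mentioned in the text. Not a clinical tool.
MAX_SCORE = 48  # normal function
MIN_SCORE = 0   # severe disability

def projected_score(baseline: float, months: int, decline_per_month: float = 1.0) -> float:
    """Project an ALSFRS-R score forward, clamped at the scale's floor of 0."""
    return max(MIN_SCORE, baseline - decline_per_month * months)

print(projected_score(42, 6))   # 36.0
print(projected_score(42, 60))  # 0 (clamped: the scale cannot go below 0)
```

Because real progression is neither linear nor uniform across patients, such a projection is only meaningful as an average-rate illustration, not a prognosis for any individual.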
Difficulty swallowing and chewing make eating normally very difficult and increase the risk of choking or aspirating food into the lungs. In later stages of the disease, aspiration pneumonia and maintaining a healthy weight can become a significant problem and may require insertion of a feeding tube. As the diaphragm and intercostal muscles (rib cage) that support breathing weaken, measures of lung function such as forced vital capacity and inspiratory pressure diminish. In respiratory onset ALS, this may occur before significant limb weakness is apparent. External machines such as bilevel positive pressure ventilation (frequently referred to by the tradename BiPAP) are frequently used to support breathing, first at night, and later during the daytime as well. BiPAP is only a temporary remedy, however, and it is recommended that long before BiPAP stops being effective, patients should decide whether to have a tracheotomy and long term mechanical ventilation. At this point, some patients choose palliative hospice care. Most people with ALS die of respiratory failure or pneumonia. Although respiratory support can ease problems with breathing and prolong survival, it does not affect the progression of ALS. Most people with ALS die from respiratory failure, usually within three to five years from the onset of symptoms. The median survival time from onset to death is around 39 months, and only 4% survive longer than 10 years.[5] The best-known person with ALS, Stephen Hawking, has lived with the disease for more than 50 years, though his is an unusual case.[6]
Cause Where no family history of the disease is present – i.e., in around 95% of cases – there is no known cause for ALS. Potential causes for which there is inconclusive evidence include head trauma, military service, and participation in sports. Many other potential causes, including chemical exposure, electromagnetic field exposure, occupation, physical trauma, and electric shock, have been investigated but without consistent findings.[7] There is a known hereditary factor in familial ALS (FALS), where the condition is known to run in families. Recently, a genetic abnormality known as a hexanucleotide repeat was found in a region called C9ORF72, which is associated with ALS combined with frontotemporal dementia (ALS-FTD),[8] and accounts for some 6% of cases of ALS among white Europeans.[9] The high degree of mutations found in patients that appeared to have "sporadic" disease, i.e. without a family history, suggests that genetics may play a more significant role than previously thought and that environmental exposures may be less relevant. A defect on chromosome 21 (coding for superoxide dismutase) is associated with approximately 20% of familial cases of ALS, or about 2% of ALS cases overall.[10][11][12] This mutation is believed to be autosomal dominant, and has over a hundred different forms of mutation. The most common ALS-causing SOD1 mutation in North American patients is A4V, characterized by an exceptionally rapid progression from onset to death. The most common mutation found in Scandinavian countries, D90A, is more slowly progressive than typical ALS and patients with this form of the disease survive for an average of 11 years.[13] Mutations in several genes have also been linked to various types of ALS, and the currently identified associations are shown in the table below.
Type    OMIM    Gene     Locus           Remarks
ALS1    105400  SOD1     21q22.1
ALS2    205100  ALS2     2q33.1          Juvenile onset
ALS3    606640  ?        18q21
ALS4    602433  SETX     9q34.13
ALS5    602099  ?        15q15.1–q21.1
ALS6    608030  FUS      16p11.2
ALS7    608031  ?        20p13
ALS8    608627  VAPB     20q13.3
ALS9    611895  ANG      14q11.2
ALS10   612069  TARDBP   1p36.2
ALS11   612577  FIG4     6q21
ALS12   613435  OPTN     10p13
ALS13   183090  ATXN2    12q24.12
ALS14   613954  VCP      9p13.3          Very rare, described only in one family[14]
ALS15   300857  UBQLN2   Xp11.23–p11.1   Described in one family[15]
ALS16   614373  SIGMAR1  9p13.3          Juvenile onset, very rare, described only in one family[16]
ALS17   614696  CHMP2B   3p11            Very rare, reported only in a handful of patients
ALS18   614808  PFN1     17p13.3         Very rare, described only in a handful of Chinese families[17]
Pathophysiology The defining feature of ALS is the death of both upper and lower motor neurons in the motor cortex of the brain, the brain stem, and the spinal cord. Prior to their destruction, motor neurons develop proteinaceous inclusions in their cell bodies and axons. This may be partly due to defects in protein degradation.[18] These inclusions often contain ubiquitin, and generally incorporate one of the ALS-associated proteins: SOD1, TAR DNA binding protein (TDP-43, or TARDBP), or FUS. SOD1
The cause of ALS is not known, though an important step toward determining the cause came in 1993 when scientists discovered that mutations in the gene that produces the Cu/Zn superoxide dismutase (SOD1) enzyme were associated with some cases (approximately 20%) of familial ALS. This enzyme is a powerful antioxidant that protects the body from damage caused by superoxide, a toxic free radical generated in the mitochondria. Free radicals are highly reactive molecules produced by cells during normal metabolism. Free radicals can accumulate and cause
damage to both mitochondrial and nuclear DNA and proteins within cells. To date, over 110 different mutations in SOD1 have been linked with the disease, some of which have a very long clinical course (e.g. H46R), while others, such as A4V, are exceptionally aggressive. Evidence suggests that failure of defenses against oxidative stress up-regulates programmed cell death (apoptosis), among many other possible consequences. Although it is not yet clear how the SOD1 gene mutation leads to motor neuron degeneration, researchers have theorized that an accumulation of free radicals may result from the faulty functioning of this gene. Current research, however, indicates that motor neuron death is not likely a result of lost or compromised dismutase activity, suggesting mutant SOD1 induces toxicity in some other way (a gain of function).[19][20] Studies involving transgenic mice have yielded several theories about the role of SOD1 in mutant SOD1 familial amyotrophic lateral sclerosis. Mice lacking the SOD1 gene entirely do not customarily develop ALS, although they do exhibit an acceleration of age-related muscle atrophy (sarcopenia) and a shortened lifespan (see article on superoxide dismutase). This indicates that the toxic properties of the mutant SOD1 are a result of a gain in function rather than a loss of normal function. In addition, aggregation of proteins has been found to be a common pathological feature of both familial and sporadic ALS (see article on proteopathy).
Interestingly, in mutant SOD1 mice (most commonly, the G93A mutant), aggregates (misfolded protein accumulations) of mutant SOD1 were found only in diseased tissues, and greater amounts were detected during motor neuron degeneration.[21] It is speculated that aggregate accumulation of mutant SOD1 plays a role in disrupting cellular functions by damaging mitochondria, proteasomes, protein folding chaperones, or other proteins.[22] Any such disruption, if proven, would lend significant credibility to the theory that aggregates are involved in mutant SOD1 toxicity. Critics have noted that in humans, SOD1 mutations cause only 2% or so of overall cases and the etiological mechanisms may be distinct from those responsible for the sporadic form of the disease. To date, the ALS-SOD1 mice remain the best model of the disease for preclinical studies but it is hoped that more useful models will be developed. Other factors
Studies also have focused on the role of glutamate in motor neuron degeneration. Glutamate is one of the chemical messengers or neurotransmitters in the brain. Scientists have found that, compared to healthy people, ALS patients have higher levels of glutamate in the serum and spinal fluid.[11] Riluzole is currently the only FDA approved drug for ALS and targets glutamate transporters. It only has a modest effect on survival, however, suggesting that excess glutamate is not the sole cause of the disease.
Diagnosis No test can provide a definite diagnosis of ALS, although the presence of upper and lower motor neuron signs in a single limb is strongly suggestive. Instead, the diagnosis of ALS is primarily based on the symptoms and signs the physician observes in the patient and a series of tests to rule out other diseases. Physicians obtain the patient's full medical history and usually conduct a neurologic examination at regular intervals to assess whether symptoms such as muscle weakness, atrophy of muscles, hyperreflexia, and spasticity are getting progressively worse.
MRI (axial FLAIR) demonstrates increased T2 signal within the posterior part of the internal capsule, consistent with the clinical diagnosis of ALS.
Because symptoms of ALS can be similar to those of a wide variety of other, more treatable diseases or disorders, appropriate tests must be conducted to exclude the possibility of other conditions. One of these tests is electromyography (EMG), a special recording technique that detects electrical activity in muscles. Certain EMG findings can support the diagnosis of ALS. Another common test measures nerve conduction velocity (NCV). Specific abnormalities in the NCV results may suggest, for example, that the patient has a form of peripheral neuropathy (damage to peripheral nerves) or myopathy (muscle disease) rather than ALS. The physician may order magnetic resonance imaging (MRI), a noninvasive procedure that uses a magnetic field and radio waves to take detailed images of the brain and spinal cord. Although these MRI scans are often normal in patients with ALS, they can reveal evidence of other problems that may be causing the symptoms, such as a spinal cord tumor, multiple sclerosis, a herniated disk in the neck, syringomyelia, or cervical spondylosis. Based on the patient's symptoms and findings from the examination and from these tests, the physician may order tests on blood and urine samples to eliminate the possibility of other diseases as well as routine laboratory tests. In some cases, for example, if a physician suspects that the patient may have a myopathy rather than ALS, a muscle biopsy may be performed. Infectious diseases such as human immunodeficiency virus (HIV), human T-cell leukaemia virus (HTLV), Lyme disease,[23] syphilis[24] and tick-borne encephalitis[25] can in some cases cause ALS-like symptoms. Neurological disorders such as multiple sclerosis, post-polio syndrome, multifocal motor neuropathy, CIDP, and spinal muscular atrophy can also mimic certain facets of the disease and should be considered by physicians attempting to make a diagnosis.
ALS must be differentiated from the “ALS mimic syndromes” which are unrelated disorders that may have a similar presentation and clinical features to ALS or its variants.[26] Because of the prognosis carried by this diagnosis and the variety of diseases or disorders that can resemble
ALS in the early stages of the disease, patients should always obtain a second neurological opinion. However, most cases of ALS are readily diagnosed, and the error rate of diagnosis in large ALS clinics is less than 10%.[27][28] In one study, 190 patients met the MND/ALS diagnostic criteria, complemented with laboratory research in compliance with both research protocols and regular monitoring. Thirty of these patients (15.78%) had their diagnosis completely changed during the period of clinical observation.[29] In the same study, three patients had a false negative diagnosis of myasthenia gravis (MG), an autoimmune disease. MG can mimic ALS and other neurological disorders, leading to a delay in diagnosis and treatment. MG is eminently treatable; ALS is not.[30] Myasthenic syndrome, also known as Lambert-Eaton syndrome (LES), can mimic ALS, and its initial presentation can be similar to that of MG.[31][32]
Treatment

Slowing progression
Riluzole (Rilutek) is the only treatment that has been found to improve survival, but only to a modest extent.[33] It lengthens survival by several months, and may have a greater survival benefit for those with a bulbar onset. It also extends the time before a person needs ventilation support. Riluzole does not reverse the damage already done to motor neurons, and people taking it must be monitored for liver damage (occurring in ~10% of people taking the drug).[34] It is approved by the Food and Drug Administration (FDA) and recommended by the National Institute for Clinical Excellence (NICE).

Disease management
Other treatments for ALS are designed to relieve symptoms and improve the quality of life for patients. This supportive care is best provided by multidisciplinary teams of health care professionals working with patients and caregivers to keep patients as mobile and comfortable as possible.

Pharmaceutical treatments
Medical professionals can prescribe medications to help reduce fatigue, ease muscle cramps, control spasticity, and reduce excess saliva and phlegm. Drugs also are available to help patients with pain, depression, sleep disturbances, dysphagia, and constipation. Baclofen and diazepam are often prescribed to control the spasticity caused by ALS, and trihexyphenidyl or amitriptyline may be prescribed when ALS patients begin having trouble swallowing their saliva.[1]

Physical, occupational and speech therapy
Physical therapists and occupational therapists play a large role in rehabilitation for individuals with ALS. Specifically, physical and occupational therapists can set goals and promote benefits for individuals with ALS by delaying loss of strength, maintaining endurance, limiting pain, preventing complications, and promoting functional independence.[35]
Occupational therapy and special equipment such as assistive technology can also enhance patients' independence and safety throughout the course of ALS. Gentle, low-impact aerobic exercise such as performing activities of daily living (ADLs), walking, swimming, and stationary bicycling can strengthen unaffected muscles, improve cardiovascular health, and help patients fight fatigue and depression. Range of motion and stretching exercises can help prevent painful spasticity and shortening (contracture) of muscles. Physical and occupational therapists can recommend exercises that provide these benefits without overworking muscles. They can suggest devices such as ramps, braces, walkers, bathroom equipment (shower chairs, toilet risers, etc.) and wheelchairs that help patients remain mobile. Occupational therapists can provide or recommend equipment and adaptations to enable people to retain as much safety and independence in activities of daily living as possible. ALS patients who have difficulty speaking may benefit from working with a speech-language pathologist. These health professionals can teach patients adaptive strategies such as techniques to help them speak louder and more clearly. As ALS progresses, speech-language pathologists can recommend the use of augmentative and alternative communication such as voice amplifiers, speech-generating devices (or voice output communication devices) and/or low-tech communication techniques such as alphabet boards or yes/no signals.

Feeding and nutrition
Patients and caregivers can learn from speech-language pathologists and nutritionists how to plan and prepare numerous small meals throughout the day that provide enough calories, fiber, and fluid and how to avoid foods that are difficult to swallow. Patients may begin using suction devices to remove excess fluids or saliva and prevent choking. Occupational therapists can assist with recommendations for adaptive equipment to ease the physical task of self-feeding and/or make food choice recommendations that are more conducive to their unique deficits and abilities. When patients can no longer get enough nourishment from eating, doctors may advise inserting a feeding tube into the stomach. The use of a feeding tube also reduces the risk of choking and pneumonia that can result from inhaling liquids into the lungs. The tube is not painful and does not prevent patients from eating food orally if they wish. Researchers have stated that "ALS patients have a chronically deficient intake of energy and recommended augmentation of energy intake."[36] Both animal[37] and human research[36][38] suggest that ALS patients should be encouraged to consume as many calories as possible and not to restrict their calorie intake.

Breathing
When the muscles that assist in breathing weaken, use of ventilatory assistance (intermittent positive pressure ventilation (IPPV), bilevel positive airway pressure (BIPAP), or biphasic cuirass ventilation (BCV)) may be used to aid breathing. Such devices artificially inflate the patient's lungs from various external sources that are applied directly to the face or body. When muscles are no longer able to maintain oxygen and carbon dioxide levels, these devices may be used full-time. BCV has the added advantage of being able to assist in clearing secretions by using high-frequency oscillations followed by several positive expiratory breaths.[39] Patients may eventually consider forms of mechanical ventilation (respirators) in which a machine inflates and deflates the lungs. To be effective, this may require a tube that passes from the nose or mouth to the windpipe (trachea) and, for long-term use, an operation such as a tracheostomy, in which a plastic breathing tube is inserted directly into the patient's windpipe through an opening in the neck. Patients and their families should consider several factors when deciding whether and when to use one of these options. Ventilation devices differ in their effect on the patient's quality of life and in cost. Although ventilation support can ease problems with breathing and prolong survival, it does not affect the progression of ALS. Patients need to be fully informed about these considerations and the long-term effects of life without movement before they make decisions about ventilation support. Some patients under long-term tracheostomy intermittent positive pressure ventilation with deflated cuffs or cuffless tracheostomy tubes (leak ventilation) are able to speak, provided their bulbar muscles are strong enough. This technique preserves speech in some patients with long-term mechanical ventilation.

Palliative care
Social workers and home care and hospice nurses help patients, families, and caregivers with the medical, emotional, and financial challenges of coping with ALS, particularly during the final stages of the disease. Social workers provide support such as assistance in obtaining financial aid, arranging durable power of attorney, preparing a living will, and finding support groups for patients and caregivers. Home nurses are available not only to provide medical care but also to teach caregivers about tasks such as maintaining respirators, giving feedings, and moving patients to avoid painful skin problems and contractures. Home hospice nurses work in consultation with physicians to ensure proper medication, pain control, and other care affecting the quality of life of patients who wish to remain at home. The home hospice team can also counsel patients and caregivers about end-of-life issues.
Epidemiology

ALS is one of the most common neuromuscular diseases worldwide, and people of all races and ethnic backgrounds are affected. One or two out of 100,000 people develop ALS each year.[40] ALS most commonly strikes people between 40 and 60 years of age, but younger and older people can also develop the disease. Men are affected slightly more often than women. Although the incidence of ALS is thought to be regionally uniform, there are three regions in the West Pacific where there has in the past been an elevated occurrence of ALS. This seems to be declining in recent decades. The largest is the area of Guam inhabited by the Chamorro people, who have historically had a high incidence (as much as 143 cases per 100,000 people per year) of a condition called Lytico-Bodig disease, which is a combination of symptoms similar to ALS, parkinsonism, and dementia. Lytico-Bodig disease has been linked to the consumption of cycad seeds and, in particular, the chemical found in cycad seeds, β-methylamino-L-alanine (BMAA).[41] Two more areas of increased incidence are West Papua and the Kii Peninsula of Japan.[42][43]
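As a quick arithmetic illustration of the incidence figures above, a rate quoted "per 100,000 people per year" scales linearly with population size. The helper below is purely illustrative (the function name is ours) and uses only the rates stated in the text:

```python
# Convert an annual incidence rate (cases per 100,000 people per year)
# into the expected number of new cases per year in a population.
def expected_annual_cases(population: int, incidence_per_100k: float) -> float:
    return population * incidence_per_100k / 100_000

# Worldwide rate quoted above: one to two new ALS cases per 100,000 per year.
print(expected_annual_cases(1_000_000, 2))    # 20.0 cases/year at the upper bound
# Historical peak among the Chamorro people of Guam: 143 per 100,000 per year.
print(expected_annual_cases(1_000_000, 143))  # 1430.0 cases/year
```

The two results make concrete how striking the Guam cluster was: roughly seventy times the worldwide incidence.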
Although there have been reports of several "clusters" including three American football players from the San Francisco 49ers, more than fifty football players in Italy,[44] three football-playing friends in the south of England,[45] and reports of conjugal (husband and wife) cases in the south of ,[46][47][48][49][50] these are statistically plausible chance events[citation needed]. Although many authors consider ALS to be caused by a combination of genetic and environmental risk factors, so far the latter have not been firmly identified, other than a higher risk with increasing age.
Etymology

Amyotrophic comes from the Greek language: A- means "no", myo refers to "muscle", and trophic means "nourishment"; amyotrophic therefore means "no muscle nourishment," which describes the characteristic atrophy of the sufferer's disused muscle tissue. Lateral identifies the areas in a person's spinal cord where portions of the nerve cells that are affected are located. As this area degenerates it leads to scarring or hardening ("sclerosis") in the region.
History

Timeline

1824 – Charles Bell writes a report about ALS.[51]
1850 – English scientist Augustus Waller describes the appearance of shriveled nerve fibers.
1869 – French doctor Jean-Martin Charcot first describes ALS in the scientific literature.[52]
1881 – "Amyotrophic Lateral Sclerosis" is translated into English and published in a three-volume edition of Lectures on the Diseases of the Nervous System.
1939 – ALS becomes a cause célèbre in the United States when baseball legend Lou Gehrig's career—and, two years later, his life—is ended by the disease. He gives his farewell speech on 4 July 1939.[53]
1950s – An ALS epidemic occurs among the Chamorro people on Guam.
1991 – Researchers link chromosome 21 to FALS (familial ALS).
1993 – The SOD1 gene on chromosome 21 is found to play a role in some cases of FALS.
1996 – Rilutek (riluzole) becomes the first FDA-approved drug for ALS.
1998 – The El Escorial criteria are developed as the standard for classifying ALS patients in clinical research.
1999 – The revised ALS Functional Rating Scale (ALSFRS-R) is published and soon becomes a gold-standard measure for rating decline in ALS patients in clinical research.
2011 – Noncoding repeat expansions in C9ORF72 are found to be a major cause of ALS and frontotemporal dementia.
Clinical research

A number of clinical trials are underway globally for ALS; a comprehensive listing of trials in the US can be found at ClinicalTrials.gov. Thalidomide and lenalidomide have shown efficacy in protecting motor neurons in transgenic (G93A) mice.[54] KNS-760704 (dexpramipexole) is under clinical investigation in ALS patients. It is hoped that the drug will have a neuroprotective effect. It is one enantiomer of pramipexole, which is approved for the treatment of Parkinson's disease and restless legs syndrome.[55] The single-enantiomer preparation is essentially inactive at dopamine receptors and is not dose-limited by the potent dopaminergic properties of pramipexole.[56] Results of a Phase II clinical trial conducted by Knopp Neurosciences and involving 102 patients were reported in 2010; the trial found a dose-dependent slowing in loss of function.[57] A larger Phase II trial conducted by Biogen found the drug to be safe, well tolerated, and associated with a dose-dependent slowing in the decline of ALS.[58] A Phase II trial on talampanel was completed by Teva Pharmaceutical Industries in April 2010; however, it was found to be negative for treatment viability.[59]
Parkinson's disease
Parkinson's disease
Classification and external resources

Illustration of Parkinson's disease by William Richard Gowers, first published in A Manual of Diseases of the Nervous System (1886)

ICD-10: G20, F02.3
ICD-9: 332
OMIM: 168600, 556500
DiseasesDB: 9651
MedlinePlus: 000755
eMedicine: neuro/304; neuro/635 (in young); pmr/99 (rehab)
GeneReviews: Parkinson Disease Overview
Parkinson's disease (also known as Parkinson disease, Parkinson's, idiopathic parkinsonism, primary parkinsonism, PD, hypokinetic rigid syndrome/HRS, or paralysis agitans) is a degenerative disorder of the central nervous system. The motor symptoms of Parkinson's disease result from the death of dopamine-generating cells in the substantia nigra, a region of the midbrain; the cause of this cell death is unknown. Early in the course of the disease, the most obvious symptoms are movement-related; these include shaking, rigidity, slowness of movement and difficulty with walking and gait. Later, cognitive and behavioural problems may arise, with dementia commonly occurring in the advanced stages of the disease. Other symptoms include sensory, sleep and emotional problems. PD is more common in the elderly, with most cases occurring after the age of 50.
The main motor symptoms are collectively called parkinsonism, or a "parkinsonian syndrome". Parkinson's disease is often defined as a parkinsonian syndrome that is idiopathic (having no known cause), although some atypical cases have a genetic origin. Many risk and protective factors have been investigated: the clearest evidence is for an increased risk of PD in people exposed to certain pesticides and a reduced risk in tobacco smokers. The pathology of the disease is characterized by the accumulation of a protein called alpha-synuclein into inclusions called Lewy bodies in neurons, and by insufficient formation and activity of dopamine produced in certain neurons within parts of the midbrain. Lewy bodies are the pathological hallmark of the idiopathic disorder, and the distribution of the Lewy bodies throughout the Parkinsonian brain varies from one individual to another. The anatomical distribution of the Lewy bodies is often directly related to the expression and degree of the clinical symptoms of each individual. Diagnosis of typical cases is mainly based on symptoms, with tests such as neuroimaging being used for confirmation. Modern treatments are effective at managing the early motor symptoms of the disease, mainly through the use of levodopa and dopamine agonists. As the disease progresses and dopaminergic neurons continue to be lost, these drugs eventually become ineffective at treating the symptoms and at the same time produce a complication called dyskinesia, marked by involuntary writhing movements. Diet and some forms of rehabilitation have shown some effectiveness at alleviating symptoms. Surgery and deep brain stimulation have been used to reduce motor symptoms as a last resort in severe cases where drugs are ineffective. Research directions include investigations into new animal models of the disease and of the potential usefulness of gene therapy, stem cell transplants and neuroprotective agents.
Medications to treat non-movement-related symptoms of PD, such as sleep disturbances and emotional problems, also exist. The disease is named after the English doctor James Parkinson, who published the first detailed description in An Essay on the Shaking Palsy in 1817. Several major organizations promote research and improvement of quality of life of those with the disease and their families. Public awareness campaigns include Parkinson's disease day (on the birthday of James Parkinson, April 11) and the use of a red tulip as the symbol of the disease. People with parkinsonism who have enhanced the public's awareness include Michael J. Fox and Muhammad Ali.
Classification

The term parkinsonism is used for a motor syndrome whose main symptoms are tremor at rest, stiffness, slowing of movement and postural instability. Parkinsonian syndromes can be divided into four subtypes according to their origin: primary or idiopathic, secondary or acquired, hereditary parkinsonism, and Parkinson plus syndromes or multiple system degeneration.[1] Parkinson's disease is the most common form of parkinsonism and is usually defined as "primary" parkinsonism, meaning parkinsonism with no external identifiable cause.[2][3] In recent years several genes that are directly related to some cases of Parkinson's disease have been discovered. As much as this conflicts with the definition of Parkinson's disease as an idiopathic illness, genetic parkinsonism disorders with a similar clinical course to PD are generally included under the Parkinson's disease label. The terms "familial Parkinson's disease" and "sporadic Parkinson's disease" can be used to differentiate genetic from truly idiopathic forms of the disease.[4]
Usually classified as a movement disorder, PD also gives rise to several non-motor types of symptoms such as sensory deficits,[5] cognitive difficulties or sleep problems. Parkinson plus diseases are primary parkinsonisms which present additional features.[2] They include multiple system atrophy, progressive supranuclear palsy, corticobasal degeneration and dementia with Lewy bodies.[2][6] In terms of pathophysiology, PD is considered a synucleinopathy due to an abnormal accumulation of alpha-synuclein protein in the brain in the form of Lewy bodies, as opposed to other diseases such as Alzheimer's disease, where the brain accumulates tau protein in the form of neurofibrillary tangles.[7] Nevertheless, there is clinical and pathological overlap between tauopathies and synucleinopathies. The most typical symptom of Alzheimer's disease, dementia, occurs in advanced stages of PD, while it is common to find neurofibrillary tangles in brains affected by PD.[7] Dementia with Lewy bodies (DLB) is another synucleinopathy that has similarities with PD, and especially with the subset of PD cases with dementia. However, the relationship between PD and DLB is complex and still has to be clarified.[8] They may represent parts of a continuum or they may be separate diseases.[8]
Signs and symptoms

Main article: Signs and symptoms of Parkinson's disease
Parkinson's disease affects movement, producing motor symptoms.[1] Non-motor symptoms, which include autonomic dysfunction, neuropsychiatric problems (mood, cognition, behavior or thought alterations), and sensory and sleep difficulties, are also common.[1]

Motor
A man with Parkinson's disease displaying a flexed walking posture pictured in 1892. Photo appeared in Nouvelle Iconographie de la Salpètrière, vol. 5.
Handwriting of a person affected by PD in Lectures on the diseases of the nervous system by Charcot (1879). The original description of the text states "The strokes forming the letters are very irregular and sinuous, whilst the irregularities and sinuosities are of a very limited width. (...) the down-strokes are all, with the exception of the first letter, made with comparative firmness and are, in fact, nearly normal — the finer up-strokes, on the contrary, are all tremulous in appearance (...)." Further information: Parkinsonian gait
Four motor symptoms are considered cardinal in PD: tremor, rigidity, slowness of movement, and postural instability.[1] Tremor is the most apparent and well-known symptom.[1] It is the most common; though around 30% of individuals with PD do not have tremor at disease onset, most develop it as the disease progresses.[1] It is usually a rest tremor: maximal when the limb is at rest and disappearing with voluntary movement and sleep.[1] It affects to a greater extent the most distal part of the limb and at onset typically appears in only a single arm or leg, becoming bilateral later.[1] Frequency of PD tremor is between 4 and 6 hertz (cycles per second). A feature of tremor is pill-rolling, the tendency of the index finger of the hand to get into contact with the thumb and perform together a circular movement.[1][9] The term derives from the similarity between the movement in PD patients and the earlier pharmaceutical technique of manually making pills.[9] Bradykinesia (slowness of movement) is another characteristic feature of PD, and is associated with difficulties along the whole course of the movement process, from planning to initiation and finally execution of a movement.[1] Performance of sequential and simultaneous movement is hindered.[1] Bradykinesia is the most disabling symptom in the early stages of the disease.[2] Initial manifestations are problems when performing daily tasks which require fine motor control such as writing, sewing or getting dressed.[1] Clinical evaluation is based on similar tasks such as alternating movements between both hands or both feet.[2] Bradykinesia is not equal for all movements or times.
It is modified by the activity or emotional state of the subject, to the point that some patients are barely able to walk yet can still ride a bicycle.[1] Generally patients have less difficulty when some sort of external cue is provided.[1][10] Rigidity is stiffness and resistance to limb movement caused by increased muscle tone, an excessive and continuous contraction of muscles.[1] In parkinsonism the rigidity can be uniform
(lead-pipe rigidity) or ratchety (cogwheel rigidity).[1][2][11][12] The combination of tremor and increased tone is considered to be at the origin of cogwheel rigidity.[13] Rigidity may be associated with joint pain; such pain being a frequent initial manifestation of the disease.[1] In early stages of Parkinson's disease, rigidity is often asymmetrical and it tends to affect the neck and shoulder muscles prior to the muscles of the face and extremities.[14] With the progression of the disease, rigidity typically affects the whole body and reduces the ability to move. Postural instability is typical in the late stages of the disease, leading to impaired balance and frequent falls, and secondarily to bone fractures.[1] Instability is often absent in the initial stages, especially in younger people.[2] Up to 40% of the patients may experience falls and around 10% may have falls weekly, with the number of falls being related to the severity of PD.[1] Other recognized motor signs and symptoms include gait and posture disturbances such as festination (rapid shuffling steps and a forward-flexed posture when walking),[1] speech and swallowing disturbances including voice disorders,[15] and a mask-like facial expression or small handwriting, although the range of possible motor problems that can appear is large.[1]

Neuropsychiatric
Parkinson's disease can cause neuropsychiatric disturbances which can range from mild to severe. This includes disorders of speech, cognition, mood, behaviour, and thought.[1] Cognitive disturbances can occur in the initial stages of the disease and sometimes prior to diagnosis, and increase in prevalence with duration of the disease.[1][16] The most common cognitive deficit in affected individuals is executive dysfunction, which can include problems with planning, cognitive flexibility, abstract thinking, rule acquisition, initiating appropriate actions and inhibiting inappropriate actions, and selecting relevant sensory information. Fluctuations in attention and slowed cognitive speed are among other cognitive difficulties. Memory is affected, specifically in recalling learned information. Nevertheless, improvement appears when recall is aided by cues. Visuospatial difficulties are also part of the disease, seen for example when the individual is asked to perform tests of facial recognition and perception of the orientation of drawn lines.[16] A person with PD has two to six times the risk of suffering dementia compared to the general population.[1][16] The prevalence of dementia increases with duration of the disease.[16] Dementia is associated with a reduced quality of life in people with PD and their caregivers, increased mortality, and a higher probability of needing nursing home care.[16] Behavior and mood alterations are more common in PD without cognitive impairment than in the general population, and are usually present in PD with dementia. 
The most frequent mood difficulties are depression, apathy and anxiety.[1] Impulse control behaviors such as medication overuse and craving, binge eating, hypersexuality, or pathological gambling can appear in PD and have been related to the medications used to manage the disease.[1][17] Psychotic symptoms— hallucinations or delusions—occur in 4% of patients, and it is assumed that the main precipitant of psychotic phenomena in Parkinson’s disease is dopaminergic excess secondary to treatment; it therefore becomes more common with increasing age and levodopa intake.[18][19]
Other
In addition to cognitive and motor symptoms, PD can impair other body functions. Sleep problems are a feature of the disease and can be worsened by medications.[1] Symptoms can manifest as daytime drowsiness, disturbances in REM sleep, or insomnia.[1] Alterations in the autonomic nervous system can lead to orthostatic hypotension (low blood pressure upon standing), oily skin and excessive sweating, urinary incontinence and altered sexual function.[1] Constipation and gastric dysmotility can be severe enough to cause discomfort and even endanger health.[20] PD is related to several eye and vision abnormalities such as decreased blink rate, dry eyes, deficient ocular pursuit (eye tracking) and saccadic movements (fast automatic movements of both eyes in the same direction), difficulties in directing gaze upward, and blurred or double vision.[1][21] Changes in perception may include an impaired sense of smell, sensation of pain and paresthesia (skin tingling and numbness).[1] All of these symptoms can occur years before diagnosis of the disease.[1]
Causes

Main article: Causes of Parkinson's disease
PDB rendering of Parkin (ligase)
Most people with Parkinson's disease have idiopathic Parkinson's disease (having no specific known cause). A small proportion of cases, however, can be attributed to known genetic factors. Other factors have been associated with the risk of developing PD, but no causal relationships have been proven. PD traditionally has been considered a non-genetic disorder; however, around 15% of individuals with PD have a first-degree relative who has the disease.[2] At least 5% of people are now known to have forms of the disease that occur because of a mutation of one of several specific genes.[22] Mutations in specific genes have been conclusively shown to cause PD. These genes code for alpha-synuclein (SNCA), parkin (PRKN), leucine-rich repeat kinase 2 (LRRK2 or dardarin), PTEN-induced putative kinase 1 (PINK1), DJ-1 and ATP13A2.[4][22] In most cases, people with these mutations will develop PD. With the exception of LRRK2, however, they account for only a small minority of cases of PD.[4] The most extensively studied PD-related genes are SNCA and LRRK2. Mutations in genes including SNCA, LRRK2 and glucocerebrosidase (GBA) have been
found to be risk factors for sporadic PD. Mutations in GBA are known to cause Gaucher's disease.[22] Genome-wide association studies, which search for mutated alleles with low penetrance in sporadic cases, have now yielded many positive results.[23] The role of the SNCA gene is important in PD because the alpha-synuclein protein is the main component of Lewy bodies.[22] Missense mutations of the gene (in which a single nucleotide is changed), and duplications and triplications of the locus containing it have been found in different groups with familial PD.[22] Missense mutations are rare.[22] On the other hand, multiplications of the SNCA locus account for around 2% of familial cases.[22] Multiplications have been found in asymptomatic carriers, which indicates that penetrance is incomplete or age-dependent.[22] The LRRK2 gene (PARK8) encodes for a protein called dardarin. The name dardarin was taken from a Basque word for tremor, because this gene was first identified in families from England and the north of Spain.[4] Mutations in LRRK2 are the most common known cause of familial and sporadic PD, accounting for approximately 5% of individuals with a family history of the disease and 3% of sporadic cases.[4][22] There are many different mutations described in LRRK2; however, unequivocal proof of causation only exists for a small number.[22]
Pathology
A Lewy body (stained brown) in a brain cell of the substantia nigra in Parkinson's disease. The brown colour is positive immunohistochemistry staining for alpha-synuclein.

Anatomical
The basal ganglia, a group of brain structures innervated by the dopaminergic system, are the most seriously affected brain areas in PD.[24] The main pathological characteristic of PD is cell death in the substantia nigra and, more specifically, the ventral (front) part of the pars compacta, affecting up to 70% of the cells by the time death occurs.[4] Macroscopic alterations can be noticed on cut surfaces of the brainstem, where neuronal loss can be inferred from a reduction of melanin pigmentation in the substantia nigra and locus coeruleus.[25] The histopathology (microscopic anatomy) of the substantia nigra and several other
brain regions shows neuronal loss and Lewy bodies in many of the remaining nerve cells. Neuronal loss is accompanied by death of astrocytes (star-shaped glial cells) and activation of the microglia (another type of glial cell). Lewy bodies are a key pathological feature of PD.[25]

Pathophysiology
A. Schematic initial progression of Lewy body deposits in the first stages of Parkinson's disease, as proposed by Braak and colleagues B. Localization of the area of significant brain volume reduction in initial PD compared with a group of participants without the disease in a neuroimaging study, which concluded that brain stem damage may be the first identifiable stage of PD neuropathology[26]
The primary symptoms of Parkinson's disease result from greatly reduced activity of dopamine-secreting cells caused by cell death in the pars compacta region of the substantia nigra.[24] There are five major pathways in the brain connecting other brain areas with the basal ganglia. These are known as the motor, oculo-motor, associative, limbic and orbitofrontal circuits, with names indicating the main projection area of each circuit.[24] All of them are affected in PD, and their disruption explains many of the symptoms of the disease since these circuits are involved in a wide variety of functions including movement, attention and learning.[24] Scientifically, the motor circuit has been examined the most intensively.[24] A particular conceptual model of the motor circuit and its alteration with PD has been of great influence since 1980, although some limitations have been pointed out which have led to modifications.[24] In this model, the basal ganglia normally exert a constant inhibitory influence on a wide range of motor systems, preventing them from becoming active at inappropriate times. When a decision is made to perform a particular action, inhibition is reduced for the required motor system, thereby releasing it for activation. Dopamine acts to facilitate this release of inhibition, so high levels of dopamine function tend to promote motor activity, while low levels of dopamine function, such as occur in PD, demand greater exertions of effort for any given movement. Thus the net effect of dopamine depletion is to produce hypokinesia, an overall reduction in motor output.[24] Drugs that are used to treat PD, conversely, may produce excessive
dopamine activity, allowing motor systems to be activated at inappropriate times and thereby producing dyskinesias.[24]

Brain cell death
Several mechanisms by which the brain cells could be lost have been proposed.[27] One mechanism consists of an abnormal accumulation of the protein alpha-synuclein bound to ubiquitin in the damaged cells. This insoluble protein accumulates inside neurones forming inclusions called Lewy bodies.[4][28] According to the Braak staging, a classification of the disease based on pathological findings, Lewy bodies first appear in the olfactory bulb, medulla oblongata and pontine tegmentum, with individuals at this stage being asymptomatic. As the disease progresses, Lewy bodies later develop in the substantia nigra, areas of the midbrain and basal forebrain, and, in a last step, the neocortex.[4] These brain sites are the main places of neuronal degeneration in PD; however, Lewy bodies may not cause cell death and they may be protective.[27][28] In patients with dementia, a generalized presence of Lewy bodies is common in cortical areas. Neurofibrillary tangles and senile plaques, characteristic of Alzheimer's disease, are not common unless the person is demented.[25] Other cell-death mechanisms include proteasomal and lysosomal system dysfunction and reduced mitochondrial activity.[27] Iron accumulation in the substantia nigra is typically observed in conjunction with the protein inclusions. It may be related to oxidative stress, protein aggregation and neuronal death, but the mechanisms are not fully understood.[29]
Diagnosis
Fludeoxyglucose (18F) (FDG) PET scan of a healthy brain. Hotter areas reflect higher glucose uptake. Decreased activity in the basal ganglia can aid in diagnosing Parkinson's disease.
A physician will diagnose Parkinson's disease from the medical history and a neurological examination.[1] There is no lab test that will clearly identify the disease, but brain scans are sometimes used to rule out disorders that could give rise to similar symptoms. Patients may be given levodopa, and the resulting relief of motor impairment tends to confirm the diagnosis. The finding of Lewy bodies in the midbrain on autopsy is usually considered proof that the patient suffered
from Parkinson's disease. The progress of the illness over time may reveal it is not Parkinson's disease, and some authorities recommend that the diagnosis be periodically reviewed.[1][30] Other causes that can secondarily produce a parkinsonian syndrome are Alzheimer's disease, multiple cerebral infarction and drug-induced parkinsonism.[30] Parkinson plus syndromes such as progressive supranuclear palsy and multiple system atrophy must be ruled out.[1] Anti-Parkinson's medications are typically less effective at controlling symptoms in Parkinson plus syndromes.[1] Faster progression rates, early cognitive dysfunction or postural instability, minimal tremor or symmetry at onset may indicate a Parkinson plus disease rather than PD itself.[31] Genetic forms are usually classified as PD, although the terms familial Parkinson's disease and familial parkinsonism are used for disease entities with an autosomal dominant or recessive pattern of inheritance.[2] Medical organizations have created diagnostic criteria to ease and standardize the diagnostic process, especially in the early stages of the disease. The most widely known criteria come from the UK Parkinson's Disease Society Brain Bank and the U.S. National Institute of Neurological Disorders and Stroke.[1] The PD Society Brain Bank criteria require slowness of movement (bradykinesia) plus either rigidity, resting tremor, or postural instability. Other possible causes for these symptoms need to be ruled out.
Finally, three or more of the following features are required during onset or evolution: unilateral onset, tremor at rest, progression in time, asymmetry of motor symptoms, response to levodopa for at least five years, clinical course of at least ten years and appearance of dyskinesias induced by the intake of excessive levodopa.[1] Accuracy of diagnostic criteria evaluated at autopsy is 75–90%, with specialists such as neurologists having the highest rates.[1] Computed tomography (CT) and magnetic resonance imaging (MRI) brain scans of people with PD usually appear normal.[32] These techniques are nevertheless useful to rule out other diseases that can be secondary causes of parkinsonism, such as basal ganglia tumors, vascular pathology and hydrocephalus.[32] A specific technique of MRI, diffusion MRI, has been reported to be useful at discriminating between typical and atypical parkinsonism, although its exact diagnostic value is still under investigation.[32] Dopaminergic function in the basal ganglia can be measured with different PET and SPECT radiotracers. Examples are ioflupane (123I) (trade name DaTSCAN) and iometopane (Dopascan) for SPECT or fluorodeoxyglucose (18F) for PET.[32] A pattern of reduced dopaminergic activity in the basal ganglia can aid in diagnosing PD.[32]
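The staged logic of the Brain Bank criteria described above can be sketched as a simple checklist. This is a minimal illustration only: the function and feature names are hypothetical, and actual diagnosis rests on clinical judgment, not on any such rule.

```python
# Hypothetical sketch of the three-step UK PD Society Brain Bank checklist
# described above. All names are illustrative assumptions; this is a
# teaching aid, not a clinical tool.

CARDINAL_SIGNS = {"rigidity", "resting_tremor", "postural_instability"}

SUPPORTIVE_FEATURES = {
    "unilateral_onset",
    "resting_tremor",
    "progressive_course",
    "persistent_asymmetry",
    "levodopa_response_5_years",
    "clinical_course_10_years",
    "levodopa_induced_dyskinesia",
}

def meets_brain_bank_criteria(signs, features, other_causes_excluded):
    """Return True when all three steps of the checklist are satisfied."""
    # Step 1: bradykinesia plus at least one further cardinal sign.
    step1 = "bradykinesia" in signs and bool(signs & CARDINAL_SIGNS)
    # Step 2: other possible causes of the symptoms have been ruled out.
    step2 = other_causes_excluded
    # Step 3: three or more supportive features during onset or evolution.
    step3 = len(features & SUPPORTIVE_FEATURES) >= 3
    return step1 and step2 and step3
```

For example, bradykinesia with rigidity, other causes excluded, plus unilateral onset, persistent asymmetry and a sustained levodopa response would satisfy the sketch, while bradykinesia alone would not.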
Management Main article: Management of Parkinson's disease
There is no cure for Parkinson's disease, but medications, surgery and multidisciplinary management can provide relief from the symptoms. The main families of drugs useful for treating motor symptoms are levodopa (usually combined with a dopa decarboxylase inhibitor or COMT inhibitor), dopamine agonists and MAO-B inhibitors.[33] The stage of the disease determines which group is most useful. Two stages are usually distinguished: an initial stage in which the individual with PD has already developed some disability requiring pharmacological treatment, and a second stage in which an individual develops motor complications related to levodopa usage.[33] Treatment in the initial stage aims for an optimal
tradeoff between good symptom control and side-effects resulting from enhancement of dopaminergic function. The start of levodopa (or L-DOPA) treatment may be delayed by using other medications such as MAO-B inhibitors and dopamine agonists, in the hope of delaying the onset of dyskinesias.[33] In the second stage the aim is to reduce symptoms while controlling fluctuations of the response to medication. Sudden withdrawals from medication or overuse have to be managed.[33] When medications are not enough to control symptoms, surgery and deep brain stimulation can be of use.[34] In the final stages of the disease, palliative care is provided to enhance quality of life.[35]
Levodopa
Levodopa has been the most widely used treatment for over 30 years.[33] L-DOPA is converted into dopamine in the dopaminergic neurons by dopa decarboxylase.[33] Since motor symptoms are produced by a lack of dopamine in the substantia nigra, the administration of L-DOPA temporarily diminishes the motor symptoms.[33] Only 5–10% of L-DOPA crosses the blood–brain barrier. The remainder is often metabolized to dopamine elsewhere, causing a variety of side effects including nausea, dyskinesias and joint stiffness.[33] Carbidopa and benserazide are peripheral dopa decarboxylase inhibitors,[33] which help to prevent the metabolism of L-DOPA before it reaches the dopaminergic neurons, therefore reducing side effects and increasing bioavailability. They are generally given as combination preparations with levodopa.[33] Existing preparations are carbidopa/levodopa (co-careldopa) and benserazide/levodopa (co-beneldopa). Levodopa has been related to dopamine dysregulation syndrome, which is a compulsive overuse of the medication, and punding.[17] There are controlled release versions of levodopa in the form of intravenous and intestinal infusions that spread out the effect of the medication.
These slow-release levodopa preparations have not shown better control of motor symptoms or motor complications when compared to immediate release preparations.[33][36] Tolcapone inhibits the COMT enzyme, which degrades dopamine, thereby prolonging the effects of levodopa.[33] It has been used to complement levodopa; however, its usefulness is limited by possible side effects such as liver damage.[33] A similarly effective drug, entacapone, has not been shown to cause significant alterations of liver function.[33] Licensed preparations of entacapone contain entacapone alone or in combination with carbidopa and levodopa.[33] Levodopa preparations lead in the long term to the development of motor complications characterized by involuntary movements called dyskinesias and fluctuations in the response to medication.[33] When this occurs, a person with PD can change from phases with good response to medication and few symptoms ("on" state) to phases with no response to medication and significant motor symptoms ("off" state).[33] For this reason, levodopa doses are kept as low as possible while maintaining functionality.[33] Delaying the initiation of therapy with levodopa by using alternatives (dopamine agonists and MAO-B inhibitors) is common practice.[33] A former strategy to reduce motor complications was to withdraw L-DOPA medication for some time. This is discouraged now, since it can bring dangerous side effects such as neuroleptic malignant syndrome.[33] Most people with PD will eventually need levodopa and later develop motor side effects.[33]
Dopamine agonists
Several dopamine agonists that bind to dopaminergic post-synaptic receptors in the brain have similar effects to levodopa.[33] These were initially used for individuals experiencing on-off fluctuations and dyskinesias as a complementary therapy to levodopa; they are now mainly used on their own as an initial therapy for motor symptoms with the aim of delaying motor complications.[33][37] When used in late PD they are useful at reducing the off periods.[33] Dopamine agonists include bromocriptine, pergolide, pramipexole, ropinirole, piribedil, cabergoline, apomorphine and lisuride. Dopamine agonists produce significant, although usually mild, side effects including drowsiness, hallucinations, insomnia, nausea and constipation.[33] Sometimes side effects appear even at a minimal clinically effective dose, leading the physician to search for a different drug.[33] Compared with levodopa, dopamine agonists may delay motor complications of medication use but are less effective at controlling symptoms.[33] Nevertheless, they are usually effective enough to manage symptoms in the initial years.[2] They tend to be more expensive than levodopa.[2] Dyskinesias due to dopamine agonists are rare in younger people who have PD but, along with other side effects, become more common with older age at onset.[2] Thus dopamine agonists are the preferred initial treatment for earlier onset, as opposed to levodopa in later onset.[2] Agonists have been related to impulse control disorders (such as compulsive sexual activity and eating, and pathological gambling and shopping) even more strongly than levodopa.[17] Apomorphine, a non-orally administered dopamine agonist, may be used to reduce off periods and dyskinesia in late PD.[33] It is administered by intermittent injections or continuous subcutaneous infusions.[33] Since secondary effects such as confusion and hallucinations are common, individuals receiving apomorphine treatment should be closely monitored.[33] Two dopamine agonists that are administered
through skin patches (lisuride and rotigotine) have recently been found to be useful for patients in the initial stages, and preliminary positive results have been published on the control of off states in patients in the advanced stage.[36]
MAO-B inhibitors
MAO-B inhibitors (selegiline and rasagiline) increase the level of dopamine in the basal ganglia by blocking its metabolism. They inhibit monoamine oxidase-B (MAO-B), which breaks down dopamine secreted by the dopaminergic neurons. The reduction in MAO-B activity results in increased L-DOPA in the striatum.[33] Like dopamine agonists, MAO-B inhibitors used as monotherapy improve motor symptoms and delay the need for levodopa in early disease, but produce more adverse effects and are less effective than levodopa. There are few studies of their effectiveness in the advanced stage, although results suggest that they are useful to reduce fluctuations between on and off periods.[33] An initial study indicated that selegiline in combination with levodopa increased the risk of death, but this was later disproven.[33]
Other drugs
Other drugs such as amantadine and anticholinergics may be useful as treatment of motor symptoms. However, the evidence supporting them lacks quality, so they are not first choice
treatments.[33] In addition to motor symptoms, PD is accompanied by a diverse range of symptoms. A number of drugs have been used to treat some of these problems.[38] Examples are the use of clozapine for psychosis, cholinesterase inhibitors for dementia, and modafinil for daytime sleepiness.[38][39] A 2010 meta-analysis found that non-steroidal anti-inflammatory drugs (apart from acetaminophen and aspirin) have been associated with at least a 15 percent (higher in long-term and regular users) reduction in the incidence of Parkinson's disease.[40]
Surgery
Placement of an electrode into the brain. The head is stabilised in a frame for stereotactic surgery.
Treating motor symptoms with surgery was once a common practice, but since the discovery of levodopa the number of operations has declined.[41] Studies in the past few decades have led to great improvements in surgical techniques, so that surgery is again being used in people with advanced PD for whom drug therapy is no longer sufficient.[41] Surgery for PD can be divided into two main groups: lesional and deep brain stimulation (DBS). Target areas for DBS or lesions include the thalamus, the globus pallidus or the subthalamic nucleus.[41] Deep brain stimulation (DBS) is the most commonly used surgical treatment. It involves the implantation of a medical device called a brain pacemaker, which sends electrical impulses to specific parts of the brain. DBS is recommended for people who have PD who suffer from motor fluctuations and tremor inadequately controlled by medication, or for those who are intolerant to medication, as long as they do not have severe neuropsychiatric problems.[34] Other, less common, surgical therapies involve intentional formation of lesions to suppress overactivity of specific subcortical areas. For example, pallidotomy involves surgical destruction of the globus pallidus to control dyskinesia.[41]
Rehabilitation Further information: Rehabilitation in Parkinson's disease
There is some evidence that speech or mobility problems can improve with rehabilitation, although studies are scarce and of low quality.[42][43] Regular physical exercise with or without physiotherapy can be beneficial to maintain and improve mobility, flexibility, strength, gait speed, and quality of life.[43] However, when an exercise program is performed under the supervision of a physiotherapist, there are more improvements in motor symptoms, mental and emotional functions, daily living activities, and quality of life compared to a self-supervised exercise program at home.[44] In terms of improving flexibility and range of motion for patients experiencing rigidity, generalized relaxation techniques such as gentle rocking have been found to decrease excessive muscle tension. Other effective techniques to promote relaxation include slow rotational movements of the extremities and trunk, rhythmic initiation, diaphragmatic breathing, and meditation techniques.[45] As for gait and addressing the challenges associated with the disease, such as hypokinesia (slowness of movement), shuffling and decreased arm swing, physiotherapists have a variety of strategies to improve functional mobility and safety. Areas of interest with respect to gait during rehabilitation programs focus on, but are not limited to, improving gait speed, base of support, stride length, and trunk and arm swing movement. Strategies include utilizing assistive equipment (pole walking and treadmill walking), verbal cueing (manual, visual and auditory), exercises (marching and PNF patterns) and altering environments (surfaces, inputs, open vs. closed).[46] Strengthening exercises have shown improvements in strength and motor function for patients with primary muscular weakness and weakness related to inactivity with mild to moderate Parkinson's disease. However, reports show a significant interaction between strength and the time the medication was taken.
Therefore, it is recommended that patients perform exercises 45 minutes to one hour after medications, when they are at their best.[47] Also, due to the forward flexed posture and respiratory dysfunctions in advanced Parkinson's disease, deep diaphragmatic breathing exercises are beneficial in improving chest wall mobility and vital capacity.[48] Exercise may improve constipation.[20] One of the most widely practiced treatments for speech disorders associated with Parkinson's disease is the Lee Silverman voice treatment (LSVT).[42][49] Speech therapy and specifically LSVT may improve speech.[42] Occupational therapy (OT) aims to promote health and quality of life by helping people with the disease to participate in as many of their daily living activities as possible.[42] There have been few studies on the effectiveness of OT and their quality is poor, although there is some indication that it may improve motor skills and quality of life for the duration of the therapy.[42][50]
Palliative care
Palliative care is often required in the final stages of the disease when all other treatment strategies have become ineffective. The aim of palliative care is to maximize the quality of life for the person with the disease and those surrounding him or her. Some central issues of palliative care are: care in the community while adequate care can be given there, reducing or withdrawing drug intake to reduce drug side effects, preventing pressure ulcers by management of pressure areas of inactive patients, and facilitating end-of-life decisions for the patient as well as involved friends and relatives.[35]
Other treatments
Muscles and nerves that control the digestive process may be affected by PD, resulting in constipation and gastroparesis (food remaining in the stomach for a longer period of time than normal).[20] A balanced diet, based on periodical nutritional assessments, is recommended and should be designed to avoid weight loss or gain and minimize consequences of gastrointestinal dysfunction.[20] As the disease advances, swallowing difficulties (dysphagia) may appear. In such cases it may be helpful to use thickening agents for liquid intake and an upright posture when eating, both measures reducing the risk of choking. Gastrostomy to deliver food directly into the stomach is possible in severe cases.[20] Levodopa and proteins use the same transportation system in the intestine and the blood–brain barrier, thereby competing for access.[20] When they are taken together, this results in a reduced effectiveness of the drug.[20] Therefore, when levodopa is introduced, excessive protein consumption is discouraged and a well-balanced Mediterranean diet is recommended. In advanced stages, additional intake of low-protein products such as bread or pasta is recommended for similar reasons.[20] To minimize interaction with proteins, levodopa should be taken 30 minutes before meals.[20] At the same time, regimens for PD restrict proteins during breakfast and lunch, allowing protein intake in the evening.[20] Repetitive transcranial magnetic stimulation (rTMS) temporarily improves levodopa-induced dyskinesias.[51] Its usefulness in PD is an open research topic,[52] although recent studies have shown no effect by rTMS.[53] Several nutrients have been proposed as possible treatments; however, there is no evidence that vitamins or food additives improve symptoms.[54] There is no evidence to substantiate that acupuncture and practice of Qigong, or T'ai chi, have any effect on the course of the disease or symptoms.
Further research on the viability of T'ai chi for balance or motor skills is necessary.[55][56][57] Fava beans and velvet beans are natural sources of levodopa and are eaten by many people with PD. While they have shown some effectiveness in clinical trials,[58] their intake is not free of risks. Life-threatening adverse reactions have been described, such as the neuroleptic malignant syndrome.[59][60]
Prognosis See also: Hoehn and Yahr scale and Unified Parkinson's Disease Rating Scale
Global burden of Parkinson's disease, measured in disability-adjusted life years per 100,000 inhabitants in 2004
Legend (DALYs per 100,000 inhabitants): no data; <5; 5–12.5; 12.5–20; 20–27.5; 27.5–35; 35–42.5; 42.5–50; 50–57.5; 57.5–65; 65–72.5; 72.5–80; >80
PD invariably progresses with time. The Hoehn and Yahr scale, which defines five stages of progression, is commonly used to estimate the progress of the disease. Motor symptoms, if not treated, advance aggressively in the early stages of the disease and more slowly later. Untreated, individuals are expected to lose independent ambulation after an average of eight years and be bedridden after ten years.[61] However, it is uncommon to find untreated people nowadays. Medication has improved the prognosis of motor symptoms, while at the same time it is a new source of disability because of the undesired effects of levodopa after years of use.[61] In people taking levodopa, the progression time of symptoms to a stage of high dependency on caregivers may be over 15 years.[61] However, it is hard to predict what course the disease will take for a given individual.[61] Age is the best predictor of disease progression.[27] The rate of motor decline is greater in those with less impairment at the time of diagnosis, while cognitive impairment is more frequent in those who are over 70 years of age at symptom onset.[27] Since current therapies improve motor symptoms, disability at present is mainly related to nonmotor features of the disease.[27] Nevertheless, the relationship between disease progression and disability is not linear.
Disability is initially related to motor symptoms.[61] As the disease advances, disability is more related to motor symptoms that do not respond adequately to medication, such as swallowing/speech difficulties, and gait/balance problems; and also to motor complications, which appear in up to 50% of individuals after 5 years of levodopa usage.[61] Finally, after ten years most people with the disease have autonomic disturbances, sleep problems, mood alterations and cognitive decline.[61] All of these symptoms, especially cognitive decline, greatly increase disability.[27][61] The life expectancy of people with PD is reduced.[61] Mortality ratios are around twice those of unaffected people.[61] Cognitive decline and dementia, old age at onset, a more advanced disease state and presence of swallowing problems are all mortality risk factors. On the other hand, a disease pattern mainly characterized by tremor, as opposed to rigidity, predicts improved survival.[61] Death from aspiration pneumonia is twice as common in individuals with PD as in the healthy population.[61]
Epidemiology
PD is the second most common neurodegenerative disorder after Alzheimer's disease.[62] The prevalence (proportion in a population at a given time) of PD is about 0.3% of the whole population in industrialized countries. PD is more common in the elderly, and prevalence rises from 1% in those over 60 years of age to 4% of the population over 80.[62] The mean age of onset is around 60 years, although 5–10% of cases, classified as young onset, begin between the ages of 20 and 50.[2] PD may be less prevalent in those of African and Asian ancestry, although this finding is disputed.[62] Some studies have proposed that it is more common in men than women, but others failed to detect any differences between the two sexes.[62] The incidence of PD is between 8 and 18 per 100,000 person–years.[62] Many risk factors and protective factors have been proposed, sometimes in relation to theories concerning possible mechanisms of the disease; however, none has been conclusively related to PD by empirical evidence. When epidemiological studies have been carried out in order to test the relationship between a given factor and PD, they have often been flawed and their results have in some cases been contradictory.[62] The most frequently replicated relationships are an increased risk of PD in those exposed to pesticides, and a reduced risk in smokers.[62]
Risk factors
U.S. Army helicopter spraying Agent Orange over Vietnamese agricultural land during the Vietnam war. Agent Orange has been associated with PD.
Injections of the synthetic neurotoxin MPTP produce a range of symptoms similar to those of PD as well as selective damage to the dopaminergic neurons in the substantia nigra. This observation has led to theorizing that exposure to some environmental toxins may increase the risk of having PD.[62] Exposure to toxins that have been consistently related to the disease can double the risk of PD, and include certain pesticides, such as rotenone or paraquat, and herbicides, such as Agent Orange.[62][63][64] Indirect measures of exposure, such as living in rural environments, have been found to increase the risk of PD.[64] Heavy metals exposure has been proposed to be a risk factor, through possible accumulation in the substantia nigra; however, studies on the issue have been inconclusive.[62]
Protective factors
Caffeine consumption appears to protect against PD.[65] "Prospective epidemiologic studies performed in large cohorts of men (total: 374,003 subjects) agree in which the risk of suffering Parkinson's disease diminishes progressively as the consumption of coffee and other caffeinated beverages increases."[66] Although tobacco smoking is devastating for longevity and quality of life, it has been related to a reduced risk of having PD. Smokers' risk of having PD may be reduced to as little as a third of that of non-smokers.[62] The basis for this effect is not known, but possibilities include an effect of nicotine as a dopamine stimulant.[62][67] Tobacco smoke contains compounds that act as MAO inhibitors that also might contribute to this effect.[68] Antioxidants, such as vitamins C and E, have been proposed to protect against the disease, but results of studies have been contradictory and no positive effect has been proven.[62] The results regarding fat and fatty acids have been contradictory, with various studies reporting protective effects, risk-enhancing effects or no effects.[62] Finally, there have been preliminary indications of a possible protective role of estrogens and anti-inflammatory drugs.[62]
History Main article: History of Parkinson's disease
An 1893 photograph of Jean-Martin Charcot, who made important contributions to the understanding of the disease and proposed its current name honoring James Parkinson
Several early sources, including an Egyptian papyrus, an Ayurvedic medical treatise, the Bible, and Galen's writings, describe symptoms resembling those of PD.[69] After Galen there are no references unambiguously related to PD until the 17th century.[69] In the 17th and 18th centuries, several authors wrote about elements of the disease, including Sylvius, Gaubius, Hunter and Chomel.[69][70][71]
In 1817 an English doctor, James Parkinson, published his essay reporting six cases of paralysis agitans.[72] An Essay on the Shaking Palsy described the characteristic resting tremor, abnormal posture and gait, paralysis and diminished muscle strength, and the way that the disease progresses over time.[72][73] Early neurologists who made further additions to the knowledge of the disease include Trousseau, Gowers, Kinnier Wilson and Erb, and most notably Jean-Martin Charcot, whose studies between 1868 and 1881 were a landmark in the understanding of the disease.[72] Among other advances, he made the distinction between rigidity, weakness and bradykinesia.[72] He also championed the renaming of the disease in honor of James Parkinson.[72] In 1912 Frederic Lewy described microscopic particles in affected brains, later named "Lewy bodies".[72] In 1919 Konstantin Tretiakoff reported that the substantia nigra was the main cerebral structure affected, but this finding was not widely accepted until it was confirmed by further studies published by Rolf Hassler in 1938.[72] The underlying biochemical changes in the brain were identified in the 1950s, due largely to the work of Arvid Carlsson on the neurotransmitter dopamine and its role in PD.[74] In 1997, alpha-synuclein was found to be the main component of Lewy bodies.[28] Anticholinergics and surgery (lesioning of the corticospinal pathway or some of the basal ganglia structures) were the only treatments until the arrival of levodopa, which reduced their use dramatically.[70][75] Levodopa was first synthesized in 1911 by Casimir Funk, but it received little attention until the mid-20th century.[74] It entered clinical practice in 1967 and brought about a revolution in the management of PD.[74][76] By the late 1980s deep brain stimulation emerged as a possible treatment.[77]
Society and culture
Cost
Muhammad Ali at the World Economic Forum in Davos, at the age of 64. He has shown signs of parkinsonism since the age of 38.
The costs of PD to society are high, but precise calculations are difficult due to methodological issues in research and differences between countries.[78] The annual cost in the UK is estimated to be between 449 million and 3.3 billion pounds, while the cost per patient per year in the U.S. is probably around $10,000 and the total burden around 23 billion dollars.[78] The largest share of direct cost comes from inpatient care and nursing homes, while the share coming from medication is substantially lower.[78] Indirect costs are high, due to reduced productivity and the burden on caregivers.[78] In addition to economic costs, PD reduces quality of life of those with the disease and their caregivers.[78]
Advocacy
April 11, the birthday of James Parkinson, has been designated as Parkinson's disease day.[72][79] A red tulip was chosen by international organizations as the symbol of the disease in 2005: it represents the James Parkinson Tulip cultivar, registered in 1981 by a Dutch horticulturalist.[79] Advocacy organizations include the National Parkinson Foundation, which has provided more than $155 million in care, research and support services since 1982;[80] the Parkinson's Disease Foundation, which has provided more than $96 million for research and $40 million for education and advocacy programs since its founding in 1957 by William Black;[81][82] the
American Parkinson Disease Association, founded in 1961;[83] and the European Parkinson's Disease Association, founded in 1992.[84] Notable cases Main article: List of people diagnosed with Parkinson's disease
Actor Michael J. Fox has PD and has greatly increased the public awareness of the disease. Fox was diagnosed in 1991 when he was 30, but kept his condition secret from the public for seven years.[85] He has written two autobiographies in which his fight against the disease plays a major role,[86] and appeared before the United States Congress without medication to illustrate the effects of the disease.[86] The Michael J. Fox Foundation aims to develop a cure for Parkinson's disease. In recent years it has been the major Parkinson's fundraiser in the U.S., providing $140 million in funding between 2001 and 2008.[86] Fox was named one of the 100 people "whose power, talent or moral example is transforming the world" in 2007 by Time magazine,[85] and he received an honorary doctorate in medicine from Karolinska Institutet for his contributions to research in Parkinson's disease.[87] A foundation that supports Parkinson's research, focusing on quality of life for people with Parkinson's, was founded in 2004 by professional cyclist and Olympic medalist Davis Phinney, who was diagnosed with young onset Parkinson's at age 40.[88] The Davis Phinney Foundation's mission is to help people living with Parkinson's disease live well by providing them with information, inspiration and tools.[89][90] Muhammad Ali has been called the "world's most famous Parkinson's patient".[91] He was 42 at diagnosis, although he showed signs of Parkinson's when he was 38.[92] Whether he has PD or a parkinsonian syndrome caused by boxing is unresolved.[92][93]
Research See also: Parkinson's disease clinical research
There is little prospect of dramatic new PD treatments in the short term.[94] Currently active research directions include the search for new animal models of the disease and studies of the potential usefulness of gene therapy, stem cell transplants and neuroprotective agents.[27]
Animal models
PD is not known to occur naturally in any species other than humans, although animal models which show some features of the disease are used in research. The appearance of parkinsonian symptoms in a group of drug addicts in the early 1980s who consumed a contaminated batch of the synthetic opiate MPPP led to the discovery of the chemical MPTP as an agent that causes a parkinsonian syndrome in non-human primates as well as in humans.[95] Other predominant toxin-based models employ the insecticide rotenone, the herbicide paraquat and the fungicide maneb.[96] Models based on toxins are most commonly used in primates. Transgenic rodent models that replicate various aspects of PD have been developed.[97]
Gene therapy
Gene therapy involves the use of a non-infectious virus to shuttle a gene into a part of the brain. The gene used leads to the production of an enzyme that helps to manage PD symptoms or protects the brain from further damage.[27][98] In 2010 there were four clinical trials using gene therapy in PD.[27] There have not been important adverse effects in these trials, although the clinical usefulness of gene therapy is still unknown.[27] One of these trials reported positive results in 2011.[99]
Neuroprotective treatments
Several chemical compounds such as GDNF (chemical structure pictured) have been proposed as neuroprotectors in PD, but their effectiveness has not been proven.
Investigations on neuroprotection are at the forefront of PD research. Several molecules have been proposed as potential treatments.[27] However, none of them has been conclusively demonstrated to reduce degeneration.[27] Agents currently under investigation include antiapoptotics (omigapil, CEP-1347), antiglutamatergics, monoamine oxidase inhibitors (selegiline, rasagiline), promitochondrials (coenzyme Q10, creatine), calcium channel blockers (isradipine) and growth factors (GDNF).[27] Preclinical research also targets alpha-synuclein.[94] A vaccine that primes the human immune system to destroy alpha-synuclein, PD01A (developed by the Austrian company Affiris), has entered clinical trials in humans.[100]
Neural transplantation
Since early in the 1980s, fetal, porcine, carotid or retinal tissues have been used in cell transplants, in which dissociated cells are injected into the substantia nigra in the hope that they will incorporate themselves into the brain in a way that replaces the dopamine-producing cells that have been lost.[27] Although there was initial evidence of mesencephalic dopamine-producing cell transplants being beneficial, double-blind trials to date indicate that cell transplants produce no long-term benefit.[27] An additional significant problem was the excess release of dopamine by the transplanted tissue, leading to dystonias.[101] Stem cell transplants are a recent research target, because stem cells are easy to manipulate and stem cells transplanted into the brains of rodents and monkeys have been found to survive and reduce behavioral abnormalities.[27][102] Nevertheless, use of fetal stem cells is controversial.[27] It has been
proposed that effective treatments may be developed in a less controversial way by use of induced pluripotent stem cells taken from adults.[27]