Facial plastic surgery may do more than make you look youthful. It could change, for the better, how people perceive you. The first study of its kind to examine perception after plastic surgery finds that women who have certain procedures are perceived as having greater social skills and as more likeable, attractive and feminine. The study is not superficial: the importance of facial appearance is rooted in evolution, and studies suggest that judging a person based on his or her appearance boils down to survival. The results were published online today in JAMA Facial Plastic Surgery.

“Our animal instinct tells us to avoid those who are ill-willed, and we know from previous research that personality traits are drawn from an individual’s neutral expressions,” says Michael J. Reilly, MD, an assistant professor of otolaryngology-head and neck surgery at Georgetown University School of Medicine who sees patients at MedStar Georgetown University Hospital.

Reilly and his colleagues set out to evaluate and quantify the changes in personality perception that occur after various types of facial rejuvenation surgery, including face lift, upper and lower eye lifts, brow lift, neck lift and/or chin implant. The study involved pre- and postoperative photos of 30 Caucasian women and included survey responses from 170 people. Respondents were asked to rate the attractiveness, femininity and personality traits (extroversion, likeability, social skills, risk-seeking behavior, aggressiveness and trustworthiness) of each picture they reviewed. No reviewer saw both the before and after photos of the same woman, and no one knew whether plastic surgery had been performed.

Post-surgery improvement was detected for four traits: social skills, likeability, attractiveness and femininity. A trend toward greater trustworthiness was also seen, though it was not statistically significant.

“Having a facelift and lower eye lift were the two procedures that appeared to garner more favorable reviews after surgery, with the lower eye lift carrying a little more weight,” Reilly says. He says an earlier psychological study showed that the eyes are highly diagnostic for attractiveness as well as for trustworthiness. “This may explain why the patients who had a lower eye lift were found to be significantly more attractive and feminine, and experienced improved trustworthiness scores,” Reilly says.

Attempts were made to determine whether any identifiable factor yielded less favorable responses, but no single variable appeared to be statistically significant, Reilly explains. However, some patients were rated as more aggressive and risk-taking after surgery. “Some might say that is negative, but others may want that look,” he says.

“The comprehensive evaluation and treatment of the facial rejuvenation patient requires an understanding of the changes in a person’s perceived aura that are likely to occur with surgery, beyond just the traditional measures of age and attractiveness,” Reilly adds. He points out that the study was small and included only white female participants, potentially limiting its application to others.

“It’s reasonable to expect that patients would like to know how each surgical procedure could affect others’ perceptions of their personality traits. As we gain more specific knowledge about what these changes in perception are, we will be able to improve outcomes for our patients,” Reilly concludes.
When Niels Bohr hypothesised his model of the atom, with electrons orbiting the nucleus just as satellites orbit a planet, he was engaging in analogical reasoning. Bohr transferred to atoms the concept of “a body orbiting another”; that is, he transferred a relation between objects to new objects. Analogical reasoning is an extraordinary ability that is unique to the human mind, is not seen in animals (except very rarely in primates), and forms the basis of highly sophisticated human thought. Scientists have wondered about the origin of this cognitive function: for example, is it necessary to have developed linguistic abilities, or are we born already cognitively equipped for this type of abstraction?

According to a new study carried out with the collaboration of the International School for Advanced Studies (SISSA) of Trieste and just published in Child Development, the second hypothesis is probably true: analogical abilities precede language and are already present in infants just a few months old.

“We worked with the same-different relation, which, as the simplest abstract relation, has been a focus of research on analogical thinking,” explains Alissa Ferry, SISSA research fellow and first author of the study. “We investigated two different questions about how humans start to think analogically. First, we asked if language is required to understand abstract relations, or if this skill is independent of language. Second, if this skill is independent of language and humans are born with analogical abilities, do humans also naturally possess some knowledge of relations that they use to start thinking analogically? Or are humans born with analogical skills, and is their understanding of relations built from scratch using only these skills?”
To answer these questions, Ferry worked with prelinguistic infants aged 7 to 9 months, who were trained on same or different pairs of puppets and then tested on their ability to generalize the observed relation to novel pairs of objects.

“Even children of that age are able to identify the ‘abstract’ relation between objects and then recognize it in novel objects, but a single trial in the training phase is not sufficient: they need several trials to understand the relation.” This, according to Ferry, means two things: analogical reasoning is independent of linguistic ability (which it precedes), but we are not born with same-different templates encoded in our brains; we need some experience before we learn the relation.

More in detail: by definition, a prelinguistic child is unable to speak or carry out tasks based on instructions given by an experimenter. So how do neuroscientists understand what happens in the child’s mind? “When we work with very young children we use a special technique based on the fact that, after a child looks at a cue for a while, his attention will drop in a fairly typical fashion,” explains Ferry. Attention is measured by monitoring gaze: if the child’s gaze is fixed on the cue, the child is paying attention; when his gaze starts to wander, he is no longer paying attention. “We know from the literature that when a child becomes habituated to a stimulus and no longer looks at it, presenting him with something new will bring his gaze back to the stimulus. This gives us a clue to understand whether the child is experiencing something different from before.”

In Ferry’s study, the children were trained on pairs of identical (or, in the alternative condition, different) objects. The pairs of same objects were left in view until the child’s attention started to wane.
At that point, the experimenters showed the children two pairs of objects simultaneously: one with two identical puppets and one with different puppets. If the child’s gaze went towards the pair of different objects, then the researchers understood that the child had grasped the sameness relation in the training pair and considered the different pair as “novel”.
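The looking-time logic described above can be sketched in a few lines of code. This is an illustrative sketch only: the function names, the 50%-of-baseline habituation threshold, and the example numbers are common conventions in infant research, not values taken from Ferry's study.

```python
def is_habituated(looking_times, window=3):
    """Habituation is often declared when mean looking time over the last
    `window` trials falls below half the mean of the first `window` trials.
    (The 50% criterion is a common convention, assumed here.)"""
    if len(looking_times) < 2 * window:
        return False
    baseline = sum(looking_times[:window]) / window
    recent = sum(looking_times[-window:]) / window
    return recent < 0.5 * baseline


def novelty_preference(look_novel, look_familiar):
    """Proportion of test-trial looking directed at the novel relation.
    Values above 0.5 suggest the infant discriminated the trained relation."""
    total = look_novel + look_familiar
    return look_novel / total if total else 0.5


# Hypothetical training trials: attention to the 'same' pairs wanes over trials.
training = [12.0, 11.5, 10.8, 7.2, 5.1, 4.0]  # seconds of looking per trial
print(is_habituated(training))  # criterion met, so testing can begin

# At test, longer looking at the 'different' pair suggests the 'same'
# relation was encoded and the different pair is treated as novel.
print(novelty_preference(look_novel=8.0, look_familiar=4.0))
```

Real studies add many safeguards this sketch omits (trial-by-trial coding by blind observers, counterbalanced conditions), but the core inference is the same: habituate, then test for a novelty preference.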
Former National Football League (NFL) players who started playing tackle football before the age of 12 were found to have a higher risk of altered brain development compared to those who started playing at a later age. The study is the first to demonstrate a link between early exposure to repetitive head impacts and structural brain changes later in life.

Led by researchers at Boston University School of Medicine (BUSM) and Brigham and Women’s Hospital (BWH), the study appears online in the Journal of Neurotrauma.

The researchers examined 40 former NFL players between the ages of 40 and 65 who had more than 12 years of organized football participation, with at least two years at the NFL level. Half of the players began tackle football before the age of 12 and half began at age 12 or later. The number of concussions sustained was similar between the two groups. All of these players had experienced at least six months of memory and thinking problems.

“To examine brain development in these players, we used an advanced technique called diffusion tensor imaging (DTI), a type of magnetic resonance imaging that specifically looks at the movement of water molecules along white matter tracts, which are the super-highways within the brain for relaying commands and information,” explained study co-author Inga Koerte, MD, professor of neurobiological research at the University of Munich and visiting professor at BWH, Harvard Medical School.

The results showed that participants who started playing football before age 12 were more likely to have alterations of the white matter tracts of the corpus callosum, the largest structure in the brain connecting the two cerebral hemispheres. According to the researchers, there is growing evidence of a “critical window” of brain development between ages 10 and 12, when the brain may be especially susceptible to injury.
“Therefore, this developmental process may be disrupted by repeated head impacts in childhood, possibly leading to lasting changes in brain structure,” explained lead author Julie Stamm, PhD, currently a post-doctoral fellow at the University of Wisconsin School of Medicine and Public Health, who conducted this study as part of her doctoral dissertation at BUSM with funding from the National Institutes of Health (NIH).

While the study suggests there may be a neurodevelopmental window susceptible to repeated head impacts, such as those experienced playing tackle football, the authors underscore that this is a small study of just 40 individuals, and the results cannot be generalized to people who did not go on to play professional football. “The results of this study do not confirm a cause-and-effect relationship, only that there is an association between younger age of first exposure to tackle football and abnormal brain imaging patterns later in life,” said senior author Martha Shenton, PhD, professor and director of the Psychiatry Neuroimaging Laboratory, Department of Psychiatry, BWH, Harvard Medical School.

This study was conducted as part of the Diagnosing and Evaluating Chronic Traumatic Encephalopathy using Clinical Tests (DETECT) project, funded by NIH, under the leadership of Robert Stern, PhD, professor and director of the Boston University Alzheimer’s Disease and CTE Center’s Clinical Core. Stern, the corresponding author on this publication, cautions that “these abnormal neuroimaging findings are not necessarily indicative of chronic traumatic encephalopathy, or CTE,” referring to the neurodegenerative disease that is associated with a history of repetitive head impacts and has been diagnosed post-mortem in numerous former football players.
“While this study adds to the growing concern that exposing children to repetitive hits to the head in tackle football may have long-lasting consequences, there are likely other factors that contribute to overall risk for CTE,” added Stern.

Previous research by the same investigators showed a difference in former NFL players’ cognitive function depending on whether they started playing before or after age 12, with the former group at higher risk of developing mood, behavioral or cognitive impairment later in life. In both studies, the authors caution that more investigation into the later-life consequences of childhood exposure to repetitive head impacts is necessary, and that the ultimate goal is to increase safety in youth sports, allowing young athletes to take advantage of the tremendous benefits of sports participation without increased risk of long-term negative health consequences.
The potential benefits of dietary cocoa extract and/or its final product in the form of chocolate have been extensively investigated with regard to several aspects of human health. Cocoa extracts contain polyphenols, micronutrients that have many health benefits, including reducing age-related cognitive dysfunction and promoting healthy brain aging.

Dr. Giulio Maria Pasinetti, MD, PhD, Saunders Family Chair and Professor of Neurology at the Icahn School of Medicine at Mount Sinai and Director of Biomedical Training at the J.J. Peters Bronx VA Medical Center, is the lead author of a recent paper entitled “Recommendations for development of new standardized forms of cocoa breeds and cocoa extract processing for the prevention of Alzheimer’s disease,” to be published in the Journal of Alzheimer’s Disease. This research suggests that “there is strong scientific evidence supporting the growing interest in developing cocoa extract, and potentially certain dietary chocolate preparations, as a natural source to maintain and promote brain health, and in particular to prevent age-related neurodegenerative disorders such as Alzheimer’s disease, which is the most common form of age-related dementia, affecting an estimated 44 million people worldwide.”

Previous studies from Dr. Pasinetti’s laboratory and others suggest that certain cocoa extract preparations may prevent or possibly delay Alzheimer’s disease in animal experimental models of the disease, in part by inhibiting the generation, and promoting the clearance, of toxic proteins in the brain, including β-amyloid (Aβ) and abnormal tau aggregates, through mechanisms mediated by polyphenols. Most importantly, the role of cocoa polyphenols in preventing abnormal accumulation of toxic protein aggregates in the brain would play a pivotal role in preventing the loss of synapses that are critical for functional connections among neurons.
Recent clinical studies appear to confirm the potential beneficial role of certain cocoa extracts in delaying cognitive aging. The benefits of cocoa polyphenols in preventing synapse loss and, therefore, in preserving or restoring synaptic function may provide a viable strategy for preserving cognitive function and, thereby, protecting against the onset and progression of Alzheimer’s disease.

In spite of the promise of cocoa polyphenols for treating and/or preventing Alzheimer’s disease, Dr. Pasinetti argues in his new publication that multidisciplinary collaborative efforts involving cocoa producers, wholesalers, and the biomedical community are needed if the development of cocoa extract for health benefits is to succeed. For example, there are still major issues relating to the diminishing global supply of cocoa and the lack of consistency and reproducibility in cocoa extract processing, which should be carefully addressed. Changes in growing conditions, climate, and cocoa plant diseases are decreasing the supply of cocoa. To address this, new breeds of cocoa, engineered to be fruitful, more resistant to disease, and more flavorful, are currently being investigated. Furthermore, little is known about how cocoa processing may influence the biological effect of cocoa extracts. Evidence suggests that certain procedures used in cocoa processing can significantly influence its polyphenol content, ultimately influencing its biological activity. Notably, two of the most common processing techniques for the chocolate we consume have been reported to result in the loss of as much as 90% of the polyphenols in cocoa.

Dr. Pasinetti notes that ongoing interdisciplinary research will provide an unprecedented opportunity to strengthen our understanding of the beneficial roles of cocoa polyphenols and to improve cocoa development and processing in order to promote healthy brain aging and possibly prevent Alzheimer’s disease.
The neural architecture in the auditory cortex, the part of the brain that processes sound, of profoundly deaf and hearing people is virtually identical, a new study has found. The study raises a host of new questions about the role of experience in processing sensory information, and could point the way toward potential new avenues for intervention in deafness. The study is described in a June 18 paper published in Scientific Reports.

The paper was authored by Ella Striem-Amit, a post-doctoral researcher in Alfonso Caramazza’s Cognitive Neuropsychology Laboratory at Harvard; Mario Belledonne from Harvard; Jorge Almeida from the University of Coimbra; and Quanjing Chen, Yuxing Fang, Zaizhu Han and Yanchao Bi from Beijing Normal University.

“One reason this is interesting is because we don’t know what causes the brain to organize the way it does,” said Striem-Amit, the lead author of the study. “How important is each person’s experience for their brain development? In audition, a lot is known about how it works in hearing people, and in animals, but we don’t know whether the same organization is retained in congenitally deaf people.”

Those similarities between deaf and hearing brain architecture, Striem-Amit said, suggest that the organization of the auditory cortex doesn’t critically depend on experience, but is likely based on innate factors.
So in a person who is born deaf, the brain is still organized in the same manner. But that’s not to suggest experience plays no role in processing sensory information. Evidence from other studies has shown that cochlear implants are far more successful when implanted in toddlers and young children, Striem-Amit said, suggesting that without sensory input during key periods of brain plasticity in early life, the brain may not process information appropriately.

To understand the organization of the auditory cortex, Striem-Amit and her collaborators first obtained what are called “tonotopic” maps, showing how the auditory cortex responds to various tones. To do that, they placed volunteers in an MRI scanner, played different tones, some high frequency and some low frequency, and tracked which regions in the auditory cortex were activated. They also asked groups of hearing and deaf subjects to simply relax in the scanner, and tracked their brain activity over several minutes. This allows mapping of which areas are functionally connected, essentially those that show similar, correlated patterns of activation. They then used the areas showing frequency preference in the tonotopic maps to study the functional connectivity profiles related to tone preference in the hearing and congenitally deaf groups, and found them to be virtually identical.

“There is a balance between change and typical organization in the auditory cortex of the deaf,” said the senior researcher, Prof. Yanchao Bi, “but even when the auditory cortex shows plasticity to processing vision, its typical auditory organization can still be found.”

The study also raises a host of questions that have yet to be answered. “We know the architecture is in place; does it serve a function?” Striem-Amit said. “We know, for example, that the auditory cortex of the deaf is also active when they view sign language and other visual information. The question is: What do these regions do in the deaf?
Are they actually processing something similar to what they process in hearing people, only through vision?”

In addition to studies of deaf animals, the researchers’ previous studies of people born blind suggest clues to the puzzle. In the blind, the topographical architecture of the visual cortex (the visual parallel of the tonotopic map, called “retinotopic”) is like that in the sighted. Importantly, beyond topographic organization, regions of the visual cortex that show specialization in processing certain categories of objects in sighted individuals show the same specialization in the congenitally blind when stimulated through other senses. For example, blind people reading Braille, or letters delivered through sound, process that information in the same area used by sighted subjects to process visual letters.

“The principle that much of the brain’s organization develops largely regardless of experience is established in blindness,” Striem-Amit said. “Perhaps the same principle applies also to deafness.”
MIT neuroscientists have discovered that brain cells called glial cells play a critical role in controlling appetite and feeding behavior. In a study of mice, the researchers found that activating these cells stimulates overeating, and that when the cells are suppressed, appetite is also suppressed.

The findings could offer scientists a new target for developing drugs against obesity and other appetite-related disorders, the researchers say. The study is also the latest in recent years to implicate glial cells in important brain functions. Until about 10 years ago, glial cells were believed to play more of a supporting role for neurons.

“In the last few years, abnormal glial cell activities have been strongly implicated in neurodegenerative disorders. There is more and more evidence pointing to the importance of glial cells in modulating neuronal function and in mediating brain disorders,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience. Feng is also a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute.

Feng is one of the senior authors of the study, which appears in the Oct. 18 edition of the journal eLife. The other senior author is Weiping Han, head of the Laboratory of Metabolic Medicine at the Singapore Bioimaging Consortium in Singapore. Naiyan Chen, a postdoc at the Singapore Bioimaging Consortium and the McGovern Institute, is the lead author.

Turning on appetite

It has long been known that the hypothalamus, an almond-sized structure located deep within the brain, controls appetite as well as energy expenditure, body temperature, and circadian rhythms, including sleep cycles.
While performing studies on glial cells in other parts of the brain, Chen noticed that the hypothalamus also appeared to have a lot of glial cell activity. “I was very curious at that point what glial cells would be doing in the hypothalamus, since glial cells have been shown in other brain areas to have an influence on the regulation of neuronal function,” she says.

Within the hypothalamus, scientists have identified two key groups of neurons that regulate appetite, known as AgRP neurons and POMC neurons. AgRP neurons stimulate feeding, while POMC neurons suppress appetite.

Until recently it has been difficult to study the role of glial cells in controlling appetite or any other brain function, because scientists haven’t developed many techniques for silencing or stimulating these cells, as they have for neurons. Glial cells, which make up about half of the cells in the brain, have many supporting roles, including cushioning neurons and helping them form connections with one another.

In this study, the research team used a new technique developed at the University of North Carolina to study a type of glial cell known as an astrocyte. Using this strategy, researchers can engineer specific cells to produce a surface receptor that binds to a chemical compound known as CNO, a derivative of clozapine. When CNO is given, it activates the glial cells.

The MIT team found that turning on astrocyte activity with just a single dose of CNO had a significant effect on feeding behavior. “When we gave the compound that specifically activated the receptors, we saw a robust increase in feeding,” Chen says.
“Mice are not known to eat very much in the daytime, but when we gave drugs to these animals that express a particular receptor, they were eating a lot.”

The researchers also found that in the short term (three days), the mice did not gain extra weight, even though they were eating more. “This raises the possibility that glial cells may also be modulating neurons that control energy expenditure, to compensate for the increased food intake,” Chen says. “They might have multiple neuronal partners and modulate multiple energy homeostasis functions all at the same time.”

When the researchers silenced activity in the astrocytes, they found that the mice ate less than normal.

Unknown interactions

Still unknown is how the astrocytes exert their effects on neurons. Some recent studies have suggested that glial cells can secrete chemical messengers such as glutamate and ATP; if so, these “gliotransmitters” could influence neuron activity. Another hypothesis is that instead of secreting chemicals, astrocytes exert their effects by controlling the uptake of neurotransmitters from the space surrounding neurons, thereby affecting neuron activity indirectly.

Feng now plans to develop new research tools that could help scientists learn more about astrocyte-neuron interactions and how astrocytes contribute to the modulation of appetite and feeding. He also hopes to learn whether there are different types of astrocytes that may contribute differently to feeding behavior, especially abnormal behavior. “We really know very little about how astrocytes contribute to the modulation of appetite, eating, and metabolism,” he says. “In the future, dissecting out these functional differences will be critical for our understanding of these disorders.”
Gay men and lesbian women face discrimination when seeking leadership positions due to the sound of their voice, a new study in the Archives of Sexual Behavior has found. The study, carried out by researchers at the University of Surrey, also found that people thought gay men should be paid less than their heterosexual counterparts.

In this study, researchers presented a heterosexual sample group with voice samples and pictures of gay and heterosexual speakers, devoid of any background features and other identifying characteristics. Participants were not informed of the sexual orientation of each person but were left to freely infer it from the voice or face of the individual. The sample group were asked to form impressions of applicants for a fictitious CEO position and to evaluate the employability of candidates by responding to five statements (rated on a scale of one to five) and reporting the monthly salary they considered adequate. The process was then repeated with lesbian candidates.

Researchers discovered that participants perceived men and women whom they considered to be gay or lesbian as inadequate for a leadership position. For male candidates, auditory rather than facial features affected whether they were deemed suitable for the role. Researchers discovered that having a heterosexual-sounding rather than a ‘gay-sounding’ voice created the impression that the speaker had typically masculine traits, which in turn increased their perceived suitability for the role and the chance of receiving a higher salary.
Lesbian candidates were associated with a lack of femininity, identified as gender non-conforming, and received less positive evaluations than their heterosexual counterparts.

Dr Fabio Fasoli said: “These results demonstrate that the mere sound of a voice is sufficient to trigger stereotyping, denying gay- and lesbian-sounding speakers the qualities that are considered typical of their gender. It is revealing that, despite all the work to lessen discrimination against the LGBT community, people subconsciously typecast an individual before getting to know them. This study highlights that it can be a real problem in the workplace and for people’s career prospects.”

In another study, participants were asked to listen to the voices of two different speakers who each pronounced a single sentence of neutral content, and were then asked to evaluate the speakers’ likely personality traits and personal interests (i.e. sports and fields of study). The traits and interests were manipulated to reflect characteristics and interests perceived as “typically masculine” (e.g., football) or “typically feminine” (e.g., dance). In addition, participants were asked which of the speakers they would choose as an acquaintance. As in the first study, this was repeated with lesbian candidates.

Researchers discovered that participants attributed more feminine traits to the gay than to the heterosexual speakers, and that lesbian speakers were more likely to be associated with masculine than with feminine characteristics.
Interestingly, this happened without any mention of the sexual orientation of the speakers, demonstrating that vocal cues alone can lead to unfair stereotyping. When asked which of the speakers participants would choose as an acquaintance for an interaction, researchers found that male participants were more likely to avoid male gay-sounding speakers, suggesting a subtle impact of voice on the social exclusion of gay individuals.

Dr Fasoli added: “What is most concerning about this study is the subconscious behavioural intention of participants, where heterosexual male participants avoided choosing a gay male as an acquaintance. This study demonstrates that unacceptable levels of discrimination, be they subconscious or conscious, still exist in our society, and we need to do more to tackle the discrimination faced by the LGBT community.”
Racial slurs such as the n-word are sometimes adopted by the group they were once meant to insult, a phenomenon known as reappropriation. But what happens when a reappropriated slur is used by a Black person toward a White person? New research published in the Journal of Language and Social Psychology provides insight into the intergroup uses of reappropriated slurs.

Previous studies have investigated the use of racial slurs by White individuals toward Black individuals and the reclaiming of disparaging words among racial minorities. But no research had yet examined the reappropriated use of racial slurs by Black individuals toward White targets.

“I am fascinated by understanding why people continue to exhibit extreme forms of prejudice despite society typically discouraging their use. Within this understanding, I am most interested in how to combat the negative effects of racial slurs, racial humor, and racially disparaging language more broadly,” said study author Conor O’Dea, a visiting assistant professor at Skidmore College.

“With regard to existing literature, some of the best methods of prejudice reduction involve confrontation. While I do believe that prejudice should be confronted, I have often wondered whether there are other ways we can reduce discrimination and whether we can harness the negative power of racial language and humor for good rather than evil.

“One possible way this power has been harnessed is the reclamation of slurs by targeted groups. Instead of using the terms derogatively, groups can self-identify, use the term affiliatively, and potentially subvert the derogative meaning of the slur. This subversion of prejudice is exciting, and I am interested in further examining ways that we can fight against racial and other injustices.”

In the study, 324 White participants read a brief story about a Black person using a slur to refer to a White person during a basketball game. In one version of the story, the two people were described as friends; another version described them as strangers. The slur used in the story varied among “nigger,” “nigga,” “cracker,” “asshole,” and “buddy.”

The participants viewed the use of “nigger” and “nigga” as less derogatory than “cracker” and “asshole.” They also viewed Black racial slurs used by Black individuals toward White individuals as more affiliative. In other words, they were more likely to perceive the Black racial slurs as being used in a friendly way and as signaling a social bond, compared to “cracker” and “asshole.” The researchers also found that slurs used between friends were viewed as less offensive, less derogatory, and more affiliative than slurs used between strangers.

To examine how African Americans viewed Black individuals using Black racial slurs toward White individuals, the researchers conducted a similar experiment with 211 Black participants. Consistent with the previous results, they found that a Black person using “nigga” to describe a White person was perceived as less derogatory and more affiliative than the use of either “asshole” or “cracker.” Though there was some evidence that the use of reappropriated slurs was perceived positively, White participants still perceived the words as more derogatory and less affiliative than “buddy.”

“Above anything else, I think that people should realize the potential for racial slurs to be incredibly negative for people belonging to marginalized groups and not take their use lightly,” O’Dea told PsyPost.

“While our research does suggest that the reclamation of slurs by minority groups can potentially help individuals gain power over derogative terms, bond with their ingroup, and potentially improve relations between people of different groups, it is important to realize that we cannot control how people interpret the things that we say. Slurs are dangerous and they should never be used thoughtlessly.

“While many people belonging to marginalized groups seem to voice support for the reclamation of slurs that were once meant to disparage their group, not everyone is in favor of this reclamation. For example, the reclamation and use of the n-word among the Black community is heavily debated within the group and by people belonging to other groups,” O’Dea explained.

“It is also important to realize that this resistance by people belonging to other groups can even heighten racial tensions. Some of our recent work has shown that, while White individuals on average react quite positively to a Black person referring to them using the n-word, many White individuals perceive this as a negative thing and may actually respond with more prejudice toward Black individuals in the future. This comes back to my point that we cannot control how people interpret the things that we say, and to the dangers of using racial or other group-based slurs.”

The study, “Perceptions of Racial Slurs Used by Black Individuals Toward White Individuals: Derogation or Affiliation?”, was authored by Conor J. O’Dea and Donald A. Saucier.
Jun 25, 2010 (CIDRAP News) – Pandemic flu activity remained low in most parts of the world, though some areas such as Caribbean countries continued to see active transmission, with increased activity reported in a few areas, including Colombia and parts of India, the World Health Organization (WHO) said today.

After rising throughout winter and spring, levels of influenza B transmission throughout the world are decreasing, while influenza A (H3N2) viruses are increasing in some areas such as East Africa and South America, according to the WHO.

In a virological update that accompanied its weekly influenza report today, the WHO said that overall, in the Northern Hemisphere, influenza B detections exceed those of influenza A. In the Southern Hemisphere, the proportion of influenza A (H3N2) is increasing, even exceeding that of pandemic H1N1.

Some states in India, such as Kerala, have reported spikes in pandemic flu illnesses and deaths in the wake of monsoon rains. The WHO said today that severe illnesses and deaths in India are occurring particularly in pregnant women. Over the past few weeks, Indian news outlets have reported that Kerala, Maharashtra, Karnataka, and Andhra Pradesh states have been among the hardest hit. Today The Hindu, India’s largest newspaper, reported that three deaths have recently been recorded in Andhra Pradesh.

In Colombia, pandemic flu levels have slightly increased after persistent but low-level circulation since late May, the WHO said. Other South American countries such as Venezuela and Bolivia are reporting recent circulation of seasonal influenza A and B viruses.

In Ghana, the proportion of respiratory samples testing positive for pandemic H1N1 increased from 16% to 23% during the first 2 weeks of June, according to the WHO report.

Other areas experiencing active pandemic virus transmission are Bangladesh, Singapore, and Malaysia.

Small numbers of seasonal H3N2 viruses are being detected across Africa, particularly in the east, the WHO said. “The most recent detections have been reported in Ghana, Kenya, and South Africa during the second week of June 2010. The persistence of H3N2 in this area over time very likely represents sustained community transmission of the virus.”

Southern Hemisphere countries, which are in the early part of their flu season, are seeing only sporadic detections of the pandemic H1N1 virus, with generally low levels of other respiratory diseases. Australia and New Zealand both reported flu activity levels that are below national baselines.

Yesterday New Zealand’s health ministry said in its weekly flu update that flu activity remained low but that the pandemic H1N1 virus is still circulating alongside other respiratory viruses. It said the country’s health advice phone line has seen a slight increase in the number of people experiencing flu-like illnesses and that general medical practices are noting that more young children are being seen for illnesses.

North American countries continue to report only sporadic detections of pandemic and seasonal flu.

Anthony Fiore, MD, a medical epidemiologist for the US Centers for Disease Control and Prevention (CDC), said yesterday in an update to the CDC’s vaccine advisory committee that US health officials are keeping a close eye on global reports of influenza B and H3N2. He said most recent global detections of influenza B have involved the Victoria lineage covered by both hemispheres’ seasonal flu vaccines, though he added that some Yamagata lineage samples have been detected.

The WHO said today that most pandemic H1N1 viruses it has analyzed so far are closely related to the pandemic H1N1 strain recommended for flu vaccines. It added that it has received no new reports of oseltamivir (Tamiflu) resistance.

See also:
Jun 25 WHO statement
Jun 23 WHO virological update
Jun 24 New Zealand health ministry flu update
May 5, 2011 (CIDRAP News) – A report from an independent panel of the United Nations (UN) on the source of Haiti’s cholera outbreak yesterday stopped short of blaming Nepalese soldiers at a UN peacekeeping base, emphasizing that a combination of factors led to the outbreak, which has so far sickened almost 300,000 people.

The panel of four experts, who conducted an epidemiologic, water and sanitation, and molecular analysis, said Haiti’s cholera outbreak began at a river tributary near a peacekeeping base that an audit found had inadequate plumbing to prevent contamination, but that environmental contamination from a fecal source couldn’t have spread without deficiencies in the country’s water, sanitation, and health systems.

Suspicion about the role of the UN peacekeepers in the cholera outbreak was one factor that led to rounds of violent protests in Haiti, which is recovering from a January 2010 earthquake disaster. Controversy over presidential elections also fueled the unrest.

In December 2010, an epidemiologist sent by France to assist with Haiti’s cholera outbreak and investigate the events reported that the Nepalese soldiers were the most likely source of the outbreak and that the outbreak started in an Artibonite River tributary near their base.

The group began its work in early January after its appointment by UN Secretary-General Ban Ki-moon. Its 32-page report is available on the UN’s Web site. The panel was headed by Dr. Alejandro Cravioto, a Mexican citizen from the International Centre for Diarrhoeal Disease Research in Bangladesh. The other members are Dr. Claudio Lanata of Peru’s Instituto de Investigacion Nutricional, Dr. Daniele Lantagne of Harvard University, and Dr. Balakrish Nair of India’s National Institute of Cholera and Enteric Diseases.

In a statement yesterday, Ban Ki-moon said he plans to convene a task force to study the report’s findings “to ensure prompt and appropriate follow-up.”

The experts said evidence “overwhelmingly supports” human activity as the source that contaminated the Artibonite River tributary with the cholera strain, which they said did not originate in Haiti. The outbreak strain is very similar, but not identical, to South Asian strains circulating in Asia, they reported.

Other factors that led to the spread of cholera throughout the country included:

- High numbers of people who use the river for washing, bathing, drinking, and recreation
- Agricultural workers who are exposed to river waters, especially those working in rice paddies
- Lack of population immunity to cholera
- Infected people who fled their communities, dispersing the outbreak

“The independent panel concludes that the Haiti cholera outbreak was caused by the confluence of circumstances . . . and was not the fault of, or deliberate action of, a group or individual,” according to the report.

Though the group didn’t specifically fault the UN peacekeepers, several of its recommendations for preventing future cholera outbreaks were aimed at UN mission personnel. The experts suggested that UN and emergency responders traveling from cholera-endemic areas receive a prophylactic dose of antibiotics before departure, be screened to confirm absence of asymptomatic carriage of Vibrio cholerae, or both.

Further, UN groups and others who respond to emergencies where cholera epidemics are occurring should receive prophylactic antibiotics, be immunized with oral vaccines, or both, to protect themselves and others, the experts recommended.

To prevent environmental contamination, the group recommended that UN installations worldwide treat fecal waste using on-site systems that inactivate pathogens before disposal.

Other recommendations focus on improving cholera case management, investing in water supply and sanitation improvements, exploring the role of vaccines to curb the spread of the disease, and promoting the use of molecular techniques to improve surveillance, detection, and tracking.

See also:
May 4 UN press release
May 4 UN independent panel cholera outbreak report
Dec 8, 2010, CIDRAP News story “Experts disagree on Haiti cholera source as cases near 100,000”