  1. #1
    gaynorbarry (Established Member)
    Member Since: Jun 2007
    Location: NO FIXED ABODE
    Posts: 342

    Statistical significance

    Ally Fogg: Research statistics are too important to be disrespected and abused | Comment is free | guardian.co.uk
    In 2007, the blogger Chameleon interviewed Professor Liz Kelly, one of Britain's leading feminist academics and director of the child and woman abuse studies unit at London Metropolitan University. Kelly described how one of her first studies into child abuse found that one in two women reported some instance of "intimate intrusion" before the age of 18. These were mostly isolated incidents of flashing or attempts at sexual coercion by a boyfriend. By her estimate only one in 60 or 70 of those cases involved ongoing sexual abuse by an adult male family member, and yet:
    We quote these figures, one in two, one in four, one in whatever as if it means serious ongoing abuse always and it doesn't. It's exactly the same with domestic violence figures. Yes, one in two, one in three, one in four in whatever survey in different countries have had an incident at some point in their lives. That's not the same as the pattern of coercive control, which is what I mean by domestic violence. There are complicated issues about what these measurements mean and we need to be more accurate and more careful when we invoke them, being clear that we do so in an accurate and not an inaccurate way. The figures do say something accurate, but we sometimes stretch that to mean something that it doesn't.
    Seldom a week goes by without a new illustration of Kelly's vital point. Perhaps the most notorious and common example concerns the estimates, contained in the World Bank's 1993 world development report (pdf), of the global health burden of gender-based violence, using a complex and controversial economic construct called the disability adjusted life year. These estimates were simplified into a neat little table by the researcher Lori Heise in 1994, which placed domestic violence and rape as the sixth most damaging "condition" to women aged 15-44 worldwide. However an important footnote explained that this had been added "for illustrative purposes only". Violence and rape are causes of morbidity (such as post-traumatic mental health problems, physical injury and STDs), not conditions in themselves. Therefore they should really be compared to other causative factors of morbidity, not to illnesses such as cancer and heart disease. Heise, as she admitted herself, was simply not comparing like with like. Nonetheless, her factoid has been endlessly repeated and wildly distorted ever since. I've seen Heise's interpretation of the World Bank's estimates quoted as saying that gender-based violence is one of the leading causes of mortality (not morbidity) worldwide; seen the same figures attributed to domestic violence alone, instead of to all gender-based offences; and recently read that "It is the main cause of death and disability globally for women aged 15 to 44 – rape and gross bodily violence cause more death and permanent disability than cancer, motor vehicle accidents, war and malaria combined." (My emphasis)
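    (Side note, not part of Fogg's article: for anyone unfamiliar with the construct, the disability adjusted life year in its standard WHO-style formulation is simply years of life lost to premature death plus years lived with disability, weighted by severity. The 1993 World Bank report's exact method may differ in detail, and every figure in the sketch below is invented purely for illustration.)

    Code:
    # Minimal DALY arithmetic sketch (standard WHO-style formulation; the 1993
    # World Bank report's exact method may differ). All figures are made up.

    def years_of_life_lost(deaths, life_expectancy_at_death):
        """YLL: deaths multiplied by standard life expectancy at the age of death."""
        return deaths * life_expectancy_at_death

    def years_lived_with_disability(cases, disability_weight, avg_duration_years):
        """YLD: cases multiplied by a 0-1 severity weight and the average duration."""
        return cases * disability_weight * avg_duration_years

    yll = years_of_life_lost(deaths=1_000, life_expectancy_at_death=35.0)
    yld = years_lived_with_disability(cases=50_000, disability_weight=0.2, avg_duration_years=3.0)

    daly = yll + yld  # DALY = YLL + YLD
    print(f"YLL={yll:,.0f}  YLD={yld:,.0f}  DALY={daly:,.0f}")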
    This is an extreme example of statistical legerdemain, but there are countless others. The reasons are often understandable. When we find a statistic that appears to support the case we wish to make, it is very tempting to swoop on it, without too much effort to verify the source. Sometimes dodgy stats stem from careless misreading or misunderstanding of the numbers, but in others, they originate in studies of little or no credibility in the first place – seemingly served up to order by research companies, to meet the agenda of a campaign group or media outlet. Last week, Panorama based its exposé of sexual bullying in schools around a commissioned survey (pdf), using methodology and statistical reporting that would see any undergraduate social scientist laughed out of college. The same (or much worse) could be said of the Women's Aid/Bliss magazine survey on a similar topic, which appeared to commit many methodological errors, including the cardinal sin of using self-selected respondents. These surveys have no more authority than any of the exercises in PR-generated churnalism that reveal, for example, that people are willing to trade their computer passwords for a chocolate bar.
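    (Side note, not part of Fogg's article: a toy simulation makes the self-selection problem concrete. If people who have experienced an issue are more likely to answer a survey about it than people who have not, the headline prevalence figure overshoots the true rate. All parameters below are invented for illustration.)

    Code:
    # Toy illustration of self-selection bias; every parameter is invented.
    import random

    random.seed(0)
    TRUE_PREVALENCE = 0.10        # 10% of the population has experienced the issue
    RESPONSE_IF_AFFECTED = 0.60   # affected people are keener to respond
    RESPONSE_IF_NOT = 0.15        # unaffected people mostly ignore the survey

    population = [random.random() < TRUE_PREVALENCE for _ in range(100_000)]

    respondents = [
        affected for affected in population
        if random.random() < (RESPONSE_IF_AFFECTED if affected else RESPONSE_IF_NOT)
    ]

    survey_estimate = sum(respondents) / len(respondents)
    print(f"True prevalence: {TRUE_PREVALENCE:.0%}")
    print(f"Survey estimate: {survey_estimate:.0%}")  # roughly 30%, inflated about threefold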
    Of course the misuse and abuse of statistics is by no means the preserve of feminists. The government is undoubtedly the most serious serial offender, as the report from the UK Statistics Authority demonstrated last week. Environmentalists, human rights campaigners, advocates for and against immigration and just about anyone who has ever campaigned for anything will probably have a cynical or careless offence against statistics somewhere on their record, myself included.
    So ultimately, does it matter if campaigners and commentators play fast and loose? It might be argued that if global warming or domestic violence is a deadly important issue, then presenting the most dramatic and compelling statistics can help the cause and save lives, even if they do not entirely reflect reality. Furthermore, the first thing anyone learns about statistics is Disraeli's triumvirate of falsehood, while the classic text on the subject is called How to Lie with Statistics. Nobody believes, far less understands statistics anyway, do they?
    Well they should. Social science is imperfect and unreliable in many ways, but it still provides the only tatty map we have to the labyrinth of society. Without quantification and analysis, we are looking at the world through a long, thin tube. That said, for statistics to be valuable, they have to be treated with extreme caution and suspicion. They are powerful, important and can be very dangerous if used carelessly. What angers me most about the use of advocacy stats in politics and the media is not the mistakes and the misunderstandings, but the disrespect.
    When a campaign produces invalid research to gain support, when a journalist misrepresents credible research to make his case, or when a politician ignores findings that fail to support her policy, they do themselves no favours. To quote the Guardian readers' editor:
    With thousands of potential fact-checkers out there, writers who cite 'studies', 'reports' and 'league tables' in support should fully expect to be called upon to produce them.
    Misquoting a statistic is as unethical and reprehensible as misquoting a witness. We can all check sources these days, often within a few seconds, and claimants' credibility can be shot to pieces – along with the argument they are trying to make – if they haven't done so themselves. Such behaviour devalues, degrades and undermines even the best and most honest quantitative research. It also risks sending policymakers down wrong turns. Bad data mean bad policy, which inevitably means worse governance and a worse society.
    I leave you with the wise words of Liz Kelly:
    We need to think about how we sometimes invoke statistics, which ends up having the opposite effect. It's not raising awareness; it's actually undermining our message.

  2. #2
    Percy (Knackered old Knight)
    Member Since: May 2006
    Location: Overlooking the D'Entrecasteaux Channel. The views are magnificent.
    Posts: 18,235

    Re: Statistical significance

    I hope you do not object if I repeat this VERY IMPORTANT article, Barry, and add emphasis. I will separate the paragraphs too. It makes easier reading.

    It is 'Warcraft File' material and everyone needs to copy and file it.


    Ally Fogg: Research statistics are too important to be disrespected and abused

    In 2007, the blogger Chameleon interviewed Professor Liz Kelly, one of Britain's leading feminist academics and director of the child and woman abuse studies unit at London Metropolitan University.

    Kelly described how one of her first studies into child abuse found that one in two women reported some instance of "intimate intrusion" before the age of 18. These were mostly isolated incidents of flashing or attempts at sexual coercion by a boyfriend.

    By her estimate only one in 60 or 70 of those cases involved ongoing sexual abuse by an adult male family member, and yet:

    We quote these figures, one in two, one in four, one in whatever as if it means serious ongoing abuse always and it doesn't.

    It's exactly the same with domestic violence figures. Yes, one in two, one in three, one in four in whatever survey in different countries have had an incident at some point in their lives. That's not the same as the pattern of coercive control, which is what I mean by domestic violence.

    There are complicated issues about what these measurements mean and we need to be more accurate and more careful when we invoke them, being clear that we do so in an accurate and not an inaccurate way. The figures do say something accurate, but we sometimes stretch that to mean something that it doesn't.

    Seldom a week goes by without a new illustration of Kelly's vital point.

    Perhaps the most notorious and common example concerns the estimates, contained in the World Bank's 1993 world development report (pdf), of the global health burden of gender-based violence, using a complex and controversial economic construct called the disability adjusted life year.

    These estimates were simplified into a neat little table by the researcher Lori Heise in 1994, which placed domestic violence and rape as the sixth most damaging "condition" to women aged 15-44 worldwide.

    However an important footnote explained that this had been added "for illustrative purposes only".

    Violence and rape are causes of morbidity (such as post-traumatic mental health problems, physical injury and STDs), not conditions in themselves.

    Therefore they should really be compared to other causative factors of morbidity, not to illnesses such as cancer and heart disease.

    Heise, as she admitted herself, was simply not comparing like with like.

    Nonetheless, her factoid has been endlessly repeated and wildly distorted ever since.

    I've seen Heise's interpretation of the World Bank's estimates quoted as saying that gender-based violence is one of the leading causes of mortality (not morbidity) worldwide; seen the same figures attributed to domestic violence alone, instead of to all gender-based offences; and recently read that "It is the main cause of death and disability globally for women aged 15 to 44 – rape and gross bodily violence cause more death and permanent disability than cancer, motor vehicle accidents, war and malaria combined."

    This is an extreme example of statistical legerdemain, but there are countless others. The reasons are often understandable. When we find a statistic that appears to support the case we wish to make, it is very tempting to swoop on it, without too much effort to verify the source.

    Sometimes dodgy stats stem from careless misreading or misunderstanding of the numbers, but in others, they originate in studies of little or no credibility in the first place – seemingly served up to order by research companies, to meet the agenda of a campaign group or media outlet.

    Last week, Panorama based its exposé of sexual bullying in schools around a commissioned survey (pdf), using methodology and statistical reporting that would see any undergraduate social scientist laughed out of college.

    The same (or much worse) could be said of the Women's Aid/Bliss magazine survey on a similar topic, which appeared to commit many methodological errors, including the cardinal sin of using self-selected respondents.

    These surveys have no more authority than any of the exercises in PR-generated churnalism that reveal, for example, that people are willing to trade their computer passwords for a chocolate bar.

    Of course the misuse and abuse of statistics is by no means the preserve of feminists. The government is undoubtedly the most serious serial offender, as the report from the UK Statistics Authority demonstrated last week.

    Environmentalists, human rights campaigners, advocates for and against immigration and just about anyone who has ever campaigned for anything will probably have a cynical or careless offence against statistics somewhere on their record, myself included.

    So ultimately, does it matter if campaigners and commentators play fast and loose? It might be argued that if global warming or domestic violence is a deadly important issue, then presenting the most dramatic and compelling statistics can help the cause and save lives, even if they do not entirely reflect reality.

    Furthermore, the first thing anyone learns about statistics is Disraeli's triumvirate of falsehood, while the classic text on the subject is called How to Lie with Statistics. Nobody believes, far less understands statistics anyway, do they?

    Well they should.

    Social science is imperfect and unreliable in many ways, but it still provides the only tatty map we have to the labyrinth of society. Without quantification and analysis, we are looking at the world through a long, thin tube.

    That said, for statistics to be valuable, they have to be treated with extreme caution and suspicion. They are powerful, important and can be very dangerous if used carelessly.

    What angers me most about the use of advocacy stats in politics and the media is not the mistakes and the misunderstandings, but the disrespect.

    When a campaign produces invalid research to gain support, when a journalist misrepresents credible research to make his case, or when a politician ignores findings that fail to support her policy, they do themselves no favours.

    To quote the Guardian readers' editor:
    With thousands of potential fact-checkers out there, writers who cite 'studies', 'reports' and 'league tables' in support should fully expect to be called upon to produce them.

    Misquoting a statistic is as unethical and reprehensible as misquoting a witness. We can all check sources these days, often within a few seconds, and claimants' credibility can be shot to pieces – along with the argument they are trying to make – if they haven't done so themselves.

    Such behaviour devalues, degrades and undermines even the best and most honest quantitative research. It also risks sending policymakers down wrong turns.

    Bad data mean bad policy, which inevitably means worse governance and a worse society.

    I leave you with the wise words of Liz Kelly:
    We need to think about how we sometimes invoke statistics, which ends up having the opposite effect. It's not raising awareness; it's actually undermining our message.

    When in need of a drink to Refresh the soul
    Drop into the Knight & Drummer Free House.
    http://parzivalshorse.blogspot.com.au/
    Always leave a Comment as a tip.


    Cum dilectione hominum et odio vitiorum
    Love the Sinner but not the Sin.
    (St. Augustine)

    "For we wrestle not against flesh and blood, but against Principalities, against Powers,
    against the Rulers of the Darkness of this world, against Spiritual Wickedness in high places."
    (and within ourselves)


    A Feminist is a human being who has lost her way and turned vicious. If you meet one on the road as you
    Go your Own Way, offer kindness but keep your sword drawn.






 
