
The Perils of Polls and Other Lessons in Amateur Punditry

[Welcome, National Newswatch readers!] [Welcome, Bourque readers!]

Joan Bryden of the Canadian Press has written two of the most important stories in political journalism recently, all the more significant for daring to out the dirty little secret of the racket that's been developing the past few years. [I'm quoting at length, but there's still more of value to read in the full-length stories themselves.]

There's broad consensus among pollsters that proliferating political polls suffer from a combination of methodological problems, commercial pressures and an unhealthy relationship with the media.

Start with the methodological morass.

"The dirty little secret of the polling business . . . is that our ability to yield results accurately from samples that reflect the total population has probably never been worse in the 30 to 35 years that the discipline has been active in Canada," says veteran pollster Allan Gregg, chairman of Harris-Decima which provides political polling for The Canadian Press.

For a poll to be considered an accurate random sample of the population, everyone must have an equal chance to participate in it. Telephone surveys used to provide that but, with more and more people giving up land lines for cell phones, screening their calls or just hanging up, response rates have plummeted to as little as 15 per cent.

Gregg says that means phone polls are skewing disproportionately towards the elderly, less educated and rural Canadians. Pollsters will weight their samples to compensate but that inevitably means "messing around with random probability theory" on which the entire discipline is based.

So why do pollsters continue to trumpet their imperfect data to the media?

Money. Or more precisely, the lack of it.

When Gregg started polling in the 1970s, there were only a handful of public opinion research companies. Polls were expensive so media outlets bought them judiciously.

Now, Gregg laments almost anyone can profess to be a pollster, with little or no methodological training. There is so much competition that political polls are given free to the media, in hopes the attendant publicity will boost business.

Turcotte says political polls for the media are "not research anymore" so much as marketing and promotional tools. Because they're not paid, pollsters don't put much care into the quality of the product, often throwing a couple of questions about party preference into the middle of an omnibus survey on other subjects which could taint results.

And there's no way to hold pollsters accountable for producing shoddy results since, until there's an actual election, there's no way to gauge their accuracy.

"I believe the quality overall has been driven to unacceptably low levels by the fact that there's this competitive auction to the bottom, with most of this stuff being paid for by insufficient or no resources by the media," concurs Graves.

"You know what? You get what you pay for."

The problem is exacerbated by what Gregg calls an "unholy alliance" with the media. Reporters have "an inherent bias in creating news out of what is methodologically not news." And pollsters have little interest in taming the media's penchant for hype because they won't get quoted repeatedly saying their data shows no statistically significant change.

"In fact, they do the exact opposite. They will give quotes, chapter and verse, and basically reverse and eat themselves the next week," says Gregg.

"You just say, 'Oh geez, the gender gap is gone' (one week) and then, 'Oops, sorry, it's back (the next week).' It's unconscionable."

Gregg, who rose to prominence as a Progressive Conservative party pollster, recalls the initial "giddy" feeling of being treated like a media celebrity. Now, he feels partly responsible for creating a bunch of "mini-Frankensteins."

"You've got this situation where the polling profession has sort of fallen in love with the sound of its own voice and says things, quite frankly, that the discipline can not support."

But it's not all the pollsters' fault. Turcotte says journalists used to be more knowledgeable about methodological limits and more cautious about reporting results. Now, they routinely misconstrue data and ignore margins of error. [emphasis added]

The story had all the more relevance for me after spending a weekend in Toronto where I met still other professionals in the opinion research business, and we wound up discussing virtually the same problem, including the way these media polls have been extended to produce some highly speculative but sensational seat projection schemes.

To recap, there are numerous sources of potential error in polls:

  • It is becoming increasingly difficult, and expensive, to draw a truly random and representative sample.
  • The theory behind the calculation of margins of error depends on the ability to draw a random sample. Drawing a sample from an online panel may be one way to reach people no longer reachable by telephone surveys, but it is not a random procedure, and the industry association does not allow its members to report margins of error on those samples (though, notably, it doesn't sanction the violators either).
  • Even in a perfectly random sample, there would be normal random variation, error and statistical noise.
  • Regional sub-samples of the small national samples that can be afforded in the Canadian marketplace are wholly unsuitable for the task of PREDICTING ANYTHING by way of individual seat outcomes with any kind of accuracy.
  • Comparing voting intention results from polls conducted now (regular sample sizes, a public barely paying attention to politics) with those from polls reported on E-1 to E-3 (an attentive public, topped-up sample sizes, and every firm on its best sampling behaviour, because they know they'll be judged on the results) is not a valid basis for drawing conclusions about the overall "house effects" of one polling firm or another.
  • The differences between results from various polling firms will have as much to do with contact methodologies (landline vs cellphone vs online, and/or the combination thereof), the amount of effort (and therefore cost) that goes into drawing a sample, the wording of the question(s), the ordering of the question, the other topics of the omnibus survey, and so forth.

… and in the seat projections based on them:

  • Built into the previous election results in individual ridings is underlying random variation (weather conditions, changes in candidates, etc.) that can't be separated out from party results themselves.
  • Even the most meticulous methodology can't exactly capture the patterns for a single riding, when it's designed to predict a large number of ridings. Compared to a population of ridings, any given riding will have inherent idiosyncrasies, which create random variation that produces errors in estimation.
  • A methodology that wants to be taken seriously has to quantify, in a replicable way, the variables and weights that it introduces to modify the application of national polling results to past election results … and the basis for those assumptions and weights has to have a validated evidentiary basis in the literature.
  • No-one can think of any study that has ever validated the use of a party's or candidate's "resistance to polling trends" in a given riding, as a valid weighting factor. It could just as easily reflect the lack of competition, a bad snowstorm, or other random variation, and we have no way of knowing the difference. There is some work done on incumbency, on the other hand.
  • The greater the number of parties whose votes need to be predicted in order to project a seat's outcome, the greater the scope for random error.
  • A seat projection based on a poll today should never be extrapolated to predict the outcome of an individual riding election after a five-week or longer election campaign. And yet this is done constantly, and sensationally to great effect, in quite an irresponsible and unscientific way. Projection ≠ prediction. This will also be important to remember when the strategic voting sites crank up into full gear and start extrapolating national polling results down to a riding level, with predictably bad results.
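
To put rough numbers on the sub-sample problem: the textbook 95% margin-of-error formula (which itself assumes the truly random sample that is increasingly unattainable) shows how quickly precision collapses when a national sample is sliced into regions. Here is a minimal sketch in Python; the proportion and sample sizes below are purely illustrative, not taken from any particular poll:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for an observed proportion p
    in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national sample versus a regional sub-sample of it.
print(f"National, n=1000: +/- {moe(0.35, 1000):.1%}")  # about +/- 3 points
print(f"Regional, n=100:  +/- {moe(0.35, 100):.1%}")   # about +/- 9 points
```

A party sitting at 35 per cent could plausibly be anywhere from the mid-20s to the mid-40s in a 100-person regional sub-sample, a band wide enough to swing dozens of seats in any projection built on it.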

For those who don't know, because they've never worked in the polling industry or professionally in politics: political parties don't waste their money conducting "beauty contest" or "horse-race number" surveys. Those numbers don't tell them a thing and aren't worth the money*.

[* ... unless their purpose is to leak the numbers in order to try and drive a certain result, in which case there's a good chance they never even conducted the poll in the first place, or else heavily skewed the results by asking leading questions first. Don't think this has never happened; it's why rules were placed in the Elections Act to prevent the reporting of polls during the final days of a campaign, by the very politicians who are all too familiar with the history of such practices.]

What the parties do do was laid out long ago in an article by Frank Luntz in the Journal of Campaigns and Elections that I wish I could lay my hands on now, though I don't imagine its substance has changed much in the intervening 20 years:

  • Conduct a large-scale Baseline Survey, with a sample size in the order of what Michael Marzolini of Pollara is reported to have produced for the Liberal Party recently (I heard 5,000 nationally), that allows for *significant* regional and demographic sub-samples. Its purpose is to
    • identify each party's perceived strengths and weaknesses, and those of their leaders
    • find the issues on which each party is seen as credible, and those on which they are not
    • test people's current inclination, and overall willingness to ever consider supporting each party
    • find which parties voters would never consider supporting, and why not
    • locate second choices, firmness of support, and party of identification ("thinking of federal politics, which party do you normally feel closest to")
    • ask questions designed to uncover what issues might drive a person's vote one way or the other (since not all issues are vote-determining issues), and try to find the best ballot question from the perspective of your own party (and what ballot questions competing parties will be trying to create)
    • ascertain the more permanent attitudes that underlie and perhaps activate voters' current opinions and beliefs
  • From the baseline survey, the parties will be able to identify the regional and demographic characteristics of their likely swing voters (and those of the voters most at risk of defecting). These demographic groups are then studied in greater detail through focus groups that test what language moves people, their reaction to advertising "creative", and build a qualitative understanding of their attitudes and motivations.
  • Also from the baseline survey, more targeted tracking samples can be drawn to see how those target groups are reacting to developments on an ongoing basis.

Of course, somewhere in those surveys a voting intention question is going to be asked, but it's as much to generate categories of respondents as anything else. A political party certainly doesn't make decisions – such as whether it's going to support a federal budget – on the answer to that one question alone, and to suggest that would happen betrays a real absence of relevant experience.

So: stop reading "the polls", stop obsessing about "the polls", and for goodness sake stop writing that "the polls say this will happen or that will happen", because they don't say any such thing. And take amateur seat predictions having a zero-general-election track record with as big a boulder of salt as Joan Bryden's sources advise taking the polls. "The polls" didn't predict the Conservatives would win Montmagny–l'Islet–Kamouraska–Rivière-du-Loup, or that the Liberals would win Winnipeg North either. Campaigns matter, and outsiders don't know what ridings political parties are going to target or send resources to yet, though we can certainly draw some conclusions now (and no, it's not based exclusively on whether they were "close" last time or not either).

By way of demonstrating the presence or absence of variability in party results over all ridings across the past four elections, I have assembled this chart showing party vote as a percent of eligible voters (with non-voters in purple), in each of five regions (Atlantic, Quebec, Ontario, Prairies, BC) for each of the five main political parties. For ease of presentation, the results of the Canadian Alliance and Progressive Conservative parties were added together for 2000 and are displayed with the Conservative Party's results from 2004-2008. The Bloc is shown on the same chart as the Green Party, given the almost complete lack of overlap between their bands of support.


Five Parties, Four Elections, All Ridings

This is how many data-points you have to predict accurately in order to extrapolate predictions from a seat projection based on a poll, i.e., (308 or so ridings x 4 parties) + 75 Quebec ridings for the Bloc, assuming turnout is constant (which it hasn't always been). Notice the outliers amongst the Liberals on the Prairies and in BC (fourth and fifth graphs down in the left-most column), the NDP in the Atlantic and Québec, and the Conservatives in Québec (and notice the four seats well below their range in western Canada: Churchill, Winnipeg North and Centre on the Prairies, and Vancouver East in BC).
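
For readers curious what these projection schemes do mechanically, here is a bare-bones uniform-swing sketch in Python. Every riding name and number in it is invented for illustration; the point is that the model applies one national swing to every riding's past result, so by construction it cannot see the riding-level variation the chart displays:

```python
# Hypothetical past riding results (vote shares) for three parties.
# All figures are invented for illustration only.
past = {
    "Riding A": {"Con": 0.40, "Lib": 0.38, "NDP": 0.22},
    "Riding B": {"Con": 0.31, "Lib": 0.45, "NDP": 0.24},
    "Riding C": {"Con": 0.36, "Lib": 0.33, "NDP": 0.31},
}

# A hypothetical uniform national swing since the last election,
# e.g. inferred from a current poll.
swing = {"Con": +0.02, "Lib": -0.03, "NDP": +0.01}

def project(past_results, swing):
    """Apply the same national swing to every riding (uniform swing)
    and return the projected winner in each."""
    winners = {}
    for riding, shares in past_results.items():
        projected = {party: share + swing[party] for party, share in shares.items()}
        winners[riding] = max(projected, key=projected.get)
    return winners

print(project(past, swing))
```

Perturb the closest of those races by even a couple of points of local variation (a star candidate, a snowstorm, a targeted ground game) and the projected winner flips, which is precisely why a projection is not a prediction.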


42 Responses to “The Perils of Polls and Other Lessons in Amateur Punditry”

  1. Shadow says:

    A question must be asked: is there a political implication to us talking about this?

    There’s a meme that’s been pushed lately that polls don’t matter.

    It all started with Michael Marzolini’s finding that only 15% of Canadians pay attention to politics.

    The hope was that it would calm Liberal fears about going into an election with horrid poll numbers.

    His agenda is reflected in the story:

    “Marzolini worries the plethora of shoddy media polls, an annoyance between elections, can be self-fulfilling during a campaign.”

  2. Shadow says:

    A cynic might think this story was fed to the liberal media by the Liberals and then filled with quotes from pollsters who aren’t friendly to Harper (Graves, Gregg, Marzolini.)

    But Andre Turcotte has worked on Harper’s team.

    And Jane Taber raised an interesting counter point on CTV’s question period today.

    It could be in the CPC’s benefit if polling falls into disrepute.

    Right now people feel Harper is on the cusp of a majority because of polling.

    That could scare Canadians into voting Liberal to block him. It could also make the activists lazy or over-confident, and less likely to give money or volunteer.

    So we may be entering a situation where both major parties like the notion of fighting an election under a very thick fog of war.

  3. Sacha says:

    This is a brilliant article. I especially enjoyed your bit about political parties “leaking” doctored polls to try to affect public perception.

    I generally take these seat projections as being for entertainment purposes.

    If more people subscribed to the general philosophy of this article, it makes you wonder whether any of these polls would be released at all.

  4. Ken Summers says:

    I think it’s being talked about in the open finally because the crock that it has become serves NO ONE well.

    Except the media, for whom it generates easy copy.

  5. Elgin says:

    This item should be front page news in every media in the country, and then repeated whenever the next election is called.

  6. Shadow, I believe you’ve been reading this blog long enough to know that commentary like this is entirely consistent with my overall approach for some time.

    It’s interesting, because what Marzolini actually said is being missed in the retelling of it. He said that AT THE MOMENT only 15% of Canadians are paying attention to politics. During an election, that becomes increasingly less the case, which is why the polls move during campaigns, and become increasingly more predictive.

    They might be “accurate” now, but they’re not “predictive”. Geez, I wish I’d thought of that way of saying things back when I was writing the post itself, because it sums it up rather well.

    I’m not dumping on the polling industry, but rather the tendency in political journalism these days to use coverage of polls as the only coverage of politics. I think that’s debased the quality of public debate on issues that matter for quite some time.

    I haven’t seen yesterday’s Question Period yet, so I can’t comment on what Jane Taber said. But I believe Tom Flanagan has said on CBC something similar to what Joan Bryden’s reporting Turcotte to be saying as well: that you can’t follow every jot and tittle in these public domain polls and try and infer whether it had something to do with one news story or the other after the fact.

  7. Thanks for the comment Sacha. I take it as high praise given the source.

  8. hollinm says:

    So I guess the Frank Graves polls that come out every week for the CBC are a pile of crap. Then, to add insult to injury, he prognosticates on virtually anything, including seat projections, and generally the commentary is supportive of the Liberals and negative towards the Conservatives.
    His poll showing a 12 1/2 pt lead for the Conservatives is an outlier and probably was meant to encourage the government to force an election. Harper is not stupid. The party is doing its own polling and probably has a lot more money for in-depth research than the broke Liberal party. There will be no election caused by the Conservatives unless internal polling shows there has been a change in the attitude of Canadians. Of course that doesn’t mean the opposition parties couldn’t stumble into an election by voting against the budget.

  9. Ken, I think political junkies will have to recalibrate their own willingness to measure everything they do according to whether it “moves the polls” or not, as well. It might be a big cold turkey session for a lot of people, if you reflect on it.

  10. Hollinm, I believe political parties decide whether to try and create election opportunities not based on “internal polling” about their standings in the voting intention questions, but rather whether they believe they have a compelling issue that favours their party, and which is vote-determining for the group of target voters they have to win over.

    They also make judgements about the position of their opponents (preparation, financing, error-proneness, positioning on their own vote-determining issues), and whether they think they can take them on in the competition that will ensue. To the extent that their internal polling gives them data to analyze about those matters, yes they will consult it.

  11. By the way, Hollinm, I have no problem with the academic exercise of doing seat projections. What bothers me is the pretense that they predict anything. They are no more predictive than the polls they are based on (less so, in fact), and “the polls” right now are not very predictive, since most voters are not paying active attention to federal politics. If an election is triggered in some fashion, that will change very quickly.

  12. SteveV says:

    Most in depth, comprehensive analysis available. Well done!

  13. Chris says:

    The more these polls come out with the Liberals trailing, the more people who do not want the CPC in office will not walk, but will RUN to the election booth and vote strategically, to stop Harper.

  14. JoeCalgary says:

    What Gregg and Graves never seem to explain is why Nanos, with arguably smaller samples, has nailed the election outcome not once, but twice.

    Not kinda close, not sort of close, but dead on.

    It’s good to hear Gregg and others starting to look in the mirror, but I’ll trust a Nanos poll every time.

  15. Joe, the issue here is whether single questions in omnibus surveys now predict election results months or years down the road.

    I will note one interesting methodological difference in the Nanos surveys, which is the wording of his question. He specifically asks about voters’ intentions in relation to their local candidates. That is hard to answer this far out from voting day, but it gets increasingly accurate the closer we get to an election, the way I look at things.

    Thanks for reading, and for taking the time to comment.

  16. Joffré says:

    The most important methodological difference between Nanos and the other pollsters is probably actually the fact that Nanos doesn’t prompt for parties. Nanos asks the people polled to come up with the name of the party they support on their own, as opposed to the other pollsters, who give a list of the parties.

    (EKOS actually ridiculously asks if people support the CPC, LPC, NDP, GPC, Bloc, or another party, which leads to their polls always showing 2-5% support for, presumably, the Animal Alliance/Christian Heritage Coalition.)

    I’m of the opinion that a large number of the people who tell pollsters they’ll vote Green (and Other in EKOS polls) are people who won’t actually turn out to vote, and the fact that Nanos consistently finds the Greens to be 2-3% lower than other pollsters seems to support that hypothesis.

  17. It’s certainly a matter of open debate, Joffré. I don’t know that the by-election ridings have been the very best case studies, but if the Greens do have the support levels being attributed to them by some of the public domain polls, they sure didn’t get any of those ballots into the boxes in any outing since 2008.

    I believe people who work in the opinion research industry see the relationships between their research and electoral outcomes in a more nuanced way than the rest of us can. People with a background in politics clearly see the influence of party targeting and organization.

    Developing an understanding of how current circumstances can shape future outcomes probably requires a good understanding of both aspects.

  18. jad says:

    Great post, Alice. As usual you demonstrate both a comprehensive understanding and an in-depth knowledge of the issue.

    I’m not sure how many people other than the media really take polls seriously these days. Even political junkies seem to be more and more agnostic. This whole thing seems to be turning into yet another episode about the importance of the media in any story, the rise of the pollster as media rockstar, and the appalling appetite of the 24/7 news-cycle beast.

    For instance, CBC contracts with Ekos to produce polls for them. CBC gets a number of filler items for their newscasts and their political shows, and Mr. Graves gets a (hopefully) modest honorarium. He also gets a regular gig on CBC and can pontificate at length on whatever the theme for that week’s poll is. It’s hard to take seriously any poll that shows the consistently wild fluctuations on a provincial basis that Ekos does, and it would seem to me that this sort of variation would give any conscientious pollster pause for thought. It is as if these polls are more intended to move public opinion than reflect it, particularly when the media join in with their hysterical cries of “The Liberals are virtually tied!” or “The Conservatives’ numbers are crashing!” This is generally followed the next week by the complete opposite of what was posited previously as serious political commentary.

    I do think it’s virtually impossible to poll accurately between elections, and if no-one is very interested in politics, then by extension, no-one is very interested in polls which predict the result of the next election “if a vote were held today”. This rather glib disclaimer also cavalierly ignores the effect of a seven-week election campaign. However, I think the leadership polls are a bit more valid than the horse race ones. Most people have some idea of party leaders and perhaps the pollsters would serve us better by concentrating on this area. Although of course, if there was not much fluctuation, there would be no news stories, no TV gigs, etc, etc, and therefore no point.

    The standard for judging them is not even correct, jad. Rather than expect any one particular set of data to be correct, folks should understand that there is normal variation in small sub-samples. So, I’d argue that so long as people are reading the data with the appropriate cautions, there’s no harm done. It’s the living or dying by every twitch that’s overwrought.

    Thanks for your comment.

  20. The polls are for political junkies, most people that I know have no interest in politics.

    They will change the channel if the subject of politics comes up.

    During an actual campaign I suspect many of them become more informed and make an effort to catch up in order to make the best decision.

    I am curious how effective the ad campaign is in moving the numbers a few points.

    Nik Nanos gave an interesting roundup post analysis on CPAC on September 15, 2008.

    He stated that in the final weekend the numbers moved over Thanksgiving dinner with family and friends.

    I imagine it has been that way for a long time.

  21. Can’t disagree, CS.

  22. Troy says:

    A recent article in the Toronto Sun revealed that Graves and EKOS have received millions of dollars in polling contracts from the Conservative government.

  23. Troy,

    The federal government commissions a lot of opinion research to do with public awareness of certain issues, public policy positions, to test whether a certain government advertising campaign is effectively communicating the information they want it to, and whether people recall seeing the advertising. This is an important part of conducting the business of any large organization nowadays, although it’s also true that the amount of federal spending on public opinion research has been cut back substantially recently.

    Polling firms would have to apply for and bid on those contracts when they are posted on MERX.ca, and would have to qualify for the work, based on an assessment by public servants.

    Thus, I’m unsure what significance you’re attaching to this fact. Indeed, a number of different polling firms do work for the federal and provincial governments.

  24. Shadow says:

    I have to admit the Graves-CBC relationship does get under my skin.

    The 2.5% to 3% he gives “other” and the 10% he gives the Greens lowers the true vote share of the CPC.

    Again, the more cynical would say that our left-leaning public news network has Graves on to undermine Harper by making him less popular than he actually is.

    Of course, it’s probably just a coincidence.

    But the optics are awful and do touch a lot of nerves.

    For whatever reason there is A LOT of bad blood between Frank Graves/CBC and Conservatives.

  25. At the end of the day, Shadow, the proof of the pudding will be in the eating of it.

  26. Terence says:

    I think the Tories know they might not even win the election, and this poll scares the bejesus out of them and their hard-core base, as it appears to take the edge off their hype.

  27. Shadow says:

    Exactly Alice.

    In their last poll before the 2008 election, Graves (who didn’t poll “others” back then) overestimated Green support by 3% and underestimated CPC support by 3%.

    If in their last poll before the next election Graves overestimates Other AND Green support by a similar margin, it will reflect very, very poorly on him.

  28. jad says:

    John Ivison just reminded me of a quote from Adlai Stevenson which you may find appropriate:

    “Polls should be taken, not inhaled.”

  29. Yes, I see it in his column this evening:

    http://fullcomment.nationalpost.com/2011/02/14/john-ivison-when-elections-loom-liberal-support-collapses/

    I agree with the premise of that column as well: that if one is targeting carefully and effectively, one can win seats that would not be projected on the strength of the national numbers alone. That approach helps the parties which have always had to target and know how to do it well, and is less helpful to those relying on a mass marketing strategy.

  30. Shadow says:

    I wonder if “targeting” is to some extent just recruiting good candidates.

    The article points out the CPC has gotten better at winning close races over the last three cycles.

    How much of that is impressive, ambitious people jumping on the bandwagon versus improvements in identifying voters and tailoring messages to them is an interesting question.

    Candidate recruitment is a big part of it, but there’s also ensuring that sufficient resources are available or transferred, that experienced campaign staff are identified or sent in, and that pre-election strategic planning and so forth take place. It’s not either-or, I wouldn’t say, but a strong candidate is usually a necessary component of winning an uphill battle.

  32. Ken Summers says:

    I know of plenty of illustrations of that for both the Conservatives and the NDP.

    With the Liberals, look at Winnipeg North. There is an example of winning an uphill battle. But I would suggest that it is the exception that illustrates the rule: the LPC doesn’t seem to be able to put together what is required to pull out the tough ones.

    Winnipeg North was not the LPC, it was Kevin Lamoureux. And not just KL as candidate, but very much KL providing the organizational machine that the LPC has not been providing.

  33. Wilf Day says:

    Your permanent penchant for prominent publicity on the procedures, problems, pressures and perceived potential perils of producing, presenting and promoting proliferating pointless political polls of people’s projected party preference patterns from a panel of the present participating proportion of the population has, umm, a predictably professional purpose.

  34. Troy says:

    pollsters aren’t paid?

  35. Not by the media anymore, Troy.

    This is the part that has changed a great deal over the past decades. Nowadays each media organization has a “partnership” with a polling firm, the essence of the deal being: “you run a few questions on your omnibus surveys or polls being run for other clients, that really you were going to ask anyways, and give us the answers for free, and we’ll run the stories on them, quoting you and no others, and won’t refer to anyone else’s polls that day”. The polling firms get free publicity, and the media (who are fighting a changing marketplace and the reduced readership and margins that come with it) get easy stories that cost nothing to obtain.

    This is how we get into the situation where there is no discussion of polling methodologies in the media anymore (can’t bite the hand that feeds them). I remember when I was younger seeing debates about opinion research methodologies on The Journal. You sure won’t see that anymore. If I had to guess at any show that would give it a serious treatment these days, it would probably be The Agenda on TV Ontario (hint, hint).

  36. Perhaps you pile on, Wilf? ;-)

  37. Public opinion polls are fine if one knows how to interpret them. However, they cannot show the future dynamics of a campaign. For example, while Michael Ignatieff’s Liberals seem fairly low in the polls, Mr. Ignatieff has not shown his campaign poker hand. We don’t know which issues he will concentrate on. We don’t know how well he will present those issues. He may be either a great or a poor campaigner. On the flip side, since Stephen Harper and his Conservatives are the government, they are judged on their previous political poker hands. Harper must also show his campaign cards first. We know how he will campaign.

  38. I tend to agree, SD. The ground is always the same height, whether there is geothermal activity below it or not … right up until the earthquake.

  39. There was too much to read and it is now the wee hours of the morning. The commentary I have read re the present election campaign does not seem to match the polls at all. I am not sure if what I have read in the material above is any great revelation. A few past (greater??) politicians would suggest that “you know what dogs use polls for”, that “one can make polls say anything they want”, and of course the usual “the only real poll is election day”. To me that meant the methodology (except the election day vote) was biased or slanted to produce a wanted outcome, but I also see polls as a manipulative tool to try and influence in what direction voting should go. The regular poll results give me the jitters, so I also think they can be used as another fear tactic. Rebelle (penname)
