Information operations – To Inform is to Influence

Vanderbilt Professor Under Attack for Criticizing Islam


Originally posted on The Counter Jihad Report:

Frontpage, January 23, 2015, by Mark Tapson:

Last week, in response to the Paris massacre at the offices of Charlie Hebdo, Carol M. Swain, an openly conservative professor of political science and law at Vanderbilt University, wrote an op-ed for The Tennessean titled, “Charlie Hebdo attacks prove critics were right about Islam.” Naturally, any critique of Islam from our leftist-dominated campuses is going to be met with frothing outrage, and Professor Swain’s article was no exception.

“What would it take to make us admit we were wrong about Islam?” the professor began. “What horrendous attack would finally convince us that Islam is not like other religions in the United States, that it poses an absolute danger to us and our children unless it is monitored better than it has been under the Obama administration?” Good questions, and ones that those of us whose eyes have long been opened to the threat…



Filed under: Information operations

Russian Hackers Leak List of Pro-Russian Influence Group Made of High-Profile European Individuals


Alexandr Dugin

Russian hacker collective Shaltay Boltay has published the email correspondence of George Gavrish, a close collaborator of Alexandr Dugin, containing the names of high-profile Europeans targeted for a pro-Russian influence group, along with the techniques used to recruit them.

Dugin is a philosopher and political scientist, a promoter of Eurasianism, and he has been Head of the Department of Sociology of International Relations at Moscow University.

Shaltay Boltay leaks email correspondence of Gavrish

The endeavor to attract pro-Russian individuals in Europe is financed by Konstantin Malofeev, according to Shaltay Boltay (Google Translate), who leaked various messages relating to the activity of the organizations controlled and financed by the 40-year-old multimillionaire.

On the list of personalities either targeted to become “friends of Russia” or already in the club, released by the Ukrainian publication Texty (Google Translate), there are prominent figures from Europe and beyond; the document from Texty is titled “agents,” suggesting that they have already been recruited with the help of the Russia Today news channel.

It contains entries of individuals from Romania, Poland, Turkey, Hungary, Argentina, France, Croatia, Slovakia, Serbia, Greece, Lebanon, Italy, Germany, Chile, and Malaysia.

Influencers in high positions wanted

The individuals occupy important positions in their countries, which gives them a certain degree of influence, and they have met either Dugin himself or his representatives. They are politicians (including former prime ministers and presidents), journalists, scientists, professors, philosophers, government employees, and even priests.

Among the names listed there are Ion Iliescu (former president of Romania), Suleyman Demirel (former president of Turkey), Roman Giertych (former minister of education in Poland), Viktor Orbán (prime minister of Hungary), Robert Fico (prime minister of Slovakia), Vojislav Kostunica (former president of Serbia), Massimo Fini (Italian journalist), Tiberio Gratsiani (president of the Institute of Geopolitics and Applied Sciences in Italy), Jurgen Elsasser (German journalist and political activist), and Felix Allemand (German anti-globalization blogger).

The list is quite expansive, and at the end, it is mentioned that similar opportunities have been identified in the US, Brazil, Portugal, Spain, Iran, India, Sweden, Norway, Belgium, Switzerland, England, Bulgaria, and Canada.

Hackers do not claim the attack

In typical Shaltay Boltay manner, the group does not claim responsibility for breaking into the email account of George Gavrish. Instead, they say that the documents and the email correspondence were found on the Internet, by chance.

The hackers came to fame in August 2014, when they “found” files from the phone of Russia’s prime minister, Dmitry Medvedev, and published them on the Internet. At that time, Medvedev’s Twitter account had been hacked and used to post messages against President Putin and his actions.

Source: http://news.softpedia.com/news/Russian-Hackers-Leak-List-of-Pro-Russian-Influence-Group-Made-of-High-Profile-European-Individuals-466418.shtml


Filed under: Information operations, Russia, Ukraine Tagged: Alexandr Dugin, correspondence, Europe, George Gavrish, Konstantin Malofeev, Shaltay Boltay

How Russia outfoxes its enemies


I am repeatedly humbled that my class on Russian IW has been validated time after time over the past week.  This article is rock solid on so many levels.

In retrospect, these “little green men” are so obvious as a Russian deception.  Why didn’t we react?

First, the Ukrainian government was brand spanking new and still trying to figure out many basic fundamentals.  Second, there was a tremendous lack of information for leaders to make decisions.  Everybody seemed to know the Russians were lying and that the “little green men” were Russian soldiers, but nobody in Kyiv was willing to make a decision that committed them to standing up to a nuclear power.  Russian Information Warfare also contributed, strongly, to Kyiv’s indecision: ‘The Little Green Men are keeping Crimea safe.’  ‘NATO forces may move in at any time; we are just protecting the Russian people.’  ‘We are a Crimean Self-Defense Force.’

Russia has forever lost the surprise value of this tactic for future conflicts, but it worked well in Crimea.


Russia’s annexation of Crimea last year caught almost everyone off guard. The Russian military disguised its actions, and denied them – but those “little green men” who popped up in the Black Sea peninsula were a textbook case of the Russian practice of military deception – or maskirovka.

At a cadet school in the southern suburbs of Moscow, Maj Gen Alexander Vladimirov heaves two enormous red volumes off his bookcase and slams them down on the table. “My Theory and Science of Warfare,” he says, beaming. “It’s three times longer than Leo Tolstoy’s War and Peace!”

Vladimirov, vice-president of Russia’s Collegium of Military Experts, is an authority on maskirovka – the hallmark of Russian warfare and a word which translates as “something masked” or “a little masquerade”.

“As soon as man was born, he began to fight,” he says. “When he began hunting, he had to paint himself different colours to avoid being eaten by a tiger. From that point on maskirovka was a part of his life. All human history can be portrayed as the history of deception.”

Vladimirov quotes liberally from the Roman general Frontinus and the ancient Chinese philosopher Sun Tzu who described war as an eternal path of cunning.

But it’s Russia, he tells me, with unmistakable pride, that has over the centuries really honed these techniques to perfection.

One of the most famous examples is the Battle of Kulikovo Field in 1380, when the young Muscovite, Prince Dmitry Donskoy, and 50,000 Russian warriors fought against 150,000 Tatar-Mongolian soldiers led by Khan Mamai. It was the first time the Slavs were fighting as a united army – Russia against the Golden Horde.

“The fighting was very tough, but we eventually triumphed thanks to one regiment hiding in the forest,” says Vladimirov. “They attacked ferociously and unexpectedly and the ambushed Tatars ran away.”

Single combat of Peresvet and Temir-murza on the Kulikovo Field in 1380, a 20th-century painting by Mavriki Petrovich Jacobi

But that was just a start. Vladimirov reels off some more recent legendary battles in which Russia outfoxed its enemies, with flair and cunning.

There was the Jassy-Kishinev operation of August 1944, which featured dozens of dummy tanks as well as whole Red Army divisions sent in false directions to throw the Germans off the scent.

And that came just after Operation Bagration in Belorussia had dealt Hitler’s troops a devastating blow.

“It was clear the military skill of Soviet leaders outclassed the Germans,” Vladimirov says. “Our generals decided not to go the easy way along the road but through the swamps! That way they attacked the rear of the German forces. That’s mastery for you! All throughout Bagration, there were colossal examples of maskirovka involving thousands of tanks and troops. After that the war was practically over.”

Of the 117 German divisions and six brigades engaged, half were destroyed and the rest suffered 50% losses – half a million Germans died there.

Soviet troops cross a pontoon bridge over the Western Bug in July 1944, during Operation Bagration

Surprise is a key ingredient in maskirovka and the clandestine forces which occupied Crimea last February certainly delivered that.

Pyotr Shelomovskiy, a Russian photojournalist, was there as they arrived. He had rushed down to Crimea expecting tensions to arise after Ukraine’s Russian-backed president, Viktor Yanukovych, fled the country – and on 24 February he watched local pro-Russian activists building a small barricade on the square outside parliament.


“They started brewing tea and distributing drinks. Some journalists, myself included, were allowed to take pictures,” says Shelomovskiy, “and that was it for the night.”

Or so he thought. But in the small hours, unmarked military trucks drove up filled with heavily armed men.

“They ordered those demonstrators to lie face down on the ground – until they realised they were on the same side,” says Shelomovskiy. Then they made them carry ammunition into the parliament.

He was told this story by the activists the next morning. “They didn’t really understand themselves what was going on,” he says.

The troops which had arrived in the dark, as if by magic, with no insignia on their olive-coloured uniforms, were soon nicknamed “little green men”.

“We know now these guys were Russian special forces,” says Shelomovskiy. “But no-one said so at the time.”

One of the “little green men” – soldiers wearing no identifying insignia, who declined to say whether they were Russian or Ukrainian, patrolling outside Simferopol International Airport in February 2014 as Russia annexed Crimea

Denial is another vital component in maskirovka. At a press conference a few days later Vladimir Putin coolly batted away awkward questions about where the troops came from.

“There are many military uniforms. Go into any shop and you can find one,” he said.

But were they Russian soldiers? Poker-faced, the president said the men were local self-defence units.

Five weeks later, once the annexation had been rubber-stamped by the Parliament in Moscow, Putin admitted Russian troops had been deployed in Crimea after all. But the lie had served its purpose. Maskirovka is used to wrong-foot your enemies, to keep them guessing.

Vladimir Putin visits the Crimean port of Sevastopol on 9 May 2014, after the region was annexed

Maj Gen Gordon ‘Skip’ Davis, in charge of operations and intelligence at Nato’s military HQ in Belgium, admits it took him and his colleagues some time to figure out the “size and the scale” of the troop reinforcement which was “continuously denied by the Russians”.

But if Nato was taken by surprise, the historian and journalist Anne Applebaum was not.

“I knew immediately what it was because it reminded me of 1945. It looked so familiar,” she says.

“With Crimea I got a bizarre sense of deja vu, because bringing in soldiers who weren’t really soldiers – that was what the NKVD did in Poland after the war. They also created fake political entities which nobody had seen before, with fake ideologies already attached to them… It’s a game of smoke and mirrors.”

After Crimea came the war in eastern Ukraine. Officially there are no Russian troops or little green men fighting there either – only patriotic volunteers who have gone to the region on holiday.

But there is growing evidence of Moscow’s intervention in the separatist conflict including a mounting toll of Russian soldiers killed in action.

In August Russian TV showed footage of water and baby food being loaded on to lorries heading for Ukraine’s war zone. The Russian government called this humanitarian aid but many were more than a little suspicious. Nato already had plenty of intelligence about Russian air defence and artillery forces moving into Ukraine.

Maj Gen Davis calls the first convoy “a wonderful example of maskirovka” because it created something of a media storm. TV crews breathlessly followed the convoy, trying to find out what was really inside the green army trucks which had been hastily repainted white. Was this a classic Trojan horse operation to smuggle weapons to rebel militias? And would the Ukrainian authorities allow the convoy in?

Lorries from the Russian humanitarian convoy parked near a checkpoint at the Ukrainian border, some 30 km outside Kamensk-Shakhtinsky in the Rostov region, 20 August 2014 – a classic case of maskirovka?

“All the while at other border crossing points controlled by the Russians – not by the Ukrainians – equipment, personnel and troops were passing into Eastern Ukraine,” says Davis. He sees the convoy as a clever “diversion or distraction”.

The fog of war isn’t something which just happens – it’s something which can be manufactured. In this case the Western media were bamboozled, but the compliant Russian media has also worked hard to generate fog.


Ukrainian novelist Andrei Kurkov says he is constantly amazed by what he calls “the fantasy and imagination of Russian journalists”. One of the most lurid stories broadcast on a Moscow TV channel claimed that a three-year-old boy in Sloviansk – a town in eastern Ukraine with a mostly Russian-speaking population – was crucified… for speaking Russian.

The TV report is still online. A blonde woman, her voice choked with emotion, tells a serious-looking Russian news reporter that the three-year-old child was nailed to a wooden notice board in front of his mother and died in agony. The mother, she alleges, was then tied to a tank and dragged through the streets until she died. She adds that she is risking her life by talking but wants to protect children against Ukrainian soldiers who behave like beasts and fascists.

“The lady claimed she’d witnessed this horrible story in Sloviansk,” says Kurkov. “But then she mentioned the name of the square where it happened and this square doesn’t exist in Sloviansk. There’s no such place.”

As Kurkov says, the story doesn’t stand up. It emerged that the woman eyewitness had a history of filing false police reports and her own parents said they thought she’d given the interview for money.


The elements of maskirovka

  • Surprise
  • Kamufliazh – camouflage
  • Demonstrativnye manevry – manoeuvres intended to deceive
  • Skrytie – concealment
  • Imitatsia – the use of decoys and military dummies
  • Dezinformatsia – disinformation, a knowing attempt to deceive

TV and the digital world are awash with similar reports. A group of Kiev journalism students who set up a website to expose fake stories say some approaches are more sophisticated than this, mixing truth and falsehood to produce a report that appears credible. But even an incredible story may serve to confuse, and create uncertainty.

Peter Pomerantsev, who recently spent several years working on documentaries and reality shows for Russian TV, argues that Russian state media are not just distorting truth in Ukraine, they go much further, promoting a seductive nihilism.

“The Russian strategy, both at home and abroad, is to say there is no such thing as truth,” he says.

“I mean, you know, ‘The Americans are bad, we’re bad, and everyone’s bad, so what’s the big deal about us being a bit corrupt? You know our democracy’s a sham, their democracy’s a sham.’

“It’s a sort of cynicism that actually resonates very powerfully in the West nowadays with this lack of self-confidence after the Iraq War, after the financial crash – and that’s what the Russians are hoping for, just to take that cynicism and then use that in a military environment.”

Of course, every country uses strategies of deception. Churchill famously said: “In wartime, truth is so precious she should always be accompanied by a bodyguard of lies.” The Americans call such tactics CC&D – concealment, camouflage and deception.

So what sets Russia apart? Maj Gen Skip Davis argues Western forces are sometimes economical with the truth but says they don’t tell outright lies: “We are talking about denial of information – in other words, not confirming facts – versus blatantly denying. Saying, ‘No that’s not us invading, that’s not our forces there, that’s someone else’s.'”

But what about the false information that propelled Britain and the US into war with Iraq? Few would now deny that the facts on WMD were massaged in a maskirovka-type way. The word Davis keeps coming back to is “mindset”. He insists maskirovka has become a modus operandi for Russia itself.

“I think that there is an alignment between what probably started out as military doctrine, but now is much more a part of state policy and there’s an alignment between the strategic down to the tactical level in terms of the mindset of maskirovka.”

This perception is nothing new for Russia’s neighbours. A decade ago Andrei Kurkov predicted recent events in Ukraine in his book, The President’s Last Love. He writes in Russian and most of his books are on sale there but this one was stopped at the border.

A Ukrainian serviceman stands guard at a National Guard checkpoint near Novoazovsk, Donetsk region, August 2014

“Putin is one of the main characters,” he says. “In this book he promises the Ukrainian president that he will annex Crimea and cut the gas supply and lots of other things that later became reality – this is the reason why the book is banned.”

Isn’t it uncanny that he managed such accurate predictions?

“I don’t think it was difficult – somehow when you live in a not very logical world, when the logic of absurdity prevails and the players don’t evolve – it’s actually quite simple.”

Maskirovka: Deception Russian Style was broadcast as part of the Analysis series on BBC Radio 4 – listen to the programme on BBC iPlayer or download the podcast.

Source: http://www.bbc.com/news/magazine-31020283


Filed under: Information operations

Journalists in party gear — the real issue is reputation


Reputation.

The issue of reputation is a vital part of the discussion of Information Operations, Information Warfare and any of a myriad of terms for using information to influence others.

In this case the discussion is along political party lines internal to South Africa but it is also important to an overall discussion of news sources.

The case of reputation is also a vital part of the discussion of Russian Information Warfare.  Russia relies on the fact that the reputation of many news sources is not widely known and is not associated with Russian propaganda.  GlobalResearch.ca, for instance, appears for all intents and purposes to be a Canadian news source, and Canadian sources are generally seen as neutral or even pro-US/anti-Russian.  Its reputation, for those of us who follow it, is that GlobalResearch.ca is deeply pro-Russian, anti-US and anti-UK.  That reputation is not hidden, yet it is not well known, so some may fall for the deception.

This is a reminder that a reputation still means something.


The noise and personal attacks that characterised the recent debate about whether journalists should publicly nail their political colours to the mast have made it nearly impossible to reflect on the possible implications of it all.

This debate is neither new nor unique to South Africa, or to Africa. In some countries of our continent it is the membership card of the ruling party that guarantees you the space to practise as a journalist without harassment.

In the US the political leanings of radio hosts, columnists and even entire TV networks like Fox News are well known. What they broadcast or say is taken within that context and media culture.

What stirred the hornet’s nest here was the appearance of photographs where two senior editorial executives of the Independent News & Media Group wore ANC paraphernalia while attending the party’s 103rd birthday.

As much as we can have opinions on whether we agree with this or not, it is for journalists, their employers and their associations to decide whether this is acceptable practice. What none of them is immune from is the perception the public develops based on their behaviour, and the reaction of their readers.

Frankly, I do not believe there is anything inherently wrong with journalists belonging to political parties. It is nearly impossible to find apolitical human beings anyway, and such affiliations, informal or formal, are part of our reality.

But flaunting such membership can lead to unnecessary difficulties. The problem is that once the consumers of news perceive, rightly or wrongly, that a journalist or their medium is politically aligned, all sorts of assumptions come into play, and these can have adverse effects on brand reputation.

The first thing I learnt about reputation management, which, incidentally, is a discipline I have been married to for nearly 20 years, is that it is nothing but perception management. If you have customers to whom you provide a service or a product, you should always be concerned about the perceptions such customers have of you.

Perceptions can be fickle, develop fast, are often influenced by personal or professional prejudices, and can be lethal to your business. Once your consumers have negative perceptions about you, you have a very big problem. Arguing with them about whether or not their perceptions are stupid hardly helps, and is often the last thing anyone in such a position should be doing.

Most, if not all, journalists and their media have an ideological base to which their readers belong, including those that claim to be independent.

The question we should concern ourselves with is whether they are fair in their coverage of news, especially where there are opposing views.

Even when they scrupulously adhere to the tenets of excellent journalism, it is their duty to manage perceptions.

This is the limitation that few of those who are participating in the debate seemed to care about or recognise, which is unfortunate.

In 2010 I quit a radio station where I was a talk show host because I signed up as adviser to a cabinet minister. I was not required to resign.

I just thought it best to step down from the radio station as I did not want perceptions to cloud my work with the minister or have the radio station’s independence questioned.

Consumers of news — read customers of the news media — want the assurance that they can trust their sources of news to be nonaligned so that the information they give can be taken at face value.

Anything less than this is detrimental in the long run because once readers lose trust in your bona fides, they are likely to invest less in the product or service that you offer.

I do not believe nailing one’s political colours to the mast is a prerequisite to subscribing to one or the other ideology. The fact is that most discerning news consumers can detect this without being told.

The reality is that any journalist who wants to voice an opinion always has space to do so. In that case, we all can distinguish between reporting and opinion making.

I hope the furore has triggered customer-focused soul searching in all newsrooms; otherwise we could see a further decline in the reputation of a fourth estate that is so crucial to our democracy.

Mabote is a public relations coach and founder of Kingmaker Consulting

Source: http://www.rdm.co.za/politics/2015/01/29/journalists-in-party-gear–the-real-issue-is-reputation


Filed under: Information operations Tagged: #RussiaLies, Reputation

BBG Condemns Extended Detention Of Khadija Ismayilova


Khadija Ismayilova tries to greet supporters and journalists outside the Baku courtroom on January 27, when she had her pretrial detention extended.

The court system in this part of the world is… confusing, to say the least.

Corrupt, subjective and, did I say corrupt?

Read Peter Pomerantsev’s “Nothing Is True and Everything Is Possible”.  Towards the beginning of the book he describes a trial against a female business owner who honestly tried to stay legal and was still arrested.  Pomerantsev says she could have paid a (probably hefty) fine and had a speedy trial. Others might have had the charges dismissed.  The bottom line is that many Russian and CIS courts are merely instruments of intimidation, used to force compliance with laws, written or unwritten, and to signal condemnation.

Being a journalist for an American state-owned broadcasting corporation is not at all like being a normal journalist in many parts of the world. In Azerbaijan, apparently, Khadija Ismayilova is risking her life. Just for doing her job, which is legal, but may expose not entirely legal practices…


JANUARY 30, 2015

WASHINGTON – The Broadcasting Board of Governors today expressed concern about the imprisonment of Khadija Ismayilova, an investigative reporter and contributor to Radio Free Europe/Radio Liberty’s (RFE/RL) Azerbaijani Service, and called for her immediate release following a ruling by an Azerbaijani court to extend her pre-trial detention.

On January 27, a court in Azerbaijan prolonged Ismayilova’s detention, originally set to expire on February 5, by two months. Ismayilova was arrested on politically motivated charges on December 5 and could serve three to seven years in prison if convicted.

Ismayilova is being held in a prison cell with four other women. She has written several letters from custody to record her experiences and encourage her colleagues. The latest letter, published by RFE/RL, resulted in her placement in solitary confinement as punishment.

“We are concerned about Khadija’s well-being, and outraged by the Azerbaijani government’s flagrant assaults on press freedom,” said BBG Chairman Jeff Shell. “Not only is Khadija unjustly imprisoned on a fabricated accusation, but our news bureau in Baku remains sealed by Azerbaijani authorities. We demand that the authorities permit the bureau to reopen, release Khadija Ismayilova, and halt the harassment of RFE/RL journalists and their families.”

On December 26, RFE/RL’s Baku bureau was raided and closed by agents of the state’s “grave crimes investigations committee” in connection with a new law on “foreign agents.”  The same law was invoked to force the National Democratic Institute, IREX, and other organizations supporting civil society development to suspend their local operations in Azerbaijan.

RFE/RL and BBG representatives have repeatedly contacted Azerbaijani officials to protest her case without success.

The BBG joins RFE/RL, the U.S. Department of State, Amnesty International, the OSCE, Index on Censorship, and many other officials and organizations in condemning the Azerbaijani government’s imprisonment of Ismayilova and assault on freedom of expression.

Source: http://www.bbg.gov/blog/2015/01/30/bbg-board-condemns-extended-detention-of-khadija-ismayilova/


Filed under: Information operations Tagged: anti-censorship, Azerbaijan

Nordic information office suspends activities


St. Petersburg is Russia’s second largest city with more than five million inhabitants. (Photo: Trude Pettersen)

The Nordic Council of Ministers’ office in St. Petersburg has suspended or postponed many of its planned activities in Russia after being included on the list of NGOs considered foreign agents.

The decision to suspend parts of the activities is valid until further notice and the situation is updated on a daily basis, the Nordic Council of Ministers’ web site reads.

On January 20, the Russian Ministry of Justice decided to include the Nordic Council of Ministers’ (NCM) office in St. Petersburg on the list of NGOs considered foreign agents in Russia. The Nordic countries have appealed against this decision.

Under Russian law, NGOs engaged in political activities and receiving financing from abroad must register as foreign agents. NCM’s office in Russia has had the status of NGO, i.e. a voluntary organisation, since its inception in 1995.

“The Nordic Council of Ministers regrets what has happened. We believe that both the prosecuting authority’s demands and the Ministry of Justice’s decision are unfounded. The Nordic Council of Ministers’ office has reported this to the prosecuting authority in a meeting,” Secretary General Dagfinn Høybråten said in a press release.

NCM did not succeed in reaching a solution with the Russian authorities, and the office in St. Petersburg will freeze most of its activities.

Denmark’s Minister for Nordic Cooperation and Chair of the Council, Carsten Hansen, says that the Council has had good cooperation with Russia and Russians for 20 years, and that the work has been especially important in Northwest Russia.

“We hope it can continue, but it is not acceptable that the authorities call the office a ‘foreign agent’,” Hansen says to NRK.

As BarentsObserver reported, the Nordic Council of Ministers’ office on January 12 received a letter from the procurator’s office in St. Petersburg, ordering the office to immediately register as ‘foreign agent’.

Source: http://barentsobserver.com/en/politics/2015/01/nordic-information-office-suspends-activities-29-01


Filed under: Information operations

New Russian Deception Attempts


And now for something completely different.

An official within the Kremlin released three points that Russia plans to play with in 2015.  I read these and immediately rolled my eyes, because I recognized them as a deception, designed to draw attention away from Russia’s main interests.

Without further ado, here they are, translated by Bing (embedded in Facebook):

Important Russian foreign policy objectives in the European sector in 2015
1. Creation of conditions for the release of the German Democratic Republic from British-American occupation. Withdrawal of 150,000 British-American occupation forces from the territory of Germany.
2. Support for Greece after its exit from the EU and NATO. Admission of Greece into the Eurasian Union.
3. Creation of conditions for a just solution of the Memel Territory question (Klaipeda and its surroundings).

The first, the release of the German Democratic Republic (GDR), is Russia’s latest deception tactic.  Since nobody in the former GDR held a “referendum” to rejoin West Germany and form a unified Germany, Russia is declaring the reunification illegal. Somehow they think the annexation of Crimea is ‘more legal’ than the reunification of Germany.  Pardon me, I just threw up in my mouth a little bit.

Second, the “Grexit,” or the exit of Greece from the EU, is a significant rallying cry for the division of the EU in Russia’s eyes.  Anything that divides enemies or potential enemies is, in Russia’s view, good.  Don’t forget that also includes China; put that in your pot and stir it for a minute.

The last, the Memel Territory, is yet another attempt to dredge up and exploit an obscure issue almost 100 years old: the territory was made independent as a result of the Treaty of Versailles and was simply taken over by Lithuania in 1924.

I swear, someone in the Kremlin is drinking vodka. Scratch that, they all drink vodka.  How about way, way too much vodka?  Perhaps that’s normal as well, but the bottom line is that whoever wrote these three points should be ignored as incompetent.

You know who…


Filed under: Information operations

This is how a “troll factory” works #FreeSavchenko



By Sobaka.ru
1.28.2015
Translated and edited by Voices of Ukraine

City description: propaganda blogger

St. Petersburg has become famous throughout Russia as the cradle of information wars and “internet trolls,” who first settled in an elite residential complex in Olgino and later moved closer to the city center. A former employee of this organization told us, on condition of anonymity, how the gigantic propaganda machine works and why it is impossible to last long in this job.

Job openings at this wonderful place are scattered across headhunting sites. The companies looking for a “copywriter” or “content manager” appear to be quite different ones. The attempt at secrecy does not work, for two reasons: all of them list a salary of 40-45 thousand rubles and the same address, “town of Stara Selo / town of Black River.” The job descriptions give a minimum of information about who and what is needed, and where – all of it calculated so that such a high salary would outweigh any desire to find out exactly where you would end up. As it turns out, it works: many came here after a long and painful search for work, almost in despair. Personally, I had recently arrived from a major regional center and – having an education in journalism – in late August I simply sent a resume to one of the dozens of ads placed on an online job site. The call inviting me to an interview with the media holding “Internet Research” came a couple of days later.

The interview took place in a beautiful new office building at #55 Savushkina Street, which occupies four floors. Getting in straight off the street is not possible: there is a strict turnstile security system. If you do not have a pass card, you have to write an explanatory note that includes your passport details.

The interview begins with you being handed a form to fill out. On it you provide information about yourself, including place of residence, place of actual residence, full information regarding former employment, information on your parents’ place of work, etc. After that they ask you a couple of questions and request you “re-write” any relevant news. There is a feeling that they take anyone who can prove that they can write and speak in Russian. At the same time you get no information about where you’ve ended up: “media holding, several websites, we need to develop the traffic, the salary is above average.” If before that you were unable to find work for several months, you agree immediately after the words “forty five thousand rubles.” This is the base salary of all the ordinary employees – whether they are “bloggers” (who post in LiveJournal and social networks), “content managers,” “SEO specialists” or creators of patriotic “demotivators,” who are called “illustrators.” Those who work their way up to more senior positions receive more – 55,000, 60,000 and so on. During hiring they virtually never ask about your personal political beliefs.

The first day of work runs from 9:00am until 5:30pm. “You have to make 20 news pieces, uniqueness has to be about 75%, the news has to be relevant, here is a login and password for the PC, get to work.” Everything reminds you of school – the offices are very much like “computer classes,” you cannot be late by even 2 minutes (they fine you for it), and you must leave work immediately after it finishes and not a minute later. In total, as I understood it, the holding has 12 sites on various subjects, but all of them in one way or another touch on politics and Ukraine. Although the “business card” says “Federal News Agency” (when calling on the phone from work, the majority of employees from different departments present themselves specifically as employees of “FAN”), the majority of traffic comes from the so-called “News Agency of Kharkiv” (ironically named nahnews.com.ua). The site is apparently Ukrainian, but all its publications are produced at Savushkina 55. There are several such “Ukrainian sites” in the media holding, including the famous “Antimaidan.” These faux sites do not produce outright provocative fakes, but re-write the news in a specific key: for example, separatists become “militias.” The “media holding” has been operating since July 2014.

For the first few days you simply do not know where you are and why you are re-writing this news, filling the sites with them. There is a feeling that this is some social experiment or a reality show: especially since in every open office where there are around 20-30 employees, there are surveillance cameras. There was no ideological brainwashing or regular briefings; everything is very simple and clear to virtually everyone who got hired: you can’t write bad about Putin, the insurgents are not terrorists, “you understand yourself….” There is a sense that the newcomers understand everything themselves where they ended up and what to write, if ideological instruction is conducted, it is at the level of chief editors. There are no planning meetings or general meetings. Speaking of ordinary employees: most often they are newcomers from other cities, all people with higher education, quite clever. There are many very young people with an informal look, with piercings and dreadlocks. In general the employees are divided into three categories: 1) “they are paying me and I don’t care, I don’t even know who this is:” many of these have families, loans and so on, 2) “yes, I know that this is a pro-Kremlin troll factory, but to hell with mental anguish – they pay me so it’s OK”, 3) “I am waging information warfare against the fascist junta!” The latter are in an overwhelming minority. Perhaps they are the only ones who truly love their work. In our department there were, I think, only two of these.

The “media holding” itself takes up only one floor of the building. The other floors are occupied by other workers of the propaganda front: including those very “trolls” which have become famous throughout St. Petersburg, basically sharp-tongued mercenaries who flood each thread in social networks and blogs with aggressive comments. The workers of the “media holding” treat them with irony bordering nevertheless on some caution. I personally have not had a chance to chat with the “trolls,” but only saw them in the smoking area.

The unprofessionalism of the management of the “holding” can be felt already after only a week of work. The main goal is the number of views, visitors. The plan is to have their number (on all the sites of the holding in total) rise by 3000 people every day. At the same time they do not include weekends, holidays etc. – only “a five-year plan in one month.” The SEO department, which is supposed to promote the content of these sites, carries out blatant spamming (which is why many of the sites are blocked by Google and VKontakte). Given the way in which the staff is recruited, this is not surprising.

Meanwhile the leadership shakes up the site chiefs, they in turn demand “relevant news” from their workers, and the workers try to be the first to “re-write” the news of key Russian news agencies. To improve the traffic they take news regarding murder, rape and other crime in the Russian regions, celebrity gossip, about Pugacheva, Madonna and so on. News about gays is very popular: obviously, with a negative take on LGBT. If they mention feminism, then it is always with reference to the Ukrainian performers Femen, not otherwise. But, nevertheless, the main news of each site is – Putin, Crimea, “Novorossia.” At the same time the talk is constantly “we are on a startup, need to create more traffic so that we can reach self-sufficiency through advertising, it is only for now that we are living on money from investors,” but all of this is taken with a smile – because it is obvious even to the most naive here who these “investors” really are.

The decision to leave this “troll sanctuary” had been maturing for quite a while. On the one hand I understood that such an admittedly comfortable job, with a reasonable wage for St. Petersburg, would be difficult to find again during the crisis: there was never a single day on Savushkina when I ran into difficulties of a technical nature. The problem was the psychological severity of this work. By December my eye had started twitching from nervous tension, and at night I had dreams in which I was constantly re-writing news about Putin and Ukraine. In addition I hold liberal opinions, I have many friends who are opposition-minded, and at some point I realized that I was simply ashamed to tell people what I was doing. All these factors outweighed the considerations of comfort, and I quit with relief.

Source: sobaka.ru


Filed under: Information operations

Semantic Culturomics


Fabian M. Suchanek, Nicoleta Preda

ABSTRACT

Newspapers are testimonials of history. The same is increasingly true of social media such as online forums, online communities, and blogs. By looking at the sequence of articles over time, one can discover the birth and the development of trends that marked society and history – a field known as “Culturomics”. But Culturomics has so far been limited to statistics on keywords. In this vision paper, we argue that the advent of large knowledge bases (such as YAGO [37], NELL [5], DBpedia [3], and Freebase) will revolutionize the field. If their knowledge is combined with the news articles, it can breathe life into what is otherwise just a sequence of words for a machine. This will allow discovering trends in history and culture, explaining them through explicit logical rules, and making predictions about the events of the future. We predict that this could open up a new field of research, “Semantic Culturomics”, in which no longer human text helps machines build up knowledge bases, but knowledge bases help humans understand their society.

1. INTRODUCTION

Newspapers are testimonials of history. Day by day, news articles record the events of the moment – for months, years, and decades. The same is true for books, and increasingly also for social media such as online forums, online communities, and blogs. By looking at the sequence of articles over time, one can discover the trends, events, and patterns that mark society and history. This includes, e.g., the emancipation of women, the globalization of markets, or the fact that upheavals often lead to elections or civil wars. Several projects have taken to mining these trends. The Culturomics project [27], e.g., mined trends from the Google Book Corpus. We can also use the textual sources to extrapolate these trends to the future. Twitter data has been used to make predictions about election results, book sales, or consumer behavior. However, all of these analyses were mostly restricted to the appearance of keywords. The analysis of the role of genders in [27], for instance, was limited to comparing the frequency of the word “man” to the frequency of the word “woman” over time. It could not find out which men and women were actually gaining importance, or in which professions. This brings us to the general problem of such previous analyses: They are mostly limited to counting the occurrences of words. So far, no automated approach can actually bring deep insight into the meaning of news articles over time. Meaning, however, is the key for understanding roles of politicians, interactions between people, or reasons for conflict. For example, if a sentence reads “Lydia Taft cast her vote in 1756”, then this sentence gets its historic value only if we know that Lydia Taft was a woman, and that she lived in the United States, and that women’s suffrage was not established there until 1920. All of this information is lost if we count just words.

Figure 1: Proportion of women among the people mentioned in Le Monde over time, in general and among politicians

We believe that this barrier will soon be broken, because we now have large commonsense knowledge bases (KBs) at our disposal: YAGO [37], NELL [5], TextRunner [4], DBpedia [3], and Freebase (http://freebase.com). These KBs contain knowledge about millions of people, places, organizations, and events. The creation of these KBs is an ongoing endeavor. However, with this vision paper, we take a step ahead of these current issues in research, and look at what can already be achieved with these KBs: If their knowledge is combined with the news articles, it can breathe life into what is otherwise just a sequence of words for a machine. Some news organisations already participate in the effort of annotating textual data with entities from KBs. Once people, events, and locations have been identified in the text, they can be unfolded with the knowledge from the KB. With this combination, we can identify not just the word “woman”, but actually mentions of people of whom the KB knows that they are female. Figure 1 shows a proof of concept that we conducted on YAGO and the French newspaper Le Monde [15]. It looks at all occurrences of people in the articles, and plots the proportion of women both in general and among politicians. Women are mentioned more frequently over time, but the ratio is smaller among politicians. Such a detailed analysis is possible only through the combination of textual data and semantic knowledge.
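
As a rough illustration of the kind of analysis behind Figure 1 (a sketch only, not the authors' actual pipeline), the following Python snippet counts, per year, the share of entity mentions that a KB identifies as women, overall and restricted to politicians. The tiny in-memory KB, the mention list, and all names are invented for illustration.

    # Illustrative sketch: combine entity-linked mentions with KB facts.
    from collections import defaultdict

    # Hypothetical KB facts: entity -> set of (relation, value) pairs
    kb = {
        "Angela_Merkel": {("gender", "female"), ("occupation", "politician")},
        "Nicolas_Sarkozy": {("gender", "male"), ("occupation", "politician")},
        "Marie_Curie": {("gender", "female"), ("occupation", "scientist")},
    }

    # Hypothetical entity-linked mentions: (year, entity) extracted from articles
    mentions = [(2000, "Nicolas_Sarkozy"), (2000, "Marie_Curie"),
                (2010, "Angela_Merkel"), (2010, "Nicolas_Sarkozy")]

    def has(entity, relation, value):
        return (relation, value) in kb.get(entity, set())

    def female_share(mentions, occupation=None):
        # Fraction of person mentions per year referring to women,
        # optionally restricted to one occupation from the KB.
        totals, women = defaultdict(int), defaultdict(int)
        for year, entity in mentions:
            if occupation and not has(entity, "occupation", occupation):
                continue
            totals[year] += 1
            if has(entity, "gender", "female"):
                women[year] += 1
        return {y: women[y] / totals[y] for y in sorted(totals)}

    print(female_share(mentions))                # overall share of women per year
    print(female_share(mentions, "politician"))  # share among politicians only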

2. SEMANTIC CULTUROMICS

Semantic Culturomics is the large-scale analysis of text documents with the help of knowledge bases, with the goal of discovering, explaining, and predicting the trends and events in history and society.

Semantic Culturomics could for example answer questions such as: “In which countries are foreign products most prevalent?” (where the prevalence can be mined from the news, and the producer of a product, as well as its nationality, comes from the KB), “How long do celebrities usually take to marry?” (where co-occurrences of the celebrities can be found in blogs, and the date of marriage and profession comes from the KB), “What are the factors that lead to an armed conflict?” (where events come from newspapers, and economic and geographical background information comes from the KB), “Which species are likely to migrate due to global warming?” (where current sightings and environmental conditions come from textual sources, and biological information comes from the KB). None of these queries can be answered using only word-based analysis. The explanations that Semantic Culturomics aims at could take the form of logical rules such as “A politician who was involved in a scandal often resigns in the near future”. Such rules can explain particular past events by pointing to a general pattern of history with past instances. They can also be used to make predictions, and to deliver an explication as to why a certain prediction is made. Semantic Culturomics would turn around a long-standing paradigm: Up to now, all information extraction projects strive to distill computer-understandable knowledge from the textual data of the Web. Seen this way, human-produced text helps computers structure and understand this world. Semantic Culturomics would put that paradigm upside down: It is no longer human text that helps computers build up knowledge, but computer knowledge that helps us understand human text – and with it human history and society.
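
To make the hybrid nature of such queries concrete, here is a minimal, purely illustrative Python sketch for the celebrity-marriage question: the first co-occurrence dates would come from news text, the marriage dates from the KB, and the answer from joining the two. All names, dates, and data structures below are invented.

    from datetime import date

    # Hypothetical text-derived signal: first date a couple co-occurs in the news
    first_co_occurrence = {("Alice_Star", "Bob_Star"): date(2011, 3, 1)}

    # Hypothetical KB fact: the couple's marriage date
    kb_marriage_date = {("Alice_Star", "Bob_Star"): date(2013, 9, 15)}

    def avg_days_to_marriage():
        # Join the news-derived co-occurrence dates with the KB marriage dates.
        deltas = [(kb_marriage_date[c] - seen).days
                  for c, seen in first_co_occurrence.items()
                  if c in kb_marriage_date]
        return sum(deltas) / len(deltas) if deltas else None

    print(avg_days_to_marriage())  # average gap in days for the toy data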

3. STATE OF THE ART

Digital Humanities and Culturomics. The Digital Humanities make historical data digitally accessible in order to compare texts, visualize historic connections, and trace the spread of new concepts. The seminal paper in this area, [27], introduced the concept of “Culturomics” as the study of cultural trends through the quantitative analysis of digitized texts. This work was the first large-scale study of culture through digitized texts. Yet, as explained above, it remains bound to the words of the text. The work has since been advanced [20, 1], but still remains confined to counting occurrences and co-occurrences of words. Closer to our vision, the GDELT project [21] annotates news articles with entities and event types for deeper analysis. The focus is on the visualisation of trends. In contrast, Semantic Culturomics aims also at providing explanations for events, which become possible by the background knowledge from the KB.

Event prediction. A recent work [33] mined the New York Times corpus to predict future events. This work was the first that aimed at predicting (rather than modeling) events. Of particular interest is the ability to bound the time point of the predicted events. The authors make use of key phrases in the text, as well as semantic knowledge to some degree. A recent follow-up work [34] extended the analysis to Web queries. Another approach modeled causality of events by using background data from the Linked Open Data cloud [32]. These works were the first to address the prediction of events at large scale. [32] goes a long way towards the identification of events and causality. In a similar vein, Recorded Future, a company, has specialised in the detection and the prediction of events with the help of a KB [36]. However, these works built classifiers for predictions rather than explicit patterns in the form of logical rules that we aim at. Furthermore, Semantic Culturomics would model the interplay between text and semantic knowledge in a principled way, and thus unify the prediction of future events with the modeling of past trends.

Predictive analytics. Businesses and government agencies alike analyze data in order to predict people’s behavior. There is a business-oriented conference dedicated to these projects. Therefore, we believe that this endeavor should preferably be studied also in a public, academic space. Furthermore, predictive analytics is mostly centered on a specific task in a specific domain. A model that can predict sales of a certain product cannot be used to predict social unrest in unstable countries. Semantic Culturomics, in contrast, aims at a broader modeling of the combination of textual sources and knowledge bases.

Social Media Analysis. Recently, researchers have increasingly focused on social media to predict social trends and social movements. They have used Twitter data and blogs to predict crowd phenomena, including illnesses [18], box office sales, the stock market, consumer demand, book sales, consumer behavior, and public unrest (see, e.g., [16] and references therein). Other Web data has been used to predict the popularity of a news article [13] or to analyze elections [39]. These works have demonstrated the value of Twitter for event prediction. However, they always target a particular phenomenon. We believe that what is needed is a systematic and holistic study of textual data for both explanation of the past and prediction of the future.

Machine Reading. Several projects have looked into mining the Web at large scale for facts [5, 4, 28, 38]. Recent work has mined the usual order of events from a corpus [40], the precedence relationships between facts in a KB [41], and implicit correlations in a KB [19]. Several of these methods can be of use for Semantic Culturomics. However, they can only be an ingredient to the project, because Semantic Culturomics aims at mining explicit logical rules, together with a temporal dimension, from text and KBs.

Enabling Technologies. Our vision of Semantic Culturomics can build on techniques from entity recognition, event detection, rule mining, and information extraction. We detail next how these techniques would have to be advanced.

4. CHALLENGES

Mining text in combination with knowledge bases is no easy endeavor.

The key challenges would be as follows:

Modeling hybrid data. KBs contain knowledge about entities, facts, and sometimes logical axioms. Text, on the other hand, determines the importance of an entity, the cooccurrence of entities, the location of entities in time, the type of events in which an entity is involved, the topic of an entity, and the actions of entities. Thus, Semantic Culturomics has to operate on a hybrid space of textual data and semantic knowledge. KB information is usually represented in RDF. RDF, however, cannot model time, let alone textual content. Other approaches can represent hybrid data, but do not allow KB-style reasoning [22, 12, 44, 6].

Semantic Culturomics calls for a new data model, which can represent entities and their mentions, textual patterns between entities, the dimension of time, and out-of-KB entities. In analogy to an OLAP data cube, this data model could be called a “Semantic Cube”. It should support truly hybrid query operations such as: do a phrase matching to find all text parts that contain names of entities with a certain property; choose one out of several disambiguations for a mention; given a logical rule, remove all facts that match the antecedent, and replace them by the succedent; dice the cube so that the text contains all paraphrases of a relation name. The goal is to develop a query language that subsumes all types of analyses that can be of interest on hybrid data of text and semantic KBs in general.
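
As a very small sketch of what such a hybrid store and one of these query operations might look like (the dataclass layout and the query are assumptions for illustration, not a proposed standard), consider:

    from dataclasses import dataclass

    @dataclass
    class Mention:
        entity: str     # canonical entity id (or an out-of-KB id)
        surface: str    # the name as it appears in the text
        sentence: str   # textual context
        year: int       # time dimension

    # Toy KB facts as (subject, predicate, object) triples
    facts = {("Angela_Merkel", "type", "politician"),
             ("Le_Monde", "type", "newspaper")}

    mentions = [Mention("Angela_Merkel", "Merkel",
                        "Merkel met union leaders on Tuesday.", 2014)]

    def sentences_mentioning(type_value, facts, mentions):
        # Hybrid query: sentences containing an entity whose KB type matches.
        wanted = {s for (s, p, o) in facts if p == "type" and o == type_value}
        return [m.sentence for m in mentions if m.entity in wanted]

    print(sentences_mentioning("politician", facts, mentions))

The point of the sketch is that the query touches both sides at once: the KB supplies the property (here, the type "politician"), while the mention table supplies the sentence, the surface form, and the time dimension.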

Identify events and entities. Given, for example, a history of the newspaper articles of a certain region, we want to be able to predict the crime rate, voting patterns, or the rise of a certain person to political prominence. In order to mine trends from a given text corpus, we have to develop methods that can load the textual data (jointly with the structured knowledge) into a Semantic Cube. This requires first and foremost the identification of entities and events in the textual corpora.

There is a large body of prior work on information extraction, and on event mining in news articles [7, 24, 43]. However, most of this work is non-ontological: It is not designed to connect the events to types of events and to entities of the KB. Several works have addressed the problem of mapping entity mentions to known entities in the KB (e.g., [14, 26]). However, these works can deal only with entities that are known to the KB. The challenge remains to handle new entities with their different names. For example, if Lady Gaga is not in the KB and is mentioned in the text, we want to create a new entity Lady Gaga. However, if we later find Stefani Germanotta in the text, then we do not want to introduce a new entity, but rather record this mention as an occurrence of Lady Gaga with a different name.
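
A minimal sketch of the alias-aware linking behaviour described above might look as follows; the aliases_of oracle stands in for whatever name dictionary or coreference component a real system would use, and all names are illustrative.

    # Known KB names resolve to KB entities; unknown names create a new
    # out-of-KB entity; a previously registered alias reuses that entity.
    kb_names = {"Barack Obama": "Barack_Obama"}   # name -> KB entity id
    out_of_kb = {}                                # name/alias -> new entity id

    def resolve(name, aliases_of=lambda n: {n}):
        # aliases_of is a hypothetical alias oracle; by default a name
        # is only an alias of itself.
        if name in kb_names:
            return kb_names[name]
        for alias in aliases_of(name):
            if alias in out_of_kb:
                out_of_kb[name] = out_of_kb[alias]   # remember the new name
                return out_of_kb[alias]
        out_of_kb[name] = "new:" + name.replace(" ", "_")
        return out_of_kb[name]

    aliases = lambda n: {n, "Lady Gaga"} if n == "Stefani Germanotta" else {n}
    print(resolve("Lady Gaga", aliases))            # creates new:Lady_Gaga
    print(resolve("Stefani Germanotta", aliases))   # reuses new:Lady_Gaga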

Empower rule mining. The goal of Semantic Culturomics is not only to mine trends, but also to explain them. These explanations will take the form of logical rules, weighted with confidence and support measures. Rule mining, or inductive logic programming, has been studied in a variety of contexts [11, 23, 29, 10, 8, 31, 17]. Yet, for Semantic Culturomics we envision rules that cannot be mined with current approaches.
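
For reference, the standard support and confidence measures for a Horn rule can be computed as in the following toy Python sketch (the rule and facts are invented; real systems must also cope with the open-world issues discussed below):

    # Toy rule: politician(x) AND involvedInScandal(x)  =>  resigned(x)
    facts = {("politician", "A"), ("politician", "B"), ("politician", "C"),
             ("involvedInScandal", "A"), ("involvedInScandal", "B"),
             ("resigned", "A")}

    def holds(pred, x):
        return (pred, x) in facts

    body = lambda x: holds("politician", x) and holds("involvedInScandal", x)
    head = lambda x: holds("resigned", x)

    entities = {x for (_, x) in facts}
    support = sum(1 for x in entities if body(x) and head(x))  # rule instances
    body_count = sum(1 for x in entities if body(x))
    confidence = support / body_count if body_count else 0.0

    print(support, confidence)   # 1 instance, confidence 0.5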

We would like to mine numerical rules such as “Mathematicians publish their most remarkable works before their 36th birthday”, or “The spread between the imports and the exports of a country correlates with its current account deficit”. Previous work on numeric rule mining [31, 25] was restricted to learning intervals for numeric variables. Other approaches can learn a function [17, 9], but have been tested only on comparatively small KBs (less than 1000 entities) – far short of the millions of entities that we aim at.

We also aim to mine temporal rules such as “An election is followed by the inauguration of a president”. These should also predict the time of validity of literals. Early work in this direction [30] has so far been tried only on toy examples.

Another challenge is to mine rules with existential variables, such as “People usually have a male father and a female mother”. Such rules have to allow several literals in the succedent, meaning that Horn rule mining approaches and concept learning approaches become inapplicable. Statistical schema induction [42] can provide inspiration, but has not addressed existential rule learning in general.

We would also need rules with negation, such as “People marry only if they are not yet married”. Such rules have been studied [31], but not under the Open World Assumption. In this setting, learning rules with negation risks learning the patterns of incompleteness in the KB rather than negative correlations in reality. Furthermore, there exist many more statements outside the KB than inside the KB, meaning that we risk mining a large number of irrelevant negative statements.

Finally, we want to mine rules that take into account the textual features that the hybrid space brings. These are features such as the importance of an entity or the textual context in which an entity (or a pair of entities) appears. [35] mines rules on textual phrases, but does not take into account logical constraints from the KB. If we succeed in mining rules that take textual features into account, the reward will be highly attractive: we will finally be able to explain why a certain event happened, by giving the patterns that have led to this type of event in the past.

Privacy. Predicting missing facts also means that some facts will no longer be private. For instance, consider a rule that can predict the salary of a person given the diploma, the personal address, and the employment sector. Smart social applications could warn the user when she discloses information that, together with already disclosed information, allows predicting private data. The intuition is that automatic rule mining could reveal surprising rules that humans may not directly see or may ignore, as shown in [2].
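A minimal sketch of such a warning mechanism, assuming the rule bodies have already been mined, could look as follows (the rule and attribute names are hypothetical):

# Each mined rule: a set of body attributes that together predict a sensitive attribute.
RULES = [
    ({"diploma", "address", "employment_sector"}, "salary"),
]

def disclosure_warnings(already_disclosed, about_to_disclose):
    before = set(already_disclosed)
    after = before | {about_to_disclose}
    # Warn if the new disclosure completes the body of a rule that was not satisfied before.
    return [head for body, head in RULES if body <= after and not body <= before]

print(disclosure_warnings({"diploma", "address"}, "employment_sector"))   # ['salary']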

5. CONCLUSION

In this vision paper, we have outlined the idea of Semantic Culturomics, a paradigm that uses semantic knowledge bases in order to give meaning to textual corpora such as news and social media. This idea is not without challenges, because it requires linking textual corpora to semantic knowledge, as well as the ability to mine a hybrid data model for trends and logical rules. If Semantic Culturomics succeeds, however, it would add an interesting twist to the digital humanities: semantics. Semantics turns the texts into rich and deep sources of knowledge, exposing nuances that today’s analyses are still blind to. This would be of great use not just for historians and linguists, but also for journalists, sociologists, public opinion analysts, and political scientists. They could, e.g., search for mentions of politicians with certain properties, for links between businessmen and judges, or for trends in society and culture, conditioned by the age of the participants, geographic location, or socio-economic indicators of the country. Semantic Culturomics would bring a paradigm shift, in which human text is no longer at the service of knowledge bases, but knowledge bases are at the service of human understanding.

6. REFERENCES

[1] O. Ali, I. N. Flaounas, T. D. Bie, N. Mosdell, J. Lewis, and N. Cristianini. Automating news content analysis: An application to gender bias and readability. In WAPA, 2010.

[2] N. Anciaux, B. Nguyen, and M. Vazirgiannis. Limiting data collection in application forms: A real-case application of a founding privacy principle. In PST, 2012.
[3] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. G. Ives. DBpedia: A Nucleus for a Web of Open Data. In ISWC, 2007.
[4] M. Banko, M. J. Cafarella, S. Soderland, M. Broadhead, and O. Etzioni. Open Information Extraction from the Web. In IJCAI, 2007.
[5] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. H. Jr., and T. M. Mitchell. Toward an architecture for never-ending language learning. In AAAI, 2010.
[6] D. Colazzo, F. Goasdoué, I. Manolescu, and A. Roatis. RDF analytics: Lenses over semantic graphs. In WWW, 2014.
[7] A. Das Sarma, A. Jain, and C. Yu. Dynamic relationship and event discovery. In WSDM, 2011.
[8] L. Dehaspe and H. Toivonen. Discovery of relational association rules. In Relational Data Mining. 2000.
[9] N. Fanizzi, C. d’Amato, and F. Esposito. Towards numeric prediction on owl knowledge bases through terminological regression trees. In ICSC, 2012.
[10] L. Galárraga, C. Teflioudi, K. Hose, and F. M. Suchanek. Amie: association rule mining under incomplete evidence in ontological knowledge bases. In WWW, 2013.
[11] B. Goethals and J. Van den Bussche. Relational association rules: getting warmer. In Pattern Detection and Discovery. 2002.
[12] J. Han. Mining heterogeneous information networks by exploring the power of links. In ALT, 2009.
[13] E. Hensinger, I. Flaounas, and N. Cristianini. Modelling and predicting news popularity. Pattern Anal. Appl., 16(4), 2013.
[14] J. Hoffart, M. A. Yosef, I. Bordino, H. Fürstenau, M. Pinkal, M. Spaniol, B. Taneva, S. Thater, and G. Weikum. Robust disambiguation of named entities in text. In EMNLP, 2011.
[15] T. Huet, J. Biega, and F. M. Suchanek. Mining history with le monde. In AKBC, 2013.
[16] N. Kallus. Predicting crowd behavior with big public data. In WWW, 2014.
[17] A. Karalič and I. Bratko. First order regression. Machine Learning, 26(2-3), 1997.
[18] V. Lampos and N. Cristianini. Nowcasting events from the social web with statistical learning. ACM Trans. Intell. Syst. Technol., 3(4), Sept. 2012.
[19] N. Lao, T. Mitchell, and W. W. Cohen. Random walk inference and learning in a large scale knowledge base. In EMNLP, 2011.
[20] K. Leetaru. Culturomics 2.0: Forecasting large-scale human behavior using global news media tone in time and space. First Monday, 16(9), 2011.
[21] K. Leetaru and P. Schrodt. GDELT: Global data on events, language, and tone, 1979-2012. In International Studies Association Annual Conference, 2013.
[22] C. X. Lin, B. Ding, J. Han, F. Zhu, and B. Zhao. Text cube: Computing IR measures for multidimensional text database analysis. In ICDM, 2008.
[23] F. A. Lisi. Building rules on top of ontologies for the semantic web with inductive logic programming. Theory and Practice of Logic Programming, 8(3), 2008.
[24] W. Lu and D. Roth. Automatic event extraction with structured preference modeling. In ACL, 2012.
[25] A. Melo, M. Theobald, and J. Voelker. Correlation-based refinement of rules with numerical attributes. In FLAIRS, 2014.
[26] P. N. Mendes, M. Jakob, A. Garcia-Silva, and C. Bizer. DBpedia Spotlight: Shedding light on the web of documents. In ICSS, 2011.
[27] J.-B. Michel, Y. K. Shen, A. P. Aiden, A. Veres, M. K. Gray, The Google Books Team, J. P. Pickett, D. Hoiberg, D. Clancy, P. Norvig, J. Orwant, S. Pinker, M. A. Nowak, and E. L. Aiden. Quantitative analysis of culture using millions of digitized books. Science, 331(6014), 2011.
[28] N. Nakashole, M. Theobald, and G. Weikum. Scalable knowledge harvesting with high precision and high recall. In WSDM, 2011.
[29] V. Nebot and R. Berlanga. Finding association rules in semantic web data. Knowledge-Based Systems, 25(1), 2012.
[30] M. C. Nicoletti, F. O. S. de Sá Lisboa, and E. R. H. Jr. Automatic learning of temporal relations under the closed world assumption. Fundam. Inform., 124(1-2), 2013.
[31] J. R. Quinlan. Learning logical definitions from relations. Machine learning, 5(3), 1990.
[32] K. Radinsky, S. Davidovich, and S. Markovitch. Learning to predict from textual data. J. Artif. Intell. Res., 45, 2012.
[33] K. Radinsky and E. Horvitz. Mining the web to predict future events. In WSDM, 2013.
[34] K. Radinsky, K. M. Svore, S. T. Dumais, M. Shokouhi, J. Teevan, A. Bocharov, and E. Horvitz. Behavioral dynamics on the web: Learning, modeling, and prediction. ACM Trans. Inf. Syst., 31(3), 2013.
[35] S. Schoenmackers, O. Etzioni, D. S. Weld, and J. Davis. Learning first-order horn clauses from web text. In EMNLP, 2010.
[36] S. Truvé. Big Data for the Future: Unlocking the Predictive Power of the Web. Technical report, Recorded Future, 2011.
[37] F. M. Suchanek, G. Kasneci, and G. Weikum. YAGO: A core of semantic knowledge – unifying WordNet and Wikipedia. In WWW, 2007.
[38] F. M. Suchanek, M. Sozio, and G. Weikum. Sofie: a self-organizing framework for information extraction. In WWW, 2009.
[39] S. Sudhahar, T. Lansdall-Welfare, I. N. Flaounas, and N. Cristianini. Electionwatch: Detecting patterns in news coverage of us elections. In EACL, 2012.
[40] P. P. Talukdar, D. T. Wijaya, and T. M. Mitchell. Acquiring temporal constraints between relations. In CIKM, 2012.
[41] P. P. Talukdar, D. T. Wijaya, and T. M. Mitchell. Coupled temporal scoping of relational facts. In WSDM, 2012.
[42] J. Völker and M. Niepert. Statistical schema induction. In ESWC, 2011.
[43] D. Wang, T. Li, and M. Ogihara. Generating pictorial storylines via minimum-weight connected dominating set approximation in multi-view graphs. In AAAI, 2012.
[44] P. Zhao, X. Li, D. Xin, and J. Han. Graph cube: on warehousing and olap multidimensional networks. In SIGMOD, 2011.

Footnotes

1 http://developer.nytimes.com/docs/semantic_api

2 http://www.recordedfuture.com

3 http://www.forbes.com/sites/gregpetro/2013/06/13/what-retail-is-learning-from-the-nsa/

4 http://www.predictiveanalyticsworld.com/

###

This work is licensed under the Creative Commons Attribution NonCommercial-NoDerivs 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/. Obtain permission prior to any use beyond those covered by the license. Contact copyright holder by emailing info@vldb.org. Articles from this volume were invited to present their results at the 40th International Conference on Very Large Data Bases, September 1st – 5th 2014, Hangzhou, China. Proceedings of the VLDB Endowment, Vol. 7, No. 12 Copyright 2014 VLDB Endowment 2150-8097/14/08.


Filed under: Information operations Tagged: Semantic Culturomics

Hackers Use Old Lure on Web to Help Syrian Government

$
0
0

WASHINGTON — To the young Syrian rebel fighter, the Skype message in early December 2013 appeared to come from a woman in Lebanon, named Iman Almasri, interested in his cause. Her picture, in a small icon alongside her name, showed a fair-skinned 20-something in a black head covering, wearing sunglasses.

They chatted online for nearly two hours, seemingly united in their opposition to the rule of Bashar al-Assad, the Syrian leader still in power after a civil war that has taken more than 200,000 lives. Eventually saying she worked “in a programing company in Beirut,” the woman asked the fighter whether he was talking from his computer or his smartphone. He sent her a photo of himself and asked for another of her in return. She sent one immediately, apologizing that it was a few years old.

“Angel like,” he responded. “You drive me crazy.”

What the fighter did not know was that buried in the code of the second photo was a particularly potent piece of malware that copied files from his computer, including tactical battle plans and troves of information about him, his friends and fellow fighters. The woman was not a friendly chat partner, but a pro-Assad hacker — the photos all appear to have been plucked from the web.

The Syrian conflict has been marked by a very active, if only sporadically visible, cyberbattle that has engulfed all sides, one that is less dramatic than the barrel bombs, snipers and chemical weapons — but perhaps just as effective. The United States had deeply penetrated the web and phone systems in Syria a year before the Arab Spring uprisings spread throughout the country. And once it began, Mr. Assad’s digital warriors have been out in force, looking for any advantage that could keep him in power.

In this case, the fighter had fallen for the oldest scam on the Internet, one that helped Mr. Assad’s allies. The chat is drawn from a new study by the intelligence-gathering division of FireEye, a computer security firm, which has delved into the hidden corners of the Syrian conflict — one in which even a low-tech fighting force has figured out a way to use cyberespionage to its advantage. FireEye researchers found a collection of chats and documents while researching malware hidden in PDF documents, which are commonly used to share letters, books or other images. That quickly took them to the servers where the stolen data was stored.

Like the hackers who the United States says were working for North Korea when they attacked Sony Pictures in November, the assailants aiding Mr. Assad’s forces in this case took steps to hide their true identities.

The report says the pro-Assad hackers stole large caches of critical documents revealing the Syrian opposition’s strategy, tactical battle plans, supply requirements and data about the forces themselves — which could be used to track them down. But it is not evident how or whether this battlefield information was used.

“You’ve got a conflict with a lot of young, male fighters who keep their contacts and their operations on phones in their back pockets,” said one senior American intelligence official who spoke on the condition of anonymity to discuss espionage matters. “And it’s clear Assad’s forces have the capability to drain all that out.”

Mr. Assad was also the victim of cyberattacks, but of a far more advanced nature.

A National Security Agency document dated June 2010, written by the agency’s chief of “Access and Target Development,” describes how the shipment of “computer network devices (servers, routers, etc.) being delivered to our targets throughout the world are intercepted” by the agency. The document, published recently by Der Spiegel, the German magazine, came from the huge trove taken by Edward J. Snowden; this one shows a photograph of N.S.A. workers slicing open a box of equipment from Cisco Systems, a major manufacturer of network equipment.

After being opened, electronic “beacon implants” were placed in the circuitry. One set of devices was “bound for the Syrian Telecommunications Establishment to be used as part of their Internet backbone,” the document reveals. To the delight of American intelligence agencies, they soon discovered they had access to the country’s cellphone network — enabling American officials to figure out who was calling whom, and from where.

Such interceptions are still highly classified; the United States government has never discussed its access to the Assad communications network. But the FireEye report, which will be released on Monday, makes it clear that such “network exploitation” is now a routine part of even the most low-tech if brutal civil wars, and available to those operating on a shoestring budget.

And that is a new development. The theft of the rebel battle plans stands in contrast to the cybervandalism carried out in recent years by the Syrian Electronic Army, which American intelligence officials suspect is actually Iranian, and has conducted strikes against targets in the United States, including the website of The New York Times. But mostly these have been denial-of-service attacks, which are annoying but not potential game-changers on the battlefield.

Exactly who conducted the hacking on behalf of Mr. Assad’s forces remains a mystery, as does whether the stolen data was ever used by the Syrian military. One of the authors of the report, Nart Villeneuve, a threat intelligence analyst for the company, said that it was likely that the hackers were based in Lebanon — which would be the only true statement in the chat with the Syrian fighter. They used a computer server in Germany, where FireEye found many of their chats in unprotected directories. A handful of the targets of the Syrian operation were contacted in recent months by FireEye researchers. “They really didn’t understand what had happened,” Mr. Villeneuve said. “They didn’t know their computers and phones had been compromised.”

But if information was forwarded to Mr. Assad’s forces, it would have provided his troops or their allies with important intelligence and a critical battlefield advantage, according to analysts and Syrian military specialists.

“This activity, which takes place in the heat of a conflict, provides actionable military intelligence for an immediate battlefield advantage,” the FireEye report concluded. “It provides the type of insight that can thwart a vital supply route, reveal a planned ambush, and identify and track key individuals.”

By mid-2013, according to the information that FireEye recovered, 10 rebel groups fighting Mr. Assad’s regime were planning a major operation intended to reclaim from Syrian government forces a key portion of territory along a strategic north-south highway linking Damascus, the capital, with Jordan.

The plans called for retaking the town of Khirbet Ghazaleh, a strategic gateway to the major city of Daraa. In May 2013, Syrian troops had seized control of the town near the highway.

“The Assad regime’s biggest vulnerabilities over the last year have been in south Syria, so disrupting that operation would be key to the regime fending off an attack on Damascus from the south — the traditional route for invading armies,” said Andrew J. Tabler, a Syria specialist at the Washington Institute for Near East Policy. Mr. Tabler said he was not aware of the stolen information.

According to FireEye, which merged last year with the Mandiant Corporation, the company that has tracked Unit 61398, the Chinese Army’s hacking operation, the rebels shared photocopied battle plans, and in red ballpoint pen added defensive embankments, storing their plans electronically as pictures taken with their cellphones. They prepared for a battle involving 700 to 800 men, who were divided into groups to launch separate attacks, including an ambush. They used Google Earth to map their defensive lines and communicate grid coordinates.

They mapped locations for reserve fighters, staging areas and support personnel; settled on a field operations area; and planned supply routes for their forces, according to FireEye. Commanders received stern instructions not to make any “individual” decisions without approval from rebel superiors.

The battle details that the security service recovered are impressive. The rebels, who are not identified, would begin the attack with 120-millimeter mortar fire, followed by an assault against key Syrian Army locations. They drew up lists of men from each unit, with names, birth dates and other identifying information. But they stored them on their phones and laptops, and they were vulnerable to slightly customized versions of commercially available malware.

“It’s the democratization of intelligence,” said Laura Galante, a former Defense Intelligence Agency analyst who now works for FireEye and oversaw the Syria work. “We in the private sector can see some of this, and adversaries can steal it in a wholesale way and understand the full picture of an operation.”

And perhaps they can even stop an operation. The retaking of Khirbet Ghazaleh never materialized, Syria analysts say. It is unclear whether Syrian authorities thwarted the plot before it could be carried out, or if the rebels aborted the plan, perhaps suspecting the hacking or for some other reason.

Source: http://www.nytimes.com/2015/02/02/world/middleeast/hackers-use-old-web-lure-to-aid-assad.html?hp&action=click&pgtype=Homepage&module=first-column-region&region=top-news&WT.nav=top-news&_r=1 


Filed under: Information operations Tagged: hackers, Syria

Russia Disinformation Campaign Preps for Russian Ukraine Invasion

$
0
0

Following yesterday’s very phony “video” of masked men claiming to be “Ukrainian partisans in Russia” who threatened violent acts against Russian civilians, there is another report today warning of one or more “false flag” terrorist attacks to be staged in Russia, particularly in Rostov Province of Russia and regions adjoining Ukraine.

The report indicates that Russia’s “Life News”  has started a propaganda campaign to demonize Ukrainians and prepare the Russian public for such attacks. Some say Life News is creating a cover story for Russian special forces and the FSB.

This appears to be essentially a repeat of Putin’s 1999 scenario in which apartment buildings were blown up in Russia in order to justify Putin’s invasion of Chechnya.

This needs to be exposed now because the decimation of Russian terrorist forces by the Ukrainian army in southeastern Ukraine over the last two weeks threatens a collapse of Russian aggressive efforts in Ukraine, and manpower replenishment is desperately needed by the phony “pro-Russian republics.” This can only come about through a major Russian deployment of regular forces to Ukraine that can no longer be disguised as “local rebels” or “separatists.”

You are intelligent readers.  This could very well be propaganda from someone in Ukraine, but I just want you to be informed.

Stay informed, my friends.  Read with a discerning eye. Think.


(Translated using Google Translate)

Life News has begun the informational preparation for a terrorist attack: Rostov SOS!

February 2, 2015

Dear friends, readers, and bloggers, we ask for your help in warning the civilians of the Rostov Region (Russian Federation) that the Kremlin junta’s terrorists are preparing a bloody provocation. On February 1 the propaganda channel Life News began the informational preparation for a terrorist attack; read more here: “Life News is preparing the ground for new provocations: Casus Belli”.

We must warn the civilians of the Kuibyshev district and other border regions of the Russian Federation that terrorists want to turn their lives into small change and into a pretext for war with Ukraine.

But we can disrupt their plans!

As one reader suggested, “The tactic of informational preemption proved itself very well during the Maidan.”

We need a common effort to expose the terrorists and to save innocent people.

Today, tomorrow, and for the next few days, please cover this topic on social networks and in the media. Call your friends and relatives in Russia and warn them of the danger. The Russian government has repeatedly demonstrated that, to achieve its selfish ends, it is ready to blow up apartment buildings, gas people, and terrorize civilians. Their plans must be foiled. It is better to make the effort and know that you did everything you could to change the situation than to shrug it off and later learn that your indifference cost several people their lives, simply because you doubted your own will and power. Take action! We may yet be able to prevent the tragedy.

With faith in goodness.

Source: https://informnapalm.org/5456-life-news-nachal-ynformatsyonnuyu-podgotovku-terakta-rostov-sos


Filed under: Information operations, Propaganda, Russia, Ukraine Tagged: #RussiaLies

The Good, the Bad, and the Ugly of Public Opinion Polls

$
0
0

by Russell D. Renka
Professor of Political Science
Southeast Missouri State University
E-Mail:  rdrenka@semo.edu
February 22, 2010

Source: http://cstl-cla.semo.edu/rdrenka/Renka_papers/polls.htm 

° Polls v. Reports from Polls
° Sampling Error
° Good Polls
° Bad Polls
° Ugly Polls
° Conclusion
° Polling Links
° Notes
° References

Public opinion polls or surveys are everywhere today.  A nice sampling of professional surveyors is at Cornell Institute for Social and Economic Research (CISER), Public Opinion Surveys.  The Wikipedia Opinion poll site has history and methods of this emergent profession that was pioneered in America, and its Polling organizations lists some globally distributed polling organizations in other countries.  PollingReport.com compiles opinion poll results on a wide array of current American political and commercial topics.  USA Election Polls tracks the innumerable election-related polls in the election-rich American political system.  The National Council on Public Polls (NCPP) defines professional standards for and lists its members–but many polls online and off do not adhere to such standards.

Polls have become indispensable to finding out what people think and how they behave.  They pervade commercial and political life in America.  Poll results are constantly reported by national and local media to a skeptical public.  Seemingly everyone has been contacted by a pollster or someone posing as one.  There is no escape from the flood of information and disinformation from polls.  The internet has enhanced both the use and misuse of such polls.  Any student therefore should be able to reliably tell a good poll from a bad one.  Bad ones are distressingly commonplace on the web.  What is more, bad polls come in two forms. The more common one is the innocuous or unintended worthless poll.   But there is a far more malevolent form that I label “ugly” polls.  This is a manual for separating good polls from bad ones, and garden-variety bad from the truly ugly.

Polls v. Reports from Polls

Rule One in using website polls is to access the original source material.  The web is full of polls, and reports about polls.  They are not the same thing.  A polling or survey site must contain the actual content of the poll, specifically the questions that were asked of participants, the dates during which the poll was done, the number of participants, and the sampling error (see next section below).  Legitimate pollsters give you all that and more.  They also typically have a website page devoted to news reports based on their polls.  The page will include links for the parent website, including the specific site of the surveys being reported.  So anyone who wants to directly check the information to see if the report is accurate, may easily do so on the spot.

But once polls are published, advocate groups rapidly put them to their own uses.  Sometimes they do not show links to the source.  For instance, see Scenic America’s Opinion Polls:  Billboards are Ugly, Intrusive, Uninformative.  This is a typical advocate group site with a report based on several polls saying the American people consistently dislike highway billboards.  But the polls are not linked (although this group does cite them properly at the bottom of their file).  Therefore readers either hunt these down or must take this report’s word for it–and that is never a good idea in dealing with advocate groups!  Advocate groups have a bad habit of selectively reporting only the information that flatters their causes.  That should not be accepted at face value.  It’s best to draw no conclusion at all unless one can access the source information for oneself.

Some advocacy groups attack legitimate pollsters and polls by distorting their data and purposes.  A Christian conservative group with the name Fathers’ Manifesto produced The Criminal Gallup Organization to attack this well-known and reputable pollster for alleged misrepresentation of American public opinion on legalized abortion.  They said “The fact that almost half of their fellow citizens view the 40 million abortions which have been performed in this country as the direct result of an unpopular, immoral and unconstitutional act by their own government, as murder, is an important thing for Americans to know.  This is not a trivial point, yet the Gallup Organization took it upon itself to trivialize it by removing any and all references to these facts from their web site.” (Abortion Polls by the Criminal Gallup Organization)  That was followed with a link to the offender’s URL at www.gallup.com/poll/indicators/indabortion.asp, now a dead URL.  The truth is far simpler than conspiracy.  In late 2002, Gallup went private on the web with nearly all its regular issue sets, not excepting abortion.  One will only know this by escaping the confines of an advocate group’s narrow perspective and seeing the targeted poll and pollster’s own take on the issue.  And that can now readily be done, via the newer Gallup site’s search using “abortion polls.”  That produces an Abortion In Depth Review summary of numerous polls dating from 1975 at this URL: www.gallup.com/poll/9904/Public-Opinion-About-Abortion-InDepth-Review.aspx.   It demonstrates that 12 to 21 percent of Americans would prefer that abortions be “illegal in all circumstances”; but of course (for reasons cited below), the word “murder” is not employed.

The lesson is that any poll-based report must make the full source information available to its readership.  There is no excuse for not identifying the source or directly linking to the source.  If they do neither, it’s grounds for suspicion that they want you to take their word as the final authority.  That is not acceptable conduct in the world of polls and surveys.  I do not mean the report must literally attach links, although that’s never a bad idea.  But they must identify the source in such a way that anyone can then do a standard search and examine the original source material.1

Sampling Error

This elementary term must be properly understood before we go further.  “Sampling error” is a built-in and unavoidable feature of all proper polls.  The purpose of polls is not to get direct information about a sample alone.  It is to learn about the “mother set” of all those from which a poll’s sample is randomly drawn.2  This “population” consists of everyone or everything we wish to understand via our sample.  A particular population is defined by the questions we ask.  It might be “all flips of a given coin” or “all presidential election voters in the 2008 American general election” or “all batteries sold by our firm in calendar 2008″ or “all aerial evasions of predatory bats by moths” or “all deep-sky galaxies” or any number of other targets.  The object is not to poll the whole population, but rather to draw a sample from it and directly poll them for sake of authoring an “inference” or judgment about that population.  But all samples have an inherent property:  they fluctuate from one sample to the next one as each is drawn at random from the elements of the targeted population.  This natural property is “sampling error” or “margin of error” (Mystery Pollster:  What does the margin of error mean?).  These are not surveyor’s mistakes, but rather are inherent properties of all sampling (SESTAT’s Understanding Sampling Errors).  Cautions on reading and interpreting these are at PollingReport’s Sampling Error (Taylor 1998) or Robert Niles’ Margin of Error.

Sampling error tells us the possible distance of a population’s true attribute from a directly found sample attribute.  You cannot assume any sample’s measured properties (such as mean and standard deviation) are exactly like the population’s properties. The sweet part of sampling error is that we can easily calculate how large it is.  This is chiefly defined by the number of units in the sample.  You can use the DSS Research Sample Error Calculator to determine this (also: American Research Group’s Margin of Error Calculator).  Or first specify a desired accuracy level, and find out what size sample will achieve that (Creative Research Systems, Sample Size Calculator).

People tend to believe that samples must be a significantly large part of a population from which they’re drawn.  That is simply wrong.  Asher (2001) cites the fallacy of thinking that cooks testing the broth or blood testers taking red and white cells must take some appreciable portion of the whole.  Thank goodness, neither of those is necessary.  I like to cite coin flips, because the population of “all flips of a coin” is some undefined huge number, yet we routinely test coins for heads-to-tails fairness with a mere 500 to 1000 flips.  Our sampling error for 1000 flips is just 3.1% or 31 flips; so we predict that a fair coin produces 500 heads plus-or-minus 31.  We don’t mind the huge population size (all coin flips).  In fact, we prefer that it be very large, because that way our extraction of a sample has no appreciable effect on the leftover items from that population.3

The DSS Calculator also permits us to seek different levels of assurance about the sampling error.  We call this “confidence level” or “confidence interval.”  Customarily we accept a 95% level, meaning that our 1000 flips will go above or below the 3.1% only 1 time in every 20 samples.  We get 500 heads plus or minus 31 on 19 trials out of 20.  If that isn’t good enough for the cautious, they can select 99% instead, and that produces a larger sampling error (about 4.1%) for a more cautious inference about the mother set of flips; and now we predict 500 heads plus-or-minus 41.  Polls can be custom-fit for different accuracy demands.
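For readers who want to reproduce the arithmetic above, the standard margin-of-error formula for a proportion is z * sqrt(p*(1-p)/n), with the conventional z values of 1.96 at the 95% confidence level and 2.576 at 99%.  A short Python check, offered only as an illustration of that formula, matches the coin-flip numbers cited here:

from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    # Margin of error for a proportion p estimated from a random sample of size n.
    return z * sqrt(p * (1 - p) / n)

n = 1000
print(round(margin_of_error(n) * 100, 1))            # 3.1 percent at 95% confidence
print(round(margin_of_error(n, z=2.576) * 100, 1))   # 4.1 percent at 99% confidence
print(round(margin_of_error(n) * n))                 # about 31 flips either way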

Good Polls

All polls are surveys based on samples drawn from parent populations.  A poll’s purpose is to make accurate inferences about that population from what is directly learned about the sample through questions the sampled persons answer.  Knowledge of the sample is just a means to that end.  All good polls follow three indispensable standard requirements of scientific polling.

First, the questions must be worded in a clear and neutral fashion.  Avoid wording that will bias subjects toward or away from a particular point of view.  The object is to discover what respondents think, not to influence or alter it.  Along with clear wording is an appropriate set of options for the subject to choose.  It makes no sense to ask someone’s income level down to the dollar; just put in options that are sufficiently broad that most respondents can accurately place themselves.  A scan of good polls generally shows the “no opinion” option as well.  That’s to capture the commonplace fact that many people have no feelings or judgments one way or the other on the survey question.  If obliged to choose only from “True” or “False,” many who have no opinion will flip the coin and check off one of those options.  Thus a warning:  the business of fashioning truly effective survey questions is not easy.  Even the best polls have problems with fashioning their questions to avoid bias, confusion, and distortion (Asher 2001, 44-61).  Roper illustrates this via a confusing double negative causing a high proportion of respondents to opt for a Holocaust-denial reply, whereas a more clearly worded question showed that this radical view is held by a tiny proportion of respondents (Ladd 1994, Roper Holocaust Polls; Kagay 1994, Poll on Doubt Of Holocaust Is Corrected – The New York Times).  It usually takes a professional like Professor Ladd to parse out such distinctions in question wording among valid polls.  This is where determined issue advocates can be valuable, because many watch out for subtle differences in question wording that can alter responses to the advocate’s pet issue (for example, Mooney 2003, Polling for Intelligent Design).  But with some practice it’s still feasible for any alert reader to see the difference between properly worded questions and the rest.

The rest fall into two categories:  amateur work, and deliberate distortion.  A great many website polls exhibit amateurs at work, with highly imprecise or fuzzy wording of questions.  I’ll not bother to show these by links, since their numbers are legion all over the web.  The deliberate abusers are less common.  These are discussed later on under “ugly polls.”

Second, the subjects in the sample must be randomly selected (Research Methods Knowledge Base:  Random Selection & Assignment).  The term “random” does not mean haphazard or nonscientific.  Quite the opposite, it means every subject in a targeted or parent population (such as “all U.S. citizens who voted in the 1996 general election for president”) has the same chance of being sampled as any other.  Think of it like tumbling and pulling out a winning lottery number on a State of Kentucky television spot; they are publicly showing that winning Powerball numbers are selected fairly by showing that any of the numbers can emerge on each round of selection (Kentucky Lottery).  Fairness means every number has identical likelihood of being the winning number, no matter what players might believe about lucky or unlucky numbers.  So “random” means lacking a pattern (such as more heads than tails in coin flips, or more of one dice number than the other five on tumbled dice) by which someone can discover a bias and thereby predict a result (Random number generation – Wikipedia).  That’s a powerful property, as only random selection is truly “fair” (unbiased on which outcome occurs).  Any deviation from random produces biased selection, and that’s one of the hallmarks of bad polls.

Granted, national pollsters cannot literally select persons at random from all U.S. citizenry or residents, because no one has a comprehensive list of all names (despite what conspiracy theorists want to believe).  So they substitute a similar method, of random digit dialing or “RDD” based on telephone exchanges (Random digit dialing – Wikipedia).  Or the U.S. Census Bureau will do block sampling; that is, they will randomly select city or town blocks for direct contact of sample subjects (Data Access Tools from the Census Bureau; or direct to Accuracy of the Data 2004).  Emergent web polls do the same from their mother population of potential subjects.  These honor the principle of pure random selection by coming as close to that method as available information allows.

It is not perfect stuff.  Green and Gerber have long argued that there are better methods than RDD for pre-election polling (Green and Gerber 2002).  There are also serious issues among telephone pollsters over household reliance on cell phones only, as that is disproportionately true of younger households which may therefore be excluded using landline RDD procedures (Blumenthal 2007, Mystery Pollster:  Cell Phones and Political Surveys: Part I, 3 July 2007; Part II, 13 July 2007).  That problem is being handled in a manner resembling block sampling to approximate a true random sample (Asher 2005, 74-77; Pew Research Center – Keeter, Dimock and Christian 2008a, The Impact Of “Cell-Onlys” On Public Opinion Polling:  Ways of Coping with a Growing Population Segment, 31 January 2008; Keeter 2008, Latest Findings on Cell Phones and Polling, 23 May 2008; Keeter, Dimock and Christian 2008b, Cell Phones and the 2008 Vote:  An Update, 23 September 2008).  But this does not change the underlying principle of seeking a random sample.4

Third, the survey or poll must be sufficiently large that the built-in sampling error is reasonably small.  Sampling error is the natural variation that occurs from taking samples.  We don’t expect a sample of 500 flips of a coin will produce exactly the same heads/tails distribution as a second sample of 500.  But the larger the samples are, the less the natural variation from one to another.  Common experience tells us this–or it should.  A sample of newborn babies listed in large city birth registers will show approximately (but not exactly) the same proportion of boys and girls in each city, or in one city each time the register is revisited; but in small towns there is large variation in boy-to-girl ratios.  Generally, we do not want sampling error to be larger than about 5 percent.  That requires about 400 or more subjects, without subdivisions among groups within the sample.  If you divide the sample evenly into male and female subgroups, then you naturally get larger sampling errors for each 200-person subgroup.  Ken Blake’s guide entitled “The Ten Commandments of Polling” provides a step-by-step guide to calculate sampling errors via calculator for any given sample size; and you can go online to the DSS Calculator for that.  The sound theoretical grounding is in any standard book on statistics and probability, in manuals with scientific calculators, and in several websites listed below.
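Solving the same margin-of-error formula for the sample size gives n = z^2 * p*(1-p) / e^2 in the worst case of p = 0.5, which is where the “about 400 subjects for 5 percent” rule of thumb comes from.  A quick illustrative check in Python:

from math import ceil

def required_sample_size(e, p=0.5, z=1.96):
    # Smallest sample size whose margin of error does not exceed e (worst case p = 0.5).
    return ceil(z * z * p * (1 - p) / (e * e))

print(required_sample_size(0.05))    # 385, roughly the "about 400" rule of thumb
print(required_sample_size(0.031))   # about 1000, matching the coin-flip example above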

Remember another rule about sample size.  It does no harm that the sample is extremely small in number compared to the target population.  Consider coin flips as a sample designed to test the inherent fairness of a coin.  There is virtually no limit to number of possible flips of a coin.  You want to know if the coin is fair, meaning that half of all flips will be heads and half tails.  So “all flips” is the population you want to know about.  “Actual flips” are the sample.  You can never know what “all flips” looks like, but that’s OK.  The key to accurate judgment of “all flips” is to make sure you have a large enough sample of actual flips.  Asher (2005, 78) gives a similar example of taking a small proportion of one’s billions of red blood cells to take its profile, or a chef sampling soup before serving it.  Statisticians refer to a law of large numbers, and it’s explained at many sites like The Why Files, Obey the Law.

If all three of these criteria are met, you have reasonable assurance the poll is good.  How can you know this?  Expect all poll reports to honor the journalists’ rule.  They must cite all the information necessary to let you confirm the three conditions.  Even a brief news report can cite the method of selection (such as “nationwide telephone sample obtained by random digit dialing, on October 5-6, 1996”), the sample size and sampling error (1000 subjects, with sampling error of plus-or-minus 3.1 percent at a 95% confidence level), and the questions used in that survey.  For more extended print articles there are fuller guidelines (Gawiser and Witt undated, 20 Questions A Journalist Should Ask About Poll Results, Third Edition).  Still, most reports of poll results will not reproduce the poll questions in full for you to see; too little space in papers, too little time on television or radio.  So they must provide a link to the original source for the full set of questions.  With websites now universally available, no pollster can plausibly shirk that responsibility.  Neither can any reputable news organization.

The New York Times offers a brief review on how modern polling has expanded and been revised, at Michael Kagay’s Poll Watch Looking Back on 25 Years of Changes in Polling.  I recommend this for those seeking more detail.

In conclusion:  all three criteria must be met for a poll to be judged “good.”  The burden of proof is on the pollster or those who use and report from it.  In turn, students shouldn’t report poll information only from a secondary source.  Instead, a web news source that summarizes the relevant information should also be linked to the primary source.  You should also check the primary source to ascertain that the information was correctly interpreted by the reporter.

Bad Polls

So when is a poll not good?  Simply enough, it only has to violate any of the three rules specified above.  The one emphasized here is violation of random selection–because that’s the prevalent website violation.

The web is filled with sites inviting you to participate by posting your opinion.  This amounts to creation of samples via self-selection.  That trashes the principle of random selection, where everyone in a target population has the same likelihood of being in the sample.  A proper medical experiment never permits someone to choose whether to receive a medication rather than the placebo.  No; subjects are randomly placed in either the “experimental group” (gets the treatment) or the “control group” (gets the sugar-coated placebo).  If you can call or e-mail yourself into a sample, why would you believe the sample was randomly selected from the population?  It won’t be.  It consists of persons interested enough or perhaps abusive enough to want their voices heard.  Participation feels good, but it is not random selection from the parent population.

Next, remember this:  any self-selected sample is basically worthless as a source of information about the population beyond itself.  This is the single main reason for the famous failure of the Literary Digest election poll in 1936, where the Digest sampled 2.27 million owners of telephones and automobiles to decide that Franklin Roosevelt would lose the election to Republican Alfred Landon, who’d win 57 percent of the national popular vote (History Matters, Landon in a Landslide: The Poll That Changed Polling).  Landon didn’t!  Dave Leip’s Atlas of Presidential Elections, 1936 Presidential Election Results, displays the 36.54% won by Landon below the 60.80% of national popular vote won by the incumbent Roosevelt.  This even though the Digest had affirmed of its straw poll:  “The Poll represents the most extensive straw ballot in the field–the most experienced in view of its twenty-five years of perfecting–the most unbiased in view of its prestige–a Poll that has always previously been correct.” (Landon in a Landslide)  Yeah, but a lot of 1936 depression-era Roosevelt voters didn’t own telephones or automobiles so never received the opportunity to voice their opinions.

So if they are worthless, why are they so commonplace?  Self-selected polls are highly useful for certain legitimate but limited purposes.  Sellers always want to know more about their customers; but such customer surveys are necessarily self-selected rather than selection as a random sample.  Suppose you are an internet seller such as Amazon.  You try for a profile of customers by inviting them to give you some feedback.  This helps you discover new things about them, gives tips on who else you’d want to reach, alerts you to trouble spots in advance, and lets you decide how to promote new products.  But none of this is to discover the nature of the parent population.  It’s to know more about those customers who care enough to respond.  All such samples are not random; they are biased via self-selection to include mostly the interested, the opinionated, the passionate, and the site-addicted.  All the rest are silent and therefore unknown.  So long as you understand this limitation, it is perfectly fine to invite the “roar of the crowd” from your customers.

Now suppose your self-selected sample is very large, and you cannot study all of it.  Then define that total sample as your population (called “all site visitors”), and seek a sample within it for intensive study.  But that takes random sampling from the population.  Inviting some of your site visitors to fill out surveys won’t tell you about “all site visitors.”  Instead you get the relative few who bother to reply, and they are probably untypical of the rest.  So smart sellers who really want to know all their traffic seek to establish a full list of all customers–by posting cookies to their computers, by getting telephone numbers at checkout counters to produce comprehensive customer lists, or by telling you to go online to get a warranty validated whereupon you must show them an email address and telephone to get the job done.  Understand, though, that smart businesses do this to avoid hearing only from an untypical few of their customers.

The dangers of self-selection may seem obvious by now, yet flagrant violations of random selection have sometimes received polite and promotional treatment in the press.  Shere Hite has made a successful career writing on the habits and mores of modern women.  In 1987 she hit the headlines and made $3 million selling a book based upon a mail survey of 4500 American women derived from a baseline sample of 100,000 women drawn from lists compiled in various women’s magazines.  The highlight was a report that well over half her sample of women married five or more years were having one or more extramarital affairs.  That got Hite oceans of free publicity and celebrity tours.  Yet the Hite 4500 were a heavily self-selected sample who chose to respond to Hite’s invitation to disclose sensitive matters of private and personal beliefs and behavior.  This outraged legitimate surveyors, who know that any “response rate” (percentage of those surveyed who submit to the questions) below 60 percent invites distortion of the sample in favor of the vocal and opinionated few.  A response rate of 4.5 percent clearly will not do.

That low response-rate samples invite bias is well known from congressional offices inviting citizen responses to franked mail inquiries.  It mainly draws responses from those who have some knowledge and interest in public affairs and who feel favorably toward that Member of Congress.  In Hite’s case, most knew and cared little about her or her very strongly held opinions on feminism and man-woman relations.  But a few did.  Those divided into persons who liked and shared Hite’s basic views, and those who didn’t.  The friendlies were far more likely to fill out and mail back the survey.  So Hite got a biased sample of Hite supporters.  This is non-response bias:  her sample was stacked with angry and dissatisfied women who were much more likely than the 95.5 percent non-responders to have had affairs outside of marriage and to tell that (Singer in Rubenstein 1995, 133-136; T.W. Smith 1989, Sex Counts: A Methodological Critique of Hite’s “Women and Love”, pp. 537-547 with this conclusion:  “In the marketplace of scientific ideas, Hite’s work would be found in the curio shop of the bazaar of pop and pseudoscience.”).

It could be that those who do not share Hite’s views systematically select themselves out of her sample, while those sharing her views select themselves in.  Or it could be that her original sample was drawn in a way that violates random selection with respect to the questions about which she was inquiring.  Or some combination of these.  Whatever it is, we finish with a highly biased sample from which one cannot draw valid inferences on those questions about the population of all American women or even from her original 100,000 mail-list.  Low response rate is a well-known pitfall.  Alongside the Hite example, it is one of the many mistakes committed by the infamous Literary Digest polls (Squire 1988; Rubenstein 1995, 63-67).

Biased samples are not automatically shunned by marketers.  Sometimes they are a welcomed thing.  Members of Congress use their congressional franking privileges to conduct district mail surveys that are irredeemably flawed by self-selection of the samples (Stolarek, Rood and Taylor 1981).  Citizens who like that office holder are much more likely to respond to the query.  So are those with high interest in the subject matter.  Thus the sample leans heavily to those who like its sponsor and care about its questions. These queries produce a predictably biased set of responses favoring the point of view held by the politician.  This pleases most politicians, who are practiced in arts of self-promotion and recognize a favorable data source when they see one.  Typically these franked-letter survey questionnaires are followed by another franked report summarizing the results in a way that validates the Member’s policy program.

The most spectacular example of deliberate creation of a biased sample is associated with the annual voting culminating in May of 2001 through 2008 on American Idol.  American Idol FAQs explains how to vote once an Idol show is completed.  Voting by voice is done to toll-free numbers, but there’s also the option of text messaging.  The FAQ site says “if you vote using Cingular Wireless Text Messaging, standard Text Messaging fees will apply.”  The show is tremendously popular, and voting requires waiting in line, unless the text message option is used.  Cingular does not disallow repeat messaging, for the baldly obvious reason that it charges a fee per message.  Thus FAQ says “input the word VOTE into a new text message on your cell phone and send this message to the 4 digit short number assigned to your contestant of choice (such as 5701 for contestant 1).  Only send the word ‘VOTE’ to the 4 digit numbers you see on screen, you cannot send a text message to the toll-free numbers.”  That’s right, there are two separate procedures, one for toll free lines with slow one-at-a-time votes and then slow waits for another crack at it, another for fast repeat voting with fees to Cingular via text messaging.  That’s a positive invitation to creation of a highly biased sample.

Biased samples can also be dangerous to democratic standards of voting for public office.  The most important self-selected population in the political world is the voting citizenry in democratic elections.  Serious political elections are obliged to follow three strict standards of fairness:  each individual voter gets to vote only once, no voter’s ballot can be revealed or traced back to that person, and every vote that is cast gets counted as a cast vote in the appropriate jurisdictional locale.  Internet voting is heralded as a coming thing, but so far the experience with it is studded with instances of ballot tampering by creative hackers.  That tampering is a violation of the third condition, that cast votes are counted properly.  ElectionsOnline.us–Enabling Online Voting (URL: www.electionsonline.us/) assures us that it “makes possible secure and foolproof online voting for your business or organization,” but hackers have demonstrated that security is a relative term.  AP Wire 06-21-2003 UCR student arrested for allegedly trying to derail election cites a campus hacker who demonstrated in July 2003 how a student election for president could be altered through repeat voting.  That’s documented online by Sniggle.net: The Culture Jammer’s Encyclopedia, in their Election Jam section (URL:  sniggle.net/index.php > sniggle.net/election.php); and there are other sources as well.

Indeed this campus hacker is not an isolated case.  After 2000 the U.S. Department of Defense set forth a Federal Voting Assistance Program project called SERVE (Secure Electronic Registration and Voting Experiment).  This was an ambitious pilot plan to enable overseas military personnel from seven states to vote online in the 2004 national election (formerly available at Welcome to the SERVE home page at www.serveusa.gov/public/aca.aspx, but now gone – RDR, September 2005).  The ultimate goal was to permit the several million overseas voters to register in their counties and vote by secure on-line links.  But on 20 January 2004, four co-authors with specialties in computer security produced a potent indictment of the shortcomings of SERVE in terms of potential election fraud.  The prospects of hacking into the system to stack the ballot box are daunting barriers to a system that must also secure the individual’s anonymity.  Commercial security lacks any comparable requirement to ensure that the individual participant’s true identity remain unknown.5 (Jefferson et al., 2004, A Security Analysis of the Secure Electronic Registration and Voting Experiment (SERVE); also John Schwartz, Report Says Internet Voting System Is Too Insecure to Use, New York Times, 1/21/04)  As a result, the Pentagon wisely scrapped plans to use online voting for 2004, in part due to a State of Maryland demonstration of how easily a skilled hacker can break locks and alter voter identity paper trails (Report from a Review of the Voting System in The State of Maryland, 12 October 2006).  Yet even that damning evidence has not deterred one prominent manufacturer of on-line voting machines from nonetheless claiming their system is foolproof.6

So a ‘vigorous debate’ supposedly exists over how to insulate website voting against the danger of fraud and altered results via ballot box stuffing.  It clearly pays to be deeply skeptical of those who claim on-line voting is immune from dangers of getting a distorted sample.  That is an extreme form of the self-selection inherent to all elections, which count recorded votes rather than opinions from the whole electorate.

Bad polls on the web do not include election results but are nonetheless remarkably abundant.  These fall into two basic categories.  First are amateur bad polls.  The web is positively overflowing with these.  These show self-selection and other errors like small sample sizes or badly worded questions.  Some are simply interactive web pages created for fun and dialogue with others.  They often make no pretense of being legitimate surveys.  Some are self-evidently not serious.  They all tend to have certain common signs of amateurs at work.  For one, there are frequent wrongly spelled words.  For another, the questions are worded in vague or unclear ways that may be typical of everyday speech but are strictly not allowed at legitimate polling sites.  Sometimes these are humorous sites with gonzo questions about a variety of current news items, especially those of salacious or bizarre nature.  Others are accompanied with blogs that really amount to ranting licenses.  Amateur bad polls are very easy to recognize on a little inspection.  Their samples are running tallies determined by whoever has chosen to participate one or more times.  They lack any “sampling error” because they’re just running tallies of recorded responses, not samples taken at random from a population.

The second category of bad polls is the sophisticated bad poll.  These are more serious.  Self-selection along with a seller’s denial of the problem are their hallmarks.  They are professionally presented on the web, they do not have the obvious spelling and grammatical failures, and they customarily ask questions in a manner similar to legitimate polls.  These are not the work of amateurs.  Surface level recognition of their failings is much harder to recognize.  Shere Hite’s 1987 poll is a pre-web era example of this genre.  Its purveyor defended the poll vigorously and insisted upon its legitimacy as the real thing.  So do current offenders, as we shall see.

A website example of this practice is PulsePoll.com, which in spring 2000 ran four pre-primary polls for the New Hampshire, Arizona, Washington and Colorado presidential primaries (at PulsePoll Primary: Arizona Results).  They got results very similar to four scientific telephone-based polls taken on the eve of those events, and so concluded that “The PulsePoll has made Internet polling history” with a web poll matching telephone surveys in forecasting accuracy.  But this claim does not bear close examination.  Objections from professional survey sources came in immediately; some are captured in Jeff Mapes’ article of 12 April 2000, “Web Pollster Hopes To Win Credibility,” reproduced at PulsePoll.com News from The Oregonian.  Even if four spring 2000 primary polls did closely resemble legitimate survey results, that could be pure luck.  Remember that the Literary Digest also used faulty sampling methods yet correctly picked presidential winners in four straight elections from 1920 through 1932 (Rubenstein 1995, 63-67).  Then came the major mistake: in 1936 it predicted a fifth one–and got it spectacularly wrong.  Luck has a natural way of eventually running out.

PulsePoll still relies on a self-selected sample rather than a randomly selected one.  The only defense for this is that internet users of the site were somehow typical of the larger population of citizens, or more particularly, of citizens who vote in presidential primaries.  The problem with that is already known:  internet users were not a random sample of all citizens, all voters, or all presidential primary voters.  See “The Digital Divide,” the spring 2003 theme issue of IT&Society (URL: www.stanford.edu/group/siqss/itandsociety/v01i04.html), for evidence that digital users still differed from the non-digital population on factors such as wealth and political activism.  There is no doubt that digital users have been different, often in ways that especially attract both politicians and advertisers.  But even if the self-chosen PulsePoll sample somehow captured all the attributes of its parent population of digital users, those users still did not resemble the true target population of presidential primary voters.

Another sophisticated bad poll is run by former President Clinton’s ex-advisor Dick Morris at Vote.com (URL: www.vote.com).  Like PulsePoll, Vote.com is professionally presented in hopes of drawing a large enough audience to interest advertisers in subsidizing the site.  The issues are current and interesting.  The site promises all participants that their opinions and votes truly count, since those in power will hear about the poll results.  That might satisfy the millions who, legitimate polls show, are alienated from their own government.  But just like PulsePoll and its brethren, this site is irretrievably biased by its failure to do random sampling.  It does just the opposite, by inviting the opinionated to separate themselves from the silent and make their voices heard by those in power.

Internet polling is nonetheless here to stay.  By 2003 it had taken a quantum jump in publicity and material impact.  Even groups that know better will use it.  The Berkeley, California organization MoveOn.org ran an online vote on June 24-25, 2003 to determine which Democratic presidential candidate its membership preferred (MoveOn.org PAC at URL: www.moveonpac.org/moveonpac/).  The result was a strong plurality for outspoken anti-Iraq War candidate Howard Dean, with 43.87% of the 317,647 members who cast votes in that 48-hour period (Report on the 2003 MoveOn.org Political Action Primary).  Second place went to the nearly unknown long-shot Dennis Kucinich, with 23.93% of the vote.  Near the bottom, the well-known candidates Joseph Lieberman and Richard Gephardt got 1.92% and 2.44% respectively!  What can be concluded from this?  Self-selection of a highly left-wing participant voter pool is dramatically obvious.  A stark distinction between this group and the actual 2004 Democratic presidential primary voters emerged soon thereafter (Democratic Party presidential primaries, 2004).  But the appeal of doing such polls is evident.

Incidentally, MoveOn.org, an organization knowledgeable about survey methods, engaged the professional services of a telephone polling firm to verify that its 317,647 votes were not biased through “stacking the ballot box” by anyone voting more than once.  To check this, a randomly selected sample of 1011 people from those 317 thousand was surveyed directly by telephone, and the sample results proved remarkably close to those of the parent population.  That means that if any ballot stuffing occurred, its effect was minor or negligible, since the sample of 1011 was fundamentally similar in result to the population of 317,647 (Greenberg Quinlan Rosner Research, Inc. – gqr at former URL:  www.moveon.org/moveonpac/gqr.pdf).  Nonetheless, this safeguard had no effect on the original self-selected nature of the voting population of 317,647 web-surfing MoveOn.org participants compared to the target population of “all persons who will vote in 2004 Democratic presidential primaries and caucuses.”  They remained as distinctive and politically untypical a group as ever.
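
The logic of that telephone check can be shown with a short sketch.  This is a minimal illustration, not the GQR procedure: the full-tally shares are the figures cited above, while the subsample shares and the 95 percent margin-of-error threshold are placeholder assumptions made purely for demonstration.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Shares recorded in the full self-selected online vote (figures cited in the text).
full_tally = {"Dean": 0.4387, "Kucinich": 0.2393, "Gephardt": 0.0244, "Lieberman": 0.0192}

# Shares among the random telephone subsample of 1,011 voters.
# These are hypothetical placeholder numbers, not the actual GQR findings.
subsample = {"Dean": 0.45, "Kucinich": 0.22, "Gephardt": 0.03, "Lieberman": 0.02}

moe = margin_of_error(1011)   # roughly 3.1 percentage points
for candidate, full_share in full_tally.items():
    gap = abs(subsample[candidate] - full_share)
    verdict = "consistent with the tally" if gap <= moe else "possible ballot stuffing"
    print(f"{candidate}: tally {full_share:.1%}, subsample {subsample[candidate]:.1%}, "
          f"gap {gap:.1%}, margin {moe:.1%} -> {verdict}")
```

A large gap for any candidate would signal that repeat voting (or some other distortion) had inflated the tally; as noted above, the actual comparison showed no such gap.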

The conclusion is inescapable: no one to date has discovered a method of making web-based polls truly representative of a general parent population.  Amateur or sophisticated, these polls are not capable of accurately profiling a parent population beyond themselves.

Ugly Polls

This is a special category of bad poll, reserved for so-called pollsters who deliberately use loaded or unfairly worded questions under the disguise of conducting an objective survey.  Some of these are done by amateurs, but the most notorious are produced by political professionals.  These include the infamous push polls, which I treat first.  There are also polls built on subtler question biases that steer respondents toward a preconceived set of responses.  These fall into the category of hired gun polls.  I treat them second, but not least.

A push poll is a series of calls, masquerading as a public-opinion poll, in which the caller puts out negative information about a target candidate (Push poll – Wikipedia).  Sometimes called robo-calls, these automated calls from a supposed polling operation spew out derogatory information about a specific target.  They reach very large numbers of households to spread as much derogation as possible (Blumenthal 2006b, A Real Push Poll?, 8 September 2006).  They appear before presidential primary and general elections and in swing-district congressional or senatorial contests, always run by hard-to-trace, nominally independent organizations not directly linked to the beneficiary candidate or party.7  They have been quite common in recent elections, so someone in these campaigns clearly makes use of these shadow practitioners.  The operative most closely identified with their use is former Bush political strategist Karl Rove, suspected of directing the infamous February 2000 South Carolina accusatory telephone “polls” maligning Bush primary rival John McCain (Push poll – SourceWatch; Green 2007, The Rove Presidency; Moore and Slater 2006; NPR Karl Rove, ‘The Architect’ interview with Slater, 2006; Green 2004, Karl Rove in a Corner; Borger 2004, The Brains; Davis 2004, The anatomy of a smear campaign; Suskind 2003, Why are These Men Laughing?; DuBose 2001, Bush’s Hit Man; Snow 2000, The South Carolina Primary).  The exposé did not end the practice.  The 2006 midterm saw a spate of these (Drew 2006, New Telemarketing Ploy Steers Voters on Republican Path – New York Times, 12/6/06).  On the eve of the 3 January 2008 Iowa caucuses, Republican rivals of Mike Huckabee received such calls (Martin 2007, Apparent pro-Huckabee third-party group floods Iowa with negative calls – Jonathan Martin’s Blog – Politico.com, 12/3/07).  One may expect another round in fall 2008 before the 4 November election of the 44th president and the 111th Congress.

These dirty campaign practices masquerade as legitimate polls.  They are not inquiries into what respondents truly think.  Traugott and Lavrakas (2000, 165) define them as “a method of pseudo polling in which political propaganda is disseminated to naive respondents who have been tricked into believing they have been sampled for a poll that is sincerely interested in their opinions.  Instead, the push poll’s real purpose is to expose respondents to information … in order to influence how they will vote in the election.”  Asher (2001, 19) concurs:  “push polls are an election campaign tactic disguised as legitimate polling.”  Their contemporary expression through automated telephone calls led Mark Blumenthal of Mystery Pollster to call them “roboscam”:  an automated voice asks respondents to indicate a candidate preference, followed by a scathing denunciation of the intended target (Blumenthal 2006a, Mystery Pollster – RoboScam: Not Your Father’s Push Poll, 21 February 2006).  After a couple of attack-statements, it’s on to another number, hitting as many as possible for the sake of maximizing the damage to the intended political target.  That, of course, is not real polling at all, which explains why Blumenthal shuns the very term “push poll” for these.

Legitimate polling organizations universally condemn push polls.  The National Council on Public Polls has shunned them because they masquerade as legitimate queries yet are intended to sway rather than discover the opinions of respondents (NCPP 1995, A Press Warning from the National Council on Public Polls).  So has the American Association for Public Opinion Research, which recommends that the media never publish them or portray them as polls (AAPOR 2007, AAPOR Statement on Push Polls).  Push polls are propaganda akin to negative advertising.  They are conducted by professional political campaign organizations in a manner that detaches them from the candidate who benefits from the attacks on a rival (see Saletan 2000, Push Me, Poll You in Slate Magazine).  Some political interest groups also use them, often in hot-language campaigns that rely on scare tactics to raise money and membership.  No matter the source, they treat their subjects with contempt.

Hired gun polls are real polls, with limited size samples and numerous questions.8  They have been defined as: “Polls commissioned and carried out to promote a particular point of view.  Hired gun polls are associated with reckless disregard for objectivity.  A synonym for the term hired gun poll is the term advocacy poll—although the hired gun metaphor connotes a much sleazier and less professional image.  Selective reporting of poll results is one mark of hired gun polls.  Another is questions worded to reflect the positions of sponsors.  Both practices blatantly violate accepted ethical standards in the polling field.” (Young 1992, 85; The Polling Company TM)

Hired gun polls are not literally synonymous with advocacy polls, which are polls used by advocacy groups to promote their viewpoints.  Advocacy polls have become very widespread in American politics over the past two decades (Beck, Taylor, Stanger, and Rivlin 1997 at REP16, Issue Advocacy Advertising During the 1996 Campaign).  Issue advocacy is any communication intended to promote a particular policy or policy-based viewpoint.  Polls can be extremely helpful in doing this persuasively.  There is an important political market for legitimate poll-based issue information.  Advocacy groups often commission a poll and then selectively release the information that furthers their cause.  But usually they do not go further, into the realm of push polling.

But there are apparently some exceptions.  In 2002 the professional golf tour witnessed a political fight which ultimately yielded a hired gun poll that quite deliberately committed every violation described in The Polling Company TM definition.  Chairman and CEO Hootie Johnson of the Augusta National Golf Club chose an aggressive counter-campaign against Martha Burk of the National Council of Women’s Organizations, who sought to oblige the host club of the Masters Tournament to open its doors to women for the first time.  He hired The Polling Company and WomanTrend, a Washington, D.C. polling firm chaired by a prominent Republican woman named Kellyanne Fitzpatrick Conway (the polling company TM inc. – Kellyanne Conway).

The result was satisfying for CEO Johnson and unsatisfying for Burk.  One conservative advocacy group took the survey and ran with it (Center for Individual Freedom, Augusta National Golf Club Private Membership Policies under the title “Shoot-Out Between Hootie and the Blowhard Continues”).  Conway herself accompanied Johnson at a November 13, 2002 press conference to announce the poll results, which carried a sampling error of 3.5% based on an 800-person sample.  As portrayed on the official PGA website (Poll shows support for Augusta’s right to choose membership – PGATOUR.COM): “When asked whether — like single-sex colleges, the Junior League, sororities, fraternities and other similar same-sex organizations — ‘Augusta National Golf Club has the right to have members of one gender only,’ 74 percent of respondents agreed.  Asked whether Augusta National was ‘correct in its decision not to give into Martha Burk’s demand,’ 72 percent of the respondents agreed.”  That would appear to wrap the matter up.

But a look at the poll questions is instructive.  They are clearly aimed at pushing respondents throughout the survey.  We get this language in Question 21 (Augusta National poll Part III – PGATOUR.COM; also CFIF, cfif_poll_data):

21.  As you may or may not know, Augusta National Golf Club is a private golf club in Augusta, Georgia that does not receive any type of government funding. Each year, the Masters Tournament is held at Augusta National Golf Club. Currently, only men are members.

Martha Burk, the President of the National Council of Women’s Organizations, wrote a letter to the Augusta National Golf Club, saying that the Masters Golf Tournament should not be held at a club that does not have women members. She demanded that the Golf Club review its policy and change it immediately, in time for the tournament scheduled for April 2003.

Do you recall hearing a lot, some, only a little, or nothing about this?

Some 51 percent of the sample had heard nothing about this.  Normally that is a warning to pollsters not to proceed with further questions except with great caution.  But here, Question 22 proceeds immediately with this stem:

22.  And, as you may or may not know, the Chairman of Augusta National Golf Club, William Johnson, responded to Martha Burk by saying that membership to the club is something that is determined by members only, and they would not change their policies just because of Burk’s demand.

And, do you support or oppose the decision by Augusta National Golf Club to keep their membership policy as it is?

The result was net support by 62 percent, opposition by 30 percent, and a volunteered “do not know” from the remainder.  Then Question 23:

23. Although currently, there are no women members of the Augusta National Golf Club, the Golf Club does allow women to play on their golf course, and visit the course for the Masters Tournament.  In other words, women are welcome to visit the Club and often play golf there as guests.

Knowing this, would you say that you support or oppose the Augusta National Golf Club’s decision to keep their membership policy as it is?

That was a now-obvious push.  This time we get 60 percent net support for the status quo and 33 percent opposed.  Questions 24 and 25 then sally forth in this fashion:

Please tell me if you agree or disagree with the following statements:

24. “Martha Burk had the right to send a letter to Augusta National Golf Club about their membership policies, but if she really wanted to make some progress on behalf of women, she would have focused her time and resources on something else.” [and]

25. “Martha Burk did not really care if the Augusta National Golf Club began allowing women members, she was more concerned with attracting media attention for herself and her organization.”

The replies to this being satisfactory, the key item 26 comes in:

26. “The Augusta National Golf Club was correct in its decision not to give into Martha Burk’s demand.  They should review and change their policies on their own time, and in their own way.”

That got 72 percent to agree to not bending to this awful woman’s unreasonable demands against a selfless and public-minded private club that welcomes women golfers with open arms.  A little later, Question 29 kept up the drumbeat:

And please tell me if you agree or disagree with the following statement:

29. “Just like single-sex colleges, the Junior League, Boys and Girls Scouts, Texas Women’s Shooting Club, Sororities and Fraternities, and women business organizations, Augusta National Golf Club has the right to have members of one gender only.”

Lo, this produced a full 74 percent agreement with some form of defense for the existence of single-gender organizations in America.  That was the highest proportion on any of these leading items, and thus the single result Mr. Johnson seized upon for highlighting at his press conference, with this hired-gun poll’s principal at his side.

But a rebuke to the “Hootie Poll” soon came from within the golf community itself.  The November 14, 2002 issue of PGA Tour’s Golf Web carried a piece entitled Is the Augusta National poll misleading?.  Its verdict:  “The ‘Hootie Poll’ is a mishmash of loaded statements and biased, leading questions that are unworthy of Johnson or Augusta.  It is a poll that is slanted to get the answers they wanted, and in that it succeeded.”9

All such ugly polls commit gross violations of ethical standards of behavior.  They masquerade as legitimate objective surveys, but then launch into statements designed to prejudice respondents against a specific candidate or policy.  Alongside the Hootie Poll, the web has produced other direct examples for perusal.  The investigative left-wing magazine Mother Jones in 1996 published Tobacco Dole, by Sheila Kaplan.  The target turned out to be the former Attorney General of the State of Texas, Dan Morales.  He and other statewide office-holders are brought up routinely in the early questions, but from Question 24 onward the survey’s true purpose is revealed in a series of relentlessly negative statements about Morales alone.  The reason was that Attorney General Morales was at the time the point man for engaging the state in legal action against tobacco firms, and this alleged poll was a response designed to undermine that goal.

How does one detect these false jewels?  Not simply by looking at how the sample was selected.  The Hootie Poll obtained a proper sample in the proper way, thus avoiding the most common reason for a “bad poll” label.  It also did not launch an immediate attack on a target the way robo-calls do.  Instead, deliberately leading question wording is the surest sign of these ugly polls.  Watch for loaded or biased questions somewhere in the question sequence.  Most of these polls are done by telephone, since that is still the prevalent means of conducting legitimate surveys; so a respondent must wait out the innocuous queries before discovering the push component.  Once it does show up, ask yourself whether that question or statement would be permitted in a court of law without an objection from the subject’s counsel (or the judge).  If “objection!” followed by “sustained!” comes to mind, you are probably looking at an ugly poll.  These deserve no more of your time, and should be publicly given the contempt they so richly deserve.

An on-line discussion of this technique is the Leading Questions section of The Business Research Lab’s site (URL:  www.busreslab.com/tips/tip34.htm).  It cites a survey designed to move opinion toward a change in the location of a charitable-walk event whose professed objective was preventing teenage suicides.  The status quo was location A, but the survey’s sponsor obviously wanted to switch to location B.  So the question was worded this way: “We are considering changing from location A to location B this year.  Would you be willing to walk starting from location B, if it meant that hundreds more teenage suicides would be avoided?”  Now that’s an authentic leading question!

There are ways to get even with these moral offenders.  Herbert Asher, author of the polling text Polling and the Public:  What Every Citizen Should Know, now in its sixth edition, recommends that citizens who are push-polled alert their local media to that fact (Asher 2005, 140).  One might also hope for self-policing by political consultants via their organization, the American Association of Political Consultants.  However, a 1998 survey of political consultants showed that few believe their organization’s formal stance against push polls is an effective deterrent (Thurber and Dulio 1999, at Reprinted from the July 1999 Issue of Campaigns and Elections Magazine:  A Portrait of the Consulting Industry, p. 6).  Subsequent pre-election practice has confirmed that concern.  So citizen and media pressure is the only effective current avenue for curbing this practice–but that in turn requires wide public recognition of the ugly poll for what it truly is.  I offer this paper in pursuit of that worthy end.

Remember also that questions are only half the story.  The other half is the set of responses available to those polled.  Another “ugly” sign is that respondents face choices designed to ensure the pre-ordained response sought by the alleged pollster.  This is not done only by campaign organizations seeking to impeach a rival.  It is also done at web poll sites, sometimes in a rankly biased but amateur manner.  This is richly displayed at Opinion Center from Opinion Center.com.  One has to sample their fare to see how biased it truly is.  Here is one example that shortly followed the 2003 death of actress Katharine Hepburn:  “Everyone talks about how Katherine Hepburn was such a role model.  She wore pants, had a long affair with a married man, never had kids and never married.  Is this a good role model?”  The respondent is left to choose only a “yes” or “no” response to this rant.

Another entry from them concerned the 2003 scandal revelations at The New York Times:  “Top management at The New York Times, including Howell Raines and Gerald Boyd, resigned / were asked to leave / were fired.  These two individuals were known for their curmudgeon-style of management.  Is there actually a curmudgeon-style of management or is that really just management by intimidation and a bad attitude toward employees?”  The respondents could choose among the following three responses:

1) Curmudgeon-style management is a valid style.
2) Curmudgeon-style management is not a valid style.
3) Managers manage that way because they are insecure.

As one can see, subtlety is not a long suit at Opinion Center.com.  They borrow from the legitimacy of real polls and profess this as their motto:  “Surveys are intended to elicit honest information for academic and consumer-oriented market research & entertainment.”  Opinion Center falls alarmingly short of that.  But they do teach us how to recognize bias built straight into the questions and the available responses.  Professional push polls and hired gun polls are considerably more difficult to smell out–but with a little practice and a skeptical eye, any layperson can catch their drift too.

Many issue advocacy groups routinely engage in blatantly biased polling on their pet topics.  Some are organizations that address “hot button” issues such as abortion or gun control in the U.S.  A website poll from pro-gun Keep and Bear Arms (Keep and Bear Arms – Gun Owners Home Page – 2nd Amendment Supporters, reviewed 10/14/03) had this survey question and result:

How do you feel about the blatant abuses being foisted upon lawful, peaceable gun owners by crooked politicians and the biased media?
Angry 26.1% 336 votes
Frustrated 3.0% 39 votes
Sad 0.8% 10 votes
Afraid 1.1% 14 votes
Ready for whatever comes our way 3.8% 49 votes
Empowered that we will be victorious 2.9% 37 votes
Amused — they will never take our guns away 1.4% 18 votes
All of the above 60.9% 782 votes
Total Votes: 1285

This is no attempt to discover public opinion.  It validates the sponsor’s biases by using a self-selected audience which is urged along in its slant by questions that would never gain admittance to a courtroom trial transcript.

One effect of these slanted efforts is to invite skepticism about anyone who addresses hot-button political topics.  Students often mistakenly identify polls on controversial subjects as ugly polls.  This is patently incorrect.  It is perfectly legitimate for good polls to address the most touchy or delicate subjects.  In fact, those are often the things most worthwhile to know and understand.  Content addressing an explosive topic is not by itself grounds for sensing “ugly” in a poll.  I recommend studying how two or three legitimate polls address such hot-button topics as abortion or gun control.10  Once you see the nature of the wording, compare it to someone who is genuinely trying to sway you instead of learning what your opinions are.  With some practice and alertness, you won’t find it difficult to tell good from ugly.

Conclusion

Surveying of public opinion has become an important part of public life in democracies.  With just a little knowledge and practice, any student can master the distinctions among good, bad, and ugly public polls.  These polls are so pervasive in modern life that the need to do so is self-evident.  Getting fleeced is not a good thing!  No citizen should wander into the public informational arena without the equipment to guard against false and misleading sales pitches.  In that spirit, I offer this piece as a shield against the bad and ugly of the survey world at large.

Russell D. Renka

° Polling Links:
° Blogs and Commentary on Polls
° Data Sources
° Election Polls
° Embarrassments in polling history
° General Sources for Polls and Surveys
° How to interpret and judge polls
° New York Times Polling Standards
° Numeracy
° Numbskull abuses with numbers
° Push Polls
° Skepticism
° Specific polls (all “good” ones, of course)
° Statistical basis of polling
° Ugly Poll List

Polling Links:

Blogs and Commentary on polls:
° Mystery Pollster and Mystery Pollster – Pollsters by Mark Blumenthal is excellent for “Demystifying the Science and Art of Political Polling.”

Data Sources:
° NORC–The General Social Survey at the University of Chicago; very widely used data source, abundant documentation
° Center for Political Studies at the University of Michigan, Ann Arbor – gateway to several major sources
° Public Opinion Quarterly – journal devoted to methodology and results of public opinion surveys
° CESSDA HomePage from Council of European Social Science Data Archives
° National Network of State Polls – “the largest available collection of state-level data,” from the data archive of the Odum Institute for Research in Social Science
° RealClear Politics – Polls from John McIntyre and Tom Bevan

Election Polls:
° The Cook Political Report’s National Poll – biweekly election-year polling, from Associated Press and Ipsos Public Affairs (also see Ipsos News Center – Polls, Public Opinion, Research & News)
° American Research Group Inc. is a clearinghouse site
° PollingReport.com – Public Opinion Online – “An independent, nonpartisan resource on trends in American public opinion ”
° NAES 2004 Home Page (National Annenberg Election Survey ) from The Annenberg Public Policy Center of the University of Pennsylvania
° Presidential Trial Heats: A Daily Time Series and Documentation for time series extraction from James Stimson, University of North Carolina

Embarrassments in polling history – Literary Digest of 1936 and other royal screw-ups:
° The Seattle Times Political Classroom Political Primer Polls
° Oops!! (Yes, it’s the Digest again.  But there are others as well.)

General Sources for Polls and Surveys:
° Copernicus Election Watch: Public Opinion Polls
° National Council on Public Polls
° The archive of polls surveys — The Roper Center for Public Opinion Research
° Public Opinion from University of Michigan Documents Center
° Ruy Teixeira – Center for American Progress has weekly polling columns; parent site is Home – Center for American Progress with Ruy’s columns shown under heading of “Public Opinion Watch”
° SSLIS Public Opinion Guide from Yale University Social Science Libraries and Information Services (SSLIS)
° PRI – Links to Public Opinion Research
    ° Public Opinion Polling in Canada (BP-371E)
° Pew Forum on Religion & Public Life – American Religious Landscapes and Political Attitudes (in 2004)

How to interpret and judge polls:
° The Ten Commandments of Polling by Ken Blake, UNC-Chapel Hill
° 20 Questions A Journalist Should Ask from National Council on Public Polls (NCPP)
° A Press Warning on Push Polls from National Council on Public Polls
° Statement About Internet Polls from National Council on Public Polls – “there is a consensus that many web-based surveys are completely unreliable.  Indeed, to describe them as “polls” is to misuse that term.”
° NCPP Principles of Disclosure – a statement on ethics of proper polling
° Howard W. Odum Institute Poll Item Database Query Page has properly worded questions
° Answers to Questions We Often Hear from the Public from National Council on Public Polls
° If You’re Going to Poll by The Why Files; see its Polling Glossary (for layman’s explanation of standard terminology in polls), Serious Statistical Secrets page (good explanation of the basics), Obey the Law (law of large numbers, that is), Doing it wrong… (on subtle failures associated with not taking a truly random sample), Oops!! (screwing up royally), A Little Knowledge … on deliberative polling per James Fishkin of U of Texas (my alma mater) and his Goes a Long Way and Why Change (of opinions) on the National Issues Convention in Austin, TX.
° ABCNEWS.com ABCNEWS Polling Guide from Gary Langer, head of the ABC News Polling Unit

Numeracy – Competent interpretation of statistics and data-based information is essential for sifting out the good from the bad and the ugly.  Besides that, these sites contain rich collections of abuse and plain old bunkum that will delight and repel us in comparable proportions.  Here are some sites that promote literacy in handling numerical, statistical, and mathematical information:
° innumeracy.com – the home site; below are subcategories with extensive links to illustrative sources
° numeracy
° numeracy – Archives
° critical thinking
° Knowlogy

Numbskull abuses with numbers – Here’s a rich category, probably unlimited in potential number of examples.
° Best, Joel. 2001.  Telling the Truth About Damned Lies and Statistics, The Chronicle Review, May 4.

Push Polls:
° 2003_pushpollstatement from AAPOR – American Association for Public Opinion Research (AAPOR)
° NCPP – National Council on Public Polls – Press WARNING on push polls as political telemarketing
° Campaigns & Elections: What Are Push Polls, Anyway? by Karl G. Feld, May 2000.  Campaigns & Elections 21:62-63, 70
° Push Polls – a bibliographic source compilation
° CBS News: The Truth About Push Polls February 14, 2000 180605 by Kathleen Frankovic
° pushpolls (The Case Against Negative Push Polls), from Michael Sternberg; a compilation of cases, including the one immediately below
° Public Opinion Strategies Push Poll from Mother Jones; a reproduced full poll that looks legitimate but is actually designed to condemn a specific candidate; the serious abuse starts with Question no. 24.
    ° Push Me, Poll You By William Saletan in Slate (February 15, 2000) on South Carolina push polling by the Bush campaign against presidential primary rival John McCain in February 2000

Skepticism – This combination of attitude and education is a mighty valuable approach for any who want to avoid fraud.
° The Skeptic’s Refuge, including link to The Skeptic’s Dictionary: A Guide for the New Millennium

Specific polls (all “good” ones, of course):
° American attitudes Program on International Policy Attitudes from Program on International Policy Attitudes (PIPA) ; intro at [PIPA] About Us says “This website will report on US public opinion on a broad range of international policy issues, integrating all publicly available polling data.”
° Current Population Survey Main Page and CPS Overview – From the U.S. Bureau of the Census, the CPS has special benefit of exceptionally large samples that can be subdivided almost endlessly.
° Eurobarometer – Monitoring the Public Opinion in the European Union
° European Public Opinion – Homepage
° The Gallup Organization – Gallup has gone commercial, limiting web access to subscribers only.  But a few recent summations are present at any given time.
° Welcome to the Harris Poll Online and Harris Interactive – online polling
° Knowledge Networks® – The consumer information company for the 21st century – online polling
° The NES Guide to Public Opinion and Electoral Behavior – American National Election Studies has queries in 9 categories from 1948 through 2002.
° The New York Times/CBS News Poll – a useful archive
° On Politics – Washington Post Archive
° The Pew Research Center for The People & The Press
° PollingReport.com – Public Opinion Online – This is a compilation site organized by subject.
° Program on International Policy Attitudes (PIPA)
° Public Agenda Online – Public Opinion and Public Policy
° SurveyUSA® Methodology – They do simultaneous 50-state surveys for presidential election forecasting, comparison of state and regional presidential approval, and other cross-unit comparative purposes.
° Zogby International

Statistical basis of polling:
° Statistics Every Writer Should Know from RobertNiles.com
° Statistics Every Writer Should Know – Margin of Error
° Statistics Every Writer Should Know – Standard Deviation
° Statistics Every Writer Should Know – Mean
° Statistics Every Writer Should Know – Median
° Sample Sizes
° Calculate a Sample
° The Stats Board
° Statistical Assessment Service

Ugly Poll List (I am always looking for these characters.)
° Opinion Center is from Opinion Center.com; these characters take the cake for slanted and biased questions.

Notes

1 This practice has been noticeably violated in recent years by Investor’s Business Daily and its polling agency, the Technometrica Institute of Policy and Politics (IBD/TIPP).  TIPP does polls available only to IBD, which produces deeply biased reports based on TIPP surveys with no direct or full link to that surveyor’s questions or methods of acquiring its samples.  Their practices and results are of doubtful value, to say the least.  Nate Silver reviews a notorious recent IBD/TIPP poll of doctors thusly:  “that special pollster which is both biased and inept.” (Nate Silver of FiveThirtyEight:  Politics Done Right at ibdtipp-doctors-poll-is-not-trustworthy, 9/16/2009).

2 The HIP-Sampling Error site defines Sampling Error as “That part of the total estimation error of a parameter caused by the random nature of the sample” where a Random Sample is “A sample that is arrived at by selecting sample units such that each possible unit has a fixed and determinate probability of selection.”  In layman’s terms, this means every sample unit has the same likelihood of being included in the sample, yet there’s still error when making an inference about the population.  A self-selected sample that is not randomly selected from a population has no specification of sampling error–as the term is meaningless in that context.
A more technical online introduction with a bit of math from National Science Foundation is SESTAT’s Understanding Sampling Errors and What is the Margin of Error.
Good polls use computer-generated random numbering.  There’s evidence that human beings cannot create truly random numbers very well.  See Nate Silver, FiveThirtyEight Politics Done Right:  Strategic Vision Polls Exhibit Unusual Patterns, Possibly Indicating Fraud, 9/25/2009, where an Atlanta polling firm called Strategic Vision, LLC is suspected of claiming poll results without doing the polls.  The reported results are distributed in a markedly nonrandom way.
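
To see in miniature why the random-selection requirement matters, consider the following sketch.  The population, its 40/60 split of opinion, and the opt-in probabilities are all invented for illustration; only the contrast between the two selection methods is the point.

```python
import random

random.seed(1)  # computer-generated pseudo-random numbers, per the footnote's point

# Invented population of 100,000 people: 40% hold opinion "A", 60% hold "B",
# but "A" holders are assumed to be ten times more eager to opt in online.
population = ["A"] * 40_000 + ["B"] * 60_000
opt_in_rate = {"A": 0.10, "B": 0.01}

# Self-selected tally: each person's chance of inclusion depends on enthusiasm
# and is unknown to the "pollster", so no sampling error can be attached to it.
tally = [p for p in population if random.random() < opt_in_rate[p]]
print("self-selected share holding A:", round(tally.count("A") / len(tally), 3))

# Simple random sample: every person has the same known chance (n/N) of
# selection, which is what makes a sampling-error statement meaningful.
sample = random.sample(population, 1000)
print("random-sample share holding A:", round(sample.count("A") / len(sample), 3))
print("true population share holding A: 0.4")
```

The self-selected tally lands far above the true 40 percent share, while the random sample lands within a few points of it, which is exactly the distinction the definitions above are drawing.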

3 The 1995 NPTS Courseware Interpreting Estimates – Sampling Error site shows that sampling error follows naturally from drawing out part of a population to create a sample.  In the DSS Calculator, entering a population size of 1000 and also a sample size of 1000 produces a 0% sampling error, because the entire population went into that sample, so any second sample of 1000 cannot possibly vary from the first one.  That holds whenever the sample equals the whole finite population, even one as large as the 2004 presidential election turnout of about 122,000,000 voters.  But enter a population of 122,000,000 and a sample size of 1220, and you get a manageably small sampling error of about 3%, even though this sample consists of only 1 in every 100,000 voters-to-be from the population.
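
Those two calculator results can be reproduced with the standard margin-of-error formula plus a finite population correction.  This is a sketch under the assumption that the calculator uses a 95 percent confidence level and the worst-case proportion of 0.5; the exact constants it applies may differ slightly.

```python
import math

def sampling_error(population_size, sample_size, p=0.5, z=1.96):
    """Margin of error for a proportion with a finite population correction
    (assumed here to approximate what the DSS Calculator does)."""
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    return z * math.sqrt(p * (1 - p) / sample_size) * fpc

# Sample the entire population of 1000: the correction drives the error to zero.
print(f"{sampling_error(1_000, 1_000):.1%}")

# 1,220 sampled from about 122,000,000: roughly the 3% error cited above,
# even though only 1 in every 100,000 voters-to-be is in the sample.
print(f"{sampling_error(122_000_000, 1_220):.1%}")
```

With the plain formula z*sqrt(p(1-p)/n) and no correction, the second figure is essentially unchanged, which is why pollsters can ignore population size once it dwarfs the sample.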

4  Traditional response rates in randomly selected telephone exchange samples are declining, and those not called differ substantially from those called.  Cell-only households are younger, more affluent, more politically liberal, and less likely to be married or to own their home; so polling cannot be indifferent to their absence from landline samplings.  However, a May 2006 report cited “a minimal impact on the results” of surveys where cell-only users are excluded (Pew Charitable Trusts, The Cell Phone Challenge to Polling, 17 May 2006); and for now, RDD usage is still widely employed.
The response rate problem along with rapidly spreading standard web access in U.S. households has prompted the Stanford-based Polimetrix firm to abandon the telephone outright in favor of an internet-based Matrix database (Polimetrix, Scientific Sampling for Online Research).  They claim successes compared to traditional firms in California-based 2004 statewide referenda forecasts; and they may well be the portent of future polling methods from large on-line databases (Hill, Lo, Vavreck, and Zaller 2007).
However, their standard internet polling site (PollingPoint – A Nationwide Network of Millions of People Inspiring Public Debate) invites the usual website visitors’ indulgence in online polling, with results showing almost nothing about resultant sample size, sampling error, or comparability to other polls.  This is still self-selected sampling rather than random selection.  I believe the jury is out; there is yet no consumer-linked warrant to inspire confidence in the results obtained by this method.

5 Their warnings include this:  “The SERVE system might appear to work flawlessly in 2004, with no successful attacks detected. It is as unfortunate as it is inevitable that a seemingly successful voting experiment in a U.S. presidential election involving seven states would be viewed by most people as strong evidence that SERVE is a reliable, robust, and secure voting system. Such an outcome would encourage expansion of the program by FVAP in future elections, or the marketing of the same voting system by vendors to jurisdictions all over the United States, and other countries as well. However, the fact that no successful attack is detected does not mean that none occurred. Many attacks, especially if cleverly hidden, would be extremely difficult to detect, even in cases when they change the outcome of a major election. Furthermore, the lack of a successful attack in 2004 does not mean that successful attacks would be less likely to happen in the future; quite the contrary, future attacks would be more likely, both because there is more time to prepare the attack, and because expanded use of SERVE or similar systems would make the prize more valuable. In other words, a “successful” trial of SERVE in 2004 is the top of a slippery slope toward even more vulnerable systems in the future.”  Jefferson et al., 2004, A Security Analysis of the Secure Electronic Registration and Voting Experiment (SERVE).

6 That would be Diebold Elections Systems.  Parent website is Welcome To Diebold Election Systems.  See Diebold Investor Relations News Release of January 29, 2004 – “Maryland Security Study Validates Diebold Election Systems Equipment for March Primary” at URL: www.corporate-ir.net/ireye/ir_site.zhtml?ticker=DBD&script=410&layout=-6&item_id=489744.  See also the New York Times Opinion piece on this bizarre claim:  How to Hack an Election – New York Times, 31 January 2004; and Trusted Agent_Report_AccuVote, 20 January 2004, a report to the state legislature on Diebold’s Maryland experience.

7 There is one prominent exception to concealment.  FreeEats and its director Gabriel Joseph III were effectively outed after the 2006 midterm election as authors of robo-call attacks made on behalf of conservative candidates and causes against their targets (Schulman 2006, 2007).  Those who hired FreeEats remained unknown, but this shadowy organization has gained a certain notoriety.  That may only be good advertising for someone offering this product.  Joseph made himself well known in Indiana by counter-suing that state’s attorney general (Schulman 2006).

8 The hired gun poll is succinctly described by Humphrey Taylor, chairman of the Harris Poll in the U.S., with journalist Sally Dawson.  See Public Affairs News – Industry – Polling:  Poll Position (June 2006) and scroll down to “hired gun” polling.  Taylor says “there is a long history of hired-gun polls which are actually designed to mislead people using every methodology. The prime offenders have included PR firms, and sometimes non-profit groups, who really more or less will come to you and say: ‘I need a survey which shows that 80 per cent of people support our position – pro- or anti-abortion, or pro- or anti-globalisation, or whatever it is’, or ‘80 per cent of people like my client’s product more than they liked the other product.’”
That’s right.  Here’s an example from a religious right wing group on the topic of abortion:  Faith 2 Action Abortion Poll, with Wirthlin Worldwide National Quorum serving as the hired gun.  (Thanks to my student Laura Muir for providing this example.  RDR, 10/4/07)

9 This is not the only occasion on which Conway’s firm has conducted polls deliberately intended to produce an ideologically conservative policy boost.  After the 2008 presidential election, The Federalist Society employed the firm to that effect, per Key Findings from a National Survey of 800 Actual Voters » Publications » The Federalist Society (November 7, 2008).  The full poll is labeled 2008 Post-Election Survey of 800 Actual Voters, with Questions 7 through 11 on judicial philosophy (locale:  pp. 5-6 of this 73-page Acrobat file).  The wording is designed to ensure that a high proportion of respondents will select the literalist approach strongly favored by the Society; and that mission was accomplished.

10 For the abortion issue, PollingReport.com has an Abortion and Birth Control site with years of legitimate polls showing typically worded legitimate questions on this topic.


References

American Association for Public Opinion Research.  2007.  Push Polls:  Not to be confused with legitimate polling (filename: AAPOR Statement on Push Polls).  URL: www.aapor.org/aaporstatementonpushpolls.

Asher, Herbert.  2001.  Polling and the Public:  What Every Citizen Should Know, 5th ed.  Washington, D.C.:  CQ Press.

Asher, Herbert.  2005.  Polling and the Public:  What Every Citizen Should Know, 6th ed.  Washington, D.C.:  CQ Press.

Beck, Deborah, Paul Taylor, Jeffrey Stanger, and Douglas Rivlin.  1997.  Issue Advocacy Advertising During the 1996 Campaign.  URL:www.annenbergpublicpolicycenter.org/03_political_communication/issueads/REP16.PDF.

Blake, Ken.  1996.  The Ten Commandments of Polling.  URL: facstaff.uww.edu/mohanp/methodspolls.html.

Blumenthal, Mark.  2006a.  Mystery Pollster – RoboScam: Not Your Father’s Push Poll, 21 February 2006.  URL: www.mysterypollster.com/main/2006/02/roboscam_not_yo.html.

Blumenthal, Mark.  2006b.  A Real Push Poll?, 8 September 2006.  URL: www.pollster.com/blogs/roboscam/.

Blumenthal, Mark.  2007a.  Mystery Pollster:  Cell Phones and Political Surveys: Part I, 3 July 2007.   URL:  www.pollster.com/blogs/cell_phones_and_political_surv.php.

Blumenthal, Mark.  2007b.  Mystery Pollster:  Cell Phones and Political Surveys:  Part II. 13 July 2007. URL: www.pollster.com/blogs/cell_phones_and_political_surv_1.php.

Borger, Julian.  2004.  The Brains.  The Guardian, March 9, 2004.  URL: www.guardian.co.uk/uselections2004/story/0,13918,1165126,00.html.

Business Research Lab, The.  2004.  A Business Research Lab Tip, Leading Questions.  URL:  www.busreslab.com/tips/tip34.htm.

Davis, Richard H.  2004.  The Anatomy of a Smear Campaign, The Boston Globe, March 21, 2004.  URL:  www.boston.com/news/politics/president/articles/2004/03/21/the_anatomy_of_a_smear_campaign/.

Diebold Investor Relations.  2004.  News Release of January 29, 2004 – “Maryland Security Study Validates Diebold Election Systems Equipment for March Primary.”  URL: www.corporate-ir.net/ireye/ir_site.zhtml?ticker=DBD&script=410&layout=-6&item_id=489744.

The Digital Divide.  2003.  IT&Society: A Web Journal Studying How Technology Affects Society, Volume 1, Issue 4, Spring 2003.  URL:  www.stanford.edu/group/siqss/itandsociety/v01i04.html.

Drew, Christopher.   2006.  New Telemarketing Ploy Steers Voters on Republican Path, New York Times, 12/6/06.  URL:  www.nytimes.com/2006/11/06/us/politics/06push.html.

DuBose, Louis.  2001.  Bush’s Hit Man.  The Nation, February 15, 2001.  URL:  www.thenation.com/doc/20010305/dubose.

ElectionsOnline.us–Enabling Online Voting.  URL: www.electionsonline.us/.

Fathers’ Manifesto.  The Criminal Gallup Organization.  URL:  www.christianparty.net/gallup.htm.

Fathers’ Manifesto.  Abortion Polls by the Gallup Organization.  URL:  christianparty.net/abortiongallup.htm.

Gawiser, Sheldon R., and G. Evans Witt.  undated.  20 Questions A Journalist Should Ask About Poll Results, Third Edition.  URL: www.ncpp.org/?q=node/4.

GolfWeb Wire Services, PGATOUR.com – Is the Augusta National poll misleading? (November 14, 2002).  URL: images.golfweb.com/story/5888231.

Green, Donald P. and Alan S. Gerber.  2002.  Enough Already with Random Digit Dialing: A Proposal to Use Registration-Based Sampling to Improve Pre-Election Polling, May 5, 2002.  URL: bbs.vcsnet.com/df/RegistrationBasedSampling.pdf.

Green, Joshua.  2004.  Karl Rove in a Corner.  Atlantic Monthly, November 2004.  URL:  www.theatlantic.com/doc/200411/green.

Green, Joshua.  2007.  The Rove Presidency.  Atlantic Monthly, September 2007.  URL: www.theatlantic.com/doc/200709/karl-rove.

Greenberg Quinlan Rosner Research, Inc. (with MoveOn.org).  Filename: gqr at URL:  www.moveonpac.org/moveonpac/gqr.pdf.

Hill, Seth J., James Lo, Lynn Vavreck, and John Zaller.  2007.  The Opt-in Internet Panel: Survey Mode, Sampling Methodology and the Implications for Political Research.  Annual meeting of the American Political Science Association, Chicago, IL.  URL:  web.mit.edu/polisci/portl/cces/material/HillLoVavreckZaller2007.pdf.

Hootie Poll:  See Helen Ross, Poll shows support for Augusta’s right to choose membership – PGATOUR.COM, November 13, 2002 at URL: www.golfweb.com/u/ce/multi/0,1977,5885978,00.html.
Poll questions are listed sequentially on five files:  Augusta National poll Part I – PGATOUR.COM; Augusta National poll Part II – PGATOUR.COM; Augusta National poll Part III – PGATOUR.COM; Augusta National poll Part IV – PGATOUR.COM and Augusta National poll Part V – PGATOUR.COM.  All were posted November 13, 2002 with respective URL suffixes: 0,1977,5886264,00.html; 0,1977,5886269,00.html ;0,1977,5886271,00.html ;0,1977,5886273,00.html; and 0,1977,5886278,00.html.

Jefferson, David, Aviel D. Rubin, Barbara Simons, and David Wagner.  2004 (January 20).  A Security Analysis of the Secure Electronic Registration and Voting Experiment (SERVE).  URL: www.servesecurityreport.org/.

Kagay, Michael.  1994.   Poll on Doubt Of Holocaust Is Corrected, New York Times, July 8, 1994.  URL:  www.nytimes.com/1994/07/08/us/poll-on-doubt-of-holocaust-is-corrected.html.

Kagay, Michael.  2000.  Poll Watch Looking Back on 25 Years of Changes in Polling, New York Times, April 20, 2000.  URL:  www.nytimes.com/library/national/042000poll-watch.html.

Kaplan, Sheila.  1996.  Tobacco Dole, Mother Jones, May/June 1996.  URL:  www.motherjones.com/news/special_reports/1996/05/kaplan.html.

Keep and Bear Arms.com.  Keep and Bear Arms – Gun Owners Home Page – 2nd Amendment Supporters.  URL:  keepandbeararms.com/polls/pollmentorres.asp?id=10.

Keeter, Scott, Michael Dimock and Leah Christian.  2008.  Pew Research Center for the People & the Press.  The Impact Of “Cell-Onlys” On Public Opinion Polling:  Ways of Coping with a Growing Population Segment, 31 January 2008; Cell Phones and the 2008 Vote:  An Update, 23 September 2008.  URLs: people-press.org/report/391/  and pewresearch.org/pubs/964/.

Keeter, Scott.  2008.   Research Roundup: Latest Findings on Cell Phones and Polling, 22 May 2008.  URL:  pewresearch.org/pubs/848/cell-only-methodology.

Ladd, Everett Carl.  1994.  The Holocaust Poll Error:  A Modern Cautionary Tale.  Public Perspective, Vol. 5, No. 5 (July/August 1994).  Filename: Roper Holocaust Polls.  Reprinted at URL: edcallahan.com/web110/articles/holocaust.htm, from Ed Callahan’s STAT 110 Articles site at URL:  edcallahan.com/web110/articles/.

Leip, Dave.  Atlas of Presidential Elections:  1936 Presidential Election Results.  URL: www.uselectionatlas.org/RESULTS/national.php?f=0&year=1936.

Mapes, Jeff.  2000.  Web Pollster Hopes To Win Credibility. PulsePoll.com News: The Oregonian, April 12, 2000.  URL:  www.pulsepoll.com/news/pr/oregonian.html.

Martin, Jonathan.  2007.  Apparent pro-Huckabee third-party group floods Iowa with negative calls – Jonathan Martin’s Blog, Politico.com, 12/3/07.  URL: www.politico.com/blogs/jonathanmartin/1207/Apparent_proHuckabee_thirdparty_group_floods_Iowa_with_negative_calls.html.

Mooney, Chris.  2003.  Polling for Intelligent Design (Doubt and About).  September 11, 2003.  URL:  www.csicop.org/specialarticles/show/polling_for_id/.

Moore, James and Wayne Slater.  2006.  The Architect:  Karl Rove and The Master Plan for Absolute Power.  New York: Crown Publishers.

MoveOn.org.  2003.  Report on the 2003 MoveOn.org Political Action Primary.  URL:  moveon.org/pac/primary/report.html.

National Council on Public Polls.  1995.  A Press Warning from the National Council on Public Polls.  URL:  www.ncpp.org/push.htm.

Niles, Robert.  Margin of Error at RobertNiles.com.  URL:  www.robertniles.com/stats/margin.shtml.

NPR Karl Rove, ‘The Architect’ interview with coauthor Wayne Slater, WHYY, September 6, 2006.  URL: www.npr.org/templates/story/story.php?storyId=5775226.

Opinion Center.  URL:  www.opinioncenter.com/.

PulsePoll.  2000.  PulsePoll Primary: Arizona Results.  URL:  www.pulsepoll.com/primary/primary.html.

Rubenstein, Sondra Miller.  1995.  Surveying Public Opinion.  Belmont, CA:  Wadsworth Publishing.

Saletan, William.  2000.  Push Me, Poll You, Slate Magazine, February 15, 2000.  URL:  slate.msn.com/id/74943/.

Scenic America.  undated.  Opinion Polls:  Billboards are Ugly, Intrusive, Uninformative.  URL: www.scenic.org/billboards/background/opinion.

Schulman, Daniel.  2006.  Tales of a Push Pollster, Mother Jones, 29 October 2006.  URL: www.motherjones.com/news/update/2006/10/free_eats.html.

Schulman, Daniel.  2007.  i, robo-caller, Mother Jones, January/February 2007.  URL:  www.motherjones.com/news/outfront/2007/01/i_robo_caller.html.

Schwartz, John.  2004 (January 21).  Report Says Internet Voting System Is Too Insecure to Use.  URL:  www.nytimes.com/2004/01/21/technology/23CND-INTE.html?ex=1076821200&en=7d215de9386d6652&ei=5070  (Use the file name at a search engine or at the New York Times site should this URL be a failure.)

Silver, Nate.  2009.  ibdtipp-doctors-poll-is-not-trustworthy, 9/16/2009.  URL:  www.fivethirtyeight.com/2009/09/ibdtipp-doctors-poll-is-not-trustworthy.html.

Silver, Nate.  2009.  FiveThirtyEight Politics Done Right:  Strategic Vision Polls Exhibit Unusual Patterns, Possibly Indicating Fraud, 9/25/2009.  URL:  www.fivethirtyeight.com/2009/09/strategic-vision-polls-exhibit-unusual.html.

Singer, Eleanor.  1995.  The Professional Voice 3:  Comments on Hite’s Women and Love.  In Rubenstein, Sondra Miller, Surveying Public Opinion, pp. 132-136.  Belmont, CA:  Wadsworth Publishing.

Smith, Tom W.  1989.  Sex Counts:  A Methodological Critique of Hite’s Women and Love.  Washington, D.C.:  National Academies Press.  On line:  Nat’l Academies Press, AIDS, Sexual Behavior, and Intravenous Drug Use (1989), Sex Counts A Methodological Critique of Hite’s Women and Love, pp. 537-547.   URL:  www.nap.edu/books/0309039762/html/537.html.

Sniggle.net:  The Culture Jammer’s Encyclopedia, AP Wire 06-21-2003 UCR student arrested for allegedly trying to derail election.  URL: web.archive.org/web/20030703124458/http://cbs11tv.com/national/HackerArrested-aa/resources_news_html.

Snow, Nancy.  2000.  The South Carolina Primary:  Bush Wins, America Loses.  CommonDream.org News Center.  URL:  www.commondreams.org/views/022100-106.htm.

Squire, Peverill.  1988.  Why the 1936 Literary Digest Poll Failed.  Public Opinion Quarterly 52:1 (Spring), 125-133.

Stolarek, John S., Robert M. Rood, and Marcia Whicker Taylor.  1981.  Measuring Constituency Opinion in the U.S. House:  Mail Versus Random Surveys.  Legislative Studies Quarterly 6:4 (November), 589-595.

Suskind, Ron.  2003.  Why are These Men Laughing?  Esquire, January 1, 2003.  URL: www.ronsuskind.com/newsite/articles/archives/000032.html.

Taylor, Humphrey.  1998.  Myth and Reality in Reporting Sampling Error:  How the Media Confuse and Mislead Readers and Viewers. URL: PollingReport.com at www.pollingreport.com/sampling.htm.

The Polling Company TM and WomanTrend.  URL:  www.pollingcompany.com/resourcecenter.asp?FormMode=Call&LinkType=Text&ID=14 (or www.pollingcompany.com/resourcecenter.asp and link via “Polling Definitions”).

The Why Files.  Obey the Law.  URL:  whyfiles.org/009poll/math_primer2.html.

Thurber, James A., and David A. Dulio.  1999.  A Portrait of the Consulting Industry.  Campaigns and Elections, July 1999.  URL:  Reprinted from the July 1999 Issue of Campaigns and Elections Magazine “A Portrait of the Consulting Industry” at 216.239.53.104/search?q=cache:WcajxnubstUJ:www.american.edu/spa/ccps/pdffiles/A_Portrait_of_the_Consulting_Industry.pdf+American+Association+of+Political+Consultants%2Bpush+polls&hl=en&ie=UTF-8 (better to find this via copy and paste of the filename to Google at www.google.com).

Traugott, Michael W., and Paul J. Lavrakas.  2000.  The Voter’s Guide to Election Polls, 2d ed.  Chatham, NJ:  Chatham House.

Vote.com.   URL: www.vote.com.

Voting_System_Report_Final.  2003.  Risk Assessment Report:  Diebold AccuVote-TS Voting System and Processes, September 2, 2003.  SAIC (Scientific Applications International Corporation), for State of Maryland.  URL:  www.dbm.maryland.gov/dbm_search/technology/toc_voting_system_report/votingsystemreportfinal.pdf.

Welcome to the SERVE home page.  URL:  www.serveusa.gov/public/aca.aspx.

Young, Michael L.  1992.  Dictionary of Polling: The Language of Contemporary Opinion Research.  Westport, CT:  Greenwood Press.

Copyright©2011, Russell D. Renka

Source: http://cstl-cla.semo.edu/rdrenka/Renka_papers/polls.htm
Thursday, June 09, 2011 11:32:21 AM


Filed under: Information operations

Putin’s spokesman dismisses ‘stupid’ Asperger’s claim


Not so fast, Mr. Kremlin spokesman…

Once you’ve read this news article from Yahoo, look at the video contained in the article.

I question the Kremlin’s dismissal.  Did they rule out the possibility of Asperger’s syndrome through a brain scan, an MRI?

I can never look at Putin the same way again; I will always wonder whether he has this form of autism.


Moscow (AFP) – President Vladimir Putin’s spokesman has angrily dismissed a Pentagon study that claimed the Russian leader had Asperger’s syndrome, a form of autism.

“That is stupidity not worthy of comment,” spokesman Dmitry Peskov told Gazeta.ru news website late Thursday.

His comments came after USA Today reported that a 2008 study carried out by an internal Pentagon think tank, the Office of Net Assessment, suggested that Putin has Asperger’s syndrome, which gives him a need to exert “extreme control” over his surroundings and makes him uncomfortable with social interaction.

Experts studying his movements and facial expressions in video footage theorised that Putin’s neurological development was disrupted in infancy, giving him a sense of physical imbalance and a discomfort with social interaction.

The Pentagon played down the study, saying it apparently never made its way to the desk of the defense secretary or other top decision makers.

Source: http://news.yahoo.com/pentagon-study-claimed-putin-aspergers-syndrome-183603255.html


Filed under: Information operations, Russian Tagged: Asperger's Syndrome, Dmitry Peskov, Kremlin, news article, Pentagon, President Vladimir Putin, putin

If you still call Russians fighting in Ukraine ‘separatists’ please answer these questions


If the war in Ukraine is a ‘local rebel’ uprising or a ‘separatist’ revolt against the Ukrainian government, then please answer these questions:

  1. Why was the FIRST thing they did after seizing administration buildings to replace ALL TV channels with Russian ones?
  2. Why did Russia attack a Ukrainian camp near Zelenopole on July 11th CROSSING THE BORDER with 14 GRAD rocket launchers? See this blog.
  3. Why did Russia attack a Ukrainian camp with GRAD rocket launchers from Gukovo on July 16th? See this blog.
  4. Why did Russia drive one of their most advanced anti-air systems into Ukraine that shot down MH17? See this blog.
  5. Why did Russia invade Ukraine in the end of August to destroy the volunteer battalions that were retreating from Ilovaisk and were promised a safe passage? See this blog.
  6. Why did Russia invade Ukraine in the end of August to conquer Lugansk Airport as can be seen on Google Earth? See this blog.
  7. Why did Russia attack a Ukrainian army camp OUTSIDE the conflict area from RUSSIAN TERRITORY on Sept 14th, which is AFTER the Minsk agreement? See this blog.
  8. Why does Russia send ENDLESS amounts of ammunition into Ukraine?
  9. Why does Russia send all their newest equipment into Ukraine?
  10. Why does Russia send their Electronic warfare units into Ukraine?
  11. Why does Russia recruit criminals to fight in Ukraine in exchange for amnesty?
  12. Why does Russia recruit mercenaries from all over Russia to fight in Ukraine?
  13. Why do we hear “Allahu akbar” all the time in videos?
  14. Why does Russia keep Nadiya Savchenko captive as a prisoner of war?
  15. Why is all the command and control of the battle groups in Ukraine in the hands of Russian officers (such as Girkin and others)?
  16. Why does Russia send in their Special Forces (to capture administration buildings or to lay ambushes on retreating Aidar units etc.)?
  17. Why does Russia send in its little green men as it did in Crimea, only now they are called ‘polite people’?
  18. Why are there Russian generals in Ukraine? See this blog.
  19. Why do Russian tanks fly the Russian flag in Ukraine?
Russia wants you to believe it is a local uprising, so it can freely operate under that cover.
STOP TALKING ABOUT REBELS AND SEPARATISTS; otherwise you CONTRIBUTE to the confusion.

You want to hear what locals say? Watch this powerful video:

Also see: you cannot fight an enemy you don’t acknowledge.

Source: http://ukraineatwar.blogspot.fi/2015/02/if-you-think-separatists-fight-against.html 


Filed under: Information operations

Words


I list here many words and phrases that are sometimes used in our field, ranked roughly from the most positive (+) perception to the most negative (-). It’s what some of us do.

(+)

  • Information
  • Inform
  • Publication
  • Spokesperson
  • Public Affairs
  • Strategic Communication
  • Strategic Communications
  • Public Diplomacy
  • Truthteller
  • Truthful
  • Promotion
  • Advertising
  • Marketing
  • Influence
  • Inculcation
  • Promulgation
  • Persuade
  • Evangelism
  • Hogwash
  • Hype
  • Proselytize
  • Information Operations
  • Information Warfare
  • Manipulate
  • Perfidy
  • Spin
  • Trick
  • Psychological Operations or Warfare
  • Misinformation
  • Disinformation
  • Lies/Liar
  • Deception
  • Propaganda
  • Brainwashing
(-)
Is my ranking roughly correct, from the most positive to the most negative perception of these words?  Correct me if I’m wrong, please.

Is there a cutoff line where the words (or phrases) above it are acceptable and those below it are morally reprehensible, or in some cases illegal?

Which of these things can a US government entity not do?

Which are unethical, immoral or would land on the front page of the Washington Post?

Which are illegal and why?


Filed under: Information operations

RT Tutorial: How to Write Propaganda


Thank you, RT.

RT finally presented a tutorial on “How to write propaganda”.

No crap, really.

For you, gentle readers, I actually had to view the video more than once, just to make sure I got all the facts straight.

Of course, everything the anchor says is designed to highlight somebody else’s supposed attempt at propaganda, in this case a piece from The Economist.

Now, I’m not one to think of The Economist as a propaganda arm of the UK government, the BBC, or anything else.  I am guessing, however, that RT took a piece with which it didn’t agree and decided to label it propaganda.  I enclose certain words in [square brackets] when I’m not entirely sure of the word; my ears aren’t so good after decades of crawling up the butt end of aircraft as an Airborne soldier, since those engines are loud and earplugs don’t work so well.

First, you come up with a statement that looks true, or somewhat like the truth, [for] anyone not well-informed with the topic.

Bingo, hardly anyone will dig deeper to find this statement is completely absurd, especially if you’re a non-Russian speaker.

It’s clever.  They know you won’t check Russian newspapers.

Rule #2.  Reveal the enemy. The article then explains why.

Now the video cites some incongruous facts, supposedly to confuse people.  Then comes the conclusion:

But who cares? The reader already has [a] negative feeling.

The most important part is confusion. Or misinformation, rather.

Mission completed.

There you have it.  Propaganda piece. Signed, sealed and delivered.

Thank you, RT.  Now we know what to look for when we watch your broadcasts.

I’ll bring the popcorn. We’ve just been taught a lesson by the experts.


Filed under: Information operations, Propaganda, Russia

RT Admonished


RT has nobody to blame but itself.

When a senior editor at the Economist called for the ostracism of RT over its lack of adherence to any journalistic standards, RT ripped into him not once but twice.

I believe that public admonition of RT, in a public forum with REAL international news coverage, results in RT lashing out.  Perhaps flailing would be more accurate, but I don’t want to accuse them of being desperate. Okay, maybe I do.  It sounds like the old Soviet overreaction to having facts flung in their face publicly and being unable to respond.

Their defense?  This:

RT correspondents risk their lives every day to report on stories and events nobody else dares touch, to show people around the world the side of reality that most other news media either inadvertently or deliberately conceal.

If I had to synopsize this paragraph in a short sentence, it would be: “We try so hard to lie with a straight face!”

Just because their journalists become embedded with the pro-Russian terrorists in Donbas, Ukraine, does this allow RT to repeatedly twist and hide the truth?  No.

Just because RT creates propaganda built on the tiniest slivers of truth, then embellished with distorted facts and lies, does that give it license to call the result an alternate reality?  No, folks, it’s not close to reality.

Does RT conceal the truth?  Yes, repeatedly and deliberately.

Funny thing: I haven’t seen RT adhere to any journalistic standards, and I haven’t seen it win any journalism awards from any major Western journalistic body or recognized international news organization. I have seen it win propaganda awards from Putin, which is not a good thing in my opinion.

Nay, RT continues to deny the truth, deny that it flouts journalistic standards, and completely deny that it is a propaganda tool of Russian Information Warfare.

RT is a joke.

I also admonish RT for its lack of integrity, its lack of journalistic standards, and for being a propaganda tool of Russian Information Warfare against the West.


A senior editor at the Economist, Edward Lucas, has used a panel discussion at the Munich Security Conference to slander RT’s standard of journalism and call for media “ostracism” of our employees. Here is our response.

We are absolutely outraged by Mr Lucas’ comments. It is the height of hypocrisy to come to an event, dedicated to the collective resolution of the multitude of tough security questions the world faces today, to use it as a platform for specious attacks.

In fact, while Mr. Lucas was shaming RT journalists from the comfort and security of this conference, our crew was under fire near Donetsk.

RT correspondents risk their lives every day to report on stories and events nobody else dares touch, to show people around the world the side of reality that most other news media either inadvertently or deliberately conceal.

To attack them in such a despicable manner is a disgrace to Mr Lucas, to the journalistic organizations he represents and to the Munich Security Conference.

Source: http://rt.com/op-edge/authors/rt-editorial/


Filed under: Information operations

Обращение студентов Украины к студентам России (An Appeal from Ukraine’s Students to Russia’s Students)


http://www.youtube.com/watch?v=mPB-sZ4sVss

From the students and faculty of well-known schools in Ukraine to the students and faculty of schools in Russia.

#МГТУ_им_Баумана #МГТУ #МГТУГА #mstuca #МГУ #msu
#МФПУ #Синергия #нау #nau #кпи #кпі #kpi #knu #кну #кнеу #kneu #students #студенты #студенти #россия #москва

Update:  I now understand this video is two weeks old and has been answered with a very regimented propagandistic video by young people who are not Russian students.

The words used echo those used by RIA Novosti, Sputnik News, RT and other propaganda sources, so they have taken the bait – hook, line and sinker.  Perhaps no Russian students can think for themselves, perhaps they have been cowed into submission, or perhaps they lack the resources necessary to make an independent response.

Source for the response: http://www.rferl.org/content/ukrainian-students-antipropaganda-video-russian-reply/26826540.html

Я призываю вас, настоящие студенты России: найдите источники независимых новостей, подвергайте сомнению то, что говорят вам государственные новостные источники, и думайте сами.

I urge you, real students of Russia, find sources for independent news, question what state-run news sources tell you, think for yourselves.


Filed under: Information operations, Russia, Ukraine

Crow Call – 19 February 2015


The Association of Old Crows – Capitol Club Chapter

       Invites you to: Happy Hour – Crow Call !!

Thursday, 19 February 2015

1600-1900

   WHERE? Sweetwater, Centreville, VA

Centre Ridge Marketplace, 14250 Sweetwater Ln, Centreville, VA 20121

(703) 449-1100

(No host bar, light fare included)

Association of Old Crows

The Association of Old Crows advocates the need for a strong defense capability emphasizing electronic warfare and information operations to government, industry, academia, and the public. The AOC Capitol Club provides a unique forum within the national capital region for sharing ideas and experiences through communication and education.

COME MEET THE EW and IO/IW EXPERTS!

The AOC supports the EW community through:

  • Improving awareness and understanding of EW and related disciplines
  • Identifying deficiencies, advocating solutions, and developing new concepts
  • Recognizing the contributions of individuals and organizations to the EW community
  • Documenting historical perspectives and lessons learned throughout the EW enterprise

The AOC Capitol Club supports the information and influence communities by:

  • Hosting classes and meetings on information warfare, information operations, public diplomacy, and strategic communication
  • Hosting workshops on IW, IO, PD, and SC to advance information activities within the United States
  • Assisting in the development of solutions for systemic information problems

http://aoccapitolclub.com/index.php

I’ll be there.  I can’t treat you to anything, though there will be some free food provided.  BUT we’ll have a chance to chat in a conspiratorial environment, plan world domina…  whoops, make friends and have a great time.  I do accept drinks – water.  Oh, and I love hamburgers!


Filed under: Electronic Warfare, Information operations

Crow Call Correction – 19 Feb 2015 – Sweetwater Sterling


Sweetwater Sterling, 45980 Waterview Plaza, Sterling, VA 20166

A few days ago I published an invitation to come meet the folks in the Capitol Club at what is called a Crow Call.

This is a correction.

I got the location incorrect.  It is being held at Sweetwater Sterling.

Yeah, I know, for those of you in the DC area that’s a long way, but it gets you out of the house.

The actual address is 45980 Waterview Plaza, Sterling, VA 20166.

The phone number there is (703) 449-1100, if your GPS conks out.

I honestly hope I see a bunch of you there.

It’s the only group where you can talk IO, SC and PD and drink beer at the same time.   Feel free to treat me to the finest water in the land and no, please don’t walk outside and get a glass full of snow…


Filed under: Information operations Tagged: AOC, EW, Information Activities, information warfare, Io, PD, SC
