Freedom of the press annual rankings: are they worth reporting on?

We’re all familiar with press freedom rankings, like Freedom
House’s annual Freedom of the Press Index. But are they valid?

This was the question of the day at the Central European University in Budapest. I’m
here for a summer course in Media,
Democratization and International Development
put on by the CEU and the Annenberg School for Communication,
University of Pennsylvania.

I arrived at the university this morning filled with the
apprehension of any first-day student, wondering how a room of some 30
strangers would become my friends and colleagues within the space of
two-and-a-half weeks. CEU’s Summer University attracts students from around the
globe. My classmates are from El Salvador, China, Palestine, Lithuania, Russia,
India, Uganda, and all points in between. They represent an even mix of
journalists, media scholars, and people working in the growing field of media
development.

“There’s no better place and time to be talking about media
and the process of democratization than here in Eastern Europe,” said guest
faculty member Tudor Vlad, referring to the rapid emergence of independent media in the former
East Bloc countries.

To kick off the summer course, Vlad and his research partner
Lee Becker
– both from the University of Georgia’s Cox Center – took a look at systems that measure
freedom of the press. Specifically, they reported on their research into three
of the most popular annual report cards, produced by Freedom House, Reporters Without Borders and the International Research and Exchanges Board (IREX).

While many of us highlight these rankings in news articles
and academic papers alike, what evidence do we have that they are valid and
unbiased?

Vlad and Becker reasoned that consistency would tell the tale. Using the
Pearson product-moment correlation coefficient, they checked whether the
rankings of different agencies supported each other’s findings. If the three
agencies came up with markedly different results, that might be a sign that
bias or randomness was at play.
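To make that check concrete, here is a minimal sketch in Python of the kind of cross-agency comparison the study describes. The country scores below are invented for illustration only, not taken from any of the three agencies; the point is simply how pairwise Pearson coefficients are computed and read.

```python
# Illustrative sketch only: hypothetical press freedom scores for ten
# countries, one array per agency. Higher means more press freedom.
# The scales differ, which is fine, because the Pearson coefficient is
# unaffected by linear rescaling.
import numpy as np

freedom_house = np.array([82, 75, 30, 55, 90, 20, 65, 40, 70, 85])
rsf = np.array([78, 72, 35, 50, 88, 25, 60, 45, 68, 80])
irex = np.array([3.8, 3.5, 1.2, 2.6, 4.0, 0.9, 3.0, 1.9, 3.2, 3.9])

labels = ["Freedom House", "RSF", "IREX"]
scores = np.vstack([freedom_house, rsf, irex])

# np.corrcoef returns the matrix of pairwise Pearson correlations
# between the rows of the input.
corr = np.corrcoef(scores)
for i in range(len(labels)):
    for j in range(i + 1, len(labels)):
        print(f"{labels[i]} vs {labels[j]}: r = {corr[i, j]:.2f}")
```

Coefficients close to 1 suggest the agencies are capturing the same underlying reality despite their different methods; values near zero, or negative, would be the warning sign of bias or randomness.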

Each agency had its own approach, ranging from questionnaires to panels of
experts who examined and graded key press freedom indicators such as
protection for journalists.

In the end, the tallies came out remarkably the same, no
matter who gathered the data, or how. Vlad and Becker concluded that replication was
the sincerest form of validity. As well, results over a number of
years revealed that the indicators rose and fell with historical events, such
as the fall of the Berlin Wall.

“We look for identification of a pattern of change that
shows it isn’t random,” explained Becker. 

The discussion afterward turned to a bigger question: does
anyone care? A participant said that when she asked a group of U.S. government policy workers if they included press freedom measures in their deliberations, the answer was no. In fact, they hadn’t even heard of the annual
surveys until she showed them copies.

Moral of the story: as with all measurement tools, press
freedom rankings are only as valuable as the use that is made of them. At least now journalists can invoke the weight of Vlad and Becker’s study to argue that
governments should take press freedom report cards more seriously, and act
on them.

Download the study:

An Analysis of Press Freedom Indicators, by Lee B. Becker, Tudor Vlad and Nancy Nusser.