DRAFT HANDBOOK ON
VALUES FOR LIFE IN A DEMOCRACY
Edited by
Robert Stradling & Christopher Rowe
The opinions expressed in this work are those of the authors and do not necessarily
reflect the official policy of the Council of Europe
CONTENTS

PREFACE
THE AUTHORS
THE AIMS AND STRUCTURE OF THIS BOOK
INTRODUCTION: Cultural Identities, Shared Values and European Citizenship
KEY QUESTION ONE: How is the ordinary person to be protected against the arbitrary power of the state?
  CASE STUDY 1: The Case of Extraordinary Renditions in the “War on Terror”
  CASE STUDY 2: When a totalitarian regime is overthrown should the secret police files be destroyed or should the archives be opened so that society can confront its past?
KEY QUESTION TWO: Does the state need to protect people from themselves?
  CASE STUDY 3: The banning of tobacco smoking in public places
  CASE STUDY 4: The right to live and the right to die
KEY QUESTION THREE: Do we have the right to freely express ourselves in any way we wish?
  CASE STUDY 5: Free Speech or Religious Offence: The Case of the Danish Cartoons mocking the Prophet Mohammed
  CASE STUDY 6: The right to march to commemorate one’s cultural history: the case of Northern Ireland
KEY QUESTION FOUR: Does everybody have the right to live where they wish?
  CASE STUDY 7: Political refugees or economic migrants? Europe’s changing response to immigration
  CASE STUDY 8: The process of becoming a minority
KEY QUESTION FIVE: Is there such a thing as a “Just War”?
  CASE STUDY 9: The ‘War on Terror’
  CASE STUDY 10: Cultural monuments or human lives? The case for the protection of cultural property
KEY QUESTION SIX: What is more important: maintaining a healthy national economy or ensuring that everyone is entitled to the basic necessities of life?
  CASE STUDY 11: Did the end of communism in the Soviet Union leave the elderly and vulnerable in a worse position?
  CASE STUDY 12: Is government intervention the best way to promote the principle of Equal Pay for Equal Work?
  CASE STUDY 13: Women have the same right to education as men. Why is it that they cannot always exercise that right?
KEY QUESTION SEVEN: Why do human beings seem to find it so difficult to look after their environment?
  CASE STUDY 14: The Kyoto Protocol and the debate on the speed and impact of climate change
  CASE STUDY 15: How are we going to meet our increasing energy needs in the 21st Century?
KEY QUESTION EIGHT: Is democracy enough?
  CASE STUDY 16: Can democracy take root when it is transplanted? The example of Iraq
  CASE STUDY 17: Will Communication Technologies enable ordinary people to influence their governments more effectively?
CONCLUSION
PREFACE
THE AUTHORS
Robert STRADLING, Research Fellow, College of Humanities
Social Science, University of Edinburgh, Scotland, United Kingdom
&
Christopher ROWE, historian, retired history teacher and currently a Chief
Examiner for Upper Secondary history examinations, based in England, United
Kingdom
Zofia Halina ARCHIBALD, Senior Research Fellow, School of Archaeology, Classics
and Egyptology, University of Liverpool, United Kingdom
Damir AGIČIĆ, University of Zagreb, Croatia
Magdalena NAJBAR-AGIČIĆ, Srednja Europa, Zagreb, Croatia
Mihai MANEA, School Inspector of History, Ministry of Education, Romania
Jean PETAUX, Institute of Political Studies, Bordeaux, France
Jacek WÓDZ, International School of Political Science, Katowice, Poland
THE AIMS AND STRUCTURE OF THIS BOOK
Although the international human rights instruments which have emerged since 1945 are
intended as a check on the actions of governments and can only be used to bring a legal claim
against public bodies, they also require states to promote human rights as well as to refrain
from abusing the rights of their citizens. It is also clear from the preambles to these documents
that they are concerned not only with how individual human beings should be treated by the
state and other organs of society but also with how people should behave towards one
another.1 For this to happen, however, steps must be taken to ensure that these rights, and the
values which underpin them, take hold in society; that each and every one of us has a sense
of ownership over them and sees them as expressions of the broad principles on which not
only public but also private life should be based.
In recent years, a variety of resources have been produced by governments, intergovernmental
organisations and NGOs which are aimed specifically at raising public awareness about
human rights. Most of these materials are designed to develop young people’s knowledge
about their various rights; the key international declarations and conventions; the
intergovernmental bodies which have been set up to monitor and protect people’s rights, and
the NGOs which also monitor abuses such as Amnesty International and Human Rights
Watch. But human rights are framed in an abstract and legalistic language which tends to be
remote from people’s experiences of everyday life. In the view of the authors of this current
publication, we are all more likely to make connections between the Declarations,
Conventions and Constitutions drafted by lawyers and ratified by politicians, on the one hand,
and our everyday experiences and dealings with each other, on the other, if we have a clear
understanding:
- of what life can be like when certain rights are denied to some or all of us; and
- why people have struggled – and continue to struggle – to acquire certain rights or to
prevent the further abuse of people’s established rights.
This entails a shift of emphasis from the “rights” dimension to the “human” dimension of
human rights. As Eleanor Roosevelt, the first Chair of the United Nations
Commission on Human Rights, observed in 1958 in her address to the UN:
“Where, after all, do universal rights begin? In small places, close to
home…[T]hey are the world of the individual person; the neighbourhood
he lives in, the school or college he attends, the factory, farm or office
where he works. Such are the places where every man, woman or child
seeks equal justice, equal opportunity, equal dignity without
discrimination. Unless these rights have meaning there, they have little
meaning anywhere”.

1. The necessity of taking steps to build a culture of human rights in each society became
increasingly apparent after 1945 as evidence emerged of the extent to which civil society in
totalitarian states was penetrated by the organs of repression. For example, in Germany between
1933 and 1945 under National Socialism, hundreds of thousands of people were employed in the
agencies which enforced repression; others voluntarily spied on their neighbours and betrayed
them to the authorities; ordinary businessmen tendered for contracts to supply equipment to the
death camps; travel agencies organised transport to those camps; some medical doctors and
scientists ignored their professional codes of ethics to assist in the extermination process; and so
on. The involvement of civil society in the organs of repression continued in the occupied states
during the Second World War and subsequently under Communist regimes.
So, what we have tried to do here is develop a resource that encourages users to:
- develop their own point of view in relation to other opinions and perspectives;
- think about clashes of values and human rights issues and how they might be
resolved in ways that are fair, balanced and proportionate;
- empathise with other people’s points of view (even if one does not agree with
them);
- engage in dialogue over disputed issues rather than in monologues based
solely on their own point of view or cultural perspective;
- set particular issues and debates into a wider historical, cultural and global
context.
There are three main target groups for the resource:
- 16-25 year-olds, not only those in formal education but also those participating in
informal groups within youth work and the voluntary sector;
- professionals working with 16-25 year-olds in these settings;
- those who are involved in the training of these professionals.
The resource itself comprises three inter-related elements: this book, a timeline poster and a
series of discussion cards.
The Book
This is structured around Key Questions and Case Studies. The Key Questions approach
fundamental issues associated with core European values and universal human rights,
particularly when they may be in conflict with each other. The discussion around each Key
Question attempts to explore the different positions and opinions rather than promote a
specific point of view. For each Key Question, there are also several Case Studies that look at
particular events, developments or circumstances where some people have asserted one
particular right or value and others have asserted another. The discussion of each case study
provides a timeline of the issue, an outline of what is under dispute and a selection of different
viewpoints. The objective here is to promote discussion and explore ways in which issues
like this might be resolved.
Two other points should be emphasised about the structure and content of the book. First, this
book has not been designed to be read from cover to cover. You could begin with any key
question or case study although we would suggest that if you are a professional working with
groups of young people then it would be advisable to read the introduction before looking at
one of the case studies or using the discussion cards. Second, the book is not intended to be
comprehensive either in terms of the topics covered or the areas of Europe from which the
case studies have been drawn. What we have tried to produce here is a template that
professionals working formally and informally with young people in areas like citizenship,
human rights education, community education, social education, modern languages, European
studies, life skills, etc. could use to develop similar materials around other key questions,
issues and case studies.
The Timeline Poster
This is a wall chart which, when unfolded, provides a timeline. There are three parallel
strands here:
- Important historical developments which have shaped our thinking about the
relationship between governments and their citizens, the rights and responsibilities of
citizens and the ways in which we should treat each other in our day-to-day
relationships. These include the overthrow of tyrants and absolute rulers through
revolution and civil war but also demands for the vote by ordinary people, the
emancipation of serfs and slaves, the desire to regulate the practice and conduct of
war and to prevent the victims of war from being badly treated by the conquerors, and
even the desire to protect people, animals and the natural environment from the worst
excesses of urbanisation and industrialisation.
- Important developments in our thinking about human rights and core values such as
justice and freedom.
- Important international measures (such as Declarations, Conventions, Treaties and
Bills of Rights) which have been introduced in order to protect human rights and
ensure that some of the worst abuses of human rights will never happen again.
The Discussion Cards
This is a series of cards on a range of different issues relating to civil rights, social and
economic rights, cultural rights and children’s rights. The cards are designed to introduce
discussion on each issue, provide useful background information and different perspectives
and points of view. They are designed to stimulate further thought and discussion in a group
setting.
Robert Stradling & Christopher Rowe
Strasbourg, 2007
INTRODUCTION: Cultural Identities, Shared Values and European Citizenship

Robert Stradling
The Idea Of Citizenship
When we say that someone is a citizen we usually mean that they are a member of or “belong
to” a particular state: a citizen of France, a citizen of Romania, a citizen of the Czech
Republic, and so on. As a citizen of that state, he or she enjoys the rights and privileges of
membership and also the responsibilities and obligations that come with that membership.
It would seem therefore that citizenship is a legal status. But it has never been quite as simple
as that. If you are a tourist in another country, you will share many of the responsibilities and
obligations of a citizen of that country. You will be expected to obey that country’s laws,
recognise the authority of the police and the courts, give evidence in court if you have witnessed a
crime, not get involved in spying or other activities against the regime, not insult the national
flag or other national symbols, and show respect for the culture and religious practices of
people who live there. At the same time, as a visitor to that country, you would still expect to
exercise many of the freedoms which citizens enjoy, the same right to legal representation if
accused of a crime, the same right to a fair trial, the same consumer protection and similar
entitlements to medical treatment.
It is clear then that the jurisdiction of the State is applied to everyone who happens to be
within its territorial borders regardless of whether or not they were born there and regardless
of whether or not they are a citizen of that State. Of course, there are some differences
between the rights and responsibilities of the tourist and those of the permanent or long-stay
resident in a country. The latter is far more likely to have a job, own property and send their
children to school and in return they will be expected to pay taxes and social insurance or
social security contributions. They will probably also be entitled to join a trade union or other
associations and pressure groups that could protect their interests.
However, unless that long-stay resident in a foreign country becomes a naturalised citizen,
they will not have the right to vote in local and national elections and they would not be
expected to do military service. This distinction highlights two other criteria of citizenship
which are usually introduced into any discussion of what it means to be a citizen:
participation in public life and allegiance to the state. The citizen is more than just a subject of
the State. The subject also has legal status with rights, privileges and responsibilities that
have been granted by the monarch, the dictator or the autocrats who rule the country. But, in
other respects, the subject is passive. The citizen, on the other hand, has the right to
participate in the decision-making process either directly as in the city states of ancient
Greece, or indirectly through elected representatives as in modern mass democracies.
This notion of active participatory citizenship has always meant more than just the casting of
a vote in periodic elections. This idea of citizenship in action was probably best described by
the Athenian Pericles more than two millennia ago:
“Our constitution is called a democracy because power is in the hands not of a
minority but of the whole people. Everyone here is equal before the law, and no one,
so long as he has it in him to be of service to the state, is kept in political obscurity
because of poverty… Here each individual is interested not only in his own affairs,
but in the affairs of state as well: even those who are mostly occupied with their own
business are extremely well-informed on general politics. We Athenians, in our own
persons, take our decisions on policy or submit them to proper discussions: we do not
think there is an incompatibility between words and deeds, since the worst thing is to
rush into action before the consequences have been properly debated”.
Of course, it should be pointed out that the citizenry of ancient Athens excluded women and
slaves but, in every other respect, the fundamental principles are there: active participation in
public affairs, public discussion and debate before decisions are taken and everyone’s vote
counting equally regardless of status or wealth. Undoubtedly, the scope for direct
participation in public affairs has become more limited in modern mass democracies, which
tend to have large state bureaucracies and where the decisions that have to be taken are
increasingly technical. At the same time, however, the emergence of mass political parties
and interest groups, the mass media and the Internet has provided new ways of enabling
the citizen to participate, exercise influence and engage in the discussion of public issues and
policies.
This brings us to the third possible criterion of citizenship: allegiance. When someone seeks
to acquire citizenship through the process of naturalisation - in other words, where someone
who was born in another country becomes a citizen of the country in which they now choose
to live - they are often asked to sign an oath of allegiance to their new country. In becoming
naturalised, the immigrant acquires the same political and civil rights and responsibilities as a
citizen by birth. Some countries even allow their naturalised citizens to hold dual
citizenship. That is, they are a citizen of the country of their birth (or their parents’ birth) and
a naturalised citizen of the country in which they now live.
But this does not always mean that their legal status is exactly the same as the status of a
citizen born in that country. For example, if war breaks out between their country of origin
and the country in which they are now a naturalised citizen, they may find that some of their
rights and freedoms are withheld because the government is not sure about their loyalty. They
may, for instance, find themselves arrested and interned as security risks regardless of
whether or not they are naturalised citizens in the country in which they now live. It is also
the case that, in many countries, naturalised citizenship may be withdrawn if the naturalised
citizen commits a serious crime or becomes involved in a plot against the regime. By
contrast, in modern times at least, the citizen by birth is unlikely to have his or her citizenship
withdrawn whatever crimes they commit, even if the crime involves treason.
Why should the State make this distinction between the rights of citizens by birth (or by
descent from parents and grandparents) and the rights of dual citizens and naturalised
citizens? Rightly or wrongly, when the State makes this distinction, it is assuming that legal
status is not sufficient to guarantee either the good conduct of a naturalised citizen or his or
her loyalty to the State. In some circumstances, such as hot and cold wars, international
terrorism or conflicts in neighbouring countries, the State - and many of the people who live
in it - may begin to question the allegiance of some of its citizens if it is thought that they
may have conflicting loyalties.
So what is the basis of that allegiance? In recent times, the bond that has united the citizens
of a particular state has usually been their nationality, a common history and shared cultural
traditions. It is no accident that the modern idea of citizenship - with its emphasis on rights
and obligations – has developed alongside the emergence of nationalism and popular
democratic control (the idea that it is the will of the people that gives a government legitimacy
rather than God or the regime’s capacity to coerce everyone). In modern times, “the People”
has usually meant “the nation” although sometimes it has been used to describe people whose
religion or ethnicity has transcended national borders. For example, the term “pan-Arabism”
emerged as a reaction against the imposition of territorial borders by colonial powers in the
19th and early 20th Centuries.
It is clear that, when citizenship is linked to identity and cultural heritage in this way, some
people will be included and some will be excluded. Is that inevitable or is it possible to also
think in terms of a citizenship which is more inclusive, more universalistic?
This is the intention behind the idea of European citizenship. It begins with the question: Can
people have a sense of belonging to and identifying with something larger than the nation
state? The idea of a united Europe of some kind has been around for over 80 years now. In
1923, Count Coudenhove-Kalergi wrote a book called Pan-Europa which argued for a
European Federation. Six years later the French statesman, Aristide Briand, called for a
European Federal Union. Although both ideas gained some popular support at the time, the
Wall Street Crash, the economic recession and political and international developments in the
1930s which led to war, soon put a stop to any further thinking about federation.
Just after the Second World War, Winston Churchill made a speech in Zurich where he called
for “a United States of Europe”. This reflected a view that was growing across war-torn
Europe and in 1949 the Council of Europe, comprising 11 member states, came into being,
based at Strasbourg. Within a year, it had produced the European Convention on Human
Rights and then set about establishing the means by which these rights could be protected,
including the European Commission of Human Rights (1954) and the European Court of
Human Rights (1959). Whilst these developments did not create a federal level of citizenship
or a new kind of transnational allegiance, they did provide the citizen with an opportunity to
appeal to the European Court when he or she believed that their universal rights had been
denied by their national government or law courts.
The Council of Europe has also played an important role in the promotion of cultural rights,
including the recognition of the rights of minority groups, the protection of heritages, the
expression of cultural identities and the right of access to other cultural and technological
resources. This, in turn, has broadened the context for thinking about European citizenship so
that it now includes not only the traditional rights and privileges of the citizen in a democratic
state but also a third dimension: the general right to have one’s culture and heritage
recognised. In the highly diverse societies in which most Europeans now live, this notion of
cultural citizenship could become increasingly important as a means of generating a sense of
belonging, particularly amongst groups who often feel marginalised by the dominant cultural
groups.
The other key development towards greater European integration in the period just after the
Second World War was the growth of economic cooperation within Western Europe which
led gradually to the establishment of the European Union. In 1951, six countries set up the
European Coal and Steel Community [ECSC] to coordinate production and industrial
investment. Six years later, the same six countries established the European Economic
Community [EEC] and, over the next 30 years, the membership expanded from six to 12. In
1986, the 12 member states of the EEC signed the Single European Act, deepening economic
integration. The Maastricht Treaty, signed in 1992, then transformed the Community into the
European Union [EU] and paved the way for the enlargement process through which the
former communist states of Central and Eastern Europe could join the EU.2
Since one of the four main institutions of the European Union is the European Parliament,
with direct elections held every five years in which the citizens of every member state can
vote and elect their own representatives, there is clearly a potential for the development of a
European citizenship that transcends nationality. However, to date, there has not been much
evidence of this happening on a wide scale. The turnout for European elections is usually low
and the vote often reflects people’s concerns with national rather than European issues and
their attitudes to the national government of the day.
Although there is support in some member states for more political integration - in May
2000, for example, the then German Foreign Minister, Joschka Fischer, argued the case for a
full European Federation with its own government and parliament - there is also widespread
opposition to national governments ceding much more national sovereignty to the institutions
of the EU.
At present, it would seem therefore that most progress has been made in developing the rights
dimension of European citizenship but the evidence of active participation in public life at the
European level is more limited and, as yet, there is little evidence of a clear sense of
allegiance at the European level strong enough to transcend other loyalties, particularly
allegiance to the nation state.
However, this rights-based concept of European citizenship which has emerged since the
Second World War – and which has been expanding to incorporate not only civil and political
rights but also social, economic and cultural rights – is interesting for two reasons. First, by
shifting the emphasis from national rights to universal human rights, it has shifted the focus
from citizens as “the people” - or the nation – to citizens as “persons”. Second, it has opened
up the possibility that citizens across Europe might share an allegiance to these human rights
which might transcend national borders.
Furthermore, this rights-based concept of European citizenship has not simply been an
academic exercise. One of the core activities of the Council of Europe over the last 50 years
or so has been to elaborate the concept by developing a broad network of so-called “European
standards” through nearly 200 conventions and agreements and hundreds more
recommendations.3 Since 1949, these common standards, covering almost all areas of public
and private life and relations between individuals and the state (except in the military and
economic fields), have largely been introduced into the legislation and practices of all 47
member states of the Council of Europe – most recently, since 1989-90, in the countries of
Eastern and South Eastern Europe.
2. A legal definition of European citizenship within the European Union can be found in Article 17 of
the Treaty establishing the European Community. It states that any person holding the nationality of an
EU member state shall be a citizen of the Union. This entitles that person to move freely within the EU,
vote and stand as a candidate in elections to the European Parliament, petition the European Parliament
and receive protection from the diplomatic or consular authorities of any EU member state when they
are in a third country.
3. These recommendations are not binding instruments on governments, but they do represent a
‘common policy’, and the Committee of Ministers asks the government of each member state to ‘inform
it of the action taken by them’ on these various recommendations.
But, even if we assume that the peoples of Europe are aware of these rights and are committed
to them, is this sufficient to generate a sense of belonging to a wider European community in
the same way that citizens of a nation state feel that they belong to a particular political and
cultural community? Or are they too culturally diverse for this to ever happen?
European Citizenship and Cultural Identities
For much of the second half of the 20th Century, many of the social, ethnic, religious and cultural divisions and
conflicts which have shaped the history of Europe were present but hidden – simmering under
the surface. This was because their presence was masked by the all-embracing ideologies of
the Cold War which dominated international relations and even everyday life. However,
some of the old social and cultural divisions reappeared after the break-up of the communist
bloc in the last decade of the 20th Century.
At the same time, the rest of Europe was also becoming more ethnically and culturally
diverse, partly because of increased population movement within Europe, particularly from
east to west, and partly because of increased immigration from outside Europe, particularly
from the former colonies after they became independent in the mid-20th Century.
Cultural diversity, in itself, does not necessarily create a problem. Problems arise where:
- The majority feel threatened by the beliefs, values and way of life of a particular
minority;
- A minority feels threatened by the beliefs, values and way of life of the majority;
- A minority feels marginalised and discriminated against, or feels that their cultural
traditions are not respected by the government or by the majority community;
- A minority seems to exclude itself and opt out of any participation in the
community.
In these circumstances, governments become concerned about the increasing potential for
social tension, even conflict, particularly in Europe’s larger cities where the social and
cultural mix often seems to be highly volatile. The issue then is how best to create the
conditions which will ensure peaceful and constructive coexistence between the diverse
communities in each society.
One response from governments has been to introduce more policies aimed at addressing the
social and economic deprivation of the more marginalised minority groups. However, it may
also be necessary to examine why these groups appear to be disaffected with and feel
excluded from democratic pluralist politics. After all, the basic principle of pluralist
democracy is that it is supposed to permit the peaceful coexistence of different interests and
convictions within the same political community. Undoubtedly, there is scope for more
political reforms to make pluralism more inclusive. Critics point to the fact that some groups
find it more difficult to get air time on the mass media to present their case or that the
spokespersons who do appear on television are not representative of their communities. They
also argue that, in modern pluralist democracies, groups need a high level of resources and
organisational skills and a network of connections within the political system in order to
exercise any influence over the political process.
Finally, the critics also argue that pluralist democracy emerged at a time when people made a
clear distinction between public life and private life. Matters of faith and identity were kept
outside the political arena, while the interests competing for influence tended to be social and
economic. In a multi-cultural, multi-ethnic society, it may be more difficult to retain that
clear distinction between the public and the private and then the question arises as to how
effectively the political system can cope with identity politics.
It has also been argued that more inclusive policies and reforms to the political system will
not be sufficient to ensure that all people feel a sense of belonging and allegiance to the
community in which they now live. This has led to a much greater focus on citizenship
education than was the case 25 years ago. But this raises the issue of what the basis for that
allegiance will be: what, other than self-interest, would bind people to a community if they do
not share the same history, heritage and cultural traditions as the majority of people living
there?
Some have argued that citizenship education and the process of becoming a naturalised
citizen should inform the newcomer about the community they have joined so that they can
become better integrated. Others argue that all of us - and not just the immigrant population
- need to be more aware about what it means to peacefully coexist with people from other
backgrounds, cultures and traditions.
This brings us back to the issue with which we ended the previous section. Is it possible to
shift the focus from “people” to “persons” and generate a sense of belonging or allegiance to a
political culture based around universal human rights?
Human Rights and Core Values
One difficulty with generating a sense of allegiance to universal human rights is that very few rights could be said to be absolute. There are exceptions, such as the prohibition of torture and slavery, but generally the international conventions on rights (and most countries’ bills of rights) specify certain conditions under which it might be acceptable to deny someone a specific right. Take, for example, freedom of movement. This is a right enjoyed by everyone living in the states which have signed the European Convention on Human Rights.
However, if there was an earthquake or a hurricane, it would be wholly reasonable for the
police to suspend that freedom temporarily to prevent looting, to ensure that no-one else gets
hurt, and to enable the emergency vehicles to get access to injured people. Another example
might be freedom of association. The right to form a political party, for instance, might be
restricted if one of the aims or policies of that party is to abolish all other political parties or
associations and create a one-party state. In most countries, we would also expect that some
of our basic civil and political rights might be temporarily suspended in times of war or
national threat.
There are also lots of circumstances where two or more rights may conflict with each other.
The rights of a minority culture to continue its traditions, customs and religious practices may
conflict with the rights of an individual member of that group if he or she chooses an
alternative way of life or religion. Freedom of speech may not always be permitted if a public
speech incites others to attack people because they have different beliefs or belong to a
different ethnic group. Similarly, we may not be allowed to say in public what we think if it
libels some other person or offends them.
In everyday life, therefore, we find that we are often being asked to consider the
consequences of our actions before exercising our rights and, if we choose to ignore the
consequences, we may find that we are the object of widespread social disapproval or we may
even find ourselves in the law courts.
Rights tend to be highly specific, relating to particular circumstances and problems. Most of
them, for historical reasons, also relate to the conduct of the state and its institutions towards
its citizens and the citizens of other states. We need to look deeper. We need to look at the
values which underpin these rights, that is, the values which are concerned with:
- How we treat each other, even when there are fundamental differences between us and we disagree with each other about many things that we hold to be important.
- How we resolve conflicts and disagreements between us when they arise.
The authors of this book have identified a number of core values which we think perform
these twin functions in everyday life as well as in the way the institutions of the democratic
state are supposed to treat the citizen. We have described them as “procedural values”. We
mean by this term the values that should guide the way we proceed in our dealings with each
other at the level of the individual, the group, the community or the nation.
These procedural values are ethical values and they are also at the core of the practice of
democratic politics. They are the values which enable us to talk to each other, live alongside
each other and try to find compromises and solutions to our common problems without
either resorting to violence or refusing to interact with each other in any way. They are also
the values which enable us to live together even when we fundamentally disagree with each
other on religious, political and ideological grounds.
We have used everyday language to describe them rather than use the more precise definitions
of political philosophy because we believe that they are values which do not just belong to the
political arena. They can also guide the ways in which we relate to each other at every level of
social interaction and communal living. We would argue that the following are the most
fundamental of these procedural values:
Dignity: All people have an equal entitlement to respect because of their humanity rather
than because of their importance, status or wealth.
Reciprocity: Treating someone else in the same way as one would wish to be treated.
Fairness: A way of making decisions or passing judgment impartially, without
discriminating between people who are equally deserving or in need and without knowing
whether the outcome will be to one’s own benefit.
Toleration: The degree to which we accept the right of others to express alternative ideas and
opinions which we may disapprove of, without attempting to force them to change their point
of view.4
Freedom: To be able to take action for oneself and others and to make choices between real
and realistic alternatives without being coerced.
Respect for Reasoning: A willingness to give reasons why one holds a particular point of
view and to give reasoned explanations for one’s actions, and also to expect others to do the
same.
Respect for Truth: A willingness to be honest and truthful in our dealings with others and to
expect the same truth and honesty from them unless they give us good cause to doubt them.5
It should be emphasised that this is not a book about procedural values. The case studies (and
the accompanying discussion cards) are intended to encourage the user to apply these values
to a variety of issues where human rights conflict. Just as we learn skills by practising them
so we acquire these procedural values by practising them.
That is why this book and the discussion cards that accompany it put so much emphasis on
exploring alternative perspectives on a range of important contemporary questions and issues.
You may have already formed a point of view on some, even all, of these topics. We are not
attempting here to persuade you that there is a correct or preferred view on each and every
one of these issues. What we do hope is that you will be willing and prepared to test your
opinions against those put forward by others who may disagree with you and that you will
accept that they hold those views sincerely and may have good reasons for seeing things
differently from you. In doing so we hope that you will gain a better understanding of their
views and your own.
4. Toleration does not prevent us from expressing our disapproval of certain opinions or from seeking to persuade people to agree with us.
5. We cannot take part in any kind of social interaction, fairly, freely, with toleration and reciprocity, without good faith, and that good faith depends on our willingness to be honest and truthful and on the willingness of the others we are interacting with also to be honest and truthful with us. Of course, in public life, there may be certain circumstances where the truth cannot be told (to respect confidentiality, to prevent serious harm being done, etc.), but then the reasons for withholding the truth should be capable of being publicly justified and scrutinised.
KEY QUESTION ONE:
How is the ordinary person to be
protected against the arbitrary power of the state?
Mihai Manea and Christopher Rowe
Democracy is based on the principle that the people hold power over legislators and the
government. Power and civic responsibilities are exercised in the name of all citizens, through
their freely elected representatives. Democracy is the institutionalisation of individual
freedoms. In any democratic state, therefore, while respecting the will of the majority, it is
vital to protect the rights of individuals and minority groups. True democracy cannot be
based on the “tyranny of the majority”.
Citizens do not only have rights – they have the responsibility to participate in the political
system, and to accept the rule of law. In its turn, the State has the responsibility to uphold the
rule of law, in accordance with established, open procedures and refraining from the arbitrary
use of state power. The history of the last 50 years or so has seen many instances of such
arbitrary use of state power and human rights violations; the usual justification for such
actions being either the will of the majority of the people, or overriding concerns about
national security.
As a result, a tension exists between the agents of the state, who claim the right to suspend civil liberties in the face of emergency situations, and the organisations (both within individual countries and supranational bodies such as the European Court of Human Rights) set up to defend the rights of citizens and to hold state agencies to account. This
tension between state and citizens can be illustrated by some significant examples: the rights
of individuals to be exempted from national laws in order to protect their human rights; the
treatment of prisoners and terror suspects; and the rights of individuals placed in psychiatric
hospitals.
The role of the European Court of Human Rights also illustrates the tension that can exist
between national governments and supranational organisations. Set up to implement the
principles of the Council of Europe’s Convention for the Protection of Human Rights and
Fundamental Freedoms (1950), the Court rules on complaints of human rights violations brought by states, groups or individuals. Its rulings have often shaped the changing
relationship between the citizen and the state - often involving matters of religious faith,
intellectual freedom and moral philosophy.
The starkest example of the relationship between state and citizens is the question of the
rights and treatment of prisoners. It can be argued that prisoners are, by definition, guilty of
crimes and have properly been denied their liberty by being locked up – and yet prisoners,
being totally under the power of the state, have special rights to be protected. This right to
protection is not only against direct physical abuse and bad treatment, it also involves
protection from neglect, isolation or loss of dignity.
Many international organisations and instruments exist to uphold the rights of prisoners – Amnesty International, the International Covenant on Civil and Political Rights, the United Nations Standard Minimum Rules for the Treatment of Prisoners, and so on. According to the United Nations Basic Principles for the Treatment of Prisoners, “all prisoners shall retain their human rights and fundamental freedoms”. In other words, being a prisoner does not mean that you cease to be a citizen.
The reality is often different. Conditions of detention vary from country to country, and from
facility to facility, but standards are often low. Many prisons are affected by severe
overcrowding, decaying infrastructure, poor education and medical care, abuse by guards and
prisoner-on-prisoner violence. In some countries, a culture of secrecy makes it impossible to
even determine what the total prisoner numbers are. Sub-standard conditions are often
concealed from public scrutiny. There are frequent outbursts of prison violence, such as the riot at Bogotá’s La Modelo prison in which at least 25 prisoners were killed. In
some instances, prison deaths receive little public attention and are regarded almost as
routine.
The treatment of prisoners is one of the issues where there is a danger of the “tyranny of the
majority” overriding the rights of individuals or minority groups. Public attitudes to prisoners
often suggest “they deserve punishment”, or “the money should be spent on more important
things”. Such attitudes are often reflected in the treatment of prisoners of war or terror
suspects detained without trial. Some argue they are an enemy and a threat – the state “cannot
afford” to respect their civil liberties above the safety of the majority. Abuses have occurred
without much public outcry. Amnesty International has stated that the treatment by the United
States of prisoners detained after the wars in Afghanistan and Iraq, at Guantanamo Bay or at
Abu Ghraib, has emboldened abusive regimes and weakened human rights around the world.
There are less dramatic ways of infringing the rights and protection of prisoners. The prison
population in many countries includes a higher-than-average proportion of people suffering
from tuberculosis, or from AIDS, or from mental health problems, or from drug addiction or
from low educational attainment. The responsibility of the state is not only to protect the
majority from suffering harm at the hands of criminals – it is also the responsibility of the
state to protect the criminals while they are in the charge of the state.
An important aspect of this responsibility concerns people incarcerated in psychiatric
hospitals. Even those citizens who are mentally ill continue to be citizens. Patients in
psychiatric hospitals, or in similar institutions such as asylums or the mental wards of general hospitals, may be there for their own protection, either in the long term because of a
permanent condition, or temporarily in the hope of recovery. Equally, they may be held there
for the protection of the wider community. There have been several cases in recent years
where innocent people have died as a result of violent attack by someone with a psychiatric
disorder. It is not always possible to establish the correct balance between the rights of the
individual and the safety of the public.
Mental hospitals have existed for centuries. The first known asylum, Bethlem Royal Hospital
(“Bedlam”), was founded in London in 1247. For most of Europe’s history, conditions in
mental institutions were cruel and inhumane. The mentally ill tended to be regarded with fear
and lack of understanding by “normal” citizens. Despite some honourable exceptions, such as
Philippe Pinel, the superintendent of the Asile de Bicêtre in Paris in the 1790s, and his
assistant, Jean-Baptiste Pussin, mental institutions remained merely repositories for the
mentally ill, outside mainstream society.
Another reason for the low priority given to mental health was that inmates of mental
institutions tended to be mostly from the lower classes. This is because most were not
admitted voluntarily but committed by a court order. State hospital patients tended to be those
without status or money; wealthier families were able to provide private care and to avoid the
social stigma of being labelled a “public menace”. Only in the later 20th Century, partly as the
result of the dissemination of the work of theorists such as Sigmund Freud and his successors,
has there been a change in public attitudes – though any such change remains patchy and far
from complete.
From the mid-1940s, new methods were invented for treating the mentally ill. This
represented progress, in that hopes were raised for curing mental illness and enabling patients
to return to a normal life in the community; even though the methods used (such as electric or
insulin shock therapy, and frontal lobotomy brain surgery) are now seen as primitive and
barbaric. New psychiatric drugs, such as chlorpromazine (Thorazine), revolutionised patient care and allowed
the number of people detained in institutions to be reduced. Along with medical advances,
social and political attitudes were altered, for example the “Care in the Community”
programme launched in Britain.
Although psychiatric science has advanced, the question of the rights of the mentally ill is still
complex and controversial. Can mentally ill people be given treatment, or deprived of their
liberty, without giving their consent? Are mentally ill people capable of giving consent in a
meaningful way? Should it be the child or the parents who decide whether drugs are used to
quieten “anti-social behaviour”? Or does the decision lie with the state?
There are disturbing precedents to suggest that the state cannot, and perhaps should not, be
entrusted with such decisions. Totalitarian regimes have employed psychiatric care for evil
purposes. In Nazi Germany, mental institutions were used for the repression of political
dissidents and “asocial” people, who were deemed insane because of their misguided opinions, or dangerous because they might infect others. From 1939, the Nazi regime began a secret campaign to put to death the mentally handicapped, who were to be eliminated as a supposed threat to the “biological purity of the race”. Communist
regimes also used psychiatric hospitals to deal with political dissidents.
Such abuse of psychiatric hospitals can be very convenient for any state wishing to repress
opposition. Sophisticated drugs may be used as a means of interrogation, or in order to
destabilise a prisoner’s personality. Labelling people as mentally ill avoids the difficulties of
proving guilt at a public trial. It is effective in stigmatising and subordinating opponents. It
can be used to keep people locked up indefinitely. Because of such examples of the abuse of state power, even the actions of democratic governments can arouse suspicion.
In recent times, some campaigners have advocated the abolition of long-term hospitals for the
criminally insane. They argue that the legal defence of insanity should no longer be allowed
and that those currently categorised as criminally insane should be placed either in a regular prison, if guilty of a crime, or in a regular hospital, if they are innocent victims of illness.
The role of the state in the treatment of the mentally ill remains controversial and difficult.
Michael Perlin, an international expert on mental disability law, has said that: “These issues
should matter to all citizens who take human rights seriously, and to all who care about those
who are still locked away in facilities that violate any sense of public decency. Circumstances
I have seen around the world are beyond shocking to the conscience; and this should be
puzzling, because psychiatric treatment is medical treatment, supposedly for benevolent
purposes”.
The treatment of the mentally ill is just one more example of the difficulties of establishing
the ideal balance between the rights of the state and the rights of the individual citizen. The
unjustified use of state power can lead to terrible consequences. Innocent individuals and
minority groups can lose their liberty, be cut off from friends and family, or be denied the
basic rights everybody else can take for granted in a democracy. And yet, the state does carry
the responsibility of protecting all its citizens, collectively as well as individually. If a mental
patient is placed back in the community and murders an innocent bystander, the state is held
responsible. If a terror suspect is released because there is insufficient evidence for a trial and
then blows up 200 passengers on a train, the state is responsible for that, too.
So, setting out the right principles is one thing; implementing them perfectly in a democratic
society is something else – especially when governments have to live under the constant
scrutiny of newspapers and the broadcast media, which are always full of the wisdom of
hindsight. There are no easy answers, except the basic answer that all citizens and all
governments should search for the best balance possible between rights and responsibilities.
CASE STUDY 1:
The case of extraordinary renditions in the “War on Terror”
Christopher Rowe
Timeline
1993: Islamist militants organised an attack on the World Trade Center in New York. This
attack accentuated American concerns about the growing threat from anti-western jihadists.
September 1995: The first known case of rendition involved an Egyptian terrorist suspect, Talaat Fouad Qassem, who was arrested by Croatian police with the help of intelligence provided by the CIA.
He was interrogated by American agents on a ship in the Adriatic Sea and then handed over to
the Egyptian mukhabarat to be interrogated. His fate remains unknown.
11 September 2001: Nearly 3000 civilians were killed in co-ordinated terrorist attacks on the
World Trade Center in New York and other targets in Washington. The Bush administration
planned drastic measures to combat terrorism, including the invasion of Afghanistan.
October 2001: Mamdouh Habib, a suspect holding both Egyptian and Australian passports,
was detained in Pakistan and then flown to Egypt for interrogation. Later, a private plane flew
him to a US base in Afghanistan, from where he was transferred to Guantanamo Bay.
2002: The captured Al Qaeda leader, Ibn al-Shaykh al-Libi, was rendered to Egypt, where he
was allegedly tortured. Information gained by his interrogators formed part of the evidence
used by the Bush administration to justify the invasion of Iraq.
September 2002: A Syrian-born terrorist suspect, Maher Arar, was seized in New York and
flown to Syria for interrogation. After his release, Arar claimed he had been tortured. Later, in
February 2005, a Washington Post article about his case aroused a public debate in the US.
February 2003: A terrorist suspect, Hassan Mustafa Osama Nasr, “Abu Omar”, was
kidnapped in Milan by Italian agents working with the CIA and flown to Egypt.
December 2003: US agents seized Khalid-el-Masri, born in Lebanon but a German citizen,
from a bus he was travelling on in “the former Yugoslav republic of Macedonia”. El-Masri
was flown to Afghanistan where he claimed that he was drugged and tortured. There were
strong protests on his behalf from the German government. He was released without charge.
7 July 2005: 52 civilians lost their lives after a series of suicide bombings in central London.
After the event, the British security forces were widely criticised for perceived failures to gain
sufficient intelligence about the bombers to prevent the attacks from taking place.
November 2005: the German newspaper, Handelsblatt, reported that the CIA was using an
American military base in Germany for rendition flights without informing the government.
7 July 2006: The Committee on Legal Affairs and Human Rights of the Parliamentary
Assembly of the Council of Europe published its report and recommendations concerning
“alleged secret detentions and unlawful transfers between member states of the Council of
Europe”.
10 January 2007: a headline in Spiegel Online announced: “A Milan prosecutor is making
the CIA nervous. Despite the opposition of his own government, he wants to indict 26 US
agents and five Italian secret agents for the kidnapping of a terrorist suspect, Abu Nasr. Rome
and Washington would prefer that the embarrassing trial would just go away”.
June 2007: Dick Marty, Swiss rapporteur for the Council of Europe Committee on Legal
Affairs and Human Rights issued his final report on investigations into the involvement of
European countries in extraordinary renditions.
What is in dispute here?
The controversy about extraordinary renditions concerns the balance between human rights
and the duty of the state to protect its citizens. This controversy includes legal, moral and
practical issues. What is the legal status of “extraordinary renditions”? Are they legal under
US or international law? Can they ever be morally acceptable? Are they a necessary and
effective means of protecting the mass of innocent citizens against acts of terrorism?
It is difficult to disentangle the precise legal definition of “rendition” from its associations
with other issues such as deportations or torture. “Rendition” is not new. It means the act of
handing over, or “surrendering” a person from one jurisdiction to another – as in extradition,
which is an open procedure sending a suspect to a state where he or she has been accused of a
serious crime. “Extraordinary rendition” is a secret procedure and does not yet have an agreed
definition in international law. “Rendered” persons are often sent to states where there would
be no legal basis for extradition. Unlike extradition, the secret nature of renditions means that
there is no opportunity for suspects to make a legal appeal against it.
“Extraordinary rendition” is also different from deportation. Persons may legally be deported
from a country back to their country of origin for a variety of reasons – but many renditions
involve suspects who were captured by agents working in a foreign country; or were sent for
secret interrogation not to their home state but to a completely different country. The UN Convention Against Torture also states that suspects should not be deported if they are in danger of being tortured or executed. The controversy about extraordinary renditions has been
further complicated by the accusation that renditions are invariably associated with torture.
When the “war on terror” began with the invasion of Afghanistan, the main concern for
human rights campaigners was the treatment of “illegal enemy combatants” in the detention
centre at Guantanamo Bay. Then, in 2004 and 2005, several reports drew attention to covert
rendition flights, and particularly the role of European governments in cooperating with the
CIA to allow these “ghost flights” to use European airfields and other facilities to transport
suspects to third party countries for special interrogations. Human rights protests aimed at two
chief targets: firstly against US actions as morally unjustified and illegal under international
law; secondly against the active cooperation with the US by European governments.
In 2004, the British Ambassador in Uzbekistan, Craig Murray, expressed his concerns about
rendition flights bringing suspects for interrogations taking place in Uzbekistan. The British
government reprimanded Murray for making unauthorised public statements and he resigned
in October 2004. In July 2006, the Council of Europe Committee on Legal Affairs and
Human Rights issued a report attacking “secret detentions and unlawful transfers of suspects
between member states”. In Milan in January 2007, Italian prosecutors began legal
proceedings against Italian and American agents for their role in kidnapping and rendition
flights. In June 2007, Dick Marty, Swiss rapporteur of the Committee on Legal Affairs and
Human Rights, gave a final report on its investigations, stating that sufficient evidence had been found to show that the allegations of renditions involving European countries were credible.
Human Rights campaigners argue that prosecutions such as the one in Milan are merely
small, belated steps towards righting a great wrong. They want more and deeper
investigations and maximum public exposure of the findings. On the other side are those such
as Silvio Berlusconi, Tony Blair and governments of some Eastern European states, who
believe that only decisive and united action by Europe and the US can win the “war on
terror”. For them, investigations into the dark corners of rendition flights are merely a
distraction from the vital tasks of defending innocent citizens.
At the heart of the controversy about renditions are moral issues about secrecy, detention
without trial and torture. Human rights activists claim that the chief reason why the US
renders suspects to “third party” countries is that the intelligence services in those countries
use extreme methods of interrogation, including torture, that would be unlawful, or highly embarrassing, if carried out in the United States.
Defenders of rendition claim it is a totally separate issue from torture; and that it is effective
because the questioning of suspects is carried out by interrogators with expert language skills
and cultural insights not available in the US.
Another issue is the detention and transporting of suspects in secret without trial. It has long
been a key legal principle that those accused or suspected of evil intentions have the right to
be charged with a specific crime or released after a limited time under arrest. Accused persons
should have the right to a legal representative; and for their friends and families to know
where they are. This does not apply to the suspects secretly transferred by rendition flights.
The case against renditions is not only on moral grounds but also on practical considerations.
In the past, many suspects have been “interned” without trial – Germans in Britain or
Japanese in the United States during the Second World War, for example, or suspected IRA
terrorists in Northern Ireland in the 1970s. Such precedents might suggest that internment is
invariably ineffective because there were so many instances of people being detained on the
basis of unreliable information. In Northern Ireland, for example, internment was credited
with gaining many new recruits for the republican cause. Similarly, critics of extraordinary renditions argue that they simply do not work.
They claim many “terror suspects” have been innocent victims of mistaken identity or false
accusations. Further, they claim that intelligence gained by extreme interrogations is actually
worthless because suspects are driven to confess to crimes they have not committed, or to
give information they know to be untrue. One example of this was the capture of an Al Qaeda
leader, Ibn al-Shaykh al-Libi, in 2002. He was rendered to Egypt and allegedly tortured there.
Information gained from his interrogation was used by the Bush administration to support the
case for invading Iraq. Al-Libi later claimed he had made up the “information” about links
between Al Qaeda and Saddam Hussein’s regime just to get the interrogations to stop.
It is not unusual to hear some world leaders argue that special measures are effective in the
“war on terror” and can be justified by the desperate need to protect people from the
unprecedented dangers posed by international terrorism - the human rights of terror suspects
must be balanced against the rights of the mass of ordinary law-abiding citizens to live their
lives without being attacked by suicide bombers. Legal rights and civil liberties have often
had to be curtailed in the crisis situation of war, it is argued – the defence of civilians against
acts of terror is equally necessary. But this argument depends above all upon the supposition
that the use of emergency measures like renditions is actually effective in neutralising the
terror threat.
The case that renditions and secret detentions are counterproductive even on practical grounds
can be extended further. Many observers consider that the use of secret special measures
outside the usual judicial system only undermines public support and respect for the law. It
weakens faith in democracy, both in the Muslim world and in the West. Such actions may
actually create many more new terrorists than would be neutralised by renditions. And yet …
such arguments still need to be balanced against the arguments of those who say that the
world has changed and that the great majority of ordinary people would enthusiastically give
approval to any special measures if it could be demonstrated that those measures had been
important in foiling another plot like that of 11 September 2001. Or would they?
Is there an alternative position which does not assume that there is a simple, stark choice between effectively countering terrorism and protecting the human rights of the terrorist? There
is no doubt that much work has been done already to develop an alternative position. The
Council of Europe Convention on the Prevention of Terrorism, for instance, is based on the
principle that it is possible and necessary to counter terrorism while respecting human rights,
fundamental freedoms and the rule of law. Some commentators have also argued that
terrorism cannot be effectively brought to an end without addressing the issues and concerns
which have led people to turn to terrorism in the first place. Other commentators have argued
that terrorism is a crime and should therefore be treated like other crimes. This calls for
effective policing rather than a large-scale military response. In the following section you
will find a range of different views on these issues. Some of these are controversial. Some
represent the views of leaders in the countries which spearhead the current “war on terror”.
Some represent the views of critics of the position adopted by those governments. What do
you think?
A variety of viewpoints
Advice to President Clinton from his Vice-President, Al Gore, during a private White
House discussion about the intended abduction of a suspected terrorist and its standing
in international law:
“That’s a no-brainer. Of course it will be a violation of international law. That’s why it has to
be a covert action. But the guy is a terrorist. Go grab his ass.”
A comment attributed to a former CIA case officer, Bob Baer, in 2006:
“If you want a serious interrogation, you send the prisoner to Jordan. If you want them to be
tortured, you send them to Syria. If you want someone to disappear, you send them to Egypt.”
Comments in July 2005 by Craig Murray, British Ambassador in Uzbekistan from
August 2002 until he resigned in October 2004:
“I saw intelligence material passed to the CIA and MI6 by the Uzbek security services. Much
of this material I knew to be incorrect. The intention was invariably to exaggerate the Islamist
threat in Uzbekistan and to link Uzbek opposition to Al Qaeda. The head of CIA station said
the material was probably obtained under torture but the CIA did not see this as a problem.”
An article by David Ignatius in the Washington Post, 9 March 2005:
“In 30 years of writing about intelligence matters, I have never encountered a spook who
didn’t realise that torture is usually counterproductive. Professional interrogators know how
people will confess to anything under intense pain. Information obtained under torture thus
tends to be unreliable – in addition to being immoral.”
At the end of the same article, Ignatius wrote:
“Before you make an easy judgement about rendition, you have to answer the disturbing
question put to me by a former CIA officer: ‘Suppose Mohammed Atta had been captured by
the FBI before 11 September 2001? Under US rules at the time, this man who plotted the
suicide attacks probably could not have been held or interrogated in the United States. Would
it have made sense to render him to another place where he could have been interrogated in a
way that might have prevented 9/11?’ That’s not a simple question for me to answer, even
though I share the conviction that torture is always and everywhere wrong.”
A report on “Outsourcing Torture” from Human Rights Watch in 2006:
“Even if a person is suspected of being a terrorist, it is illegal to send him or her to a place
where there is a risk of torture. Governments are aware of the legal ban on sending suspects to
such countries and they seek written guarantees – so-called ‘diplomatic assurances’ that the
suspect will not be tortured if transferred. But such guarantees are insufficient.”
The US Secretary of State, Condoleezza Rice, replying in December 2005 to criticisms of
US policy:
“Renditions take terrorists out of action. Renditions save lives.”
Jamie Gorelick, a former US deputy attorney general and a member of the 9/11
Commission, attempted to explain the difficulties of finding a fair and effective way of
dealing with suspected terrorists:
“It’s a big problem. In criminal justice, you either prosecute the suspects or let them go. We
can’t always do that with people who you think may be really dangerous. But if you have
treated them in ways that won’t allow you to prosecute them, you end up in this legal no
man’s land. What do you do with these people?”
Comments made in January 2007 by a German prosecutor, Eberhard Bayer:
“Of course it is true that we are dealing with big political issues here. But even if a crime is a
political one it still remains a crime.”
The view of a former CIA lawyer, John Radsan in 2006:
“As a society, we have not yet figured out what the rough rules are. There are hardly any rules
about how to deal with illegal enemy combatants. It’s the law of the jungle. And, right now,
the United States is the strongest animal in the jungle.”
The report by the Council of Europe’s Parliamentary Assembly Committee on Legal
Affairs and Human Rights in July 2006, concerning “alleged secret detentions and
unlawful inter-state transfers involving Council of Europe member states”:
“The Committee has found cause for concern about the conduct of the United States and of
EU member states. It recommended the establishment of: common measures to guarantee
more effectively the human rights of persons suspected of terrorist offences captured in,
detained in or transported through Council of Europe member states; and the inclusion of a set
of minimum requirements for human rights protection clauses in agreements with third
parties, especially those concerning the use of military installations within the territory of
Council of Europe member states.”
What do you think?
• If you were the chief legal adviser to the government and you received warnings of an
imminent terrorist attack, would you have authorised “special interrogations” of
suspects regarded by the intelligence services as possessing vital information?
• If you were a Muslim person of moderate views, how would you react to the news
that one of your neighbours had disappeared and was thought to have been transferred
abroad by a rendition flight?
• Can it ever be justified to hold prisoners in custody for an unlimited time without
bringing them to court?
CASE STUDY 2: When a totalitarian regime is overthrown, should
the secret police files be destroyed or should the archives be
opened so that society can confront its past?
Mihai Manea and Robert Stradling
Background
Any totalitarian regime wants to control every aspect of everyday life. As human rights
activist Rainer Hildebrandt observed in 1948, every communist country required a secret
police in order to control the population. Most regimes, including liberal democracies, have
intelligence-gathering departments that keep an eye on the activities of some individuals and
groups that are considered to be subversive or a threat to national security. The main
differences between these organisations and the secret police forces in totalitarian regimes lay
in:
• the scale of their operations;
• the extent to which they were constrained by the law;
• the extent to which they were held accountable through the democratic process and an
independent judiciary;
• the extent to which their activities were limited by the regime’s commitment to
individual human rights.
The question of scale is significant here. The secret police in totalitarian regimes cannot
function effectively without employing or coercing hundreds of thousands of people to act as
its “eyes and ears” – informers prepared to pass on information about the activities and
opinions of their neighbours, work colleagues, teachers, even family members. This
phenomenon was not unique to Communist regimes. It is believed that, in pre-revolutionary
Tsarist Russia, the police had spies in every housing block in St Petersburg and Moscow. The
Gestapo adopted similar tactics, both within the Third Reich and in the countries which were
occupied during the Second World War.
During the Cold War era two of the world's most feared secret police forces, besides the
Committee for State Security (KGB) in the USSR, were the GDR’s Stasi and Romania’s
Securitate. The Ministerium für Staatssicherheit (MfS), popularly known as the Stasi (from
the German word Staatssicherheit or ‘state security’), was the secret police and intelligence
organisation of the German Democratic Republic (GDR). Founded in 1950, it was modelled
on the Soviet Ministry for State Security (MGB) which preceded the KGB. The Stasi's motto
was "Schild und Schwert der Partei" (Shield and Sword of the Party), which demonstrates its
connections to the Socialist Unity Party of Germany (SED), the name adopted by the German
communists when the Communist Party merged with the Social Democrats in the Soviet zone
of occupied Germany in 1946.
The Securitate (the Romanian word for Security) was the secret police force of Communist
Romania. Its official name was Departamentul Securității Statului (State Security
Department). It was officially founded, with close guidance from Soviet KGB officers, on 30
August 1948 to "defend democratic conquests and guarantee the safety of the Romanian
Peoples' Republic against both internal and external enemies". In proportion to Romania's
population, the Securitate was the largest secret police force in the Communist bloc.
The Stasi had influence over almost every aspect of everyday life in the German Democratic
Republic. By 1989, it is estimated that the Stasi had 91,000 full-time employees and 300,000
informants. Additionally, Stasi resources were used to infiltrate and undermine West German
government and intelligence. In Romania it has been estimated that the Securitate employed
about 14,000 full-time agents, plus 400,000-700,000 part-time informants. Although the exact
number of collaborators fluctuated throughout the 1970s and 1980s, the figures are high for a
country with a population of about 22 million.
Timelines

The Stasi in the GDR

1945: Defeated Germany divided into four zones of occupation. The Soviet zone includes
most of eastern Germany.

1946: After the Communist Party did badly in the first post-war German elections, the USSR
called on the German Communist Party leader, Walter Ulbricht, to bring about a merger with
the Social Democrats in the Soviet zone. This led to the creation of the Socialist Unity Party
(SED).

1948: The USSR cut off all transport links between West Berlin and the West. This led to the
Berlin airlift to bring supplies into the Western-controlled sectors of the city.

1949: The USA, France and the UK agreed to the creation of the Federal Republic of
Germany (FRG) from their three zones of occupation. The Soviets responded by setting up
the GDR in the eastern sector.

1950: The Stasi was founded, with Wilhelm Zaisser as Director and Erich Mielke as Deputy
Director.

1953: Zaisser was replaced by Ernst Wollweber, who resigned after four years as Director of
the Stasi.

1957: Erich Mielke became Director and pursued a more active policy of infiltrating West
German political and economic circles with Stasi informers and spies. The foreign
intelligence section of the Stasi (the HVA) expanded rapidly under its head, Markus Wolf.

1969: The West German Social Democratic Party (SPD), led by the former mayor of West
Berlin, Willy Brandt, came to power in a coalition with the Free Democrats and introduced
the policy of Ostpolitik – improving relations with the Soviet bloc, especially the GDR.

1970s: The Stasi actively supported left-wing terrorist groups such as the Red Army Faction.
It also organised the rescue of Chilean politicians after the military coup d’état which brought
General Pinochet to power.

1974: Willy Brandt was forced to resign as West German Chancellor as a result of a spy
scandal: it was revealed that one of his most trusted aides, Günter Guillaume, had been
passing intelligence to the Stasi.

1986: Markus Wolf was succeeded by Werner Grossman.

1989: The Stasi was renamed the Office for National Security.

Sept. 1989: Demonstrations against the GDR regime were broken up by security forces in
Leipzig.

9 Nov. 1989: The Berlin Wall was opened and 200,000 East Germans crossed into West
Berlin.

3 Oct. 1990: East and West Germany were reunited for the first time since 1945.

The Securitate in Romania

1945: A Soviet-backed government is installed in Bucharest.

1947: King Michael of Romania abdicates and the Romanian People’s Republic is
proclaimed.

1948: The Securitate is formed with guidance from Soviet KGB officers. Its first Director
was General Gheorghe Pintilie (known as Pantiusa), with two Soviet officers as deputy
directors.

1951: An expanded Securitate began systematically to arrest alleged opponents of the regime
– “class enemies” – who were sent to special prisons, usually without any warrant, trial or
inquiry.

1964: The Romanian government declared a general amnesty and 10,014 people were
released from the special prisons. But arrests for “conspiring against the social order”
continued, and there was a massive increase in the Securitate’s use of informants.

1965: Nicolae Ceausescu became Communist Party leader and head of state.

1980s: Dissent amongst Romanians became more widespread as the economic situation
worsened and people experienced food shortages and power cuts. The Securitate launched a
major campaign across the country to stamp out dissent.

1987: Army forces and the Securitate violently broke up workers’ demonstrations in Brasov.

1989: Units of the Securitate were sent to Timisoara in Transylvania to put down
demonstrations against government policy. They fired on the crowds and 17 demonstrators
were killed.

Dec. 1989: Events in Timisoara triggered protests across Romania and led to pitched battles
between demonstrators (including soldiers) and the Securitate in Bucharest. The Communist
dictatorship of Ceausescu was overthrown; he and his wife were arrested, tried and executed
on 25 December.

1990: Elections were held and Ion Iliescu, leader of the National Salvation Front, became the
new President. The Securitate was disbanded and replaced by the new Romanian Intelligence
Service.

2003: Romanians voted in a referendum on a new constitution in preparation for joining the
European Union.

April 2005: Romania signed the EU accession treaty.

2006: An official report says that up to two million people were persecuted or killed by the
former Communist authorities.

January 2007: Romania joins the EU.

Source: The Cold War International History Project, Woodrow Wilson Center, Washington DC, USA
What is in dispute here?
No one seriously disputes that the secret police in totalitarian regimes have played a
significant role in violating people’s civil and political rights and in creating a climate of fear
and distrust amongst the population. There was also little dispute within most of the former
communist countries about the need to disband the security forces which had served the old
regimes, even if they were replaced by new intelligence-gathering organisations.
The issues that divided people at this time, and ever since, were concerned with whether:
• any actions should be taken against the officers who had worked for the security
services;
• the files kept on hundreds of thousands of ordinary citizens should be destroyed or
scrutinised to identify both the informers and their victims;
• any actions should be taken against people currently in positions of authority who had
collaborated with the former regime, particularly with the security forces;
• any actions should be taken against members of the public who had acted as
informers.
These are issues that arise after the downfall of any regime where people’s rights have been
consistently violated. There was little debate about what should be done in Germany and Italy
after the Second World War. The decisions were taken by the occupying powers, although the
extent to which former Nazis and Fascists were removed from positions of authority varied,
depending on which occupying power was in control. A similar process of dealing with
collaborators happened in most of the countries that had been occupied by Axis powers.
Perhaps one of the ironies of this in some former communist countries is that the decisions to
hand over lists of wartime Gestapo collaborators to the new communist secret police
significantly enhanced the power of the latter in the 1940s and ‘50s.
The issue also arose in Portugal and Spain in the period after the dictatorships of Salazar and
Franco had come to an end. It then arose again in South Africa after the banned opposition
parties were legalised and the African National Congress and the Inkatha Freedom Party
cooperated with the Afrikaner-dominated white government to introduce a new multi-racial
democratic constitution.
In recent European history, there seem to have been six different solutions to this issue and in
some cases post-transitional governments have used a combination of two or more of them,
depending on the degree of collaboration involved. The first of these is “the terror”,
characterised by mass arrests, executions, imprisonment without trial or exile. While “the
terror” is most commonly associated with the French Revolution, it was also employed with a
similar degree of ruthlessness by the Bolsheviks, both against counter-revolutionaries after the
Civil War and against those whom they alleged to be opponents of modernisation, even if
they were in the Communist Party.
A second option has been criminal prosecutions. Punishments have tended to vary from
execution and long prison sentences for the most serious offences to loss of property for
profiteers and loss of jobs for minor officials.
The third option - offering unconditional amnesty to all those who held positions of authority
under the old regime or collaborated for their own gain or acted as informers - has seldom
been employed but it was the option chosen by the first democratically-elected government in
post-Franco Spain. This was widely described at the time as the “forgive and forget” option.
It was made possible because members of the last Franco government took an active role in
negotiating the transition to democracy.
The fourth option – the truth and reconciliation commission - was the one adopted in South
Africa after the ANC had come to power through democratic elections. Here Archbishop Tutu
and President Mandela played an important role in mobilising public support for this
approach rather than vengeance against the former oppressors and their collaborators. The
objective here was to bring the victims and the perpetrators together before the commission to
talk about what had happened and then to bring about a reconciliation between them.
The fifth option is the one adopted in Greece when the Greek Republic was declared. The
files were taken out of the archives and burned in public so that no-one could ever use them to
threaten, pressurise or blackmail anyone because of their past.
The final option is the approach which has been widely used in Central and Eastern Europe
since the end of communism. This is commonly known as “lustration”, from the Latin
“lustratio”, meaning purification by religious rites, which in turn derives from “lustro”,
meaning to review or examine. By the mid-1990s, so-called “lustration laws” had been
introduced in the Baltic States, the Czech Republic, the Slovak Republic, Hungary, Poland,
Germany, Romania, Bulgaria, Albania, Russia and Ukraine.
In most of these states, the laws that were passed enabled a government department,
parliamentary committee, or independent commission to examine the files in the archives of
the secret police, and in some cases the Communist Party, to check if anyone holding or
seeking public office had previously worked for the secret police or collaborated with them
(or with agents of the Soviet Union) as informers. In some cases, this process has led to
someone being prosecuted. In other instances, the named individual has been disqualified
from holding public office for a fixed period of time. Most commonly the individuals
concerned have been “outed”, i.e. publicly named and shamed but without any further action
being taken. Indeed, this has increasingly been used as a potential political weapon by both
government and opposition parties. Interestingly, in most Central and Eastern European
countries, neither membership of the Communist Party in the past nor the holding of public
office under the Communists, has disqualified those individuals from holding public office in
the present era.
What have been the main arguments for taking actions such as lustration and criminal
prosecution against the informers and collaborators of the old communist regimes?
First, it was argued that although democratic institutions had been established and the
new governments had come to power through free and fair elections, the state
apparatus, the local administrations and the trade unions had remained virtually
identical to the bureaucracy that had been in place under the old regime. It was
assumed that these officials would slow down the reform process, or even subvert it.
Second, it was also argued that the lustration process would ensure that the new
political elite would also be untainted by the past.
Third, it was argued that it was important that everyone should learn the truth about
the past. There were two motives at work here. People had been living under a
totalitarian regime which had claimed a monopoly of truth and used censorship,
misinformation and the security police to sustain its monopolistic power. It was
understandable therefore that ordinary people would want the archives to be opened
up so that they could find out who had reported them to the authorities. At the same
time, supporters of lustration also argued that there was a right to information issue
here as well: the right to know what information about themselves was held on record
and the right to know about the background of election candidates before deciding
whether or not to vote for them.
Fourth, it was argued that lustration was a necessary process in building up trust in
the new regime and in the people who now held high office.
Fifth, it was widely argued that, if the secret files were not opened up and examined
and the results of that examination made public, then there was a real risk that some
people seeking public office would be open to blackmail and vulnerable to pressure
once they held positions of authority.
Sixth, in some states, particularly those annexed by the Soviet Union (e.g. the Baltic
states), lustration was also seen as part of the process of verifying the individual’s
loyalty to the post-1989 state, particularly where individuals may have been serving
officers in the KGB or other Soviet organisations under the previous regime and were
suspected of anti-state activities since 1989-90.
Finally, it was also argued that a proper, legal lustration process would calm down the
kind of over-heated mutual denunciation by competing politicians which took place
in most Central European countries in the first few years after the democratic
transition.
What kinds of arguments were put forward by the critics of lustration?
First, in a number of former communist countries purges of the secret police and
other departments involved in internal security had been carried out in the months
after the transition and before the lustration laws were passed. Whether or not those
who remained in the state apparatus would have subverted the democratic process
cannot really be answered since not many were removed from office as a direct result
of lustration rather than the earlier purges.
Second, since in a number of countries the communist party was not banned and
continued to contest elections, and in some cases joined other political parties in
coalition governments, it is not clear to what extent the lustration process ensured that
political elites emerged that were untainted by the past. Certainly some critics of
lustration argue that, in practice, it never served this purpose.
Third, some critics have argued that lustration laws violate people’s human rights.
The Helsinki Committee for Human Rights, which was originally established to
monitor human rights violations in communist countries, has argued that no person
should be criminalised or punished for an alleged offence committed under the
previous regime which was not in fact an offence punishable by law under that
regime. This, argues the Committee, would be a case of making something illegal
retrospectively, which the various International Conventions on Human Rights
regard as a rights violation. They also observed that the lustration process puts the
onus on the person who has been indicted to prove his or her innocence, rather than
requiring the committee which has named him or her to prove guilt. In their view, it would be
extremely difficult, when examining files collected under a totalitarian regime, to
draw a clear distinction between the innocent and the guilty in a situation in which
many persons might be both.
Fourth, some critics have also argued that it might be very unwise to rely on any files
produced by the security police when the motives of those compiling an individual’s
file were not known. What they were referring to was the widespread practice of the
security police inventing negative information about individuals who were critical of
the regime, or even of a particular policy, so that those individuals would be
vulnerable to blackmail.
Finally, some critics argued that lustration could undermine public trust in the new
democratic regime if people felt that politicians were using the archives to dig up
information with which to discredit their opponents and critics.
A variety of viewpoints
The writer and first President of the Czech Republic, Vaclav Havel, expressed the
argument for a lustration process:
“…our society has a great need to face that past, to get rid of the people who have terrorized
the nation and conspicuously violated human rights, to remove them from the positions that
they are still holding”.
but he was reported later as saying:
“I believe that the call for revealing the names of all those who were somehow connected with
the police—regardless of when and why—is very dangerous. This is a bomb that can blow
[up] any moment and once again poison the social climate, introduce elements of fanaticism,
misdeeds, illegality, and injustice…We must be able to face our past, name it, draw
conclusions from it and mete out justice; but this has to be done honestly, with consideration,
tact, generosity, and imagination. Those who admit their guilt and expiate should be
forgiven.”
Jorge Semprun, a Spanish author, described why many in post-Franco Spain wanted to
put the past behind them:
“If you want to live a normal life, you must forget. Otherwise those wild snakes freed from
their box will poison public life for years to come”.
Quoted in Adam Michnik & Vaclav Havel, Confronting the Past, 1993.
A top-ranking official in the Czech Interior Ministry explained how his office viewed
lustration.
“I believe that those who knowingly collaborated, even if they did nothing harmful, even if
they were just playing games, should be lustrated…We are attempting some kind of moral
cleanup here, to clean the society of those who morally compromised themselves. And one of
the criteria we’re using is that people should not have knowingly collaborated with the StB—
it’s as simple as that.” (Lawrence Weschler, The Velvet Purge, 1992.)
The position presented to the Polish Senate was:
“The removal of former agents and collaborators of the security services from important state
functions, together with the enactment of legal measures to prevent them from assuming such
functions in the future, is a basic requirement of justice and an essential condition for the safe
development of democracy in Poland.” (Charles Bertschi, East European Quarterly, XXVIII,
1995.)
The British historian who specialises in the modern history of Central and Eastern
Europe, Timothy Garton Ash, made the following comment about lustration laws:
“After 1989, the (roughly speaking) left-liberal, post-Solidarity leaders advanced several
arguments for not making a public reckoning with the communist past, including lustration.
They were initially in a coalition government with communists, who had just peacefully
conceded power, and the Red Army was still there. There were more urgent things to do:
building a market economy, a liberal democracy and the rule of law. Beyond that, some of
them - such as Adam Michnik, the influential Solidarity activist and political writer - argued
for doing it “the Spanish way”. Like Spain after Franco, Poland after Jaruzelski should let
bygones be bygones. This approach can now be seen to have failed. In fact, about the only
place I know where it has succeeded is Spain - and even there, only at a price. In every other
country where the nasty past was not confronted, it is still plaguing current politics.”
Article in the UK newspaper, The Guardian, 24 May 2007.
The academic and former European Commissioner, Ralf Dahrendorf, has suggested
that:
“a country following a period of totalitarian or dictatorial rule should first lay the foundations
for the future, then turn to tackling the past. First build your liberal democracy, market
economy and the rule of law, as West Germany did in the 1950s and Poland in the 1990s;
then address the issues of the recent past.”
The International Labour Organization has criticised certain lustration laws on the
grounds that they
“may violate fair employment laws, especially if the individual is not afforded the right of
appeal before removal from his position.”
Adam Michnik, one of the leaders of the opposition to communist rule in Poland,
suggested that:
“it is absurd that the absolute and ultimate criterion for a person’s suitability for performing
certain functions in a democratic state should come from the internal files of the secret police”
Joanna Rohozinska, East European Democratic Centre in Warsaw, Poland, also picks
up the concern raised by Michnik:
“[What is] puzzling is the inclination to take … files from the former regime in general
at face value. While the system was in power, it certainly bred suspicion and distrust of
authority, even for those working within it. Why put stock in the products of such a
discredited system now?”
She then used the experience of Lech Wałęsa, the Solidarity leader in Poland, when
documents were revealed that apparently implicated him as a collaborator with the
secret police:
“In Wałęsa's case, the evidence involved questions surrounding the identity of an
informant named ‘Bolek’, who worked for the secret services from 1970 to 1976. …
During the trial, a SB report from 1985 concerning the fabrication of documents
designed to compromise Wałęsa by insinuating connections with the Communist
special services was revealed. According to the report, the SB had created false
documents for years, including fictitious anonymous information, allegedly authored
by Wałęsa under the pseudonym ‘Bolek’ and payment receipts for his services. It was
shown that these materials were used within Poland and abroad, and were even sent to
the Nobel Peace Prize Committee in 1982 in an attempt to compromise Wałęsa's
candidacy (where they must have had some effect since his candidacy was put off for
a year). … The court cleared Wałęsa.”
Milos Zeman, in a speech on the lustration process to the Czech Federal Assembly in
1991, emphasised:
“[If] a candidate runs for public office, his eventual voter has the full right to know all
relevant facts about the candidate, and I also consider a relevant fact the information whether
he was, or was not a collaborator of StB, whether he has a criminal record, or whether he
suffers from unrecoverable diseases. On the other hand, it is the free will of voters whether
they will elect such a candidate.”
In response to criticisms that the motivation behind lustration in the Czech Republic
was vengeance, Vojtech Cepl, Justice of the Czech Constitutional Court responded:
“If revenge had been our motivation, there are more effective ways of going about it that
inflict a far greater sanction than symbolic acts of condemnation, lustration and restitution.”
What do you think?
Across Europe, from Northern Ireland to Kosovo, people are still trying to come to terms with
the recent past and with the terrible things which people have done to each other, whether as
neighbours, colleagues or holders of positions of authority and trust. What should they do?
• Should they, in the name of justice and human rights, throw open the archives
and publicly confront the things that people have done to each other? Or
• should they burn the archives, try to forget what has happened and concentrate
on making sure such events never happen again?
KEY QUESTION TWO: Does the state need to protect people
from themselves?
Robert Stradling
The state is the political organisation which has a monopoly over the use of legitimate force in
a particular territorial area. Today, increasingly, that territory coincides with a geographical
area which contains a nation and so we commonly use the term “nation state”. But,
historically, that has not always been the case. We have had city states like ancient Athens or
medieval Florence. We have had small states ruled by princes or dukes rather than monarchs.
Territories have been conquered and ruled by an external power, and some states have created
vast empires which needed to be governed. Even today, we can point to some states, mostly
former colonies, which contain several large nationalities and where anarchy or civil war is
only prevented either because those in power have more weapons and security forces than the
others or because the people living in that territory have agreed to some system of
government which they feel does not discriminate against some national groups and in favour
of others.
In the previous key question, we saw that there has been an ongoing struggle over the
centuries between those who have power and those who do not. In many ways, the political
history of the world throughout the ages has revolved around that struggle and the successful
and unsuccessful attempts by the ruled to protect themselves from the exercise of arbitrary
power by the rulers, whether the regime or state was a monarchy or dictatorship with one
absolute ruler, an autocracy ruled by an elite or occupying power, a theocracy where the
religious clerics made the decisions, or a democracy based on an idea of popular sovereignty
where the government in some way represents “the will of the people”.
This notion of popular sovereignty - of government by and for the people – emerged during
the Enlightenment and found its first real expression in the period after the French
Revolution. By the middle of the 19th Century, more and more people wanted to live in
nation states where they could vote for their leaders and these leaders would be representative
of people like themselves – same nationality, similar social backgrounds.
Of course, at that time, “the people” did not necessarily mean everybody. It meant those who
were believed to have a stake in the political system – the tax payers, the owners of property,
farmers, businessmen, factory owners and employers. The kind of state they wanted was one
that would keep the national borders secure from any threat posed by foreign powers, protect
their property, maintain law and order, keep taxes low and enable them to get on with living
their lives and running their businesses with the minimum of state interference. In other
words, they wanted the role of the state to be severely restricted.
Mostly they wanted a strict line to be drawn between what they thought was the public
sphere of life and what was the private sphere. Issues associated with religion, morality and
economics were thought to be private and not the business of the state. But, even then, people
tended to be rather ambivalent about this distinction and many would still support legislation
that imposed their particular view of morality on the rest of society by, for example,
prohibiting prostitution, homosexuality, gambling, blasphemy, and so on.
After the Great War of 1914-1918 and the Russian Revolution of 1917, attitudes towards the
role of the state changed once again. When the Tsarist order was overthrown in Russia in
1917, the Russian bourgeoisie hoped that it could be replaced by the kind of liberal,
constitutional state that had emerged elsewhere in Europe in the late 19th Century. However,
the chaos of war, the shortage of food and other essentials, the rising cost of everything and
the closing down of factories because of lack of raw materials meant that the public mood
became more radical and more hostile to a liberal democratic solution to their problems. The
main beneficiary of this changing mood was the Bolshevik Party which proposed to end the
war, introduce major social reforms, grant land to peasants, hand over the factories to the
workers and transfer political power from the bourgeoisie to the workers’ soviets.
The revolutionary mood spread to other parts of Europe, particularly in those countries that
had been defeated in the Great War, such as Germany and Hungary. Even in the countries
that had been victorious, there was deep resentment and a widespread desire for social and
political change. The men who had fought in what many believed was a futile war, and the
women who had run the domestic industries during the war, were not prepared to accept the
restoration of the old pre-war order. They wanted the vote and other civil and political rights.
It is no accident that the right to vote was extended to almost all adults in so many countries
during the 10 years after the end of the First World War.
The spread of socialist and communist ideas and the impact of the economic depression in the
1920s and early 1930s meant that many people who had only just gained the right to vote
wanted the state to do more to protect people from the effects of poverty, unemployment and
ill health. The economic depression that began in the United States and then spread
throughout the world between 1929 and 1933 was something for which most liberal
democracies were poorly prepared. They had assumed that the state could do very little to
intervene in such a situation and that they had to wait for the economic markets to revive and
for market forces to resolve the crisis.
However, Franklin D. Roosevelt was elected President of the United States in 1932 with a
package of welfare and economic policies – which came to be known as the New Deal – that
were designed to intervene in ways that would protect the weak, provide people with work
and a minimum wage and stimulate the economy. By the mid-1930s, most western
governments were actively intervening in their economies.
That does not mean that more state intervention was universally welcomed. Businessmen, in
particular, reacted very negatively to this expanded role for the state. In some countries, they
sent their money abroad where they thought it would be safer and, in the United States,
business fought every aspect of President Roosevelt’s New Deal programme and officials
working for the administration were denounced as crypto-communists (i.e. secretly
communist while publicly denying it).
After the Second World War, most of Europe was devastated. More than 30 million people
were killed, many others were displaced persons and refugees, there were food shortages and
industrial production was only a third of what had been produced annually before the war.
Financial assistance from the United States was crucial but European governments also
needed to invest in their economies, modernise, provide the elderly and those injured during
the war with pensions and introduce some kind of welfare system to guarantee a basic level of
health and social security. They went about this process in different ways but it meant that
governments of all political persuasions – conservative, liberal, social democrat and
communist – had to be more interventionist in social and economic life.
Since that time, the debate about the role of the state and where the dividing line should be
drawn between the public sphere and the private sphere – where the state has no role to play -
has continued. Perhaps the most extreme version of this was the emergence of the totalitarian
state in the 20th Century: a system of government which shares some common features with
dictatorships and autocracies – such as an official ideology, government dominated by a single
political party, a large secret police force and total control of the army – but which, in
addition, has some unique characteristics that justify the term “totalitarian”.
Traditionally, the totalitarian state has also sought to infiltrate and control almost every aspect
of people’s private lives as well – the workplace, the home, the school, their religious
practices and the clubs and associations to which they belong. An earlier chapter looked at
how control of the mass media and the role of the secret police and their army of informers
are crucial to the continued existence of a totalitarian state.
However, over the last 50 years, we have also seen that not only totalitarian states but also
liberal democracies have become involved in more and more aspects of
our private lives. Probably, most people in liberal democracies now accept
that the state should not just be concerned with national security, maintaining law and order,
protecting people’s property and ensuring that their civil and political liberties are protected.
There is a widespread recognition that the state also has a role to play in meeting people’s
needs, reducing poverty and preventing one group from discriminating against other groups
on the grounds of race, ethnicity, religion, physical and mental disabilities, gender or sexual
preferences.
Most would also agree that the state has a role to play in regulating some of the activities of
industry and commerce to ensure that people are not cheated or defrauded when they pay for
goods and services and to make sure that the production of goods and foodstuffs does not
damage people’s health or their environment. Indeed the International Declarations and
Conventions on Human Rights now require all the states who have signed up to them to
guarantee a whole range of people’s social and economic rights (as we have seen elsewhere in
this booklet).
Pause for a moment and try to draw up a list of some of the different ways in which your
freedom of choice has been restricted by legislation. You will probably have thought of laws
about, for example, riding a motorcycle without a crash helmet, not wearing a seatbelt when
driving in a car, consuming or supplying drugs like cannabis, ecstasy and heroin,
smoking cigarettes in a public place, getting drunk, parking in an unauthorised place,
travelling on public transport without a ticket, having more than one husband or wife,
truanting from school, nudity at public beaches, burning the national flag, the practice of
prostitution or practising a number of professions, such as the law, teaching and medicine
without a licence.
Why has the introduction of such laws been controversial and on what grounds have they
been fiercely debated?
The traditional liberal position is that the state has a right to intervene in people’s private lives
by prohibiting certain kinds of behaviour if this would prevent Person A from doing harm to
Person B but it does not have the right to intervene if the only person likely to be harmed is
Person A himself. However, liberals who take this line usually accept that there are certain
exceptions to this. For example, it may be legitimate to intervene and prevent Person A from
doing something if he or she does not fully understand the risk they may be taking or if they
have severe learning difficulties and are not able to understand the risks involved. For similar
reasons, liberals usually accept that there may be circumstances where young children may
need to be protected in case they do harm to themselves.
At the heart of this argument is the belief that, wherever possible, the state should avoid
intervening in people’s lives. They should be free to exercise their personal autonomy or
right to make their own decisions on matters that affect only them (or themselves and their
families) and do no harm to anyone else.
However, where you stand on an issue like this rather depends on what you mean by “harm”.
In this context, the term “harm” has never been used just to refer to physical injuries. 19th
Century liberals expected the state to take action to prevent people from suffering loss
through having their property stolen. They also recognised that harm can be done to
someone’s reputation if they are slandered or libelled in some way and that laws should be
passed to protect a person’s reputation from false accusations and false statements about
them.
Being slandered can cause emotional or psychological harm and, in the 20th Century, we have
seen the introduction of an additional liberal argument for state intervention. In this case, it
would be to prevent offence being given to a particular social group. Most modern liberal
democracies now include legislation to prevent people from acting or saying things in ways
that are likely to give offence. Usually included here are actions and statements that are racist
or sexist or are likely to be offensive to a particular ethnic or religious group.
Some have also argued that there may be circumstances where the state should intervene to
prohibit something from happening even though people are not harmed, physically,
psychologically or economically, and their personal autonomy is not limited in any way. This
is where behaviour is morally repugnant because it violates the dignity of the people
concerned.
Think of the case of slavery. The liberal argument for prohibiting slavery is that the slave has
no, or very limited, personal autonomy and that there are likely to be circumstances in which
the slave suffers harm as well. Now, for the sake of argument, imagine a society in which
slavery is still practised, and then imagine a situation in which a poor beggar, living in a
society where the rulers still practise slavery, offers himself as a slave in return for a cash
payment which he can give to his family to help them. Having gained the trust of his owner
he comes to be treated as one of the owner’s family. He eats with them, he is paid for his
services and can send this money to his own family. He is not mistreated and he has as much
freedom to make decisions about his life as those who work for the owner as freemen and
freewomen.
So in this particular situation the slave does not experience any physical or psychological
harm as a result of being a slave and has more autonomy to make decisions than when he or
she was a poor but free beggar. If asked, this slave would probably say that he was better off
now, his family was better off, slavery was his choice and he now has more autonomy than
when he was a beggar. So does that make slavery more acceptable? The answer most of us
would probably give is that slavery is still morally repugnant because it is a violation of each
person’s dignity, even if the slave does not suffer harm or experience any significant loss of
personal autonomy.
Are there other actions or circumstances that could also be regarded as morally degrading or
repugnant even if the individual person involved is not forced into doing it, is economically
better off as a result and is not offended by the actions he or she performs? Some would
argue that prostitution also comes into this category but there does not seem to be the same
degree of consensus in society about its prohibition as there is about the prohibition of
slavery.
In recent times, there has also been a debate about what is sometimes described as a
victimless crime. That is where the state has introduced legislation to enable people to be
prosecuted for actions that do no actual harm to others. A typical example that is often given
by political activists who fear that people’s freedoms are being eroded is where laws have
been passed to enable local authorities to fine people who have parked their cars in
unauthorised places. Another example often quoted is legislation requiring riders of motor
cycles to wear crash helmets.
The critics have coined the word “paternalism” to describe legislation of this kind. In other
words, they claim, the state is treating us all like children who cannot make sensible decisions
for ourselves. They would argue that, if a motorcycle rider who is riding without a crash
helmet has a serious accident, then the only person harmed is the rider and therefore riders
should be able to decide for themselves whether or not to wear a helmet.
The counter argument to this is that the situation is nearly always more complicated than the
opponents of legislation on crash helmets acknowledge. An accident of this kind will almost
always have consequences for others as well as for the rider. There will be the possible
trauma of others involved in the accident. There will be the consequences for the rider’s
family. Police will be called out to the accident. If the rider is injured, an ambulance will be
called as well. Then there will be the cost to the health service if the injured rider requires
hospitalisation and an operation. That, in turn, will mean that there is a cost to taxpayers as
well. This is known as the “public charge argument” in favour of restrictions on personal
autonomy and freedom.
Interestingly, it is rare for governments to be consistent in applying the case for state
intervention to prevent people harming themselves. For example, the argument for requiring
motor cycle riders to wear crash helmets could also be applied to the smoking of cigarettes. If
there is clear evidence that there is a high risk of lung cancer from smoking tobacco, then it
could be argued that cigarettes should be banned on the grounds that large numbers of
smokers requiring treatment is a major cost to the health service and therefore to tax payers.
But, in practice, governments have tended to stop short of banning cigarettes. Instead, they
try to discourage people by printing health warnings on each cigarette packet and raising taxes
on them to make them very expensive.
Increasingly, governments are also beginning to ban the smoking of tobacco in public places
as well as to prevent smokers from polluting the air and possibly increasing the risk of others
suffering from what is known as “passive smoking”, i.e. inhaling someone else’s cigarette
smoke. Similarly, whilst it is known that eating a lot of food with a high fat content
significantly increases the likelihood of heart disease, governments have not considered
banning such foods but preferred instead to provide the public with information about the
risks involved and leave them to make up their own minds.
One of the reasons why governments can be inconsistent on these issues is that they make a
judgment in each case on what is likely to be the public response. There is a recognition that
sometimes prohibition can be counter-productive. The obvious example of this is the
prohibition of alcohol in the United States in the 1920s. It drove the production and
consumption of alcohol underground and a whole criminal underworld emerged in order to
meet the large and increasing demand for prohibited alcohol.
To conclude, then, if we look back at what has happened over the last 150 years, we can see
that, in every country, the role of government has expanded enormously. Most people now
want a lot more from government and have been prepared to pay a larger proportion of their
income in taxes in order to pay for these additional services. What has happened is that the
concept of “protection” has evolved. Now we not only want the government to protect us
from fear, threat, disorder and crime, we also want to be protected from poverty,
homelessness, pollution, preventable illnesses and diseases, and the effects of economic
inflation and unemployment. People may still argue about how far they want
the state to intervene in their lives, but there is no doubt that the principle of some level of
state intervention is now generally accepted. Gradually the debate has shifted and focuses
increasingly on:
• The limits of state intervention. Which aspects of our lives are none of the state’s
business? Will criminalising certain activities be effective or counter-productive in
changing people’s behaviour? How far does one have the right to be different or even
to harm oneself? To what extent should people be left free to do what they want if
others are not harmed?
• The extent to which we can say that people have actually consented to certain
freedoms being restricted. The idea of consent here is very important since in a
democracy the legitimacy of the state is based on the consent of the people. So, if the
government party did not say at an election that it would introduce a particular law,
and there was no referendum to ask the public if they wanted such a law, then has the
electorate consented to it? Some would argue that regular free and fair elections give
the government the right to act on our behalf and not seek our permission for every
piece of legislation. Others argue that, where a law is clearly controversial, the
government needs to consult the public before it acts.
CASE STUDY 3: The banning of tobacco smoking in public
places
Mihai Manea and Robert Stradling
Background
Most modern societies have introduced policies of some kind which are designed to restrict
the smoking of tobacco. These usually include:
• a tax on tobacco to make cigarettes more expensive and thereby reduce demand and
consumption;
• a health warning on packets of cigarettes to inform the smoker of the risk to their
health;
• health education in schools about the risks to long-term health of smoking;
• public health campaigns using the mass media;
• a ban on advertising on television and public advertising boards;
• a ban designed to prevent cigarette companies from advertising their products through
sponsorship of sporting events.
Increasingly, employers and owners of entertainment and public transport facilities have
voluntarily introduced smoke-free spaces or even banned smoking on their premises
altogether. However, in recent times, a number of European governments have either
introduced or are debating an official ban on smoking in workplaces, offices, public buildings,
cafés, restaurants, theatres, cinemas and other public places.
Pope Urban VII issued the world's first known public smoking ban (1590), as he threatened to
excommunicate anyone who “took tobacco in the porchway of or inside a church, whether it
be by chewing it, smoking it with a pipe or sniffing it in powdered form through the nose”.
But, as the timeline below shows, it is only very recently that governments have started to
consider more general bans on smoking in public spaces.
Timeline
1 January 2004: The Netherlands banned smoking in certain public areas such as railway
stations and offices but, in places such as hotels, bars and restaurants, controls on smoking
remained voluntary.
March 2004: Ireland banned smoking in bars, restaurants and enclosed workplaces. People
smoking in these places face a fine of up to €300.
5 April 2004: Malta introduced a ban on smoking in public places.
1 June 2004: Norway introduced a ban on smoking in bars, cafes and restaurants.
August 2004: Montenegro introduced restrictions on tobacco advertising and banned
smoking in public places.
10 January 2005: The Italian government introduced a smoking ban in all enclosed public
spaces with a fine of €275 for smokers who ignore the ban.
1 January 2006: Spain banned smoking in offices, shops, schools, hospitals, cultural centres
and on public transport. Belgium banned smoking in enclosed workplaces but separate
smoking rooms were allowed in catering establishments.
March 2006: Scotland introduced a ban on smoking in public places.
Summer 2006: The Croatian Government announced it would pass a new law banning
smoking in public places but later decided a new law was not necessary and that existing
restrictions on smoking in workplaces should be enforced more effectively.
5 September 2006: Restrictions on smoking introduced in Luxembourg but separate
smoking rooms permitted.
December 2006: The coalition federal government in Germany revised its proposals to ban
smoking in public places because they might be unconstitutional. It decided instead to leave
the decision to the 16 federal Länder.
January 2007: Ban on smoking in public places introduced in Lithuania.
22 March 2007: Germany’s 16 Federal Länder agreed to ban smoking in restaurants and bars
but separate smoking rooms were permitted.
April 2007: Restrictions on smoking in public were introduced in Wales and Northern
Ireland. Restrictions also introduced in the Czech Republic.
22 April 2007: The German federal health ministry introduced a bill to ban smoking on
public transport and in federal buildings.
1 June 2007: Finland and Iceland introduced restrictions on smoking in public.
5 June 2007: Estonia banned smoking in cafes, restaurants, bars, nightclubs and on public
transport. Smokers who ignore the ban face a fine of €80.
16 June 2007: The French Prime Minister announced that a ban on smoking in offices,
schools and public buildings would be introduced in February 2008.
July 2007: Restrictions on smoking in public were introduced in England.
What is the issue here?
Three main arguments are usually given in favour of smoking bans. The first is that it would
reduce the number of adults suffering from heart disease, bronchitis, emphysema, impotence,
lung cancer, arterial narrowing and other diseases caused by smoking. There is often a
secondary issue here about the need to reduce the cost of providing health care, particularly
where the diseases and health problems are preventable. A second argument frequently given
for banning smoking in enclosed public places is that it would prevent non-smokers from
having to inhale other people’s smoke. Recent medical research has shown that passive or
second-hand smoking (i.e. smoke passively inhaled by non-smokers after it was exhaled by
active smokers) causes the same problems as direct smoking, including lung cancer,
cardiovascular disease and lung ailments, bronchitis or asthma. In 2002, a study by the
International Agency for Research on Cancer of the World Health Organization concluded
that non-smokers are exposed to the same carcinogens as active smokers.
A third argument for smoking bans asserts that restrictions on smoking in cafes, bars,
restaurants and other enclosed spaces where the public gather can substantially improve the
air quality in such establishments. Some research has also shown that improved air quality
results in decreased toxin exposure among employees in offices.
Smoking bans have been criticised on a number of grounds. The most common criticism is
phrased in terms of a general dislike of government regulation of personal behaviour. One
version of this argument which is frequently expressed in the United States and other
countries where there is a long libertarian tradition, such as in the United Kingdom, is that
smokers who freely choose to smoke and are harming themselves, have the right to do so, in
the same way that they are free to choose to take their own lives. Those who adopt this line
tend to argue that the prohibition of smoking creates a “victimless crime”. In order to do this,
they usually have to challenge the research on the risks of ill health through exposure to
passive smoking.
Another “rights-based” version of the argument against bans on smoking in bars and similar
public venues is that it violates the owners’ property rights. Here the argument is that workers
and customers who enter a private establishment or household that permits smoking are said
to have implicitly consented to the rules set by the owner of the establishment. So, for these
opponents of a smoking ban, this is a problem of individual rights and of the relationship
between the citizen and the state, just as the anti-smoking lobby focuses on the rights of the non-smoker
to a non-polluted and healthy environment.
Representatives of the tobacco manufacturers and retailers and of the catering and
entertainment industries often argue that smoking bans seriously affect their business and hint
at the hypocrisy of governments whose revenue is significantly increased through taxation on
tobacco products. Indeed, in some countries, it appears that the authorities do little to actually
enforce their smoking prohibitions, but continue to profit from tax on tobacco products.
Finally, there are those opponents of smoking bans and other measures to reduce smoking
who argue that they are not effective in reducing the numbers of people who smoke. They
point to the examples where the percentage of smokers in the population has remained the
same or only dropped by a small amount after the introduction of a smoking ban. For
example, in Ireland, the proportion of people who were regular smokers remained roughly the
same in spite of the ban. Similarly, they point out that the high cost of a packet of cigarettes
in Norway and the increase in price by 20% in France in 2003 did not noticeably reduce the
numbers of smokers in those two countries.
On the other hand, those in favour of smoking bans and other anti-smoking measures point
out that in Italy tobacco retailers experienced a 20% fall in sales of cigarettes after the ban had
been introduced. However, some researchers have also indicated that there is evidence in some
countries, such as the United Kingdom, that bans on smoking in public places may lead to
more smoking at home and in the streets and car parks outside cafes, bars, restaurants and
workplaces.
A variety of viewpoints
Irish Prime Minister, Bertie Ahern, speaking in 2004 about his government’s decision to
introduce a smoking ban in public places said:
“Health and quality of life issues are important to people in their place of work….Being in a
room in which there are smokers means being exposed to at least 50 agents known to cause
cancer and other chemicals that increase blood pressure, damage the lungs and cause
abnormal kidney function.”
Dr Peter Maguire, Northern Irish member of the British Medical Association’s Science
Committee, referring to the introduction of a smoking ban in Ireland said:
"As an Irishman, who in the name of God would have thought the Irish would be the first in
Europe to ban smoking in public places? It's a national hobby in Ireland."
Norwegian Health Minister, Dagfinn Hoybraten, said that a smoking ban is needed in
Norway:
“to protect people who work in the catering industry from the effects of second-hand
smoke…[The] change was not conceived to reduce smoking, but [he hoped it] would be a
positive secondary effect.”
A spokesperson for Freedom2Choose, a pressure group in the UK which has lobbied
against a smoking ban, argued that steps could be taken to encourage people to stop
smoking without resorting to legislation:
“We are opposed to an outright ban on smoking in public but we’re not opposed on health
grounds. Nobody is stupid enough to think that smoking is good for you, but we see a ban as a
sledgehammer to crack a nut”.
A spokesperson for ASH, a pressure group which has lobbied for a smoking ban in
Scotland, saw the ban as a means of improving everybody’s quality of life:
“Tobacco has done so much damage to Scottish society, these new laws will help us to
improve everyone’s quality of life. ASH Scotland strongly endorses this move from the
Scottish Executive. It is a bold and radical proposal to find a Scottish solution to a Scottish
problem”.
The director of Forest, a pro-smoking pressure group in the UK stated that his
organisation would continue to oppose a smoking ban even after it had come into law in
Scotland:
“The executive has decided to snub the silent majority in favour of the vociferous anti-smoking minority… This is not the end of the smoking debate. It has only just begun.”
A nightclub manager in Oslo, contemplating the effect of the smoking ban in Norway,
said:
“We hope that business won’t be hit…It’ll take a few months to find out, but the biggest
uncertainty is how the law will be applied. Will we lose our licence if someone has a
cigarette and we can’t persuade them to stop?”.
A representative of the Scottish Licensed Trade Association, which represents those who
are licensed to sell alcohol and food to the public, expressed their concern:
“We’re very disappointed but we’re not surprised. It seems the priorities of the executive are
to criminalize ordinary people in pubs instead of tackling real crime in our cities and towns.
We will continue to fight this decision. We owe that to the licensed trade, which after today
[when the smoking ban came into force] could be decimated.”
A spokesperson for the British Heart Foundation in Scotland said:
“We hope the ban will encourage smokers to give up smoking and significantly reduce their
chances of coronary heart disease.”
Evidence of the impact of smoking bans on the catering and entertainment industries
has tended to be contradictory.
The Irish Licensed Vintners Association, which represents the majority of businessmen
licensed to sell alcohol on their premises, commissioned research on the economic impact
of the smoking ban on their industry:
“Research carried out by the marketing research company, Behaviour and Attitudes, confirms
the negative economic impact of the Smoking Ban on the Dublin licensed trade, with turnover
down by as much as 16%, and overall employment levels cut by up to 14% since the
introduction of the Smoking Ban.”
On the other hand, research carried out by researchers from Harvard University on the
impact of a smoking ban in Massachusetts, USA [introduced in July 2004] found that:
“Analyses of economic data prior to and following implementation of the law demonstrated
that the Massachusetts state-wide law did not negatively affect statewide meals and alcoholic
beverage excise tax collections.” [i.e. tax revenues increased because sales had increased].
What Do You Think?
Some people say that the role of government is to protect its citizens from the actions of
others rather than to protect us from ourselves. Others argue that a range of government
measures, including laws, to persuade people to live healthier lives and make safer choices are
in the interests of everyone. What do you think?
CASE STUDY 4: The Right to Live and the Right to Die
Mihai Manea & Robert Stradling
Background
The European Court of Human Rights (ECHR) is based in Strasbourg. It was created to rule
on alleged violations of the human rights set out in the European Convention for the
Protection of Human Rights and Fundamental Freedoms of 1950 and the various amendments
and protocols that have been added since that time. Although each member state of the
Council of Europe can propose a judge to sit on the European Court, the judges are not there
to represent their own countries’ national interests. They are expected to be impartial and
independent.
In recent years, more and more European citizens have brought their complaints of alleged
violations of their rights to the Court. For instance, in just four months in 2003-2004, the
Court dealt with more than 7,300 cases.
The European Court of Human Rights has dealt with many important cases. In 1999, for the
first time since the Russian military invaded Chechnya, the Court in Strasbourg agreed to hear
cases of violation of human rights submitted by Chechen civilians against the Russian
Federation. In 2003 and 2004, the court ruled that “sharia is incompatible with the
fundamental principles of democracy”, mainly because the sharia rules on basic women’s
rights and religious freedoms violate human rights as established in the European Convention
on Human Rights. In 2004, a woman whose pregnancy had been wrongly terminated in a
French hospital took her case to the Court after French courts had ruled that the doctor could
not be prosecuted for homicide as the foetus did not have a right to life which was separate
from that of the life of its pregnant mother. The European Court upheld the ruling of the
French courts and set a precedent on the legal status of the unborn baby across Europe. The
Court ruled that the accidental abortion of the foetus during an operation on the mother did
not constitute manslaughter of the foetus.
This particular case brings us to what is still perhaps one of the most controversial areas of
human rights: the right to life. Article 2 of the European Convention on Human Rights states
that:
“Everyone’s right to life shall be protected by law. No one shall be deprived of his
life intentionally save in the execution of a sentence of the court following his
conviction of a crime for which this penalty is provided by law.”
As you can see, the people who drew up the Convention in 1950 had to take into account the
fact that a significant number of member states still executed criminals found guilty of
murder or treason, and that their police and security forces carried weapons which they might
be expected to use in wartime, in self-defence, when trying to arrest a violent criminal, or
when trying to quell a riot or insurrection. Since 1950, public opinion on capital
punishment and on the powers of police and security forces has changed. Protocol 6 to
the Convention calls on member states to restrict the use of capital punishment to times of war
and national emergency, and the more recent Protocol 13 has called for the total abolition of
capital punishment, even in wartime. Most European states have agreed to this, but capital
punishment has been retained by three of the most powerful states in the world: the USA,
China and the Russian Federation (although in Russia there is a moratorium – a suspension
of the death penalty for an agreed period of time – which has been renewed until at least
2010).
In addition to capital punishment, there are a number of other issues associated with the right
to life which are proving controversial. These include abortion, medically-assisted suicide,
euthanasia[7] and embryonic stem cell research[8].
This case study will focus on the ethical debate surrounding the issue of voluntary euthanasia.
We will explore the arguments on both sides, as they emerged in the case of a terminally ill
British woman, Diane Pretty, who was in the advanced stages of motor neurone disease. She
attempted to get legal immunity from prosecution for her husband if he helped her to end her
life. Immunity was refused and she appealed to the High Courts and ultimately took her
appeal to the European Court. The judges there ruled unanimously that the refusal of the
British courts to allow Diane's husband, Brian, to help her to die did not contravene her
human rights.
Timeline of the case
November 1999: Diane Pretty was diagnosed with motor neurone disease, a degenerative
illness which progressively affects the muscles, causing increasing paralysis as the patient
loses mobility in the limbs and develops difficulties with speech, breathing and swallowing,
even though her mental faculties are not affected. There is no cure.
March 2000: Diane was confined to a wheelchair.
June 2000: Diane’s husband, Brian, wrote to the British Prime Minister asking for a change
in the law so that he could assist his wife to end her life once her condition became too bad to
carry on and her muscles were so wasted that she could not commit suicide without
assistance. The law in the UK is very clear on this. Anyone who helps another to die, even a
loved one, would be committing an offence punishable by imprisonment.
August 2001: Mrs Pretty wrote to the head of the Crown Prosecution Service which is
responsible for reviewing criminal proceedings by the police in England and Wales and which
makes decisions on whether or not to prosecute in the most complex and sensitive cases. She
asked him to grant her husband immunity from prosecution if he helped her to commit
suicide. Her request received public support from a number of civil rights organisations in the
[7] Euthanasia: sometimes called mercy killing; the act of killing or allowing someone to die
painlessly when they are suffering from an incurable and progressively worsening disease or condition.
[8] Embryonic stem cell research: human stem cells have been used for some time in medicine to test
new drugs and to transplant tissues and organs into people whose skin tissue or vital organs have been
permanently damaged. But human stem cells require donors, and these tend to be in short supply.
Embryonic stem cells, as their name suggests, are derived from embryos that were fertilised artificially
in a clinic just a few days earlier. Potentially there is no limit to the number of stem cells that could be
reproduced in this way. Each cell is capable of long-term self-renewal. Once transferred to a culture
dish, they will keep growing and dividing. After a while, they can be clustered with other cells to form
different types, such as muscle cells, blood cells, nerve cells, etc, that might then be used to treat
different diseases. The ethics of doing this, and the question of whether or not the 5-day-old embryo is
a living thing (which could have developed into a foetus if it had been transplanted into a human
uterus), have become the subject of a major political, moral and scientific debate.
UK, including Liberty and the Voluntary Euthanasia Society (VES). The Director of Public
Prosecutions (the DPP) acknowledged the “terrible suffering” which she and her family were
experiencing but said that he could not grant immunity from prosecution to her husband in
these circumstances.
31 August 2001: A High Court Judge in England granted Mrs Pretty the right to challenge
the DPP’s ruling in the courts.
18 October 2001: Three High Court judges rejected Diane’s claim that the DPP’s ruling
infringed her human rights, particularly her right to self-determination and her right to be
protected from inhuman or degrading treatment, concluding that the UK was not yet ready to
accept the idea of assisted suicide.
November 2001: Diane Pretty then took her case to the highest appeals court in the UK, but
the five senior judges who reviewed her case confirmed the decision of the High Court.
Diane then announced that she would make one last appeal, this time to the European Court
of Human Rights in Strasbourg.
19 March 2002: Mrs Pretty was now so ill that she had to travel to Strasbourg by ambulance.
Her case was heard by seven Human Rights judges and the hearing lasted 90 minutes.
29 April 2002: The Human Rights judges announced that they had unanimously rejected Mrs
Pretty’s appeal. In their verdict, they acknowledged that “The Court could not but be
sympathetic to the applicant’s apprehension that without the possibility of ending her life she
faced the prospect of a distressing death” but concluded that “to seek to build into the law an
exemption for those judged to be incapable of committing suicide [without assistance] would
seriously undermine the protection of life… and greatly increase the risk of abuse”.
3 May 2002: Diane Pretty was admitted to a hospice with breathing difficulties.
11 May 2002: Mrs Pretty slipped into a coma and died at the hospice with her husband at her
side. In a statement issued later, he said that “Diane had to go through the one thing she had
foreseen and was afraid of – and there was nothing I could do to help… And then for Diane it
was over, free at last”.
What is in dispute here?
Our concern here is only with what is referred to as “voluntary euthanasia” or “assisted
suicide”. It applies only to people whose illness is terminal and who have become so
incapacitated that they are no longer physically able to take their own lives, but who are still
sufficiently mentally competent to express the wish to die (or have made it clear in advance
that they wish this to happen once they have lost both the physical and mental capacity to die
by their own hand).
In the 1970s and ‘80s, there were a series of legal cases in the Netherlands which led to
guidelines being produced in 1984 that removed the possibility of Dutch physicians being
prosecuted for euthanasia if the patient was competent to make a voluntary and informed
decision to die, the patient’s suffering was unbearable, and the physician’s prognosis[9] was
confirmed by another physician.
[9] Prognosis: a forecast of the outcome of the disease.
In 2001, the Netherlands legalised physician-assisted suicide. Since then, around 4000
terminally ill patients each year have requested their doctors to end their lives with a lethal
injection that kills in minutes. Similar legislation was introduced in Belgium the following
year. In Switzerland, there is no law that actually permits assisted suicide by doctors or
family members for terminally-ill patients, but the authorities generally regard it as a humane
action, and prosecutions are rare, occurring only where it can be proved that the person who
helped someone to die did so for personal gain. There are now a small number of organisations
in Switzerland, such as Dignitas, which will ensure that a terminally-ill patient is examined by
a doctor to assess their medical condition and then the patient is provided with a cocktail of
drugs or pills which will kill them. The death is witnessed by two people and then the
authorities are informed of the death.
The case which Diane Pretty and her advisers presented to the judges at the European Court
of Human Rights argued that the UK government, by denying her husband immunity from
prosecution if he assisted her to commit suicide, was violating five of her human rights.
First, she claimed that the value of personal autonomy or self-determination (being able to
freely choose for yourself what you do with your life) was fundamental to Article 2 of the
European Convention on Human Rights, which guarantees the right to life. If this was the
case then, she argued, this should also include the right to end her life when she chose in a
manner of her own choosing once this was preferable to carrying on living.
Second, she argued that the UK government was violating Article 3 - which prohibits
inhuman or degrading treatment or punishment – because they were condemning her to a
prolonged life of extreme pain and intolerable suffering.
Third, because suicide was no longer illegal in the United Kingdom since the Suicide Act of
1961, her right to respect for private life (Article 8) was being violated since she was too ill to
be able to commit suicide without the assistance of her husband.
Fourth, Mrs Pretty claimed that her right to freedom of conscience was being violated because
she was being prevented from practising her belief in voluntary euthanasia for the terminally
ill.
Finally, Diane Pretty argued that she was being discriminated against (Article 14) because,
under the 1961 UK Suicide Act, a physically healthy person could legally commit suicide
whereas she was not allowed to end her life because she was physically unable to do so
without assistance.
Her advisers argued that, in the United Kingdom, there was often a discrepancy between what
the Suicide Act of 1961 said and what actually happened in practice. While the Act
decriminalised suicide, it made assisting another person’s suicide a crime punishable by up to
14 years’ imprisonment. Yet it was common practice in British hospitals for doctors to
write “Do Not Resuscitate” [or DNR] on the notes of some patients who were elderly and
frail, and other doctors have given terminally-ill patients overdoses of painkillers knowing
that this would kill them.
There have also been numerous instances where doctors have switched off life-support
machines at the request of patients who were in a permanent coma. And yet, by the year 2000,
only one doctor, a consultant rheumatologist, had been convicted of attempting to perform a
mercy killing. He had injected a 70 year-old, terminally-ill patient with potassium chloride
which stopped her heart. She was in constant and extreme pain and frequently begged her
doctor to end her life. He was found guilty but given a suspended sentence; the Medical
Council only reprimanded him, and he continued to practise medicine. There have also been
several cases where non-medical persons who have ended the suffering of close family
members have been charged with murder but juries have been unwilling to convict them.
In response to Mrs Pretty’s appeal, the seven European Court judges ruled that Article 2 of
the European Convention on Human Rights had not been violated, and that it was “a distortion
of language” to argue that the right to life could also include the right “to choose death rather
than life”. Whilst acknowledging that, without the chance of being assisted to end her life,
“she faced the prospect of a distressing death”, they did not believe that this meant that the
state was making her endure “inhuman or degrading treatment”.
Further, they did not consider that a law “designed to safeguard the weak and vulnerable” and
“designed to reflect the importance of the right to life by prohibiting assisted suicide” could
be said to violate Article 8: the right to the respect for private life. They also ruled that Diane
Pretty’s freedom of conscience had not been violated since Article 9 was not intended to
cover every possible action that might be motivated by a belief. Finally, the judges also ruled
that the UK Suicide Act of 1961 did not violate Article 14 of the Convention prohibiting
discrimination since it was reasonable in drafting a law to include exemptions because of the
risk of abuse by those who offer assistance. In short, the Court had more or less ruled that, in
certain circumstances, the right to life had primacy over these other rights.
However, the debate continues. At the opposite ends of the spectrum are two positions which
are unlikely to find much common ground between them. At one end is the “pro-life”
position, adopted by, for example, the Catholic Church, that suicide is a sin and that it is
equally sinful to assist someone else to commit suicide. At the other end of the spectrum is
the “pro-choice” position: that a person, who is terminally ill, understands the prognosis of
their illness and is mentally competent to make a decision should have the right to choose to
die with dignity at a time of their choosing and in a manner of their choice. In those
circumstances, and in a country where suicide is legal, then, they argue, it would be immoral
to use the law to prevent them from being helped to exercise that free choice.
Within the medical profession, there is also a debate going on between those who argue that
as doctors they have pledged themselves to do everything possible to sustain life and those
who argue that there are circumstances where it would be immoral to prolong the suffering of
someone who is, in any case, terminally ill.
However, in the middle of that spectrum are a lot of “grey areas” which people are still
debating. For example, those opposed to voluntary euthanasia often argue that modern
medicine is so advanced now that no one needs to die while suffering from intolerable pain
and distress. The supporters of mercy killings argue that the side effects of such treatments,
including nausea, incontinence, permanent drowsiness and possibly, in the later stages, total
dependency on a mechanical respirator, reduce a patient’s quality of life to the point where it
is not really worth living, especially without any hope of improvement.
Some opponents of voluntary euthanasia also challenge whether we can always be sure that a
dying person’s request to be helped to die is genuinely voluntary and made by a person who is
mentally competent, especially if they are in permanent pain and taking drugs which can
leave them mentally confused. In response, the “pro-choice” supporters tend to argue for a
“cooling off” period before the termination of life is permitted in case the individual changes
his or her mind. They also argue that people should have the right to make “advance
declarations” of what should happen in the event that they later become terminally ill and lose
their capacity to make an informed decision.
Some people, particularly in the medical profession, have introduced a distinction between
“mercy killing” and “allowing someone to die”, by which they mean not resuscitating
someone or switching off a life support system. This is sometimes described as “passive
euthanasia”, presumably because the intention is to “let someone slip away” by removing the
artificial means of supporting them rather than to actively kill them. However, others have
argued that this is a false distinction since either way the intention is still to bring about the
end of someone’s life.
Finally, some people argue that, once a society agrees to voluntary euthanasia, even with all
the safeguards, it is setting foot on a “slippery slope” that will inevitably lead to non-voluntary
euthanasia, where the deaths of terminally-ill people will be assisted even if they are not
mentally competent or physically able to give their consent. Then, so it is argued, the
possibility of euthanasia will be extended beyond the terminally ill to include people in a
permanent coma and even those who are severely disabled. Those who favour voluntary
euthanasia usually respond by asking why anyone who supports it precisely because people
should be free to choose how their lives end would ever favour people being killed without
having exercised that choice.
A variety of viewpoints
A member of the UK Voluntary Euthanasia Society (VES) – an organisation set up in
1935 by doctors, lawyers and church representatives – offers a personal view of the case
for voluntary euthanasia:
“VES campaigns to put the wishes of terminally ill patients first. A ‘good death’ is one which
complements rather than clashes with our vision of ourselves. Because we are all individuals,
we must be allowed to make choices about what, for each of us, is a good death. This requires
a two-way dialogue with our doctors, where our wishes about our own lives are respected.
Doctors must realise that, where they are unable to cure, they must offer acceptable
alternatives – alternatives which are acceptable to us. The desire to have control over our lives
is a fundamental part of our humanity.”
Tamora Langley, quoted on the BBC News website, 19 March 2002.
An alternative position is offered by a spokesman for the Pro-Life Alliance [PLA] – an
organisation set up to oppose the legalisation of any medical procedures which terminate
life, such as abortions and euthanasia. It has also contested elections as an issue-based
political party:
“The drive for legalised euthanasia shares common roots with the legalisation of abortion in
1967” [in the UK]. “Promoters of these practices take a utilitarian view of human life rather
than viewing all human life as uniquely created and deserving of absolute respect. We know
that, in order to legalise abortion, individuals began by breaking the law in order to change
it… We are being ‘softened-up’ by these heart-rending cases” [such as that of Diane Pretty]
“in order to achieve a change in the law… The PLA supports the extension of large-scale
funding of hospices that provide care for terminally-ill adults, children and infants. The
science of pain relief within the hospice movement provides the opportunity for dignified
death rather than the starving and dehydrating to death via so-called passive euthanasia.”
Mike Willis, Chairman of the Pro-Life Alliance, quoted on the BBC News website, 28
March 2000.
Rachel Hurst, the director of a pressure group, Disability Awareness in Action,
welcomed the judgment of the European Court of Human Rights in the Diane Pretty
case:
[It would be] “very wrong for justice to say in certain circumstances people can die. It would
be a slippery slope and many people who did not want to die could be affected”.
In 2005, a select committee of the British upper parliamentary chamber, the House of
Lords, considering a Bill on Assisted Dying for the Terminally Ill, argued that
palliative care for the terminally ill should always be available, but that some dying patients
did not want more care; they wanted an assisted death:
“The demand for assisted suicide or voluntary euthanasia is particularly strong among
determined individuals whose suffering derives more from the fact of their terminal illness
than from its symptoms and who are unlikely to be deflected from their wish to end their lives
by more or better palliative care[10]”.
For the last 10 years, opinion polls have consistently shown a high level of public support
for voluntary euthanasia. For example:
A survey carried out in 1996 found that:
“82% of respondents believe people suffering from painful, incurable diseases should
have the right to ask their doctors for help to die”.
Roger Jowell et al, British Social Attitudes: the 13th report (1996).
A National Opinion Poll Survey carried out in the UK in September 2004 showed that:
“82% of British people support a change in the law on assisted dying and 47% would
be prepared to help a terminally-ill loved one to die at their request”.
The General Secretary of the Royal College of Nursing [RCN], Beverly Malone, said in
September 2004, that the RCN opposed any legislation to permit assisted dying for the
terminally ill:
“The RCN policy of opposing assisted dying is crucial to protect the nurse-patient
relationship. We know that nurses deliver the vast majority of patient care and are trusted
advocates for the people they look after. Anything jeopardising that trust would undermine
the foundations of our relationship with patients and could have potentially disastrous
consequences for nursing, our patients and their families”.
In 1997, Annie Lindsell, who died later that year of motor neurone disease, went to the
High Court in the UK to establish the principle that doctors could legally administer
life-shortening drugs for the relief of mental as well as physical distress. She said at the
time that the outcome was “an important victory for patient autonomy and human rights” and
that she hoped it would mean brave doctors would no longer have to fear prosecution by the
police.
[10] Palliative care: treatment that relieves suffering but does not effect a cure.
However, Michele Wates, who has had multiple sclerosis for over 20 years, was
concerned about some of the possible implications of legalising assisted dying for the
terminally ill:
“As someone with a long-term progressive illness and as a campaigner for the rights of
disabled people, I argue that we should regard with extreme caution the language of
anti-discrimination, choice and human rights frequently used by those who promote the notion
of legalised assisted killing… If a non-disabled friend, who does not, as far as either of us is
aware, have a serious illness, were at some point in the future to become depressed and
suicidal, doctors would seek to treat them for depression. If I, as a person with a serious,
progressive illness, were to become suicidal, it would be a matter for debate… as to whether
I should be treated for depression or assisted to die at my own request. The difference is that
I, unlike my friend, could argue that I was suffering unbearably as a result of my illness and
that my illness is terminal. The argument would then come down to whether the professionals
concerned agreed with me at that point that my illness is ‘terminal’, that my ‘suffering’ is
caused by my illness, and that my suffering is indeed ‘unbearable’. In my friend’s case, there
would be no such discussions to be had… I am seeking to ensure that in the future I will have
the same assurance as my friend that, were I to become suicidally depressed, doctors would
… treat me for that depression… rather than seeing it as their task, or even their legal
obligation, to assist me in carrying out my wish to die.”
Speech given at a Conference on “Making Sense of Health, Illness and Disease” in
Oxford, England, July 2005.
What do you think?
Suppose you were a Deputy in the Assembly or a Member of Parliament and you were
approached by a non-governmental organisation [NGO] who wanted you to support their call
for legislation to permit medical staff and close relatives to assist a terminally ill person to die
without fear of prosecution for murder or manslaughter. How would you respond? What
would be the main arguments that you would introduce to support your position?
If you live in a country such as the Netherlands, where legislation already exists to permit
voluntary euthanasia, then how would you respond as a Deputy or MP to an NGO that was
asking you to support their demand that this particular Act should be repealed and voluntary
euthanasia should be made illegal once again?
KEY QUESTION THREE:
Do we have the right to freely
express ourselves in any way we wish?
Robert Stradling
The simple answer to this question is “Yes”. Unless you are physically incapable of speech or
other forms of communication, then there is nothing to stop you saying what you think, and
even if you are physically unable to express your opinions, there is nothing to stop you from
thinking them. However, what governments and other people with power or authority over us
can do is punish people for their opinions, beliefs or ideas after they have expressed them and
this may deter many of us from saying what we think.
As a result, the history of the last 500 years or so has been marked by periods when people
who did not want to conform to the established beliefs, values and ways of doing things in
their society struggled for greater freedom of personal expression. Sometimes these were
religious non-conformists struggling against accusations of heresy by the established church.
Sometimes they were scientists like Galileo, brought before the Inquisition for claiming that
the sun does not revolve around the earth. Sometimes they were artists, writers and
performers facing censorship of their work by the state or the church. Often they were
political reformers and rebels seeking to limit the powers of authoritarian monarchs and
governments.
However, there have been important moments in modern history when the right to freedom of
expression captured the imagination of ordinary people as well as political activists. The first
period when this happened was in the late 18th Century when there were popular uprisings
against despotism and arbitrary rule. In 1776, the 13 American colonies, then fighting a war
of independence, signed the Declaration of Independence, which ended their ties to the British
Crown.
Over the next decade, a debate continued between those who wanted some kind of central
government that could ensure that the 13 new states could act together to protect themselves
from foreign powers and those who feared that central government would restrict the rights
and liberties of the individual. Eventually, the US Constitution, which was agreed in 1787,
and the 10 amendments to it, which were agreed in 1791, guaranteed the rights of the
individual to freedom of speech, religion, assembly, a free press, the right to keep and bear
arms, the right to trial by an impartial jury, protection from cruel and unusual punishments,
and so on.
In 1789, the French Revolutionaries, seeking protection from despotic rulers, published The
Declaration of the Rights of Man and of the Citizen, which stated that “The free
communication of thoughts and of opinions is one of the most precious rights of man”. The
terror which followed the French Revolution eroded many of these rights in practice,
especially the right to freedom of speech. Nevertheless, the principle was now established
and became one of the cornerstones of the constitutions which were introduced through
reforms and revolutions in so many countries over the next 150 years. But, in almost every
country, there continued to be a tension between the desire of the state to fully exercise its
power and authority and the desire of ordinary citizens to exercise their right to freedom of
expression.
At no time was this tension more apparent than in the first half of the 20th Century, when
many liberal democracies ruled by constitutionally-elected governments were replaced by
dictatorships and totalitarian governments. The atrocities committed in two world wars, the murder of millions in the Holocaust because of their race, and the deaths and inhuman treatment of many others in the concentration camps and labour camps because of their nationality, ethnicity or political beliefs led governments in 1945 to seek a means of preventing such atrocities from ever happening again. In 1948, the Universal Declaration of
Human Rights was adopted by the United Nations General Assembly and, two years later, on
4 November 1950, the European Convention on Human Rights was agreed. Article 10 of that Convention states:
“Everyone has the right to freedom of expression. This right shall include freedom to
hold opinions and to receive and impart information and ideas without interference
by public authority and regardless of frontiers”.
The international conventions and treaties on human rights which emerged after the Second
World War can only be directly enforced against governments, not against individuals. As
noted earlier, they were primarily concerned with protecting the individual citizen from
corrupt, tyrannical and despotic government. But, over the last century the debate about
freedom of expression has also been concerned increasingly with protecting the individual’s
right to express unpopular opinions which are contrary to prevailing public opinion. As the
19th Century philosopher, John Stuart Mill, put it:
“If all mankind minus one were of one opinion, and only one person were of the
contrary opinion, mankind would be no more justified in silencing that one person,
than he, if he had the power, would be justified in silencing mankind”.
This brings us to a very important aspect of the whole debate about freedom of expression.
It is not likely that anyone would want to stop someone from expressing the opinion that
“flowers are beautiful” or “a walk in the park on a summer’s day is a nice thing to do” even if
they disagreed with such views. They simply would not feel strongly enough about such opinions to want to suppress them. But, if someone said something that was deliberately
intended to shock or offend a group of people or they made fun of others’ religious beliefs or
way of life, then there is a much stronger likelihood that some of these people who have been
shocked or offended might want to stop that person (and anyone else with similar opinions)
from expressing his or her views in public. Would they be right to do so?
The traditional liberal position presented by philosophers like J.S. Mill is that the only
circumstance in which it would be legitimate to limit free speech would be if the expression
of certain opinions might lead to people being harmed and their other human rights being
violated. A typical example often quoted here is that it would be a misuse of the freedom of
speech if someone ran into a theatre and shouted “Fire”, even though there was no fire, and
this then led to mass panic and people were seriously hurt. But we are now used to a number
of other circumstances in which most people would think it was right that freedom of speech
was constrained. For example, we have laws preventing people from being libelled; laws
which make blackmail a crime; laws which prevent companies from lying about their
products and laws which prevent them from advertising dangerous products to children.
Most people today would accept that there are legitimate circumstances where freedom of
expression could be restricted to prevent harm to others. But people are still divided on some
issues. For example, some people argue that pornography should be banned because they
believe that those who view it are corrupted by it. Some people would also ban television
programmes and films showing acts of violence because they believe that some viewers,
especially young people, would copy this violent behaviour. A third issue relates to what is
often referred to as “hate speech”. This is where a person or group makes a public speech
designed to stir up hatred against another person or group which could then lead to that person
or group being attacked or treated badly or their rights being violated in some way.
The problem with all three issues is that those who have supported restrictions on free speech
in such circumstances have found it difficult to establish a clear causal link between the
speech or form of expression and actual harm to others. The evidence of people being
harmed or corrupted by pornography or violent films is weak and contested by experts.
Where it is possible to make a direct link between someone’s speech and harm being done to
a group of people, then in most societies there already exists legislation which covers this. It
is usually referred to as incitement to violence or riot. The problem arises where people wish to ban all so-called “hate speech” on the grounds that it might lead to attacks on other people, even though there is no evidence that such attacks have happened, or that the perpetrator of an attack had ever listened to the “hate speech” in question.
More recently, some have argued that there are situations where freedom of expression should
be restricted not because the expression has led to actual harm being done to someone or
some group, but because the expression or speech has caused offence. They recognise that
causing offence is less serious than causing actual harm. But, nonetheless, they argue that
“offence” could be sufficient grounds for restricting freedom of expression. But what if the
people who claim to be offended are simply being over-sensitive? Suppose they claim the
right to hold views which are offensive to others but demand that views that are offensive to
them are censored? Suppose the views that are claimed to be offensive are held by many
other people to be justifiable criticisms?
All societies have some laws which restrict freedom of expression in circumstances where it
is widely thought that the expression might give offence. For example, many societies have
laws prohibiting blasphemy and sacrilegious acts; i.e. forms of expression which insult or
offend someone’s religion and religious beliefs. Most countries stop people from doing
things which might be offensive to the majority of people, such as being naked in a public
place.
Generally this “offence principle” is only applied in circumstances where offence is
unavoidable. For example, a book or a film may be offensive to some people but they do not
have to read or view it. It may present more of a problem if the film is shown on television at
a time when most people are likely to be watching. So, the decision to limit freedom of
expression on the grounds of the offence it causes tends to depend on the context: how many
people are likely to be offended, could they have avoided the offensive material, what were
the motives of the person who produced that material, and so on. In other words, there is no
universal, hard and fast rule about offensiveness. It all depends on the circumstances.
Nevertheless, in most liberal democracies today, some forms of expression have been banned,
not because they cause harm or unavoidable offence to some people but because these
expressions are not consistent with some other fundamental values in a liberal democracy.
For example, there might be a case in a liberal democracy for denying someone an
opportunity in public to make a speech which encouraged others to be intolerant of the values
and way of life of a minority, or to violate their rights as a citizen in any other way.
Many European states now have laws of this kind. For example, France has laws which
prohibit public speech or writings that deny the Holocaust or that incite racial and religious
hatred or violence against people because of their sexual orientation. Denial of the Holocaust
is also banned in Austria. Germany has restrictions on hate speech, including neo-Nazi ideas.
The Parliament of the United Kingdom has also passed laws prohibiting incitement to racial
and religious hatred.
Essentially, this means two things. First, it means that, in a liberal democracy, freedom of
expression does not have a special privilege that places it above other rights and values. But it
is an important right that needs to be protected from those who would seek to deny it to some
or all of us. Second, it means that those who wish to either deny freedom of expression to
some people on a particular matter or who wish to continue to express their views even if they
offend or demean others have to state their case. They cannot simply claim that they have a
universal right to do something (whether this is to say what they think or to protect their
religious views from being criticised by others). They must convince the rest of us that their
claims are more valid than those of the others who oppose them.
The American philosopher, Stanley Fish, has observed that, in a pluralist democracy
characterised by a diversity of groups, interests, values and ways of life, “[free speech is] not
always the appropriate reference point for situations involving the production of speech”.
Wherever the right to say what you think is challenged, we need to examine the consequences of expressing one’s views or not expressing them, and whether more is to be gained or lost by preventing those views from being expressed.
References
John Stuart Mill, On Liberty, Clarendon Press, 1980.
Stanley Fish, There’s No Such Thing as Free Speech…and it’s a good thing too, Oxford
University Press, 1994.
CASE STUDY 5: Free Speech or Religious Offence: The Case of
the Danish Cartoons mocking the Prophet Mohammed
Robert Stradling
Timeline of the issue
July 2005: Danish Muslim leaders met the Danish Prime Minister to complain about press
coverage of Islam. The Prime Minister, Anders Fogh Rasmussen, responded that the Danish Government could not tell newspapers what to print or not to print.
30 September 2005: After a Danish author, Kare Bluitgen, complained that he was unable to
find an illustrator for his book about the Prophet because of the Islamic tradition forbidding
the portrayal of his image, the Danish newspaper, Jyllands-Posten, wrote an editorial
criticising self-censorship in the Danish media. This was accompanied by 12 cartoons
produced by anonymous artists. Some of the images were not particularly critical of Islam, while some were clearly provocative, including an image of the Prophet wearing a turban that appeared to be a bomb. Other cartoons also associated Islam with terrorism.
17 October 2005: The Egyptian newspaper al Fagr reprinted some of the cartoons and
described them as insulting and “a racist bomb”. The paper predicted a public outcry, but
there were few protests at this time.
20 October 2005: Ambassadors to Denmark from 10 Islamic countries requested a meeting
with the Danish Prime Minister to complain about the cartoons.
December 2005: A delegation of Danish Muslim leaders went to the Middle East to discuss
the issue with political leaders and Islamic scholars. At a meeting in Mecca of the
Organisation of the Islamic Conference (OIC), concern was expressed at “rising hatred
against Islam and Muslims”. The cartoons were described as “desecration of the image of the
Holy Prophet Mohammed”.
10 January 2006: The Norwegian newspaper, Magazinet, reprinted the Danish cartoons.
26 January 2006: Saudi Arabia recalled its ambassador from Denmark. Libya closed its
embassy there. Boycotts of Danish goods spread from Saudi Arabia and Kuwait to other
Arab countries.
30 January 2006: Jyllands-Posten apologised for any offence which the cartoons had caused,
having previously refused to apologise. The statement said: “In our opinion, the 12 drawings
were sober. They were not intended to be offensive, nor were they at variance with Danish
law, but they have indisputably offended many Muslims for which we apologise”. The Danish
Prime Minister welcomed the apology but defended the freedom of the press.
1-2 February 2006: Some newspapers in Austria, France, Germany, Italy and Spain reprinted
the cartoons.
4-14 February 2006: Islamic protests, including attacks on Danish embassies, took place in
Afghanistan, Indonesia, Iran, Iraq, Pakistan, Lebanon and Syria.
17-19 February 2006: In the Nigerian city of Maiduguri, 16 demonstrators attacking Christian communities were killed by police. In Pakistan, police opened fire on protesters and used tear gas to disperse them. Denmark temporarily closed its embassy in Pakistan. Over a period of six months, a total of 139 protesters were killed in clashes with police and security forces, mainly in Afghanistan, Libya, Nigeria and Pakistan.
What was in dispute here?
The controversy started as an attempt by a Danish newspaper to see if Danish cartoonists
would self-censor their work for fear of reprisals by Muslim radicals. This took place within
the context of concern being expressed by some people working in the Danish media
following the murder of the Dutch film maker, Theo van Gogh, and an assault on a lecturer at
the University of Copenhagen because he had read extracts from the Koran to non-Muslim
students. At first, there was very little interest in the cartoons outside Denmark. It only
became an international news story after a group of Danish Muslim leaders went to the
Middle East to discuss the cartoons with government representatives there.
The condemnation of the cartoons at the OIC meeting in Mecca, followed by attacks on
Danish embassies and the boycott of Danish goods, made it into an international issue and
then the cartoons began to appear on the Internet and were reprinted in newspapers in some
other European countries. This, in turn, led to an escalation in the response by more radical Muslims, who saw the cartoons as yet one more example of a growing tendency in the West, since the attacks on the World Trade Center on 11 September 2001 and the subsequent “war on terror”, to demonise Islam and to portray Muslims as terrorists.
By this stage, the editors of European newspapers who were reprinting the cartoons were
either arguing that this was an important issue about freedom of expression and freedom of
the press or they were saying that, while they found the cartoons offensive or distasteful (and
would not have done what Jyllands-Posten did in the first place), they were now publishing
them because this had become a major international story of great public interest. Some
Muslim critics of Jyllands-Posten, particularly of the Sunni persuasion, emphasised that it
was blasphemous for anyone to produce an image claiming to be of the Prophet Mohammed.
Most Muslim critics, however, concentrated on the “messages” in the cartoons which they
perceived to be overwhelmingly insulting to Islam and the Prophet. While a few extremists
made death threats against the editor of Jyllands-Posten and the cartoonists and others
attacked and burned Danish embassies, others simply called on the Danish Government to
disassociate itself from the cartoons and the newspaper which had published them.
As the issue escalated, other western governments began to issue statements. One of the first
was the State Department of the US Government which criticised the cartoons as an
unacceptable incitement to religious hatred. Others were not quite so openly critical of
Jyllands-Posten but tended to adopt the line that freedom of speech and freedom of the press
were important rights but they had to be exercised responsibly and, in this instance, they
believed that those newspapers which had published the cartoons had been irresponsible in
exercising their right to free expression. This, in turn, led to criticisms from some western
observers that western governments were not doing enough to defend the rights of freedom of
expression and a free press. Interestingly, some Christian groups who had become
increasingly concerned about the decline of Christian values in the west and plays, films and
publications which they regarded as blasphemous were sympathetic to the complaints from
Muslim groups.
At the same time, divisions also emerged between liberal Muslims, many of them living in
Europe, and those who adopted a more traditional, fundamentalist position. In London, for
example, in March 2006, there was a march organised by Muslims protesting against the
cartoons and a march organised by Muslims who supported freedom of speech.
A variety of viewpoints about the issues associated with the
cartoons
An extract from the editorial in Jyllands-Posten which accompanied the cartoons:
“The modern, secular society is rejected by some Muslims. They demand a special position,
insisting on special consideration of their own religious feelings. It is incompatible with
contemporary democracy and freedom of speech, where you must be ready to put up with
insults, mockery and ridicule. It is certainly not always attractive and nice to look at, and it
does not mean that religious feelings should be made fun of at any price, but that is of minor
importance in the present context. [...] we are on our way to a slippery slope where no-one
can tell how the self-censorship will end. That is why Morgenavisen Jyllands-Posten has
invited members of the Danish editorial cartoonists union to draw Muhammad as they see
him.”
Ten Ambassadors from Arab Countries based at embassies in Copenhagen issued a
written statement on 20 October 2005:
“We deplore these statements and publications and urge Your Excellency’s government to
take all those responsible to task under the law of the land in the interest of inter-faith
harmony, better integration and Denmark’s overall relations with the Muslim World.”
The Danish Government responded by letter to the Ambassadors saying:
“…freedom of expression has a wide scope and the Danish Government has no means of
influencing the press. However, Danish legislation prohibits acts or expressions of
blasphemous or discriminatory nature. The offended party may bring such acts or expressions
to court, and it is for the courts to decide in individual cases.”
A Muslim writer based in London, Ziauddin Sardar, drew a parallel between the Danish
cartoons and the anti-Semitic images that emerged in Germany in the 1930s and went on
to argue that:
“Freedom of speech is not about doing whatever we want to do because we can do it. It is
about creating an open marketplace for ideas and debate where all, including the
marginalised, can take part as equals.”
The Secretary General of the Arab League, Amr Mousa, also drew a parallel with anti-Semitism and accused the West of double standards when it came to freedom of expression:
“What about freedom of expression when Anti-Semitism is involved? Then it is not freedom
of expression. Then it is a crime. But when Islam is insulted, certain powers… raise the issue
of freedom of expression. Freedom of expression should be one yardstick, not two or three.”
The Egyptian Minister of Foreign Affairs, Aboul Gheit, wrote to the Danish Prime
Minister and the Secretary General of the United Nations explaining that what was
needed was:
“An official Danish statement underlining the need for and the obligation of respecting all
religions and desisting from offending their devotees to prevent an escalation which would
have serious and far reaching consequences.”
Thomas Kleine-Brockhoff, the Washington Bureau Chief of the German news weekly, Die Zeit, explained his journal’s decision to reprint the Danish cartoons:
“When the cartoons were first published in Denmark in September, nobody in Germany took
notice. Had our publication been offered the drawings at that point, in all likelihood we
would have declined to print them… out of a sense of moderation and respect for the Muslim
minority in our country. News people make judgments about taste all the time. We do not
show sexually explicit pictures or body parts after a terrorist attack. We try to keep racism
and anti-Semitism out of the paper. Freedom of the press comes with a responsibility. But
the criteria change when material that is seen as offensive becomes newsworthy. That’s why
we saw bodies falling out of the World Trade Center on Sept. 11, 2001. That’s why we saw
pictures from Abu Ghraib. On such issues we print what we usually wouldn’t… To publish
does not mean to endorse. Context matters.”
A State Department spokesperson, Sean McCormack, presented the official line of the
US Government on the controversy surrounding the cartoons:
“Anti-Muslim images are as unacceptable as anti-Semitic images, as anti-Christian images or
any other religious belief. But it is important that we also support the rights of individuals to
express their freely held views.”
A journalist based in the United States, Christopher Hitchens, who tends to write from a
right wing, libertarian perspective, responded to the State Department’s statement as
follows:
“How appalling for the country of the First Amendment [protecting freedom of speech and
the freedom of the press] to be represented by such an administration.”
The French Foreign Minister, Philippe Douste-Blazy, expressed the view that:
“Freedom of expression confers rights, it is true – it also imposes the duty of responsibility on
those who are speaking out.”
Maryam Namazie, an Iranian Human Rights activist, now living in Britain, spoke at a
rally in London on 25 March 2006 on why she opposed calls to take action against those
who published the cartoons:
“Defining certain expressions and speech as sacred is merely a tool for the suppression of
society. Saying speech and expression offends is in fact an attempt to restrict it. And of
course what is held most sacred and deemed to offend the most, especially in this New World
Order, is criticism and ridiculing of religion and its representatives on earth. Why do it if it
offends? Because it must be done. Because ridiculing is a form of criticism, a form of
resistance, a serious form of opposing reaction… It must be criticised and ridiculed
because that is how, throughout history, society has managed to advance and progress.”
However, the author of a biography of the Prophet Mohammed, Karen Armstrong,
took a different view of how the modernising process takes place:
“Each side needs to appreciate the other’s point of view. I think it was criminally
irresponsible to publish these cartoons. They have been an absolute gift to the extremists - it
shows that the West is incurably Islamophobic… On the other hand, in a secular Europe,
freedom of speech has developed as one of our sacred values. We fought hard for it, but we
have to remember it carries responsibilities. For example, do we have a right to say whatever
we want even if it is false and dangerous? We are seeing here a clash of two different notions
of what is sacred and this is part of the modernising process.”
What Do You Think?
• If you had been the editor of a European newspaper, would you have published the Mohammed cartoons, or would you have regarded it as irresponsible?
• Do we have an obligation to think about how our opinions might offend someone else before we express them publicly, or is it more important that people say what they think regardless of the offence it causes and the consequences that may arise?
• Was the reaction to the cartoons in parts of the Muslim World understandable and reasonable given the offensive and insulting nature of the content, or was it out of proportion to the offence?
CASE STUDY 6: The right to march to commemorate one’s
cultural history: the case of Northern Ireland
Robert Stradling
Timeline:
The events in Northern Ireland over the 40 years of “The Troubles” (beginning in the late
1960s) cannot be understood without taking a long time perspective.
16th-17th Centuries: Henry VIII, King of England, after asserting military control over Ireland, had himself declared King of Ireland in 1541. His successors continued to increase their power in Ireland, mainly by granting land to English Protestant settlers. Attempts to establish Protestantism in Ireland led to frequent revolts by the Irish Catholic population.
1685-1690: After James II became King of England and Scotland in 1685, growing popular
dissatisfaction with him and his Catholic supporters led English Protestants to invite Prince
William of Orange to take the throne. James II fled to Ireland and organised a Catholic army
to support his claim to the British throne. Many Protestants, especially in the north of Ireland
supported William of Orange. In July 1690, William’s army defeated James’ army at the
Battle of the Boyne just north of Dublin. This battle is still commemorated today on every
July 12th when the Orangemen (as the Ulster Protestants who supported William called
themselves) march throughout the province to mark the decisive defeat of James II.
18th Century: During this period, a series of laws were passed that disadvantaged Catholics
in Ireland – they could not hold public office, vote or serve as members of parliament, enter
the legal profession, carry weapons, etc.
1801: An Act of Union was passed which abolished the Irish parliament and formally united
Ireland with Great Britain to become the United Kingdom.
19th Century: Throughout the 19th Century, there were Irish protests and revolts because of
a series of famines and because many Irish tenants were evicted from their homes by English
landlords and forced to emigrate to the United States, Canada and Australia.
1886-1914: In the second half of the 19th Century, there was a growing movement in Ireland for self-government (or “Home Rule”). After two unsuccessful attempts to pass legislation for Irish Home Rule in the British Parliament, a third Home Rule Bill was introduced in 1912. In January 1913, the Protestant Ulster Volunteer Force (UVF) was formed to resist Home Rule by force.
1913-1920: In 1913, many Irish Catholics formed the Irish Volunteers (IV) to counter any
use of force by the UVF. In 1916, some members of the IV proclaimed an Irish Republic and
seized Dublin’s General Post Office. Fighting between the IV and British forces lasted five
days. The rebel volunteers surrendered but those who had taken part in the Easter Rising
came to be known amongst their Catholic supporters as the Irish Republican Army (IRA). In May 1916, 15 of the captured rebels were executed; the others were imprisoned.
1920-1922: The British Government divided Ireland into two areas (partition) and
introduced a separate parliament for each. The parliament in Dublin served 26 Irish counties,
mainly in the south and mainly Catholic. The parliament in Belfast served six northern
counties, where the majority of Protestants lived. The 26 counties formed the Irish Free State.
The other six counties remained in the United Kingdom. Violence broke out in the north as
Catholics showed their opposition to partition. As British forces left Ireland, a year-long civil
war broke out within the Irish Free State between those who supported partition and those
who wanted a united Ireland.
1968-69: In reaction to ongoing discrimination against Catholics in Northern Ireland, a Civil Rights Association was formed, influenced by the similar movement of black activists in the USA at that time. Several civil rights marches were held which were broken up
by the police with excessive force. In 1969, the marches organised by the Protestant Orange
Order, used by some to also express their opposition to the civil rights movement, led to riots
and also to counter-demonstrations. The Northern Ireland Government called for British
troops to be sent in to put down the riots. Barricades were set up in the Catholic area of Derry
and, to avoid bloodshed, the British troops took no action to remove them. The barricaded
areas came to be known as “no go areas”. At the same time, the IRA split into two factions:
the Official IRA and the more hardline Provisional IRA.
January 1972: A march organised by the Civil Rights Association, protesting against the British Government’s introduction of internment without trial for suspected paramilitaries, was held in Derry even though it had been banned by the government. British soldiers manned barricades to prevent the march reaching the city centre. In a confrontation, the troops opened fire and killed 14 protesters. This came to be known as “Bloody Sunday”. After this,
Catholic support for the Provisional IRA increased dramatically and soon after the Northern
Ireland government was suspended and the province went back to being ruled directly from
London.
November 1974: The Provisional IRA launched a bombing campaign in Northern Ireland and in the rest of Britain, and the British Government responded by introducing a Prevention of Terrorism Act which allowed suspects to be detained without charge for up to seven days.
This was followed throughout the 1980s by protests and hunger strikes by IRA prisoners.
More violence broke out in the province by both the IRA and Ulster Loyalist groups such as
the UVF.
1996-1998: Peace talks were held, chaired by the US Senator George Mitchell. Eventually the IRA announced a ceasefire and, after prolonged discussions, a Peace Agreement was reached at Easter 1998, which was supported by a large majority of the electorate in a referendum.
1997: Because one of the main sources of tension between the republican nationalists and the loyalists was the marches and parades through the other community’s area to commemorate key events in their history, the British Government set up the Parades Commission, a quasi-judicial organisation, to decide whether or not restrictions should be imposed on any marches or parades.
What was in dispute here?
The roots of the bitter troubles which have so affected people’s lives in Northern Ireland over
the past 40 years are deeply buried in Ireland’s past: the seizure of land in Ireland by Norman-English nobles in the 12th and 13th Centuries; the divisions which emerged after the English
Tudor monarchs claimed the throne of Ireland and then encouraged English protestants to
settle there to help control the Catholic Irish population; the later wave of settlement by
Scottish Protestants in the province of Ulster in the north of Ireland; the defeat of the Catholic
King James II in 1690 at the Battle of the Boyne, 20 miles to the north of Dublin, by the
forces of the Protestant Prince William of Orange who had supplanted James on the British
throne; the centuries of anti-Catholic discrimination which then followed; and the division of
Ireland into two parts in 1922, with six counties remaining in the United Kingdom and the
others forming the Irish Free State which later became the Republic of Ireland.
All these events and developments contributed to the emergence of two communities in
Northern Ireland with very different world views: the majority (but only in the north) who
were Protestant, pro-Unionist (i.e. supporting the Union with the rest of the United Kingdom)
and loyal to the British crown; and the minority (but part of the majority in the rest of Ireland)
who were Catholic, republican and Irish nationalists who believed in a united Ireland.
For over 300 years, these two world views have influenced where people live, worship, go to
school and university, work and meet, and, in the last 100 years, they have also strongly
influenced how they vote and who they vote for. As the BBC Ireland correspondent, Denis Murray, once put it:
[The victory of William of Orange in 1690] “was more than 300 years ago but might
as well be the day before yesterday in Northern Ireland terms”.
Marches and parades have long been a way in Northern Ireland through which the past has
been commemorated, particularly by the Protestant loyalists (the Catholic nationalists have
also used marches as a form of protest). Every year, for example, the Orange Order, with a
membership of over 75,000 Protestants, organises marches around “The Twelfth” to
commemorate William of Orange’s victory on 12 July 1690 and other significant events in
Protestant history, such as the apprentice boys of Derry closing the city gates against James
II’s army.
Throughout the winter, the Orange Order Lodges drill and rehearse for the marching season in
July, when thousands of men dress up in their best dark suits, bowler hats, furled umbrellas
and orange sashes and march behind a pipe and drum band, singing traditional Ulster
Protestant songs such as “The Sash My Father Wore” and “The Billy Boys”, along a route
which had been decided as long ago as the late 18th Century.
In 1996, one of the oldest parades, first held in 1807, became a flashpoint which led to trouble
across the province. For many years, the Portadown Orange Order had marched from its
Lodge in the town centre out along the Obins Road to the parish church in the village of
Drumcree and then returned to the town centre along the Garvaghy Road. Protestants saw
the march as a traditional expression of their culture while most Catholics either ignored the
parade or went on holiday for the weekend. But attitudes hardened in the 1960s. Until then
the Garvaghy Road had been a country lane but, in the late 1960s, a housing estate was built
along the road with homes for about 6,000 people, mostly Catholics. The Obins Road had also
become a predominantly Catholic residential area.
At the same time, Catholic frustration against the discrimination they were experiencing
boiled over into protests and then violence with paramilitary activity intensifying on both
sides. At the height of the Troubles, the march continued, but with a massive police and
army presence and opposition from the nationalists. In 1996, the Catholic residents of
Garvaghy Road and Obins Road again demanded that the Orange march should be re-routed to avoid their estates and, this time, the police agreed and laid down an alternative
route for the march both to Drumcree Church and back into Portadown. A security cordon
was placed across the Garvaghy Road to prevent the marchers from entering but loyalist
groups attacked the police and this also sparked off trouble elsewhere in Northern Ireland.
As a direct result of the violence and ongoing tension, in 1997 the British Government set up an independent, quasi-judicial, non-governmental body in Northern Ireland: the Parades Commission. It was given the power to set conditions on any parade or march if it was anticipated that the event could lead to disorder, conflict or tension in the area where it was planned to take place. Any decision taken by the Parades Commission is legally binding on the marchers and their organisers and on the residents of the areas where the marchers are due to parade.
The Parades Commission upheld the decision to ban Orange marches along the Garvaghy
Road. Each year, since then, the organisers of the parade seek permission to march along
their traditional route and each year the Commission refuses permission. Each year, the
marchers leave Drumcree Church and march down the hill to a bridge which leads to
Garvaghy Road and there they are stopped by the security forces.
Since 1997, the Parades Commission has also re-routed marchers elsewhere in Northern
Ireland or introduced other restrictions. Their decisions have often been seen by one side or
the other as contentious. Perhaps one of the most controversial of these was a parade in the
Whiterock area of West Belfast which, traditionally, had involved the Orangemen marching
from the loyalist Shankill Road through a barrier in Workman Avenue (which separated the loyalist community from the nationalist community) and then along the mainly nationalist Springfield Road.
The Parades Commission decided in 2005 to prevent the Orange Order from gaining access to
the Springfield Road and this led to police officers being fired on and attacked with petrol
bombs and blast bombs. It was also followed by rioting and violence in loyalist and
nationalist areas across Belfast. The total cost of the disturbances was estimated to be €4.5
million. In 2006, the Commission decided to allow just 50 marchers from one Orange Lodge
to parade down the Springfield Road while the rest of the parade would go through the
alternative route used in 2005. Although this decision was criticised by some loyalists and
nationalists, the parade passed off peacefully.
When marches or parades are disputed, both sides assert that their rights are being infringed.
Those who want to march claim that it is part of their right to assemble, their right to freedom
of expression, and their right to freedom of thought and religion. At the same time, people
who do not want marches or parades to pass through the community where they live claim the
right to freedom from intimidation and harassment and the right to privacy. In situations like
this, human rights documents such as the European Convention on Human Rights and the UN International Covenant on Civil and Political Rights can provide a framework for
debate but do not, in themselves, provide a solution.
The Northern Ireland Parades Commission suggests one possible way of trying to resolve such conflicts, but its effectiveness depends both on the willingness of the community as a whole to accept a compromise and on a perception on all sides that the Commission is impartial and independent of interference by government or by any of the parties involved.
A variety of viewpoints about the issues
The leader of the Social Democratic and Labour Party (SDLP) in Northern Ireland (which gets most
of its electoral support from the Catholic nationalist community) gave his view in 1994:
“People [in Northern Ireland] don’t march as an alternative to jogging. They do it to assert
their supremacy. It is pure tribalism, the cause of troubles all over the world.”
Drew Nelson, The Grand Secretary of the Orange Order in Northern Ireland and a
member of the Northern Ireland Assembly, gave a different perspective:
“The marches are a celebration of our continued survival as a community in this island and of
our freedom to express our culture in this way.”
The Chairman of the Northern Ireland Parades Commission, Roger Poole, gave his
reasons for continuing to restrict the Orange Order parade in Portadown as follows:
“It is our hope that an accommodation can be reached in Portadown which will bring long-term stability… Currently, we believe that a parade along the Garvaghy Road may serve to destabilise the situation in that area. We are committed to facilitating a process of dialogue and mediation which I genuinely believe can help resolve this long running issue.”
A spokesperson for the nationalist Garvaghy Road Residents’ Association, whilst
welcoming the Commission’s decision to continue to restrict the route of the Orange
Order march in Portadown, still questioned the neutrality of the Commission in 2006:
“Few, if any, people expected the present commission to overturn almost eight consistent rulings prohibiting Orange marches from Garvaghy Road… However, this decision does not alter the fact (the widely held view amongst the nationalist community) that this Commission is clearly imbalanced in terms of make-up and representation.”
In 2005, the Deputy Leader of the Democratic Unionist Party, Peter Robinson, publicly
stated that unionists should not support the Parades Commission:
“This unelected and unaccountable Quango¹¹ has made inconsistent determinations, punished
those who obey the law by banning their parades and thus rewarded those who engage in
violence and has encouraged dialogue and then thrown it back in people’s faces.”
A spokesperson for the Orange Order, when asked about the violent response by
loyalists to the Parades Commission’s decision to stop marches from going down the
nationalist Springfield Road in West Belfast in 2005, said:
“It’s the frustration of Protestant people as to what they can do to have their ordinary voice
heard. We just feel so frustrated that there is a cultural veto through the Parades Commission
for the republican, nationalist community.”
After the 2006 parade in the Whiterock area of West Belfast passed off peacefully, some
community representatives welcomed the new spirit of compromise and conciliation but
others, whilst relieved that the previous year’s violence had not been repeated, still believed
that their side had had to make more concessions than the other:
Tommy Cheevers, a member of the North and West Belfast Parades Forum said:
“If we can achieve a peaceful summer we’ll have played our part…One side cannot have a
veto and it has to be done in a spirit of compromise and accommodation, it can’t be just one
side’s story told here.”
The leader of the Ulster Unionist Party, Sir Reg Empey, responded:
“Congratulations are due to a number of people whose work has contributed to a peaceful
Whiterock parade which….could result in a peaceful summer for 2006.”
¹¹ A Quango is a quasi-autonomous non-governmental organisation – a term often used by critics who believe that a certain NGO is not impartial and has been set up to do the government’s bidding.
Drew Nelson, spokesperson for the Orange Order, talked of Protestant concessions:
“It was a hard decision …to limit the numbers [of marchers] walking through the Workman
Avenue gates on to the Springfield Road, but nevertheless [the Orange Order was] prepared to
make sacrifices”.
Tom Hartley, a Sinn Féin councillor in Belfast (Sinn Féin being the party with the strongest nationalist support), saw the Parades Commission’s decisions as biased:
“It will not be lost on the wider nationalist community that, in the first test for the Parades
Commission, which of course includes Orange Order members and sympathisers, they have
decided to directly reward last year’s violence and intimidation with a parade along the
Springfield Road.”
What Do You Think?
Are there any circumstances where a community – whether it represents the majority or a
minority - should NOT be allowed to exercise its cultural rights by publicly celebrating its
history and cultural traditions?
Would you hold the same opinion on this issue regardless of whether the community wishing
to celebrate its history and cultural traditions was the majority or a minority?
If a minority whose members were born outside the country (or whose ancestors came from another country) wished to celebrate their cultural traditions by marches and public events, AND some representatives of the majority population did not approve of this, should they still be allowed to go ahead with their march?
KEY QUESTION FOUR: Does everybody have the right to live
where they wish?
Christopher Rowe
The simple answer to this question would seem to be “Yes”. It can be argued that it is a basic
human right to be left alone to live in peace in the place you regard as home. It is plainly
wrong to force people to leave their homes and to deny them the right to return there.
Democratic societies always have the responsibility to ensure freedom of movement and
freedom from fear. And yet the issue is not a simple one. Most legal rulings have derogations to allow for special circumstances¹². There are also many important practical and moral
factors limiting the freedom to live where you choose. Above all, there are conflicts between
rival freedoms – when one person’s choice of where to live involves the denial of the same
choice to others.
The history of the last 500 years or so has been marked by many instances of people being
uprooted from their homes as the result of violence that was either ordered by the government
or was something that was allowed to happen because the government was guilty of failing to
protect its citizens. The Jews were expelled from Spain by government decree in 1492. Many
of their descendants were expelled once more when their new home, Salonica, came under
Nazi rule after 1941. Many Huguenots were forced to emigrate from 17th Century France.
Petty criminals were transported to Australia from 18th Century Britain. Tsarist Russia
sentenced thousands of prisoners to be sent to Siberia. Stalin later sent millions to the gulags.
In the 1940s, Stalin forcibly deported whole communities such as the Crimean Tatars.
More than five million European Jews died in the Holocaust. Millions more were forcibly
displaced from their homes and fled elsewhere. Some 700,000 Palestinians were forced into exile
after the formation of the state of Israel in 1948 – 50 years later, there were nearly four
million Palestinian refugees claiming the right of return. In the Balkan Wars of the 1990s,
huge numbers of people became victims of “ethnic cleansing”. The so-called Marsh Arabs of
southern Iraq were deliberately displaced by Saddam Hussein’s policy to drain the marshes
that were the basis of their existence.
The moral lessons to be drawn from cases such as these seem to be clear-cut. The people who
were displaced were being denied their basic freedom to live where they wished, along with
other basic freedoms such as religious belief and cultural identity. It seems easy to identify the
innocent victims and to condemn the evil perpetrators.
However, there have been important moments in history when displacement was a by-product of wider upheavals rather than deliberate policy, and where the moral position is less clear.
At the end of the Second World War, for example, millions of refugees moved westwards as
the Red Army invaded Germany. The political borders of the Soviet Union and Poland were
moved far to the west and many ethnic Germans became homeless. After the changes in
Eastern and Central Europe in 1989, it was difficult to argue that the right of these ethnic
Germans to return to their old homes should override the rights of the Polish people who had
been living there for two generations. In the same way, the strong moral case in favour of a
safe national homeland for Jews conflicts directly with the rights of displaced Palestinians.
¹² The term ‘derogation’ is used where a law, regulation or legal ruling does not apply under certain specified conditions. For example, a government may sign an International Agreement or Convention which states that people cannot be detained indefinitely without trial but there may be certain exceptions, such as the internment of foreigners from a country with which the host country is at war.
It is also true that governments can sometimes have valid reasons to force people to move
against their will – to make way for something that will benefit the wider community, such as
burying a valley under a new reservoir, or demolishing houses to make way for a new road or
for vital new housing development. In such cases, it is plainly important to balance the rights
of individuals or small groups against the rights and economic interests of the majority. There
cannot be an absolute right to stay where you have always lived; or to go back to the home
from which you or your forebears were displaced.
Equally, it is not possible to claim an absolute right to choose to move to make a new home
elsewhere. The core of the problem is the conflict between the rights of incomers and the
rights of those already living there. Such conflicts were especially stark when self-styled “civilised” peoples colonised supposedly “primitive” lands, producing rival claims to the same land. One example of this is the settlement of the American West in the
19th Century by pioneers (many of them escaping from persecution) at the expense of the
indigenous American Indians. Another is the displacement of the aboriginal people of
Australia to make way for British settlers.
Both these examples reveal the ways in which moral perceptions change over time. When the
flood of immigration into Australia and the American West began, most Europeans believed
that the settlers were bringing the benefits of modern progress to “empty” or “backward”
territories. In recent times, there has been much greater appreciation of the rights and values
of indigenous peoples; and much more criticism of the harmful consequences of their
displacement.
Other issues are less dramatic but no less difficult to resolve. Some places have to be
protected from the undesirable effects of development and population growth. This can apply
to historic towns, or areas of outstanding natural beauty, or nature reserves protecting wildlife.
Sometimes, governments restrict the influx of people because of worries that too rapid
population change might lead to social tensions.
Most people today, therefore, would accept that there are legitimate circumstances where
freedom of movement could be restricted to prevent harm to others. But people are still
divided on some issues. For example, international law requires that asylum-seekers must be
accepted and absorbed into democratic societies and many people would argue that there is a
moral imperative to protect innocent victims from persecution. However, others, including
some European governments, increasingly claim that it can sometimes be difficult to
differentiate the genuine asylum seeker from the economic migrant who claims to be fleeing
persecution. They usually go on to argue that immigration must be strictly controlled in order
to prevent excessive social and economic pressures on the host communities from destroying
social cohesion. These issues are made more complicated and more intense by questions
relating to ethnic and cultural differences.
Many people argue that the traditional approach to asylum-seekers was based on relatively
small, and thus manageable, numbers; and belonged to a time when it was much more
difficult to travel vast distances. In recent times, they argue, the mass movement of refugees
caused by regional wars, as well as the much greater ease of international travel, has produced
a situation in which it is simply not possible to extend protection to all those under threat.
This problem has been intensified by the emergence of illegal trafficking of people by
unscrupulous organisations.
As we have already seen, the debate about asylum-seekers is also bound up with economic
migration. The vast gulf between wealthy and poverty-stricken societies means that there is a
huge and growing number of economic migrants willing to take great risks in order to gain
jobs and homes in affluent developed economies. Many people claiming to be asylum-seekers
are actually pursuing economic advancement – they will travel through many countries where
they could be safe in order to reach the destination they feel would provide the maximum
economic opportunity.
Economic migration is a classic example of the balance between the rights of the individual as
against the rights of the community. Many economists argue that economic migrants
invariably bring benefits to the societies that accept them. The dynamic economic growth of
the United States in the second half of the 19th Century and the first half of the 20th Century is
one example of this. The growth and economic success of the European Union owes much to
freedom of movement.
Yet arguments such as this do not lead to the conclusion that people have an absolute right to
live where they wish. Rather, they suggest that a balance exists between those wishing to
move and those considering whether or not to receive them. Almost all governments have
established systems to manage the process of economic migration. These systems include
quotas to limit overall numbers; and differentiation between prospective migrants according
to the skills that are needed most.
There are also sensitive political difficulties. Questions of immigration are easily mixed with
questions of ethnic or cultural differences. Those opposed to immigration are frequently accused (sometimes, but not always, justifiably) of opposing it for motives based on prejudice and discrimination. This has made it very difficult for politicians and journalists to address immigration policies for fear of being accused of xenophobia.
One particular issue concerns the right of the host country to deport illegal immigrants and
failed asylum-seekers, who frequently appeal against their removal and claim protection
under the European Convention on Human Rights. Such appeals are often lengthy, expensive
and politically controversial. In recent years in Britain, for example, influential pressure
groups have emerged. On one side, human rights groups speak up for the rights of those
facing deportation; on the other, Migrationwatch UK campaigns for tougher controls and for
Britain to opt out of the ECHR. Similar concerns about immigrants possibly undermining
social cohesion have been expressed in France, Spain and Italy.
This brings us to a very important aspect of the whole debate about freedom of movement and
the right to live where you choose. In an ideal world, it would be relatively easy to agree on
the general principle that people should be left alone to live in peace and should have the
freedom to travel in peace. It can be claimed that:
“Everyone has the right to freedom of movement, without interference by public
authority and regardless of frontiers”.
But there are clearly practical considerations that make this principle impossible to sustain.
The rights of one individual or group cannot be upheld always and everywhere, regardless of
circumstances. It can be claimed that:
“If all mankind wished to live in the most beautiful place in the world, it would lose
its beauty and become despoiled by overcrowding, pollution and conflict”.
It would appear that defining the right to live where you wish, as with so many rights, can
only be based on a balance between rights and realities. Most people would agree that
individuals should be able to live in their own way wherever they wish. But most people
would also agree that it is the responsibility of governments to ensure economic prosperity
and social cohesion. Striking the right balance between the rights of the individual or of
minority groups on the one hand, as opposed to what might be the “tyranny of the majority”
on the other hand, remains endlessly complex and difficult to achieve.
Essentially this means two things. First, it means that, in a liberal democracy, freedom of
movement and residence does not have a special privilege that places it above other rights and
values. But it is an important right that needs to be protected from those who would seek to
deny it to some or all of us. Second, it means that those who wish to deny freedom of
movement have to prove their case. Governments and pressure groups cannot simply claim
that they have a universal right to remove people or to prevent them from moving to new
homes because they feel like it, or because their country is “full”. They must provide
convincing, valid reasons for denying the rights of others.
References
The Council of Europe, The European Convention on Human Rights, Rome, 1950.
Norman Naimark, Fires of Hatred: Ethnic Cleansing in Twentieth-Century Europe, Harvard University Press, 2001.
Migration Citizenship Education, Forced Migrations: From Lausanne to Yugoslavia, www.migrationeducation.org
CASE STUDY 7: Political refugees or economic migrants?
Europe’s changing response to immigration
Robert Stradling
Timeline:
1st Century AD: Failed rebellions against the Roman Empire led to many Jews fleeing their
homeland and settling round the Mediterranean and in Central and Eastern Europe
1492: Jews were expelled from Spain after the Christian Reconquista.
1618-1648: The Thirty Years War sharply reduced the population of Central Europe and left millions seeking refuge from devastation.
1845-1847: Famine in Ireland caused mass emigration. During the next fifty years, millions
of economic migrants left Europe for North America and other overseas destinations.
c1890-1914: Political and religious persecution in Central and Eastern Europe caused mass
emigration of Jews and other groups of religious and political refugees.
1914–1920: The First World War caused enormous migrations of people. Many soldiers
were trying to get home and millions of civilians, forced from their homes by the fighting, had
become refugees. The Russian Revolution in 1917 and the collapse of the Ottoman and
Austro-Hungarian empires also added to the millions on the move.
1921-1923: The League of Nations, established by the Peace Conference after the war, set up
the High Commission for Refugees led by the Norwegian polar explorer, Fridtjof Nansen, to
assist the many Russian, Armenian, Assyrian, Turkish and Greek refugees who were
displaced by war, revolution and the political changes which were taking place immediately
after the war.
1930: The High Commission for Refugees was replaced by the Nansen International Office
for Refugees. It introduced the international Nansen Passport for refugees and stateless
citizens and, in 1933, persuaded 14 of the League of Nations member states to sign the
Refugee Convention – the first attempt to establish international human rights on the
treatment of refugees.
1933-1939: The rise to power of the National Socialists in Germany caused a rapid increase in refugees, prompting the League of Nations to create a special High Commission for Refugees Coming from Germany in 1933. The scope of the High Commission was extended to Austria and the Sudetenland late in the 1930s.
1938–1939: Hundreds of thousands of Spanish Republicans fled to France after being
defeated by Franco’s Nationalist forces.
31 December 1938: The Nansen Office and the High Commission were replaced by the
Office of the High Commissioner for Refugees.
1939-1945: By the end of the war, Central and Western Europe was full of refugees. Many were soldiers and prisoners of war trying to get home. But even more were civilians fleeing the invading forces, and national minorities leaving the countries where they had settled before reprisals could be taken against them. In all, 50 million people were left homeless by the war, of whom 25 million were in the USSR and 20 million in Germany.
1943: The Allied forces created the United Nations Relief and Rehabilitation Administration
(UNRRA) to provide help to the refugees and displaced persons in the areas being liberated
from control by Axis forces.
1945-1947: United Nations established. In 1946, the UN set up the Commission on Human Rights which, in 1948, produced the Universal Declaration of Human Rights. The International
Refugee Organization (IRO) was set up by the UN to finish the work of the UNRRA in
resettling European refugees after the war.
1947: The partition of the Indian sub-continent into India and Pakistan created 18 million
refugees as Muslims living in the new India were exchanged with Hindus and Sikhs living in
the new Pakistan. Twenty-five years later, many Bengalis rebelled against Pakistani rule and over 10 million of them sought refuge from the fighting in neighbouring India.
1948-1949: The proclamation of the State of Israel led to the first Arab-Israeli War, which drove hundreds of thousands of Palestinians to seek refuge in neighbouring Arab states. Many of their descendants still live in the refugee camps that were created in 1948. Before 1948, there were over three quarters of a million Jews living in Arab states, descendants of people who
had lived in the region for over 2,500 years. After the first Arab-Israeli War, many of them
also became refugees.
14 December 1950: The UN set up the Office of the High Commissioner for Refugees
(UNHCR) to take over the work of UNRRA and the IRO not just in Europe but around the
world. The work of the UNHCR continues today.
1950-1953: The Korean War created over one million refugees.
1951: The UN issued the UN Convention Relating to the Status of Refugees (the Geneva
Convention) which established the circumstances under which a person qualified as a
refugee and proclaimed the rights which accompanied that status.
1960s: Thousands of non-Communist Chinese migrated to Hong Kong.
1975-1980: When South Vietnam fell to North Vietnamese communist forces, many refugees
tried to escape by boat, which gave rise to the phrase “boat people” at that time. Most
eventually emigrated to the United States, Canada and France.
1960–1990: The colonisation of Africa in the second half of the 19th Century created administrative borders that often grouped together peoples with different tribal affiliations, languages and customs. After independence was achieved in the 1950s and ‘60s, most of the newly created
countries were ruled by Western-educated elites who had embraced the idea of nationalism
and wanted to create nation-states along western lines. This often led to internal conflict and
civil wars and, by the mid-1980s, 12 wars had been fought in Africa and 13 heads of state had
been assassinated. From 1968 to 1992, the number of African refugees increased from 860,000
to 6,775,000. Many of them sought asylum in neighbouring African countries.
1990s: A decade when conflict around the world created millions of refugees. In Europe,
there were the refugees from the conflicts in the former Yugoslavia. In the Middle East, there
was the fall-out from the Iran-Iraq War and then the First Gulf War arising from the Iraqi
invasion of Kuwait and also the conflict in Afghanistan.
2000-2005: Since the start of the new millennium, the Middle East and Africa have continued
to be the main sources of refugees seeking asylum in Europe, particularly refugees from Iraq
and Afghanistan and from the conflicts in the Sudan, Lebanon, Angola, Somalia and Rwanda.
What is in dispute here?
In 2005, there were 8,661,994 people in the world who were officially classified as refugees
under the 1951 UN Convention relating to the Status of Refugees. That is, anyone who
“owing to a well-founded fear of being persecuted for reasons of race, religion, nationality,
membership of a particular social group, or political opinion, is outside the country of his
nationality, and is unable to or, owing to such fear, is unwilling to avail himself of the
protection of that country”.
Since 1951, the categories of people who are now entitled to the protection of the United Nations High Commissioner for Refugees (UNHCR) have been expanded to include stateless
persons (who do not have a recognised nationality) and displaced persons seeking to return to
their country of origin or who are displaced within their own countries due to war or civil war.
If we include all of these categories, and the people who are applying for political asylum in
another country, then the total number of persons about whom the UNHCR was concerned in
2005 was just over 21 million. At that time, 772,592 people were asylum seekers and of these
around one third were seeking asylum within the European Union.
Where do all these refugees come from? In the 1990s and in the first five years of the 21st
Century, the largest groups of refugees were escaping conflict in:
Africa: particularly from Angola, Eritrea, Liberia, Rwanda, Sierra Leone, Somalia
and Sudan.
Middle East: particularly from Afghanistan, Iraq and Lebanon
Europe: particularly from Bosnia and Herzegovina, Croatia, Serbia and Montenegro.
In addition, many other refugees were fleeing from countries that were stable but where they
were suffering persecution and the constant abuse of human rights.
Most refugees tend to seek sanctuary in neighbouring countries where they will find a similar
way of life and people who speak the same language or practise the same religion. This is
very clear if you look at Table 1, which shows the 12 countries with the largest numbers of
refugees in 2005. Countries such as Pakistan, Iran and Saudi Arabia received most of their
refugees from Afghanistan, Iraq and the Occupied Palestinian Territories. Albania received
most of its refugees from Kosovo. The refugees to Chad, Kenya, Tanzania and Uganda
mainly came from neighbouring African countries. The United States now takes many of its
refugees from Latin America. Over the last decade, Germany and the United Kingdom, on
the other hand, have received refugees from a wider variety of sources: the former
Yugoslavia, the Middle East, Africa, Asia and the former Soviet Union.
Table 1: Countries which had received the largest numbers of refugees by the end of
2005

Rank order   Country receiving refugees   Number of refugees by the end of 2005
1            Pakistan                     1,084,694
2            Iran                           974,302
3            Germany                        700,016
4            Tanzania                       548,824
5            United States                  379,340
6            United Kingdom                 303,181
7            China                          301,041
8            Chad                           275,412
9            Uganda                         257,256
10           Kenya                          251,271
11           Saudi Arabia                   240,701
12           Armenia                        219,271
Although the definition of a refugee is very clear in the 1951 Geneva Convention and
subsequent UN protocols and conventions on the status of refugees, there has been a growing
tendency, within the European Union, particularly in the West European member states with a
long history of offering asylum to political refugees, to blur the distinction between “asylum
seekers” and “economic migrants”. This tendency to use the two terms as if they were
interchangeable is most apparent in the mass media but it has also increasingly emerged in the
public statements of many politicians [see e.g. the quotes below from the British press and
from Nevzat Soguk].
It is not difficult to see why the blurring of this distinction has happened. It is not surprising,
for example, if those responsible for border control question why someone travels thousands
of kilometres to Western Europe to seek asylum when most of his or her compatriots seek
refuge in a neighbouring country. Similarly, the statistics for asylum applications over the
last 30-40 years tend to show that they decrease during economic recessions and increase
during periods of economic growth, which suggests that, while applicants may be escaping
war or political persecution, their choice of asylum country is also economically
motivated. In the United Kingdom, for example, applications were low in the 1970s and
early 1980s but increased dramatically from 3,998 in 1988 to 44,840 in 1991 to 98,900 in
2000 and peaked at 103,080 in 2002. However, while this rapid increase in asylum
applications certainly coincided with a period of economic growth, it is also undeniable that it
coincided with conflict in the former Yugoslavia, the Middle East, and in several African
states.
The United Kingdom illustrates how this distinction between political refugees and economic
migrants has become blurred across Europe. In the UK, the blurring also
coincided with an increasingly restrictive policy on other immigrants. After the Second
World War, faced by a shortage of unskilled labour, the British Government offered free entry
and citizenship to people from its former colonies (now the Commonwealth). Those
privileges came to an end through a series of legislative changes introduced by various British
Governments between 1962 and 1993.
In the 1960s, the political debate in the UK about immigration focused on the perceived threat
which immigration from the Indian sub-continent, the Caribbean and Africa posed to racial
harmony. In support of this view, policy makers pointed to so-called “race riots” in some
British cities and the rise of right-wing political parties and movements who were opposed to
any further non-white immigration. In 1972, the European Commission publicly criticised the
UK legislation on immigration as “racially discriminatory”.
From the mid-1970s onwards, UK Governments – and governments of other ex-colonial
powers in Europe - began to use a two-fold argument for denying entry to immigrants,
particularly from Asia and Africa. First, there was the risk of intensified racial conflict.
Second, there was the argument that political refugees were choosing to seek asylum in
Western Europe – often after travelling through other countries that could have offered them
asylum – for economic reasons.
At this point, the focus in the public debate on political asylum shifted from the reason why
people needed to leave their country of origin to the reason why they wanted to enter another
country. The popular press in the UK (as can be seen from the quotes below) began to equate
the term “asylum seeker” with “economic migrant” and then started adding the adjective
“bogus”. As the numbers of asylum applications continued to rise throughout the 1990s, the
popular right-wing press ran a campaign claiming that Britain was “a soft touch”, i.e. a
country which put few obstacles in the way of would-be asylum seekers.
In practice, while the UK received the highest number of asylum applications in the European
Union in 2000, over 64% of them were rejected. By 2005, the rejection rate had risen to 85%
and the number of applications that year had fallen to 23,750. Perhaps it is not surprising that
one British observer noted that, if this was evidence of “a soft touch”, then it must be “an iron
fist in a velvet glove”.
By 2001-2002, a set of myths had emerged about “typical asylum seekers” and the London-based Public Information Office of the UNHCR found it necessary to issue a briefing to
counter these myths. To counter the popular myth that Britain was being “flooded” with
bogus asylum seekers, the UNHCR pointed out that the number of political refugees accepted
was only 0.5% of the UK population. To counter the myth that “Britain was top of the
Asylum League”, they pointed out that the UK then ranked ninth within the EU in terms of
applications per head of population.
Since the enlargement of the European Union, the debate has shifted once again. There is
now less emphasis in the popular right-wing press on asylum seekers – bogus or genuine –
and rather more emphasis on the numbers of migrant workers from Eastern Europe seeking
employment in Britain and whether they are pricing British workers out of the job market by
working for lower wages.
A variety of viewpoints about the issues
Article 33 of the 1951 Geneva Convention on Refugees states that: “No Contracting State
shall expel or return a refugee … to the frontiers of territories where his life or freedom would
be threatened on account of his race, religion, nationality, membership of a particular social
group or political opinion”.
According to research carried out for the UK Government in 2001, around 53% of refugees
to Britain have academic qualifications while over 65% speak two languages as well as their
mother tongue. The same research [Glover et al] showed that people born outside the UK,
including refugees, contributed more to the economy in taxes and national insurance than they
consumed in benefits and public services. The net gain to the UK economy from the
immigrants of employable age in 1998-99 was approximately €4 billion. [Glover et al,
Migration: an economic and social analysis, Research, Development and Statistics
Directorate, Home Office, UK 2001]
Liza Schuster, Centre on Migration, Policy and Society, Oxford University writes:
“In seeking to assert control over their borders, European states have developed regimes, sets
of practices that once would have only been possible in war-time, but that today are
considered ‘normal’, part of the everyday experience of hundreds of thousands of people
across Europe. The practices selected include forcible dispersal, detention and deportation.
….For the governments who have introduced them……[these are] necessary instruments in
pursuit of a government’s responsibility to maintain the integrity of its borders. However,
these measures only seem reasonable until one imagines using them against one’s own
population.”
Sarah Spencer, Director of Citizenship and Governance at the Institute of Public Policy
Research in the United Kingdom:
“The lesson of history is that immigrants and refugees can bring significant benefits,
economic and cultural. While public debate on this issue is yet again dominated by proposed
legislation to impose ever tighter restrictions, it is a lesson that appears to have been lost.”
In January 2003, the UK Prime Minister, Tony Blair, suggested that Britain might
withdraw from its obligations under the European Convention on Human Rights
if its “latest wave of asylum reforms failed to stem the flow of unfounded asylum seekers”.
The UK Prime Minister from 1979 to 1990, Margaret Thatcher, expressed her
concern about immigrants and refugees:
“People are rather afraid that this country might be rather swamped by people of a different
culture”.
The United Nations High Commissioner for Refugees (UNHCR), commenting on proposals
by the EU to control immigration more effectively:
“As a result of [EU member states’] increasingly restrictive immigration policies, resorting to
the services of smugglers has often become the only viable option for many genuine asylum
seekers who seek sanctuary in the European Union.”
Ruud Lubbers, former UN High Commissioner for Refugees, feels that governments are
concentrating on how to manage the symptoms rather than the causes that create
refugees:
“I really wonder how governments can justify spending millions on reinforcing borders, on all
kinds of deterrence measures, on custody and detention centres, on all these costly domestic
approaches, yet they refuse to invest in tackling the problem at source, where solutions should
begin.”
The more right-wing popular press in the United Kingdom has consistently called for
tougher policies on immigration, including on refugees seeking asylum in the UK, using
language that implies that the refugees are seeking entry under false pretences:
“At last someone’s had a concrete idea on what to do about illegal immigrants. Fly
them back where they came from in RAF transport planes.” Editorial in The Sun
newspaper, 24 May 2002;
”Refugees will join the rising number of criminals and drug addicts living in country
communities.” Daily Mail, 15 July 2000;
“Britain tops the asylum league”. Daily Express 1 March 2002;
“UK confirmed as asylum capital”. Daily Mail, 28 February 2002.
Wendi Adelson, of the University of Miami Law School, writing of the attitudes of the
UK press towards asylum seekers in the early 2000s:
“The British press depicts a situation where the majority of individuals attempting to migrate
to Britain are poor, often from Britain’s former colonial holdings, and in search of economic
betterment……..In reality, the largest number of asylum seekers to Britain in 2003 came from
Iraq, Zimbabwe and Afghanistan; while all three are developing countries in the global south,
it is more likely that war and politics impelled their migration than the search for economic
improvement.”
Another academic, Professor Nevzat Soguk of the University of Hawaii, a specialist
in migration studies, writing in 1999, also comments on public perceptions and the way
in which the mass media presents issues of migration and asylum-seeking in ways that
challenge the legitimacy of treating migrants from the developing south as political
refugees:
“Words and phrases like poverty, the South, tide, flood, fortress, plague, invasion and many
more all converge to produce images of two unassimilable desire-worlds that stand in
contradiction to one another. One is the prosperous, secure and democratic world of the West
European; the other is an amorphous tide, a flow that is besieging Europe from all directions
and forcing it to become a fortress in self-defence.”
Looking at the statistics on asylum seekers for 2005, the UN High Commissioner for
Refugees, Antonio Guterres said:
“These figures show that talk in the industrialised countries of a growing asylum problem
does not reflect the reality…Indeed, industrialised countries should seriously ask themselves
whether by imposing ever tighter restrictions on asylum seekers, they are not closing their
doors to men, women and children fleeing persecution……With the numbers of asylum
seekers at a record low, industrialised countries are now in a position to devote more attention
to improving the quality of their asylum systems, from the point of view of protecting
refugees, rather than cutting numbers…Despite public perceptions, the majority of refugees in
the world are still hosted by developing countries such as Tanzania, Iran and Pakistan.”
What Do You Think?
Some people think that European governments need to do more to reduce the numbers of
people entering their countries illegally who then claim political asylum, while others think
that European governments are spending too much on reducing the numbers of immigrants
and not doing enough to protect refugees who are escaping political or religious persecution
and torture. What do you think?
Is it possible or useful or relevant to try to distinguish between asylum seekers and economic
migrants?
CASE STUDY 8: The process of becoming a minority
Introduction
Almost every state in Europe contains one or more ethnic and cultural minorities. Some, such
as the Saami in northern Scandinavia, are described as indigenous or native minorities. That
is, they have inhabited the land in a particular area for thousands of years and have chosen to
retain a certain distinctiveness, usually in terms of their language, culture and heritage.
Many minorities are migrants. Some will be economic migrants seeking a better way of life;
some will be refugees from persecution and oppression; some will have moved to their new
home when that country was occupied or annexed by forces from their native land. Some are
returnees, expelled from the country they have lived in for many years because they were
associated with “the enemy” or the oppressors. Some, like the “Pieds noirs” in France and
former colonials elsewhere in Western Europe, returned to their homelands yet found
themselves treated as if they were different, even a relic of a previous time.
Some others find that minority status is thrust upon them. In modern times this has usually
happened in one of two ways. In some cases, the victors after a war have re-drawn the
boundaries of a country, and the inhabitants of an area or region find that they are now
citizens of a different or new nation state. In other cases, an empire, colonial power or
federation has broken up, and the military and administrative officials and their families who
have remained are now a national and cultural minority.
Sometimes the so-called minority is actually a numerical majority, like the blacks in South
Africa during the Apartheid era. This highlights an interesting characteristic of the term
“minority group”. It tends to be applied to any group that is disadvantaged in terms of
political power, wealth, employment, education and social status, even if it is a numerical
majority within the population and indigenous to that country. Another important feature of
the term is that minority group membership is not only attributed to a people because of
certain shared characteristics (ethnicity, religion, language, culture or lifestyle), it may also be
widely used by its members to enhance their sense of identity and solidarity.
Over the last 50 years or so, the recognition that most minorities are disadvantaged and
marginalised by the dominant majority populations has led to demands for more protection.
This, in turn, has led to international conventions to establish the civil, political, cultural,
economic and social rights of minority groups. In some cases, governments have responded
by introducing what is often referred to as affirmative action - policies designed to redress the
disadvantages, usually by establishing quotas of places for minorities at universities and for
jobs in public services.
Generally, however, although more is now being done to protect the rights of minorities, their
status in most societies seems to be under continuous scrutiny. In some stable liberal
democracies, governments have been expressing concern about the extent to which various
minorities have integrated with the rest of society. This concern usually coincides with a
downturn in the economy, increased unemployment or unrest and conflict breaking out
between the different social groupings.
In countries which have recently undergone major political changes - as happened in some
Central and Eastern European states after the fall of communism - the new governments
were often concerned about the allegiance and loyalty of some minorities, especially if they
belonged to a national group which forms the majority in a neighbouring state. This concern
sometimes fuelled nationalistic and xenophobic attitudes during political and economic crises
or rising tension with neighbouring states.
Below, we have three mini case studies to illustrate some of the issues associated with
minorities. They focus in particular on the process of “becoming a minority”. The first is
about the “Pieds noirs” in France – the white settlers of French origin who were repatriated
after Algerian independence. The second case is about the Germans who remained in
western Silesia in Poland after the Second World War, at a time when millions of Germans
were either being forcibly repatriated to Germany from Central and Eastern Europe or were
fleeing ahead of the Red Army. Now this remnant in Silesia appears, in the words of the
author, to be “a ‘natural’ component of the population” in that region.
The final case study is of Bosnia and Herzegovina (BiH) where the population is made up of
three large national groups, none of which constitutes a numerical majority, and other small
minorities such as the Roma and Jews. Conflict and then civil war was sparked off by the
break-up of the former Yugoslavia in the 1990s and the intervention of neighbouring states.
After the international community became involved, hostilities eventually ceased and the
Dayton Peace Agreement, signed in Paris in 1995, brought in an international peacekeeping
force and created the Office of the High Representative to oversee the implementation of the
civilian aspects of the Dayton Accord.
After the mini case studies, we have a rather different kind of activity to stimulate discussion
about some of the issues that arise in countries where tensions break out between different
ethnic and national groups. In spite of the fact that the example - an imaginary country called
Ubia - has three large minorities and no numerical majority, it is not Bosnia and
Herzegovina. For one thing, the minority populations of BiH are far more geographically
spread than the populations of Ubia. Ubia also shares some characteristics with many different
places around the world: BiH, Cyprus, Lebanon, Kosovo, Nagorno-Karabakh, Northern
Ireland, Rwanda, etc. The activity is designed to encourage you to think about the problems
involved in attempting to bring about the conditions for peaceful coexistence and cooperation
in circumstances such as this.
The “Pieds noirs” in France
Jean Petaux
The term “Pieds-noirs” (literally “black feet”) describes, in very general terms, the French
population living in Algeria who arrived in mainland France in the great repatriation that
followed immediately after the proclamation of Algeria's independence on 3 July 1962. The
term was initially pejorative but was rapidly adopted and claimed by the French of Algeria
themselves, since it enabled them to re-establish a special identity for themselves, thus
distinguishing them from Algerians living in France, either as immigrant workers or as
"Harkis" (Algerians who had fought as auxiliaries in the French army against the
independence supporters of the FLN), and from the mainland French.
A caste-based society until 1962
The figures available to historians on this French population in Algeria before the separation
of 1962, which put an end to France's only real colony of settlement between 1830 and that
date, are somewhat variable. Specialists quote the statistic recorded in 1960 of 1,021,047
persons (including about 130,000 Jews) in the whole of Algerian territory, including the
Sahara, alongside 9,487,000 Algerian Muslims. In a first research study in 1958, the young
sociologist, Pierre Bourdieu, defined Algerian society as a caste-based one, in which the
French naturally constituted the superior caste. But this group was itself extremely divided
according to origins, since many foreigners – Spanish, Italians, Maltese and even Germans –
had emigrated to Algeria in the 19th Century. Under a law of 1889 applying the jus soli
principle, their children automatically became French.
These new French joined the category of the so-called “French by origin”, merging with them
via the traditional republican routes of school and military service, not to mention the shared
experience of two world wars. At the same time, this group of the population retained its
original sub-cultures by keeping alive certain traditions and cultivating strongly distinctive
features. This phenomenon was reinforced by the fact that each of these communities settled
more or less distinctly in one or other of the major towns and cities, such as Algiers, Oran and
Constantine.
The French in Algeria – a socially heterogeneous group
One particular representation of the French in Algeria was to treat them all as part of the
settler community, and more particularly the colonial elite, and to consider them socially and
economically homogeneous and mutually supportive. These images are partly erroneous. In
1954, 75% of the 354,500 economically active French in Algeria were in paid employment
(64% in mainland France). Of these employees, 70% had fairly modest jobs, as manual
workers and clerical-type staff. Only 32,500 of this same group were in agriculture, including
18,400 with their own farm or holding (9.1% of the active population compared with 26.2%
in mainland France in that same year of 1954). On the other hand, government employees
were much more numerous in Algeria: 28% of the active population in 1954, compared with
12% in mainland France. This significant percentage of Pieds-noirs in the civil service or
state-run enterprises was to ease their reintegration when they moved to France in 1962.
A large-scale return in a very short period
At the start of the 1960s, the authorities in mainland France and Algeria began to notice a
regular and continuous inflow of French from Algeria to the northern Mediterranean shore.
After May 1962, the inflow became a flood, partly following the Evian agreements signed by
the belligerents on 18 March 1962 but also as a result of the climate of violence, accentuated
by the "scorched earth policy" of opponents of Algerian independence. Between 1 April and
31 December 1962, 936,231 persons left Algeria for France, to become the repatriated settlers
from Algeria, soon referred to as the Pieds-noirs.
Rapid social integration in France while retaining a strong identity and culture
The first few years of their return to France were a painful experience for a group of people
traumatised by this tragic outcome, who saw their repatriation as the punishment of history
and considered themselves to be the victims of high politics, and in particular of General de
Gaulle's policy of abandoning Algeria. But, thanks to the strong economy in the France of the
1960s, this new influx of manpower was rapidly absorbed. However, their material
integration would never cure the Pieds-noirs of a real sense of nostalgia and frustration, made
worse by the feeling that they were misjudged by their mainland compatriots. These treated
them either as a privileged group who had outrageously exploited the Muslims of Algeria
until the latter were forced to expel them, or as second-class French, who had hardly arrived
in the country before they were claiming comparable, if not superior, occupational and social
positions to the home population.
Hardly surprising then that the French of Algeria should strive to keep the flame of their
individual and collective memory burning, through their cultural lives. In the south of France
and the Paris region, where most of them settled, numerous festivals and similar events were
established. One example is the annual gathering in Nîmes, where the Virgin of Santa Cruz,
itself brought back from Oran in 1962, attracts numerous expatriates grouped by specific
towns, neighbourhoods and villages of Algeria. Witness also certain emblematic figures of
popular culture with whom this group identifies: the singer Enrico Macias, the actors Roger
Hanin, Robert Castel, Guy Bedos and Marthe Villalonga, the film maker Alexandre Arcady
and so on.
A number of writers have perpetuated the written memory of a minority that constantly seeks
to rediscover an identity in relation to a lost land. Albert Camus, who died in 1960, is a
symbolic if distant figure of this period. Marie Cardinal has helped to keep alive an intense
and unifying memory of the happy days of the Algerian pieds-noirs between 1914 and 1962
while, more specifically, André Chouraqui, has done much to safeguard the memory of the
Jews of Algeria, a quite distinct group within the pied-noir minority.
The slow reconstruction of a history suspended in time and a memory reconciled
More and more Pied-noir families have made the return journey, accompanied by second and
third generations for whom Algeria is a totally foreign land but one whose memory is
constantly evoked. Nothing now distinguishes these families from the rest of the mainland
population, yet family recollections and a collective memory passed down through
generations have kept intact a desire to return among the oldest and a shared desire to realise
their family history among young people who have never known the land of their ancestors.
Now that bilateral relations between France and Algeria are once more on a more normal, if
still chaotic, footing, these journeys into the past are proliferating. They put the seal on a form
of reconciliation with a communal memory for a group of French who have never really
recovered from their history.
These shared reunions are an opportunity for genuine fraternisation between the two shores of
the Mediterranean. They allow the Pieds-noirs finally to turn the page on this sad chapter and
come to terms with the past, which is now dead and gone. For Algerians, it is an opportunity
to reweave the threads of an individual and collective memory that official history, too long
imbued with a specific ideology and geared to a specific purpose, sought to ignore or distort
by treating every French person in Algeria as an absolute tyrant and every Pied-noir as a
combination of colonialist exploiter and self-confessed racist.
The Germans in Poland – the example of Silesia
Jacek Wòdz
Silesia is a very distinctive region. On the one hand, it is “typical and representative” of
Europe and, on the other hand, it is a patchwork of national and ethnic groups resulting
directly from its history, mainly during the 19th and 20th Centuries. A vast swathe of Europe
was divided between three great empires at that time: German, Russian and Austro-Hungarian.
In the eastern part of Silesia, we find the Dreikaiserecke (“triangle of three
emperors”) where, for over 100 years, these empires bordered on one another. And, as if
the region's history was not complicated enough, the development of the south was heavily
influenced by Czech culture at that time.
It should also be noted that the massive shift of the Polish-German border at the end of the
Second World War (from east to west) left a deep imprint on the make-up of the region's
population. The Germans, in the majority in the central and western part, fled the advancing
Soviet army. Those who stayed were obliged to leave, initially on the orders of the Russian
army and later by decision of the Polish civil authorities. That happened in 1945 and, in the
years that followed, particularly in the region's central and western areas (Opole Silesia and
Lower Silesia), their place was taken by Poles who had been forced to leave the former Polish
territories in the east taken by the USSR.
The communist regime that governed after 1945 froze any debate on the status of Poland's
German minority, and it was not until after 1989-90 that the issue resurfaced and national and
ethnic identities in Silesia were defined and redefined. In Upper Silesia and Opole Silesia, we
can list five different identities, and the process of national and ethnic self-definition of
certain social groups is not yet complete. The majority are Polish but there are also groups
defining themselves as Polish-Silesian, Silesian-Silesian (or simply Silesian with no
indication of any dominant German or Polish cultural preference), German-Silesian and of
course German. Those who define themselves as Germans outright, while being Polish
citizens, live mainly in the western part of Upper Silesia (more or less the Katowice
voivodeship, or province) and Opole Silesia. There are very few of them in Lower Silesia
(Wrocław voivodeship).
After the democratisation in Poland (1989-90), the associations of Polish citizens of German
nationality gained public recognition, but not without some difficulty. That recognition is
especially conspicuous in Opole Silesia, where a substantial group made up of that minority
now lives. It has two deputies in the national parliament (Sejm) and several representatives on
the regional council (sejmik) and also strongly influences decision-making at the level of the
different communes within the voivodeship. The German minority is a visible player in the
political life of the Opole voivodeship, and there is no doubt that civic rights for its members
are fully recognised. In Upper Silesia, this minority is clearly in evidence especially at the
local level and in cities such as Katowice.
In terms of activity in the public sphere, social, charitable and cultural associations enjoy a
high profile mainly in the voivodeship of Opole and in Upper Silesia in cities such as Gliwice
and Katowice. Their areas of interest include Polish-German reconciliation. They frequently
refer to European values and seek grants from various European funds.
So, in brief, what is life like in Silesia for a member of the German minority? The answer is
fairly good since, within the region, this minority is regarded as a “natural” component of the
population. But it has to be said too that certain nationalistic ideas circulating at the national
level (though non-existent in the region itself) are worrying for this minority. When there are social conflicts in
Silesia, a spirit of reconciliation and dialogue naturally comes to the fore and makes it
possible to resolve them locally.
Bosnia and Herzegovina
Damir Agicic
Bosnia and Herzegovina had a long tradition of living in a multi-denominational and multi-national community. For centuries, the territory of Bosnia and Herzegovina (BiH) was on the
border of the Byzantine and Western European spheres of influence and the beginnings of
Islam in the country are linked to the rule of the Ottoman Empire. The rule lasted from 1463
until 1878 when the Ottoman province was occupied by the Austro-Hungarian Empire, based
on the decisions of the Berlin Congress. The Austrian authorities encouraged the creation of a
joint Bosnian nation which would include all the population of the province, but the attempt
failed. By the end of the 19th Century in Bosnia and Herzegovina, a Serbian national
consciousness had developed amongst the Orthodox inhabitants and a Croatian consciousness
amongst the Catholic ones. At first, the Muslims sided with one or the other ethnicity. It
was only in the second half of the 20th Century that a separate Muslim nation appeared.
In the Serbian national consciousness and, to a lesser degree in Croatian consciousness, a
Christian-Muslim antagonism plays an important part, as well as the tradition of the struggle
against the Ottoman conquerors. Although the vast majority of the Muslim population of
Bosnia and Herzegovina are domestic Slavs by origin and come from the islamicised
inhabitants of Bosnia and Herzegovina and not Ottoman settlers, in the perception of their
Christian neighbours, they are often identified with the heirs of the Turks.
On the other hand, the beginning of the Croatian-Serbian conflict regarding the affiliation of
Bosnia and Herzegovina dates from the late 19th Century. Each of the two neighbouring
nations believed that the territory of BiH should belong to it: due to historical claims, as well
as the presence of the Serbian, or the Croatian population on its territory. This conflict
assumed a bloody shape during the Second World War when the Croatian ustasha regime
harassed Serbs, and Serb guerrilla chetnik units retaliated against the Croatian and Muslim
civilian populations.
In socialist Yugoslavia, the same conflict – present not only on the BiH territory - hibernated
under the veil of the official brotherhood and unity of the Yugoslav peoples. Among the
republics of the SFRY (the Socialist Federal Republic of Yugoslavia), Bosnia and Herzegovina represented a true “miniature Yugoslavia”, because its population was highly mixed in terms of national and religious belonging.
The ethnic composition of the population of BiH in 1991:
Muslims    39.5%
Serbs      32.0%
Croats     18.0%
Yugoslavs   7.9%
Socialist Yugoslavia was a totalitarian state where the entire government was in the hands of
the Communist Party. Until his death in 1980, the decisive role was played by the Communist
leader, Josip Broz Tito, and a personality cult was created around him. After Tito’s death, the
Yugoslav state fell into an economic and political crisis. On the one hand, the weaknesses of
the Socialist economy became visible and, on the other hand, brotherhood and unity lessened
and certain nationalisms surfaced. As elsewhere in Europe, at the end of 1980s, the
Communist regime in Yugoslavia was facing collapse.
Many nations, each for their own reasons, were dissatisfied with the Yugoslav state. Different
political forces were aiming either to reform the federation along decentralised lines or to take their republics out of it altogether. Serbia, under Slobodan Milošević, and the Yugoslav People’s Army,
dominated by Serbs, stood against the secession of other republics, especially those with parts
of the Serb population. The war which took the world by surprise started in Croatia in 1991
and in Bosnia and Herzegovina in 1992. The international community reacted reluctantly and
belatedly to the atrocities happening in those wars.
The war in Bosnia and Herzegovina lasted four years and pitted the Bosniacs and Croats, who favoured the independence of the republic, against the Serbs, who were trying to prevent it. Later in the war, there was also a conflict between the Bosniacs and the
Croats. Therefore, the three largest national communities of Bosnia and Herzegovina were in
conflict with each other. Many war crimes were committed during the war, international
conventions were violated, ethnic cleansing and genocide were perpetrated.
“Ethnically cleansed territory” became an ideal for nationalists of all sides. Attempts were
made to erase the presence of “the others”, not only by banishing people but also by
demolishing the traces of their presence, especially their places of worship. Some towns
inhabited by mixed populations were divided in two parts (Sarajevo and Mostar).
The case of Sarajevo became particularly interesting. Located in a long and narrow valley, it
was besieged by Serb forces. The Serbs were deployed on the surrounding hills which
enabled them to bomb the city easily with shells. Some outskirts of Sarajevo were under Serb
control and Serb snipers were shooting from the upper floors of tall apartment buildings. The
city soon experienced shortages of food, water and firewood. At the same time, it was not
only the Bosniacs living in the city but the Croats and Serbs as well. A portion of the latter
remained in Sarajevo under the siege of their fellow Serbs.
It is interesting to note that all sides in Bosnia and Herzegovina were invoking the right of a
nation to self-determination. The key difference between the belligerents was in how they
interpreted that right. The Bosniacs (as well as the Croats at the beginning) regarded it as a right of the BiH population as a single body, which had voted for independence in a referendum, and they invoked the inviolability of the borders of the former Yugoslav republics, one of the fundamental principles of international relations in the modern world.
The Serbs believed the Serbian people in BiH had the right to decide whether they wanted to
remain living in Bosnia. They boycotted the independence referendum and tried to establish
military control over as much territory as possible, intending to annex it to Serbia. In doing
so, they persecuted non-Serbs and committed ethnic cleansing: because the population of Bosnia and Herzegovina was highly mixed before the war, there was no compact, mainly Serb-inhabited territory adjoining the Serbian border that could easily have been annexed.
It is an oversimplification, but it could be argued that Bosnia and Herzegovina has been
experiencing an intensive clash between two of the most important ideas and ideals of the 20th
century. On the one hand, there is the idea of the multicultural civic society based on
democracy and tolerance. On the other hand, there is the idea and ideal of self-determination
leading to an ethnically distinct nation state. This fundamental division, despite 10 years of
international administration in Bosnia and Herzegovina since the end of the war, remains the
basic conflict within Bosnian and Herzegovinian society.
Fragments of private letters13:
13 Quoted from “Rat 1991-1995 u privatnim pismima“, Gordogan, No. 4/5, summer-autumn 2004, pp. 52-141.
Sarajevo, 3 April 1994
“Sarajevo is destroyed a lot … It cannot be described. Trams operate, but are passing
under the chetnik skyscrapers where, two months ago, snipers did a very good job,
which means that not a fly could pass alive, let alone a man, or a tram (a crowded
one). But still, the trams are passing, and cars. It does not mean this will last long.
Little before our arrival, a tram was shot at and two women were killed, but people
have lost sensitivity because life is so cheap around here.”
Sarajevo, 28 July 1994
“Sarajevo is in a panic because the road (the one we came on) is closed as well as the
airport. Prices are already up and you can feel that something is going to snap again.
In my latest letter, I wrote I was waiting for the shooting to start again. Snipers are
slowly moving into action again. They haven't got the green light yet, but they have
got the amber one for sure … Of course, it is enough for a few people to get wounded
and some killed.”
Sarajevo, 2 March 1993
“Being in Sarajevo today means defying Fascism, working means building a free state
and shooting means destroying chetniks who don't deserve to be treated as humans.
What I can tell you is a story from the frontline because each part of Sarajevo today is
a frontline.”
Zenica, 7 January 1994
“Don't listen to all the rumours about us in Zenica, I mean, there are certain problems,
you know while you were here, but we can still move around, going into the town or
to the church is not banned yet, I am still working, I haven't lost my job, I'm just on
unpaid leave because I asked the manager for it, for the winter, and he let me. I should
go back to work on 15 February if we don't leave Zenica by then, I applied for a
Convoy to Zagreb. I and the children applied and Zdravko can't go because they are
not letting the fit for military service. He will follow me when he gets the chance.”
What Do You Think?
This is an activity designed to stimulate group discussion.
A short history of Ubia.
Ubia is an imaginary country but it shares certain characteristics with several real countries
around the world. As you can see from the map it shares borders with two much larger
countries – Ossia and the Republic of Oksidia. It also shares a coastline with both of these
neighbouring states.
The east of the country is mainly agricultural, the north west is mountainous and the south
west is more industrial. There are five large cities in Ubia and several smaller towns. The
two largest are Colonis, the capital city, which was founded during the Roman Empire, and
Portomaz, a busy international port. Both are multicultural cities with people from the three
main minorities and also from other nationalities and religions.
The population of Ubia is around 3.2 million and mainly made up of three large ethnic
minorities who tend to form the majority population in the three main regions of Ubia. The
Theos first migrated to the Ubian region about 300 years ago to escape religious persecution
in their own land. They settled at first in and around Portomaz and then gradually moved into
the mountainous area to the north-west, which is very similar to their original homeland.
Although many of the Theos are direct descendants of the original migrants, others living in
Ubia are converts to the Theos religion.
Most of the south west of the country is populated by people of Oksidan origin. In the distant
past, the border between Ubia and Oksidia was not as clearly established as it is now. In 1919,
after the peace treaties at the end of World War I, a clear border was drawn up between the
two countries which meant that people with a shared ethnicity, language and heritage lived on
both sides of the border. A similar development happened in eastern Ubia with Ossians living
on both sides of the border after the 1919 treaty.
Throughout Ubia’s history, there have been periods of peace and periods when relations
between Oksidia and Ossia were tense and conflict would spread into Ubia. In the 17th
Century, the Kingdom of Ossia tried to establish an empire across the whole region. Ubia was
invaded and became part of Greater Ossia. After a prolonged war between Ossia and Oksidia
the independence of Ubia was restored by the Great Powers, although the Ossian nationalist
movement which emerged in the 19th Century revived the dream of a Greater Ossia including
Ubia.
Oksidia and Ossia fought on opposite sides in both the First and Second World Wars and the
fighting spread into Ubia on both occasions. This was partly because civil war broke out
between the different minorities but also because both sides wanted to gain control of the rich
coal, iron ore and oil deposits in the south west of Ubia.
Ten years ago the Nationalist Party came to power in Ossia and revived the idea of a Greater
Ossia. Ossian nationalists across the border in Ubia complained that they were being
persecuted by the Oksidan-Theos coalition government that was running the country and
called on their neighbour to invade and help them gain control. Civil war broke out again
between the Ubi Ossians on one side and the Theos and Ubi Oksidans on the other. The
Ossian government supplied weapons and military support and the Republic of Oksidia
mobilised its troops along the border with Ubia.
After two years of fighting, a UN Commission persuaded the leaders of the different factions
within Ubia and representatives of the governments of Ossia and Oksidia to sit down together
and discuss a possible Peace Agreement. Eventually it was agreed that a north-south peace
line would be drawn from Portomaz down through Colonis to the southern border and this
would be policed by UN peacekeeping forces. At the same time, a European Union
Commissioner was appointed to establish an international team to help run Ubia until
conditions for peaceful coexistence and cooperation could be established.
Your Task
Imagine you are part of the team invited to advise the European Union Commissioner on how
to create the conditions that could lead to peaceful coexistence and cooperation between the
different ethnic minorities in Ubia so that, eventually, elections could be held and a
government elected that would be recognised as legitimate by all sides, including the
international community. What would your advice be?
NOTES: You will need to consider a lot of questions and issues:
Do you, for example, concentrate on restoring law and order, getting essential services
running and start re-constructing the economy and only then start thinking about restoring
democracy or do you start by bringing together a group from all sides to draw up a new
constitution so that elections can be held as quickly as possible?
Do you set up a Transitional Government of un-elected people who are acceptable to the
different minorities and are willing to work together? If you do that, do you set a timescale
and a deadline for restoring democracy?
Do you work closely with local warlords and militia leaders in each region to establish local
governments first before trying to set up a national government?
Do you opt for a federal constitution similar to Switzerland or Canada where the different
ethnic groups have a large degree of autonomy over their own regions? If so, how do you
deal with national issues, such as transport, communications, taxation, foreign relations?
How do you protect the rights of minorities in each region? What if the party that is elected in
one region wants to secede from the federal state of Ubia and join Ossia or the Republic of
Oksidia?
Do you opt for some kind of power-sharing system of government, as in Northern Ireland,
where the different minorities will have some places in the government regardless of how
many votes they win at national elections? If so, how are you going to persuade former
enemies to sit down and work together? What can you do to encourage cooperation amongst
people who do not trust each other?
[Map: Ubia and its larger neighbours, Ossia and the Republic of Oksidia, showing the capital Colonis, the international port of Portomaz, and the multicultural centres. KEY: ethnic majorities in Ubia.]
Ethnic composition of Ubia: Ossians 38%, Oksidans 31%, Theos 22%, others 9%.
KEY QUESTION FIVE: Can there be a just war?
Jean Petaux
The Christian origins of “just war”
In the first three centuries of Christianity, the Church tended to follow the pacifist teachings of
Jesus Christ. But St Augustine was not only a Christian; he was also a Roman citizen, and he
tried to reconcile the pacifism of early Christianity with the obligation of a Roman to fight for
his country when required to. He attempted to do this through the idea of a “just war”. He
described peace as the universal aim of the city of God:
“We do not seek peace in order to be at war, but we go to war that we may have
peace.”
He thought that war was always a sin but sometimes it was a necessary sin in order to remedy
worse sins. If war succeeded in this objective and brought about peace, order and stability
then it could be justified. He also went on to say that:
“True religion looks upon as peaceful those wars that are waged not for motives of
aggrandisement or cruelty, but with the object of securing peace, of punishing evildoers, and of uplifting the good.”
Eight centuries later another Christian theologian, St Thomas Aquinas, further developed the
thoughts of St Augustine on the nature of a “just war”. Peace, he argued, cannot be imposed
or obtained through fear. There has to be concord or agreement between those who were in
conflict with each other. Peace, he argued, requires us to be “at peace with ourselves”, with all
our different appetites being in harmony. This is how peace is truly achieved. War is always,
of course, waged by those seeking peace for themselves, but a just war is possible in defence
of peace in general. Aquinas believed that war can be morally acceptable but, to be so, it
must meet three conditions:
1. Authority. War is not a matter for private individuals. It is waged for the public good (a
just cause), and has to be decided on by those responsible for that good. So, declaring war is
the prerogative of the sovereign or government responsible for the common good.
2. Just cause. According to St Augustine, a just war is one that “avenges wrongs, when a
nation or state has to be punished”. He believed that there were three just causes for war: self-defence, punishing people who have done wrong, and restoring people, land or property wrongly
taken by others. St Thomas Aquinas wrote in the same vein, saying that it was just to attack
those who deserved it because of some fault of theirs. It was later that the famous four
conditions for a just cause were added, and these have been regularly taken up in Catholic
doctrine, including the very latest papal texts:
• The damage inflicted by the aggressor on the nation or community of nations must be lasting, grave, and certain;
• All other means of ending the aggression must be shown to be impractical or ineffective;
• There must be serious prospects of success;
• The use of arms must not cause graver evils or disorders than the evil that is to be eliminated.
According to this view war can be seen as an act of justice only if (a) the people or nation
against whom the war is fought have committed serious wrongs against others (b) all other
means of bringing them to account have been tried and failed and (c) the force that is used is
in proportion to the wrongs they have committed.
3. Just intention. Even then, having a just cause or reasons for going to war is not enough.
The intentions of those who fight a war against wrongdoers must also be just. As St Thomas
Aquinas put it, the intention must be to promote good or to avoid evil.
The contemporary approach
Two general principles are accepted by the international community, and underpin some of
the international conventions and treaties which are designed to reduce the likelihood of wars
breaking out and to regulate the conduct of war when it happens: discrimination and
proportionality.
Discrimination requires the belligerents to differentiate between civilians and military, and to
attack only the latter. A strike affecting an innocent third party is tantamount to an attack on
that person, which is a violation of the right to wage war.
Proportionality is not quite the same. It requires only that the reaction be proportionate to the
aggression. There should not be massive reprisals for a minor act of aggression; a border
skirmish should not lead to the use of weapons of mass destruction, and so on.
Where these two principles are ignored by one or more sides in the conflict you have a
situation of total, all-out war, a fight against a whole nation, without any distinctions and by
every available means.
Michael Walzer, one of the main just war theorists in modern times, has returned to the three
main questions that are fundamental to the ‘just war’ theory:
• Are there just causes for going to war?
• Is the war conducted in a just way?
• Will the peace agreements be fair to all parties?
From these three questions he derived a number of assertions:
1. War, if it is to be just, must be started as a last resort, which means that all non-violent possibilities must have been considered beforehand.

2. In principle, only the international community, represented by the United Nations and its Security Council, is entitled to authorise a war. In practice, there may be a widespread international belief that all other attempts to stop the aggressive actions of a country or alliance have failed, but the use of legitimate force against the aggressors is blocked because one member state exercises its right of veto. Or, alternatively, a major power declares war, in spite of the opposition of the international community, in the knowledge that it is too powerful to be stopped. In either case, the issue of legitimate authority for going to war is raised.

3. The likelihood that such a war will succeed must be greater than the damage it causes. The violence used during the conflict must be proportionate to the damage suffered, and a distinction must be made, as far as possible, between civilian populations and military aggressors. A real problem arises during guerrilla-style action, for then it is difficult to distinguish between civilians and military.

4. The ultimate aim of such armed intervention must be the restoration of peace.
Anti-terror activities and the “just war” concept
In a 1999 recommendation, the Parliamentary Assembly of the Council of Europe described
an act of terrorism as “any offence committed by individuals or groups resorting to violence
or threatening to use violence against a country, its institutions, its population in general or
specific individuals which, being motivated by separatist aspirations, extremist ideological
conceptions, fanaticism or irrational and subjective factors, is intended to create a climate of
terror among official authorities, certain individuals or groups in society, or the general
public”.
Michael Walzer points out that, generally speaking, action against terrorism is not an act of
war, but a policing activity, and that a police campaign against terrorism is, by its very nature,
likely to be more limited than a war on terrorists waged by the military.
Referring to the categories that he had identified in relation to acts of war, he says that a just
war against terrorism must be proportionate to the acts committed. He advocates making a
distinction between civilian and military victims of terrorism, or between combatants and
non-combatants.
It nevertheless remains the case that the means employed by democratic states in their fight
against terrorism must not be contrary to the values upheld by those same states, as they
would otherwise become terrorist states themselves. This means that regardless of the
methods used by the terrorists there is no justification for the police or security forces of a
democratic state using means that would violate the basic civil and political rights of the
terrorists. That would mean that there is no justification for the democratic state, which is
also a signatory of the UN and European Conventions on Human Rights, using torture,
arbitrary detention, arbitrary executions without trial, excessive force, violence against the
family and relations of the terrorists, and so on.
What do you think?
Can you think of any international wars or civil wars in recent times that would meet the
criteria of a “just war” as outlined above?
Can you think of any circumstances where the actions of terrorists against their own or a
foreign state could be described as a “just war”?
Can you think of any circumstances in which the actions taken by a democratic state against
the threat of terrorism could be described as a “just war”?
CASE STUDY 9: The “War on Terror”
Robert Stradling
Background
The term “war on terror” was coined by President George W. Bush to describe all the
measures which the US Administration and its coalition partners introduced following the
coordinated attacks on the World Trade Center in New York and the Pentagon in Washington
on 11 September 2001. The measures taken ranged from increased security at airports to
military action against states such as Iraq which the US Administration believed were
sponsoring global terrorism.
Not long after the attacks on the World Trade Center, President Bush described the war on
terror as an open-ended ideological struggle that “will not end until every terrorist group of
global reach has been found, stopped and defeated”. He subsequently explained that:
“today’s war on terror is like the Cold War. It is an ideological struggle with an
enemy that despises freedom and pursues totalitarian aims…I vowed then that I
would use all assets of our power of Shock and Awe to win the war on terror. And so
I said we were going to stay on the offence two ways: one, hunt down the enemy and
bring them to justice, and take threats seriously; and two, spread freedom”.
Although the events of 11 September 2001 aroused a great deal of sympathy around the world
for the United States and many states quickly initiated measures to counter the possibility of
similar acts of terrorism, some of the steps taken by the Bush administration soon proved controversial, prompting debate about whether the pre-emptive wars against Afghanistan and Iraq, the Guantanamo Bay detention centre and the so-called “extraordinary rendition” of terrorist suspects (see Case Study 1) were justifiable or represented violations of international law and human rights conventions.
Timeline
11 September 2001: Al-Qaeda terrorists attacked the World Trade Center and the Pentagon
using hi-jacked planes.
20 September 2001: President Bush delivered an ultimatum to the Taliban regime in Afghanistan to hand over Osama bin Laden and the other al-Qaeda leaders suspected of planning the 9/11 attacks.
October 2001: US-led coalition forces invaded Afghanistan.
13 December 2001: Following repeated calls in the 1990s from Osama bin Laden and certain
Islamic fundamentalist groups based in Pakistan for a jihad against India, an attack was
carried out on the Indian Parliament.
14 December 2001: The first video by Osama bin Laden was released. In this, he talked
about the 9/11 attacks and threatened continued jihad against America and its allies.
October 2002: The US government alleged that Iraq posed a global threat because it could
use weapons of mass destruction to support terrorism. UN Resolution 1441 was passed
unanimously by the Security Council and called on Iraq “to comply with its disarmament
obligations or face serious consequences”. Saddam Hussein then allowed UN inspectors to
access Iraqi sites. The US Congress authorised President Bush to use force if necessary to
“prosecute the war on terrorism”.
22 October 2002: Mounir al-Motassedeq went on trial in Germany accused of membership
of a terrorist cell. He was found guilty in 2003 and sentenced to 12 years. Another court then
ordered a retrial at which he was sentenced to 15 years in prison on 19 August 2005.
12 October 2002: A bombing at a Bali nightclub killed 202 people.
23 October 2002: Chechen separatists seized a theatre in Moscow, taking members of the audience as hostages and demanding the withdrawal of Russian troops from Chechnya. On the third day of the
siege, special forces pumped gas into the theatre’s air conditioning system and then entered
the building. According to official figures, 39 terrorists and 129 hostages were killed.
20 November 2002: The US Administration announced that it had assembled a “Coalition of
the Willing”, i.e. states prepared to support a war against Iraq if it did not agree to all its
weapons of mass destruction being destroyed.
1 March 2003: Khalid Sheikh Mohammed, believed to be one of the al-Qaeda planners of
the 9/11 attacks, was captured in Islamabad in Pakistan.
20 March 2003: The invasion of Iraq by coalition forces began.
1 May 2003: President Bush claimed victory for US-led coalition forces in Iraq.
12 May 2003: Terrorist bombings in Saudi Arabia.
27 June 2003: Ali Abdul Rahman was arrested in Saudi Arabia and accused of planning the
12 May bombings.
13 December 2003: Saddam Hussein was captured by US forces.
15 November 2003: Suicide bombers attacked two synagogues in Istanbul, Turkey.
16 January 2004: US Central Army Command issued a press release announcing an
investigation into the mistreatment of Iraqi prisoners in Abu Ghraib prison after a military
policeman revealed photographs depicting abuse.
11 March 2004: Ten bombs exploded on four early morning commuter trains travelling into
Madrid. 191 people were killed and 1,800 injured.
22 April 2004: Two suspected terrorists who were arrested in Spain were charged with
helping to plan the 9/11 attacks. On 26 September, a Spanish court found them guilty and
they were sentenced to jail.
30 April 2004: US Military charged six soldiers with torturing prisoners in Abu Ghraib.
1 May 2004: A British newspaper published pictures of an Iraqi prisoner who said he had
been beaten by British troops.
25 August 2004: A Pentagon investigation concluded that the abuses of prisoners at Abu
Ghraib were due to individual misconduct, lack of discipline and poor leadership.
1 September 2004: Pro-Chechen rebels took 1,200 hostages at School Number One in
Beslan, North Ossetia in the Caucasus region of the Russian Federation. After three days, a
gunfight started between Russian security forces and the hostage takers and 344 civilian
hostages were killed, including 186 children.
15 January 2005: US soldier, Charles Graner, was sentenced to 10 years in prison for
abusing Iraqi detainees.
7 July 2005: In London four suicide bombers set off their bombs on three underground trains
and a bus and 52 people were killed and 700 injured.
January 2006: A video was released in which Osama Bin Laden offered the United States a
truce if they changed their Middle East policy. The offer was rejected by the US
administration.
4 May 2006: Zacarias Moussaoui was jailed for life in the USA for his role in the 9/11
attacks.
8 June 2006: Abu Musab al-Zarqawi, self-styled leader of al-Qaeda in Iraq, was killed in a
US air strike.
29 June 2006: The US Supreme Court ruled that terrorist suspects held at Guantanamo Bay
could not be tried by military tribunals.
July 2006: Following a cross-border raid in which Hezbollah killed three Israeli soldiers and captured two others, Israel invaded southern Lebanon, where Hezbollah had several bases.
30 December 2006: Saddam Hussein was executed for crimes against the Iraqi people.
9 January 2007: US planes conducted air strikes against alleged terrorists in Somalia.
What is in dispute here?
Terrorism is not a new international concern. Governments have tended to use the word as a label for any group prepared to use violence (assassinations, kidnappings, hostage-taking and bombings) to achieve its political ends when it feels that peaceful change is not possible through the normal political processes. In this respect, it is worth remembering that
some individuals who became highly respected statesmen, such as Nelson Mandela in South
Africa, Menachem Begin in Israel, Jomo Kenyatta in Kenya, were either detained or hunted
as terrorists when they were younger. It is not unusual in modern times for yesterday’s
“terrorists” to become tomorrow’s government.
However, the idea of putting pressure on the government, an occupying power or the
international political community by spreading fear and alarm amongst the ordinary
population, usually through indiscriminate and unpredictable violence, really developed in the
second half of the 20th Century. After the Second World War, the Middle East became a
hotbed for terrorism. Jewish terrorist groups such as Irgun Zvai Leumi and the Stern Gang
used acts of terror against British and Arab targets to put pressure on the British to withdraw
their troops from Palestine prior to the creation of the state of Israel.
The Arab-Israeli war which followed and the displacement of many Palestinians then helped
to create the conditions in which terrorist groups emerged and gained popular support.
Throughout the 1970s and ‘80s, supporters of the Palestine Liberation Organisation (PLO)
hijacked aircraft to publicise the Palestinian case and to put pressure on the international
community to take action on their behalf.
In Europe itself, a number of left-wing terrorist groups emerged in the 1970s to challenge the
social and political order. These included the Red Brigades in Italy, Action Directe in France
and the Red Army Faction in West Germany. However, since the early 1980s, terrorism has
tended to be linked to:
• opposition to the global economic and political power of the United States;
• opposition to the Israeli occupation of the West Bank (and to the existence of the state of Israel), with the emergence of groups such as Hamas and Hezbollah;
• nationalist aspirations, such as those of the Provisional IRA in Northern Ireland, ETA in Spain or the Chechen separatists in the Russian Federation.
Whilst established governments usually make no distinction between terrorist and criminal acts and seek to deny the terrorist “the oxygen of publicity”, there is no doubt that most contemporary terrorist groups have emerged from communities and populations that feel powerless, that feel no-one listens to them, and that believe their previous efforts to use peaceful, legitimate forms of negotiation and political action have not been taken seriously by those in power.
At the same time, three factors have contributed greatly to the impact of terrorism in
recent times. The first of these is technology. Even small terrorist cells with very few
resources can get hold of weapons or manufacture explosives that can do great damage when
used in public spaces. Modern communications technology has also helped them to plan
their activities without being detected.
The second factor is the publicity which they now receive from the global mass media. An
explosion or a kidnapping or an assassination will get such widespread coverage in the media
that its impact will be international. The aircraft crashing into the World Trade Center in
New York meant that the everyday experience of catching a plane or a train changed for
everyone. The impact was global.
The third factor has been the rise of a new phenomenon in recent years – state sponsorship of
terrorism. During the Cold War, the superpowers were prepared to provide financial and
material support to certain terrorist groups if this suited their global strategic objectives. It is
ironic that the current US administration criticises other nations for providing this kind of
support to Osama bin Laden and al-Qaeda, when these groups once enjoyed US support for
their actions in Afghanistan during the Soviet occupation. Today, it tends to be
certain states in the Middle East who appear to be providing support and a safe haven for
Islamic fundamentalist groups prepared to engage in terrorist acts in the region and elsewhere
in the world.
Over the last decade, the nature of terrorism appears to have been changing. No-one was
prepared for the scale of the attack on the World Trade Center or the number of deaths and
casualties. It is also clear that some terrorist groups can now get their hands on highly
sophisticated weapons. When five members of a Japanese cult released the gas Sarin on the
Tokyo subway in 1995, killing 12 people and injuring others, this raised the possibility of
terrorist groups using weapons of mass destruction in the future.
The emergence of suicide bombers amongst the Palestinian population on the West Bank and
then in Iraq after the war also introduced a new phenomenon – the amateur terrorist.
Previously intelligence services had invested a lot of resources in the surveillance, infiltration
and gathering of information about terrorist cells; now the terrorist could be anyone on the
street, bus or train. As Brian Michael Jenkins of the RAND Corporation – a US organisation
which carries out research on terrorism and counter-intelligence – has pointed out:
“investigations of ‘terrorist activity’ moved from preventive to reactive”.
Some observers have suggested that, because of this new phenomenon, it has become more
difficult for intelligence services to target their resources on individuals and terrorist cells,
and that the public soon begins to regard whole communities as potential terrorists, leading
to greater intolerance and the alienation of young people within those communities. Zbigniew
Brzezinski, a former US security adviser, has recently observed: [The “war on terror”] “has
bred intolerance, suspicion of foreigners and the adoption of legal procedures that undermine
fundamental notions of justice. Innocent until proven guilty has been diluted if not undone,
with some – even US citizens – incarcerated for lengthy periods of time without effective and
prompt access to due process. There is no known, hard evidence that such excess has
prevented significant acts of terrorism, and convictions for would-be terrorists of any kind
have been few and far between.”.
This highlights the ongoing debate which has been sparked off by the American and
international response to the events of 9/11, especially the enhanced domestic security, the
military occupation of Afghanistan and Iraq, Guantanamo Bay and the cooperation by some
European countries in the US policy of rendition where suspected terrorists are secretly sent
to another country for interrogation, possibly using methods which would be illegal in the
USA.
On one side, there are those who argue that the risk that terrorists might use similar methods
or even weapons of mass destruction indiscriminately against civilian populations is so great
that extraordinary measures are called for. In their view, the possibility that a terrorist group
might use weapons of mass destruction on an innocent population calls for the suspension of
suspected terrorists’ civil rights and might even involve new limitations on the civil liberties
of the whole population in order to reduce the risk.
On this basis the UK Prime Minister in 2006, Tony Blair, argued that “We hear an immense
amount about their [i.e. terrorists] human rights and their civil liberties. But there are also the
human rights of the rest of us to live in safety.” President Bush adopted a similar line in a
number of public speeches, arguing that the policy of detaining and questioning suspected
terrorists outside the United States - which had been criticised by the United Nations Human
Rights Committee – was justified on the grounds that “These are dangerous men with an
unparalleled knowledge about terrorist networks and their plans for new attacks” and that
“the security of our nation and the lives of our citizens depend on our ability to learn what
these terrorists know…. We’re getting vital information necessary to do our jobs, and that’s
to protect the American people and our allies.”
On the other side, there are those who argue that the detention in Guantanamo Bay and other
detention centres in Iraq and Afghanistan of persons who fought for the Taliban or the Iraqi
army or are suspected members of al-Qaeda and other terrorist groups violates their civil and
political rights, including their right to a fair trial before an independent tribunal and their
right not to be detained indefinitely. Critics of the US Administration, including the UN
Commission on Human Rights and Amnesty International, also argue that the circumstances
of the prisoners’ detention, including prolonged solitary confinement, the interrogation
techniques which are used and the practice of extraordinary rendition are violations of the
Convention against Torture and the Geneva Conventions on the treatment of prisoners of war.
In response, the US Administration, the Pentagon and a number of independent American
observers have argued either that the prisoners are “unlawful combatants” not prisoners of
war and are therefore not protected by the Geneva Conventions or, more recently, that the
relevant articles in these Conventions are “vague and undefined and each could be interpreted
in different ways by American or foreign judges” (President Bush, 6 September, 2006).
The question of how to treat persons suspected of planning or carrying out terrorist acts in the
homeland (as opposed to acts carried out in a foreign country or war zone) has also generated
an ongoing public debate. Just 45 days after the al-Qaeda attack on the World Trade Center,
the US Patriot Act was passed, which gave U.S. law enforcement agencies new powers for
fighting terrorism in the USA and abroad. These included more powers to detain and deport
suspected persons and to access people’s telephone and email communications and their
medical, financial, and other records.
A year later, the Homeland Security Act was passed which created a new Department of
Homeland Security with new powers to monitor the activities of US citizens as well as
visitors. Both pieces of legislation had their critics within the USA who felt that they would
imperil such constitutional rights as freedom of speech, religion and assembly, the right to
privacy and the right to counsel and a fair trial.
In the United Kingdom, the government responded to the events of 9/11 by bringing in
emergency laws to allow terrorist suspects to be detained without trial. This process, known
as internment in the UK, had previously been used during the Second World War to detain
fascists and other suspected enemies of the state, and it was reintroduced in the 1970s to
detain members of the Provisional Irish Republican Army and the Ulster Volunteer Force in
Northern Ireland.
This had not previously posed a problem but, in October 2000, the Human Rights Act 1998
came into force, incorporating the European Convention on Human Rights into British law.
On that basis internment could only be reintroduced if Parliament voted to opt out of Article 5
of the European Convention which states that: “Everyone arrested or detained……shall be
brought promptly before a judge or other officer authorized by law to exercise judicial power
and shall be entitled to trial within a reasonable time or to release pending trial”. The
Convention includes a clause in Article 15 that enables a member state to opt-out of Article 5
or other Articles “in time of war or other public emergency threatening the life of the nation”.
The UK Government’s stated intention to opt out of Article 5 proved highly controversial,
both in Britain and elsewhere in Europe.
In 2001, the UK Government introduced the Anti-Terrorism, Crime and Security Act which
allowed the government to detain indefinitely any non-British citizen suspected of being a
terrorist and also allowed them to freeze bank accounts and seize other financial assets that
might be used by suspected terrorists. This Act was superseded by the Prevention of
Terrorism Act in 2005 which allowed the government to impose “control orders” that would
restrict an individual’s liberty for the purpose of protecting members of the public from a risk
of terrorism.
The critics, while acknowledging that there was a real terrorist threat, argued that these new
government powers could increase the likelihood of miscarriages of justice and that the best
way of dealing with suspected terrorists was through the courts using normal legal
procedures. The most common defence of the new legislation was that the need to protect the
freedom of British citizens to go about their lives without fear of terrorism or the threat of
terrorist acts was more important than the civil rights of a small number of suspected
terrorists.
A variety of viewpoints
Defining the terrorist can be quite difficult.
Professor George Lakoff at Berkeley, University of California, has argued that:
“wars are conducted against armies of other nations. They end when the armies are defeated
militarily and a peace treaty is signed. Terror is an emotional state. It is in us. It is not an
army. And you can’t defeat it militarily and you can’t sign a peace treaty with it.”
Former US National Security Adviser, Zbigniew Brzezinski, takes a similar line:
“Constant reference to a ‘war on terror’ did accomplish one major objective: it stimulated the
emergence of a culture of fear. Fear obscures reason, intensifies emotions and makes it easier
for demagogic politicians to mobilize the public on behalf of the policies they want to pursue.
The war of choice in Iraq could never have gained the congressional support it got without the
psychological linkage between the shock of 9/11 and the postulated existence of Iraqi
weapons of mass destruction.”
Ken Macdonald, head of the Crown Prosecution Service in the UK, is firmly of the view
that terrorists are criminals not soldiers and that the UK response should be:
“proportionate and grounded in due process and the rule of law… On the streets of
London there is no such thing as a war on terror. The fight against terrorism on the streets of
Britain is not a war. It is the prevention of crime, the enforcement of our laws, and the
winning of justice for those damaged by their infringement.”
On the other hand, Brian Michael Jenkins of the RAND Corporation highlights the
problems that can arise from either defining terrorists as criminals or as prisoners of
war:
“If terrorism is considered a criminal matter, we are concerned with gathering evidence,
correctly determining the culpability of the individuals responsible for a particular act, and
apprehending and bringing the perpetrators to trial. Dealing with terrorism as a criminal
matter, however, presents a number of problems. Evidence is extremely difficult to gather in
an international investigation where all countries may not cooperate with the investigators.
Apprehending terrorists abroad is also difficult. Moreover the criminal approach does not
provide an entirely satisfactory response to a continuing campaign of terrorism waged by a
distant group, and it may not work against a state sponsor of terrorism. If, on the other hand,
we view terrorism as war, we are less concerned with individual culpability. Proximate
responsibility – for example, correct identification of the terrorist group – will do….The focus
is not on the accused individual but on the correct identification of the enemy.”
The process of the so-called “war on terror” since the events of 11 September 2001 has
proved controversial. The case for indefinite detention of suspected terrorists and for
the use of interrogation methods that critics would regard as violating the suspects’ civil
rights has been made by the US Administration and numerous advisers.
One of these former US Government advisers, Jay Farrar of the Center for Strategic &
International Studies, Washington D.C., draws a parallel between the detainees in
Guantanamo Bay and people who are “stateless”:
“These ‘detainees’ [at Guantanamo Bay] skirted international norms and abandoned their
rights as sovereign nationals when they chose to participate in the stateless pursuit of
terrorism. [They have] moved from one recognised nation state to another in an effort to
frustrate and evade international laws that could be invoked to hold them accountable for their
actions…..In the aftermath of the attacks of 11 September, the United States has chosen to
redefine the status accorded to international terrorists and their non-state sponsors….They are
now being treated accordingly, and will be held accountable within the framework they
created and chose.” Interviewed by the BBC on 16 January 2002
By contrast, the human rights NGO, Amnesty International argued that:
“Amnesty International considers those held in Guantanamo are presumed to be prisoners of
war. If there is any doubt about their status, it is not the prerogative of the US secretary of
defence or any other administration official to make this determination. According to Article
5 of the Third Geneva Convention the US must allow a ‘competent tribunal’ which is
impartial and independent, to decide on their status. This is also the position held by the
International Committee of the Red Cross (ICRC), the most authoritative interpreter of the
Geneva Conventions.” Interviewed by the BBC on 16 January 2002
Adam Roberts of Oxford University argues that there is a precedent for denying
prisoner-of-war status to terrorists, noting that the UK government did not classify
members of the Provisional IRA as prisoners of war but:
“It did recognise that international standards in the treatment of prisoners – particularly no
torture – did apply to them.”
However, the Report of Manfred Nowak, UN Special Rapporteur for the UN
Commission on Human Rights, focused on the treatment of detainees in the
Guantanamo Bay centre:
“The executive branch of the United States Government operates as judge, prosecutor and
defence counsel of the Guantanamo Bay detainees: this constitutes serious violations of
various guarantees of the right to a fair trial before an independent tribunal as provided by
Article 14 [of the International Covenant on Civil and Political Rights]… Attempts by the
United States Administration to redefine ‘torture’ in the framework of the struggle against
terrorism in order to allow certain interrogation techniques that would not be permitted under
the internationally accepted definition of torture are of utmost concern…..The lack of any
impartial investigation into allegations of torture and ill-treatment and the resulting impunity
of the perpetrators amount to a violation of Articles 12 and 13 of the Convention Against
Torture.”
An official spokesman for the UK Prime Minister, defending the Government’s
intention to introduce legislation to indefinitely detain suspected terrorists, said:
“Britain is closed to terrorism, and we will take whatever action we can….People will object
to it, but we are absolutely determined to get the balance right between human rights, which
are important, and society's right to live free from terror.”
Liberty, the UK-based human rights organisation, expressed concern that the anti-terrorism
legislation in the UK (2005) ran the risk of giving the executive:
“sweeping statutory powers to impose severe restrictions on individual liberties [that would
then] be applied in an arbitrary, unfair and disproportionate manner…with little substantive
judicial supervision”.
The UK Minister of State for Community Safety, Crime Reduction, Policing and
Counter-Terrorism told a parliamentary committee in 2005 that:
“Dealing with the terrorist threat and the fact that at the moment the threat is most likely to
come from those people associated with an extreme form of Islam, or falsely hiding behind
Islam, if you like, in terms of justifying their activities, inevitably means that some of our
counter-terrorist powers will be disproportionately experienced by people in the Muslim
community.”
Yahya Birt, Research Fellow at The Islamic Foundation in the UK and a convert to Islam,
expressed concern that discussion about cultural diversity in Britain would be redefined
by reference to terrorism:
“The most important point that British Muslims can make is to assert that [issues like
multiculturalism] cannot be completely redefined by reference to terrorism for the simple
reason that whatever the causes of disaffection or disadvantage are among Muslim
communities, there is no causal conveyor belt leading inevitably to the London attacks”.
Tariq Ramadan, President of the European Muslim Network, also expressed concern
about the risk that a simplistic analysis of the nature and causes of terrorism in
multicultural societies can also lead to simplistic assertions about entire minority
communities:
“On December 8 last year, Tony Blair called on minorities to conform to ‘our essential
values’, stating that they have ‘a duty to integrate’. The Muslim community, because it is
perceived as ‘badly integrated’, has become suspect. But this cannot justify sweeping
measures applied to an entire segment of the population on the basis of a misdiagnosis. The
vast majority of British Muslims have absolutely no problem with the British values cited
above. Their cultural and religious integration is already a fact, as proven by the millions of
citizens who live peaceably in this country. The problem today is not one of ‘essential
values’, but of the gap between these values and everyday social and political practice. Rather
than insisting that Muslims yield to a ‘duty to integrate’, society must shoulder its ‘duty of
consistency’. It is up to British society to reconcile itself with its own self-professed values; it
is up to politicians to practice what they preach.”
What Do You Think?
Do you think, as some people do, that the need to protect ordinary people from the threat of
terrorist action is more important than the civil rights of a small number of suspected terrorists
or do you think, as others believe, that suspending the rights of a small number of possible
terrorists is the first step on the slippery slope to everyone’s civil rights being threatened?
CASE STUDY 10: Cultural monuments or human lives? The
case for the protection of cultural property
Christopher Rowe
Timeline
February 1944: Allied forces advancing north through Italy were held up by strong German
defences of the Gustav Line on the ridge at Monte Cassino. Allied commanders thought,
wrongly, that the historic Benedictine monastery at Cassino was being used as part of the
military defences. Heavy bombers attacked the monastery and it was almost totally destroyed.
August 1944: The German officer in command of Paris, General Choltitz, was ordered by
Hitler to destroy the city before German forces withdrew. Choltitz disobeyed the order.
February 1945: Mass Allied bombing raids badly damaged the historic city of Dresden in
Saxony. Nearly 30,000 civilians died and numerous historic monuments were destroyed.
Later, between 1996 and 2006, the ruined Frauenkirche was rebuilt by an international team
of architects, as a symbolic act of reconciliation and international peace.
August 1945: The historic seaport of Nagasaki in southern Japan was obliterated by an
atomic bomb, shortly after the first atomic bomb had been dropped on Hiroshima.
Winter 1991-1992: The historic seaport city of Dubrovnik in Croatia was bombarded for
several weeks by Serb forces. Many buildings were damaged and there were strong
international protests against the destruction of cultural heritage.
August 1992: The National Library in Sarajevo was bombarded by Serb artillery during the
siege of the city and almost totally destroyed by fire. 1.5 million books were burned.
November 1993: The historic Stari Most, the old bridge over the Neretva river at Mostar, was
destroyed by Croatian forces. The bridge had been a cultural landmark since its original
construction in 1566. Work to rebuild the bridge began in November 2004.
September 2000: The Israeli politician, Ariel Sharon, made a controversial visit to the site of
the Al Aqsa mosque in Jerusalem. The mosque is part of the Noble Sanctuary that has
particular importance in the eyes of Muslims; it stands on the site of the Temple Mount that
has particular significance for Jews. There were Muslim concerns that Israeli construction
work on the site would weaken the foundations of the Al Aqsa mosque. Sharon’s visit
inflamed religious tensions and helped to bring about the second Palestinian intifada.
March 2001: The Bamiyan Statues – huge historic Buddhist monuments set into a cliff face in
the mountains of the Hindu Kush – were destroyed on the orders of the Taliban regime in
Afghanistan on the grounds that the statues represented an “infidel” religion. This deliberate
act of destruction provoked a storm of international protest and condemnation.
November 2002: There were strong Serbian protests against the United Nations peacekeepers
in Kosovo for allowing the destruction of the Serbian Orthodox church of St Basil of Ostrog.
April 2003: Following the US-led invasion of Iraq and the toppling of Saddam Hussein, there
was a period of lawlessness during which the national museum in Baghdad was extensively
looted and many priceless works of art were stolen or destroyed. The American authorities
were widely criticised for failing to prevent this.
February 2006: The famous Golden Mosque at Samarra in Iraq, an especially holy shrine for
Shia Muslims, was badly damaged in an attack blamed on Sunni extremists attempting to
provoke civil war. In the days that followed, there were many reprisals against Sunni
mosques.
What is in dispute here?
The controversy about cultural property is all about balancing the value of precious objects
against the value of human lives. In war, should commanders risk greater loss of life by taking
steps to protect cultural monuments? In peace, should governments allow the protection of
cultural monuments to take priority over economic progress? In matters of religion, should the
cultural property of rival religions always be accorded equal respect?
It is important to distinguish between acts of deliberate destruction on the one hand and
accidental damage on the other. In some cases, the destruction of cultural monuments is a
calculated act of war, consciously promoting nationalist or religious conflict. At other times,
the tragic destruction of cultural heritage occurs because of what is termed “collateral
damage” in wartime – or because of the drive for modernisation and progress, as older
buildings are torn down to be replaced by newer ones.
It is also important to define carefully what “cultural property” actually is. As well as great
buildings such as churches, palaces and museums, there are often humbler smaller vernacular
buildings such as old village houses that can be of great cultural and historical significance.
There are many historical examples, such as 19th Century Paris and Vienna, where great
cultural monuments were built on the ruins of earlier structures that would now be regarded as
immensely valuable cultural heritage if they had not been knocked down in the name of
progress. Over time, perceptions often change about what cultural property deserves to be
admired and protected.
Sometimes cultural heritage is to be found not in buildings or artefacts but in landscapes. In
most countries, national parks have been established to prevent the spoliation of beautiful and
historic landscapes by unsuitable development or settlement. Nor is it always something as
extensive as a national park. Sometimes it can be a single tree. At the height of the siege of
Sarajevo, people desperate for fuel came to cut down a tree for firewood. A woman resident
in the nearby block of apartments frantically tried to prevent them – for her the tree was
precious and timeless, the one spiritual thing in a concrete war zone.
The concept of cultural “property” also raises questions about who actually owns the
property. It is often argued that great cultural monuments belong not just to the country or
civilisation in which they are located but to the wider world. In recent years, many sites have
been designated as having World Heritage status. Many cities have a history and an identity
that combines different religions and cultures. One of the great dangers intensified by wars,
especially ethnic and civil wars, is that cultural property becomes “ethnicised” – labelled as
being representative of one culture to the exclusion of all others.
On one level, the debate is about the right balance between the need to preserve the past and
the need for modernity and economic progress. In the city of Liverpool, designated European
Capital of Culture for 2008, there has been intense controversy about the city’s regeneration
being harmed by excessive concerns to protect the past at the expense of the future. All over
Europe, there are similar controversies whenever the protection of historic buildings comes
into conflict with the need for new roads, new supermarkets or new skyscrapers.
On another level, the debate concerns the need to respect other cultures than one’s own. Most
people would agree that the cultural monuments of other cultures and religions should be
treated with care and respect, even after those who created and believed in them are no longer
present. There are no Romans anymore in the Greek city of Thessaloniki but the many Roman
archaeological sites are looked after with great care and pride. Perhaps in the future, the
architectural heritage of the centuries when the city was under Ottoman rule will be treated in
the same way. In the Balkans, the legacy of the wars of the 1990s has created the danger of
the “ethnic cleansing” of cultural heritage as well as of people.
On the deepest philosophical level, the debate concerns the value to be placed on cultural
monuments as compared to the value of human lives. It can be argued, for example, that the
terrible destruction of Hiroshima and Nagasaki forced Imperial Japan into a surrender that
would otherwise have been delayed by many months during which millions would have died.
It is often argued that buildings can be repaired and rebuilt but human lives cannot. And yet
the destruction of cultural monuments causes a deep sense of loss and outrage.
After the destruction of the historic bridge over the Neretva at Mostar in 1993, a Croatian
journalist attempted to explain this sense of loss. He wrote:
“Why do we feel more pain looking at the image of the destroyed bridge than we do when
looking at the images of people? Perhaps it is because we see our own mortality in the
columns of the bridge, more than in the deaths of the people. We expect people to die. We
expect our own lives to end. But the destruction of cultural monuments is something else.
The beautiful old bridge at Mostar was built to outlive us. It transcended our individual
destinies. The death of a man is one of us; the death of the bridge is all of us forever”.
A variety of viewpoints
Comments by a German tank commander leading the attack on Ypres in 1940. He had
fought in the long battles for Ypres in the First World War. His men were asking him to
order air strikes on the tower of the medieval Cloth Hall apparently being used by the
forces defending the city to guide their artillery:
“No. No Stukas. For this city, one war is enough.”
Report of the Battle for Monte Cassino in 1944:
“During the first days of the battle, the Allies spared the Monte Cassino monastery from air,
artillery or ground attacks, even though it was a crucial strongpoint. But sightings of German
defenders within the monastery walls prompted General Freyberg to request its destruction by
air and artillery bombardment. On 15 February 1944, 230 Allied bombers pounded the
historic site. Though most of the monastery and its outer walls were destroyed, the German
defenders were able to shelter in underground chambers. Even though in ruins, the monastery
remained a strong defensive position.”
Comments early in 1945 by Sir Arthur Harris, Chief of RAF Bomber Command:
“The feeling over the destruction of Dresden could easily be explained by any psychiatrist. It
is connected with German bands and Dresden shepherdesses. Actually, Dresden was a mass
of munitions works, an intact centre of government, and a key transportation centre. It is now
none of those things.”
An interview after the war with Martin Mutschmann, the former Gauleiter of Dresden:
“Interrogator: What do you have to say about the air raids on Dresden?
Mutschmann: It’s terrible, the quantity of cultural valuables destroyed in one night. Dresden
was a city infinitely rich in artistic treasures. Now all that is gone.
Interrogator: So you are not at all concerned about the human victims?
Mutschmann: Of course, a very great number of human beings died. I just meant that artistic
treasures cannot be replaced.”
A report by Colin Kaiser, former director of the UNESCO office in Sarajevo, in
September 2000:
“Before the war, many Serbs, Muslims and Croats took equal pride in their secular buildings
such as the Sarajevo National Library. All this was changed by war. Although we perceive
destruction as barbaric, its perpetrators see it as an act of creation – as the creation or
liberation of a mythical rural society, with the symbols of the unwanted ‘other’ eliminated
from the horizon. In the cities, a common civic identity was destroyed. Secular and sacral
buildings became ‘ethnicised’. Before the war, nobody in Mostar would have said that the
Old Bridge was a ‘Muslim’ monument – but its destruction by Croatian tanks turned it into
one.”
Comments by the American writer, Susan Sontag, in an interview on Croatian Radio,
December 1991:
“I want to emphasise my feeling of horror, because of the absolute ruthlessness of the war.
I’m horrified above all about Dubrovnik. Whatever happens, this war will end. All wars end.
And what will happen then? Dubrovnik no longer exists, as well as many other cities, and
lives. But Dubrovnik has a very special status – it belongs to everyone. People understand
that you don’t bomb a Venice, nor the historic centre of Rome. You don’t attack and destroy
Dubrovnik. That simply must not be done, whatever the war may be like.”
Comments by Leszek Kolakowski, interviewed on Croatian Radio, January 1992:
“The Serbs say they have to be able to defend the Serbian minority. All right, but I don’t see
how that can be said in the case of the siege of Dubrovnik, that great jewel of the
Mediterranean. What Serbian minority needed to be defended there?”
A letter to the Guardian newspaper from Professor J.P. Maher in October 2000:
“I read in your recent report references to the ‘pounding of the beautiful Croatian town of
Dubrovnik in 1991’. This is a fraud. Since 1991, the press has dozens of times printed the
hoax that the Pearl of the Adriatic was reduced to rubble. Those stories were fakes. In March
1992, I visited Dubrovnik to see for myself the truth about the war. The old city was never
destroyed. It was barely scratched. Dubrovnik’s destruction was an invention of PR
companies in the hire of the war criminals who broke up Yugoslavia without negotiations.”
The librarian of Sarajevo’s National Library, Kemal Bakarsic, describing the fire of
August 1992:
“All over the city, sheets of burned paper, fragile pages of grey ashes, floated down like a
dirty black snow. Catching a page, you could feel its heat and, just for a moment, read a
fragment of text in the strange kind of black-and-grey negative until the page melted to dust in
your hand.”
Report from the American Schools of Oriental Research, April 2003:
“The looting of the Baghdad Museum is the most severe blow to cultural heritage in modern
history, comparable to the sack of Constantinople, the burning of the library at Alexandria,
the Vandal and Mongol invasions and the ravages of the Spanish conquistadores.”
From “Museum Madness in Baghdad”, an article in Middle Eastern Quarterly by
Alexander Joffe, Spring 2004:
“In April 2003, in the mayhem that followed the collapse of Saddam Hussein’s regime,
looters entered the Iraq National Museum in Baghdad. They stole and destroyed artefacts and
caused damage to the museum. Western archaeologists created their own narrative of these
events and promoted it in the world media. They claimed the US authorities had deliberately
failed to stop the looting and were possibly complicit in it. There is only one problem with
this saga of culpability and guilt – it bears no relation to reality. The looting of the museum
was far less devastating than was originally claimed.”
Comments on the bombing of the Golden Mosque at Samarra by Abdulaziz Sachedina,
a religious studies professor and expert on Shiite Islam:
“This was a shrine where I once sat with my teachers to study law and theology. Even when
the Golden Mosque with its blue dome was a Sunni mosque, the Shia and Sunni communities
came together there to worship and to pay their respects to the Prophet Muhammad. It is
heartbreaking and deeply disturbing to see Muslims engaged in destroying this monument that
celebrates the spiritual heritage of Islam.”
What Do You Think?
• If you were the government official responsible for the defence of a town or city threatened with attack, would you give top priority to the protection of cultural monuments and art treasures?
• If you were a military commander in wartime, would you change your plans, and risk heavier casualties among your troops, in order to avoid damaging historic buildings?
• Can it ever be justified to order the destruction (or to allow the ruin by neglect) of the cultural monuments of a different culture or religion?
KEY QUESTION SIX: What is more important: maintaining a
healthy national economy or ensuring that everyone is entitled
to the basic necessities of life?
Robert Stradling
Human rights are international norms or standards which help to protect all people
everywhere from political, legal, social and economic abuse or mistreatment and from
discrimination because of their gender, age, ethnicity, nationality, religion or cultural
background.
We are probably most familiar with those international norms which are commonly referred
to as civil and political rights: the right to a fair trial when charged with a crime, the right to
freedom of speech, the right to associate with others to pursue our own political and civil
interests, the right to practise our religion or to be an atheist, the right not to be enslaved or
tortured and the right to vote and participate in political activity.
Governments are expected to uphold the civil rights of every person residing in their
countries, regardless of whether they were born there and whether they are citizens,
migrants, refugees from other countries or even tourists. Governments can usually expect international
condemnation and even economic and political sanctions (such as a refusal to buy products
from a country or lend money for economic development) when they regularly and
systematically violate people’s civil and political rights. They may find that their actions are
criticised in a report by the United Nations Human Rights Committee or they may even find
that they have to defend their actions in the International Criminal Court or the European
Court of Human Rights. In cases of extreme violations of human rights, the Security Council
of the United Nations may also authorise military intervention to protect people from the
actions of a particular government.
But what happens when a government fails to ensure that all of its people have adequate
housing or medical care? In 1948, the Universal Declaration of Human Rights was adopted
by the General Assembly of the United Nations, with no opposing votes and only eight
member states abstaining. This Declaration makes it clear that everyone is entitled to the
same rights and that individuals as well as states must take responsibility for the rights of
others. Included in the list of rights specified in the Universal Declaration were social,
economic and cultural rights. These included equal rights for men and women, access to
employment opportunities, fair pay for work, safe and healthy working conditions, the right to
rest and leisure, the right to form and join trade unions, the right to strike, the right to an
adequate standard of living, the right to adequate housing, the right to health care and
education.
Declarations have no power in law. The next step, therefore, was to draw up a treaty or
International Covenant – the United Nations International Covenant on Economic, Social and
Cultural Rights. It took nearly 20 years before the Covenant was adopted (1966) and a further
10 years (1976) before enough states had ratified (formally agreed to) it for it to become
international law binding on those which had signed it; 133 member states eventually ratified it.
To what extent are these economic, social and cultural rights universally recognised 60 years
after the Declaration of Human Rights was adopted by the United Nations? During the
second half of the 20th Century, most liberal democracies introduced some kind of welfare
state where all people receive benefits and entitlements to ensure that they have an adequate
standard of living, including at least a basic minimum of social security and health care. In
some cases, this was funded through taxation and, in other instances, it was funded by people
making payments to social insurance schemes. Most communist states also made universal
provisions for medical care, pensions for the elderly, public housing and employment, though
some of the economic and social rights, such as the right to strike, were not necessarily
guaranteed.
However, if we look around the world today, much still needs to be done before everyone
can be said to have the basic necessities of life: sufficient food, good health,
adequate shelter and a job that provides enough income to take care of themselves and their
families. A billion adults in the world cannot read or write. Over 500 million children do not
receive any primary education. More than 1.5 billion people do not have access to proper
sanitation or water that is safe to drink. Every day, over 35,000 children die because they are
starving or have diseases which could be prevented by immunisation and cleaner water and
sanitation. For many of these adults and children, the economic and social rights listed in the
UN International Covenant must seem like unattainable goals rather than universal
entitlements.
Perhaps it is not surprising then that the whole idea of economic and social rights has been
controversial ever since the end of the Second World War. The Canadian writer, Michael
Ignatieff, has suggested that, as growing mutual mistrust between the Soviet Union and its
wartime allies in the west developed into the Cold War, so two distinct perspectives on human
rights emerged – one socialist and the other capitalist. The former emphasised the importance
of economic and social rights while the latter, particularly the United States, tended to give
more emphasis to civil and political rights.
Indeed, the main reason why two separate International Covenants were produced - one on
civil and political rights and the other on economic, social and cultural rights - was that this
enabled governments, particularly those allied with the United States in the Cold War,
to ratify the Covenant on Civil and Political Rights without also having to accept the
Covenant on Economic, Social and Cultural Rights. For a similar reason, economic and
social rights were not included in the European Convention on Human Rights (1950) but were
covered in a separate document, the European Social Charter.
In reality, however, the situation at the time was more complex than this simple
ideological divide would suggest. In the first post-war elections, voters in several
West European countries elected governments which were committed to the idea of the
welfare state. There was a recognition that steps had to be taken to ensure that the social and
political unrest which had emerged from the Great Depression of the late 1920s and early
1930s, and which led to mass support for fascism in Europe, would not happen again.
It is also true that the Soviet Union and governments in the rest of the communist bloc gave
more priority to economic and social rights than to freedom of speech and some other civil
rights. However, the Soviet Union had actually been one of the eight member states which
abstained from voting on the Universal Declaration of Human Rights and it was developing
countries such as Chile, Cuba, Panama and the Philippines who took the lead in drafting the
key documents which informed the International Covenant on Economic, Social and
Cultural Rights.
Ever since the Universal Declaration and the European Convention on Human Rights were
first published, there has been a lively debate about the status of economic and social rights in
comparison with civil and political rights. This debate has focused, and continues to focus, on
a number of questions and issues:
• Are civil and political rights more important than economic and social rights?
• Are economic and social rights really statements about what people need in the ideal world rather than norms or standards which have to be guaranteed for all regardless of whether or not a government has the resources to do so?
• If a government was democratically elected to reduce spending on public services such as education, public housing and medical care, would it not be undemocratic if the law courts took action against that government for violating people’s economic and social rights?
We do not have the space here to examine each of these issues in real depth. Instead, we will
briefly outline the different positions and leave you to decide for yourselves which position, if
any, you agree with.
Are civil and political rights more important than economic and social rights?
It is often claimed that the most important rights are those associated with personal freedom,
and that of these liberties the most important are freedom from torture and slavery and the
right to a fair trial. Certainly these are the civil rights which have no conditions attached to
them, whereas the international human rights conventions usually spell out certain
conditions where an individual might not be able to exercise a particular civil or political
right. For example, in most countries, children and young people under the age of 18 do not
have the vote. We may have freedom of speech but this does not mean that we can slander or
libel someone without facing legal consequences or that we can pass on information which
may endanger national security. But it is widely believed that no-one can truly live in dignity
and live the life of a free and independent person if they are tortured or treated like a slave or
even live under the threat of torture or slavery.
Some of the people who argue that certain freedoms or civil and political rights are the most
important ones tend also to describe economic and social rights as “second class rights”. By
that, they mean rights that governments can only guarantee when they have sufficient
resources to do so. They also mean that such rights should have a lower priority for every
government than the civil and political rights.
However, others argue that economic and social rights are just as important as civil and
political rights. In their view, we cannot live dignified lives, we cannot flourish and we
cannot function as free and independent persons if we are starving, homeless, living in
extreme poverty and unable to read and write. What is more, they argue, if we are living in
these appalling conditions and we lack the basic necessities of life, then we will probably not
be able to exercise our civil and political rights either.
Are economic and social rights really statements about what people need in the
ideal world rather than universal rights?
Of course, at one level, the answer to this question is quite straightforward. Most of the
countries in the world have endorsed the Universal Declaration of Human Rights and
ratified the International Covenant on Economic, Social and Cultural Rights, and this
Covenant has had the force of international law since 1976. So, simply put, these are rights
because international law says they are rights.
But some critics continue to argue that economic and social rights are not rights in the same
sense as civil and political rights and, indeed, even in some of the most economically
developed countries in the world, certain economic and social rights are not being enforced or
guaranteed as rigorously as civil and political rights. These critics tend to fall into two
distinct camps. The more extreme position is that rights are norms which are universal and
apply to everyone at all times. But, so they argue, economic and social rights are not
universal in this sense. The right to form or join a trade union only applies to those workers
in industries where it is possible to organise the workforce into a union. The right to medical
care only applies to people who are sick. The right to public housing only applies to people
who are unable to provide accommodation for themselves and their families. And so on.
Those who take this view sometimes go on to argue that it is unjust to expect people who
provide for themselves and their families to contribute money to meet the needs of others. A
typical example of this position is to challenge why people who do not have children should
contribute through taxation to the education of other people’s children.
A common response to those who argue that economic and social rights are not rights in the
normal sense of the word is to argue that the word “universal” is being used in a misleading
way here. They suggest that a right is universal, not because we all exercise it, but because
we all could exercise it if we needed to. Many people will go through life without being
arrested by the police and charged with a crime. So they will not need to claim the right to a
fair trial. Similarly, most people are likely to go through life without encountering a situation
where they might be tortured or enslaved. What makes these rights universal is that we could
all claim them as a right if we needed to. The emphasis here is on the word “if”.
Those who argue that economic and social rights are rights in the same sense as civil and
political rights also tend to point out that rights are protections against the arbitrary decisions
of those who are in power or authority over us. Over the last 200 years, people have struggled
to establish universal civil and political rights precisely because their rulers often exercised
power in arbitrary ways. People were punished for saying things that the rulers did not like,
or because they tried to organise themselves to oppose the rulers, or they were just thrown
into prison and left there without ever receiving a proper and fair trial.
They usually go on to argue that the same situation also applies to economic and social rights.
These exist as universal rights so that public officials cannot arbitrarily decide who can and
cannot get medical treatment or education or emergency housing. That is why international
and national laws on economic and social rights usually include measures to prevent
discrimination against minorities and other groups in society when they try to access public
services.
A more moderate critique of economic and social rights as “rights” takes the line that they are
goals or ideals which every government should strive to achieve but will only be able to do so
gradually as their country becomes more industrialised and wealthy and acquires the financial
and economic resources to make it possible to achieve those goals. To support their position,
these critics usually point out that the International Covenant on Economic, Social and
Cultural Rights specifically states that governments should “take steps … to the maximum of
its available resources, with a view to achieving progressively the full realization of [these]
rights”.
These critics argue that this statement demonstrates a clear difference in the status of
economic and social rights compared with civil and political rights. All countries are
expected to guarantee the latter rights for all if they have signed the Convention but are only
expected to introduce the economic and social rights gradually as and when their economies
develop sufficiently to be able to afford to do so. On this basis, these critics assume that the
government of a developing economy will not be able to ensure that all of these economic and
social rights are guaranteed and even that the government of a developed economy might
have similar problems during an economic crisis. Indeed, in the economic recession of the
1980s, some Western European states introduced restrictions on the rights of trade unions.
Those who disagree with this position usually argue that the phrase “achieving progressively
the full realization of [these] rights” refers only to certain economic and social rights, not to
all of them, and that all governments, regardless of their level of economic development, are
expected to take steps to end discrimination and arbitrary decisions by public officials
regarding access to public services.
They also argue that the phrase “to the maximum of its available resources” does not mean
that governments can use this as an excuse for not taking action until they are developed
economies or until economic conditions are favourable. What it means, they suggest, is that
governments are obliged to ensure at least a minimum level of social and economic support
for those who are starving, homeless and suffering from serious diseases that could be treated.
Furthermore, they argue that, where a government is unable to respond effectively to a major
disaster, such as widespread famine, then other countries have an obligation under the
International Covenant to provide aid and technical assistance.
Are economic and social rights undemocratic?
If the state violates someone’s civil or political rights that person can take their case to court
and ask a judge to rule on it. If the judge decides that that person’s rights have been violated,
then the judge can demand that the government takes action to rectify the situation. But, as
some of the critics argue, political parties usually disagree with each other about how much of
the national budget should be spent on public services such as education, health, social
security, public housing, etc. If judges ruled that not enough was being spent on education or
housing and required a democratically-elected government to spend more on a particular
social service, then this would be undemocratic. The critics of economic and social rights
usually see this as yet one more reason why they are “second-class rights” compared with
civil and political rights.
The opponents of this view point out that, in most liberal democracies, it is already
possible for people to appeal to the courts for a ruling if they believe that the decision of a
public official has been arbitrary or unfair. So, for example, if public officials were
discriminating against people from another ethnic group or religion so that they could not get
adequate medical care or schooling for their children then the judicial system could take
action against them.
Some also argue that there are other situations where the judges have the right to intervene
and demand that governments take action. These are situations where an individual or group
of people lack the bare minimum of basic necessities for life, such as shelter, food and
medical care. Then, it is argued, they should be able to appeal to the courts and a judge could
rule that the government has an obligation under national and international law to help these
people.
This remains a controversial issue. For example, suppose a hurricane destroys the homes of a
community of people. These people then move on to some land and erect tents as emergency
shelter. The landowner then asks the police to remove these people from his land because he
wants to build on it. The people then ask the government for help but the government says it
does not have the resources to help them (and is worried that this would set an expensive
precedent in a country prone to hurricanes). The community are left to try and get help from
charities and international aid agencies. A lawyer offers to help the group and takes their case
to the high court. The judges rule that, under the International Covenant, the government
has a duty to provide emergency housing for the whole community until they are able to
support themselves or return to their re-built homes.
Would this be undemocratic? What do you think?
CASE STUDY 11: Did the end of communism leave the elderly
and vulnerable in a worse position?
Robert Stradling
Timeline
19-22 August 1991: An attempted coup took place in the Soviet Union while President
Gorbachev was on holiday in the Crimea. The coup leaders sent tanks into the centre of
Moscow to threaten the White House, the Russian parliament building. Thousands of Russians went to
the Parliament building to defend it. Boris Yeltsin, President of the Russian Soviet Federated
Socialist Republic, declared the coup to be a criminal act. The leaders were then arrested.
24 August 1991: Gorbachev returned to Moscow and resigned as General Secretary of the
Communist Party but retained his office of President of the Soviet Union.
August 1991: The World Bank approved a special $50 million trust fund to provide technical
assistance to the Soviet Union to support economic reforms.
25 December 1991: Gorbachev resigned as President. The Soviet Union ceased to exist.
July 1992: The Russian Federation joined the World Bank and the International Monetary
Fund (IMF) to secure more financial and technical assistance.
1991-1995: The Russian Federation under Yeltsin attracted large international loans and
foreign investments, and a small number of “oligarchs” bought up state enterprises at very
low prices and became billionaires. The circumstances for many ordinary people got worse
rather than better. By 1995, Russian Gross Domestic Product had fallen by 40%.
Unemployment and inflation increased. Many people saw their savings disappear, and the low
paid and people on fixed incomes, such as pensioners, were particularly badly affected.
1997-1998: A financial crisis in Asia led to a serious drop in the price of oil and other raw
materials. This affected Russia badly since oil, natural gas and metal ores accounted for 80%
of its exports. The sudden drop in foreign earnings from these exports created a financial crisis
in Russia as well. Inflation rose to 84%, banks closed, and welfare costs increased.
13 July 1998: The IMF and World Bank approved a support package to Russia of $22.6
billion to support economic reforms. One of the required reforms was to change the old Soviet
system of social benefits to ensure that welfare assistance was targeted on those most in need.
1999-2000: The Russian economy began to improve partly as a result of international
financial support but also because world oil prices were increasing.
2001-2004: Some reforms in social benefits were introduced and planning began for a more
comprehensive reform programme in 2004; but there was nervousness within the government
about the possibility of popular anger if these reforms were introduced after what had already
happened to people’s savings, pensions and job security in the 1990s.
29 July 2004: Several thousand pensioners, war veterans, disabled people and victims of
Chernobyl gathered in Moscow to demonstrate against the Kremlin’s plans to replace social
benefits and privileges with cash payments.
3-5 August 2004: The State Duma of the Russian Federation approved Federal Bill 122 by a
vote of 304 to 120. The Communist Party, Rodina (Motherland Party) and most of the
independent deputies voted against it. This new law replaced many of the social benefits and
privileges inherited from the Soviet era with cash payments. The social groups who most relied
on these benefits and privileges were pensioners, the disabled, war veterans, victims of
Chernobyl and many public officials, including military, police and customs officials.
22 August 2004: President Putin signed Federal Law Number 122-FZ, also known as the
Law on Monetarisation.
1 January 2005: Federal Law 122 came into operation. On the same day, large increases in
transport fares and in the maintenance costs for housing also came into effect. Most people
did not feel the effects of the changes in the first week of January because of the extended
New Year holiday in Russia.
9 January: Thousands of pensioners marched in protest against the reforms in St Petersburg.
Their protest was timed to coincide with the 100th anniversary of the massacre of workers in
St Petersburg and the outbreak of the 1905 Revolution.
10 January: When Russians returned to work after the New Year holiday, the protests began
to spread. Several hundred protestors stopped traffic on the highway connecting Moscow
with its international airport. In Kaliningrad, policemen refused to pay for bus travel. The
next day, pensioners in Tolyatti, the home of the Lada factory, tried to break into the mayor’s
office in protest against the loss of social benefits. Other pensioners’ protests took place
across Russia, including blocking highways.
18-19 January: The Russian Government allocated 105 billion roubles to be spent on
improved pensions and transport subsidies for pensioners, police and the military (who all had
free public transport before the reforms). The money came from revenues from higher oil
prices. In Moscow, the Federal Finance Minister Alexei Kudrin claimed the protests had been
organised by the Communist Party and the National Bolshevik Party. The National Bolshevik
Party leader, Eduard Limonov, told Ekho Moskvy, the independent radio station, that this
was news to him - but he would be delighted if it were true.
20-21 January: Television coverage of the protests led to even more demonstrations in many
other Russian cities, with the obstruction of main highways and city centre streets to paralyse
traffic. Public transport workers complained about their treatment by police and military
personnel. In Tula, there were 40 assaults on bus and tram conductors in just three days.
February 2005: Some of the regional authorities announced that they would temporarily
continue with some of the social benefits and privileges to reduce social tensions. On 1
February, most consumers found that their bills for public utilities such as electricity, heating
and water had also greatly increased.
March – June 2005: Protests continued, often supported by opposition political parties, and
were particularly widespread in Moscow City, Moscow Region, Volgograd Region, Nizhny
Novgorod Region and Bashkiria.
2005-2006: In practice, the regional authorities acted very cautiously. By the end of 2006, a
mixed system, with both cash payments and some in-kind benefits and privileges, still existed
in some regions of the Russian Federation.
What was in dispute
In the second half of the 20th Century, most liberal democracies introduced some kind of
welfare state which guaranteed at least a basic minimum of social security and health care.
Socialist states tended to make universal provisions for medical care, pensions for the elderly,
public housing and employment; but some other social rights, such as the right to strike, were
not always guaranteed. Under the Soviet system, housing and public utilities like water,
electricity and gas, were provided at low, subsidised prices for all, and a network of social
benefits and privileges (or lgoty) was allocated to certain categories of citizens. These
benefits were usually in-kind rather than financial payments - free or subsidised use of public
transport, medication, dental care, sanatoriums, solid fuel, telephones and so on.
During the Soviet era, there were three main categories of beneficiary who were entitled to
these benefits:
• The “deserving disadvantaged” – those who, through no fault of their own, needed assistance (e.g. the disabled, pensioners, people on very low incomes);
• Those who had given special service to their country (e.g. war veterans, those who went into the nuclear plant at Chernobyl to clear up after the disaster);
• Those working for the state, for whom the benefits were a hidden supplement to their salaries (e.g. police, military, administrators, etc).
The transition from a socialist state to a liberal democracy and from a centralised command
economy to a market economy was painful and prolonged. The “oligarchs” did well out of
the process by buying up state enterprises, particularly public utilities, at very low prices.
Many people suffered badly during the transition. Inflation ate up their savings, there was no
longer any job security and there were food queues again. Pensions bought less than under the
Soviet regime, and many factories paid their employees in goods instead of money. The old
certainties of the communist era had been replaced by a highly uncertain situation.
Amongst some groups in society, this led to a kind of nostalgia for some, though not all, aspects
of the old Soviet system. The historian, Tony Judt, quoted one elderly Russian couple who
said to their interviewer:
“What we want is for our life to be as easy as it was in the Soviet Union, with the
guarantee of a good, stable future and low prices and, at the same time, this freedom
that did not exist before”.
During the 1990s, the Russian economy struggled. Production fell, and more goods had to be
imported, paid for in foreign currency and at higher prices because of inflation. To obtain
foreign currency, Russia needed to export oil, natural gas and other raw materials. There was
little demand abroad for manufactured goods such as Lada cars.
The financial crisis of the late 1990s made things worse. The World Bank and the IMF put
together a new financial package to support the Russian Federation but, in return for that
support, they expected economic reforms. Among these reforms was a call to phase out social
benefits and privileges and replace them with cash payments.
Those in the Russian Government, the World Bank and the IMF who supported the reforms
believed that they would ensure that social assistance was targeted more closely on those who
needed it; that targeted support would be less expensive than the existing system; and that the
whole system would be more transparent and accountable.
The reformers identified the following criticisms:
• The benefits were provided regardless of need. For example, the better off the household,
the more it received in housing benefits. The independent Institute for Social Policy in
Moscow estimated that the richest segment of the population received 20% of total
available benefits while the poorest 10% of the population received only 4%.
• It was not known how many people actually made use of their privileges and benefits. For
example, there was no means of checking how many pensioners actually used public
transport or how many journeys they made each day. Transport services and companies
had to estimate the financial cost of providing free travel for those entitled to it, without
knowing the actual cost, and the city and regional authorities then had to decide whether
or not these estimates seemed reasonable and realistic.
• In view of the state of the Russian economy after the democratic and economic transition,
the World Bank and IMF did not believe that the government could afford to keep social
benefits at the level set in the early 1990s and also carry out the other changes needed to
stimulate economic growth.
Nevertheless, the reforms also had their opponents. Even within the government, there were
some who thought that the reforms could lead to the kind of popular protest that had led to the
Orange Revolution in neighbouring Ukraine. Others thought that the cost of replacing in-kind
benefits with cash payments would be too high. They also had doubts about the capacity of
the administration at federal, regional and local level to distribute the payments efficiently.
Opponents believed that, with inflation running at 20%, the cash payments would steadily
decline in real value. Some also noticed that senior government officials would keep their
subsidised housing, state-rented dachas, free travel, free medical care and pension schemes.
In practice, the changes were not as wide-ranging as had been anticipated. The protests led
some regional and local authorities to reinstate the social benefits and privileges, at least on a
temporary basis. Even where reforms were introduced, most regional and local authorities
opted for a mixed system with some benefits and privileges retained and some cash payments.
At the same time, government spokespersons claimed that most pensioners and other
beneficiaries of privileges were now better off under the new system. The protesters, they
claimed, were being misled by agitators and opposition political parties, particularly the
communists and the left-wing nationalists, Rodina.
Opinion polls taken in 2006 indicated that around 60% of pensioners believed that the role of
the state in Russia was to look after its citizens; about a third of Russians in the polls wanted
to return to state planning while only 10% thought that the introduction of free enterprise
should be a government priority.
A variety of viewpoints
In the first months of 2005, the news media in Russia, particularly the independent ones, were
full of interviews with people who had been beneficiaries of social benefits and now believed
that their standard of living would fall as a result of Federal Law 122-FZ (2004):
Nina Simyonova, a pensioner addressing a demonstration outside Moscow on 15
January 2005:
“Many of us have children and grandchildren in the city, but now we can’t afford to travel
there. What are we supposed to do, sit at home? Who is responsible? The leadership, starting
with Putin.”
A pensioner at another demonstration explained:
“Now, with my benefits, I can take the trolley bus for free to go shopping. I can go to one
market, then another, even a third one, and hunt for the cheapest food. If our benefits go, I’ll
have to pay for my tickets. That would cost 40 roubles a day.”
Rafail Islamgazin, a retired army colonel, told Komsomolskaya Pravda:
“I received [the equivalent of] some 7,500 rubles worth of benefits, but my monetary
compensation is 930 rubles while my utility bills have increased by 150%. The state must
really hate its defenders to taunt them like this.”
A Krasnodar Krai police captain told Novye Izvestiya:
“Free travel was not just the one last remaining benefit, but a work necessity. Ordinary
policemen are not assigned cars and you have to wait around a long time for a duty car. So
that means you now have to travel several times a day at your own cost. The last time they
raised our salaries was two years ago.”
The official response sought to blame the protests on a minority of agitators. President
Putin thought that not enough had been done to prepare everyone for the reforms. Opposition
deputies tended to see the protests as the start of something bigger.
Finance Minister Aleksei Kudrin, quoted by Reuters, described the protesters as:
[only representing] “1% of all recipients of benefits…We are on a very dangerous line. In
wanting to be heard, they are disrupting transport, blockading roads, dealing an economic
blow to regions and harming those who cannot be reached by ambulances”.
Vladimir Ryzhkov, independent member of the State Duma said:
“The initial protests were spontaneous. But now we shall see more organised mass
demonstrations across the whole country. People – elderly people – have been driven to
desperation by this reform.”
Kommersant reported that President Putin said:
“The government and the regions have not completely carried out their task that we spoke of
– which was to not make the situation of those who depend on state assistance any worse.”
Acting Governor of the Moscow Oblast, Aleksei Panteleev, suggested that the protests
were organised by provocateurs who were not pensioners:
“Our law enforcement organs have videotapes of all those people younger than pension age
who are travelling back and forth from city to city, inciting the population to close streets and
engage in other violations of the law. They have been detained in accordance with the law”.
Yegor Gaidar, former Russian Prime Minister, was quoted in the Financial Times, 17
January 2005:
“The reform was not properly thought through…the government did not have a consolidated
position, the calculations were sloppy and the cabinet allowed itself to be dragged into a
lengthy open debate which showed its weakness.”
Sergei Glaziev, nationalist politician, announcing his intention to propose a referendum
on the issue said:
“I think President Putin will lose at least half of the electorate over this. People are
astonished. I have received thousands of letters. We see a lot of frustration.”
Meanwhile economic and social policy analysts also expressed their views.
Sergei Smirnov of the Institute for Social Policy at the Higher Economic School said:
“Federal authorities did not discuss the reform plan with the regions, did not precisely define
who was responsible for what, and did not explain the details of the plan to receiving
benefits.”
Economic analyst Stanislav Belkovsky suggested that opposition to the reforms was as
much emotional as economic:
“For this nation, the role of the state as a father and mother is of paramount importance…It’s
much more important than the money.”
Lilia Ovcharova, health care analyst at Moscow’s independent Institute for Social Policy,
said:
“Russia’s system of privileges was never designed to support the poor…the poorest 10% of
the population receive 4% of existing benefits, while the richest 10% receive 20%.”
In 2007, Anastassia Alexandrova of the Institute for Urban Economics in Moscow and
Raymond Struyk of Chemonics International in Washington observed that:
“The reform of in-kind privileges in Russia can be assessed as making very limited progress,
compared to what could have been achieved… [I]mproved accounting, modest results in
transition towards cash benefits and zero progress in introduction of targeting [on those most
in need] do not appear worth the implementation difficulties and political price.”
What Do You Think?
Some people think that civil and political rights are universal and must be guaranteed for
everyone, but economic and social rights (such as the right to join a trade union or the right to
take strike action or equal rights for men and women in the workplace) are less important and
we should not expect such rights to be guaranteed by law until the country’s economy is
sufficiently developed to provide the necessary resources. What do you think?
Here is a view that some observers and economic advisers were expressing at the time of the
political and economic transition after communism in the 1990s:
“When major political and economic changes have to be made, there will always be
‘winners and losers’. The important thing is to introduce the changes as quickly and
as effectively as possible so that the country can rapidly develop its economy and the
government will soon have the resources to help those who were most badly hit by the
changes”.
What do you think?
CASE STUDY 12: Has government intervention been effective
in promoting the principle of Equal Pay for Equal Work?
Christopher Rowe
Timeline of the issue
27 March 1957: Article 141 of the Treaty of Rome (the EC Treaty) enshrined the principle
that men and women should receive equal pay for equal work.
April 1970: The Labour Government in Britain passed the Equal Pay Act. It came into force
in 1975.
10 February 1975: EEC Directive 75/117 harmonised the laws of member states relating to
the application of the principle of equal pay for men and women.
9 February 1976: EEC Directive 76/207 implemented the principle of equal treatment for
men and women as regards access to employment, vocational training, promotion and
working conditions.
20 December 1996: EEC Directive 96/97 provided for equal treatment of self-employed men
and women; and protected rights of self-employed women during pregnancy and motherhood.
2 October 1997: The Treaty of Amsterdam, amending and updating the 1957 EC Treaty, set
the objective of integrating equality of women and men into all the activities of the European
Community. This integration (known as “gender mainstreaming”), was a “fundamental
Community principle”. Article 141 authorised positive discrimination in favour of women.
December 2000: The European Action Programme on Equal Opportunities (2001-2005) was
issued to implement the Community’s Framework Strategy for Gender Equality. For 2001,
the priority theme was equal pay for women and men. Later priorities included the balance
between work and family life, women in decision-making roles, and promoting change in
gender roles and overcoming sexist stereotypes.
October 2004: A joint media release by the Conference of European Public Service and
Education Unions, held in Geneva, stated that: “Despite all the equal pay legislation, equal
pay for work of equal value by women and men has yet to become a reality in Europe.
Women continue to be concentrated in public services, often trapped in low-paid, undervalued
jobs. As public service leaders in Europe, we are calling for investment where it really
matters. What we need is not blind support for privatisation but proper rewards for
undervalued public service jobs.”
February 2006: The European Parliament agreed in principle to the establishment of a
European Institute for Gender Equality.
December 2006: An interim report by the British Equal Opportunities Commission stated
that: “Our current investigation is about Pakistani, Bangladeshi and Black Caribbean women
in Britain. Our research highlights the importance of providing equal opportunities for ethnic
minority women at work. It demonstrates a mismatch between the aspirations of young
women and their ability to find work that matches their skills and abilities.”
January 2007: the Guardian newspaper reported on a legal case of 2006 as follows: “Jessica
Starmer won her sex discrimination case against British Airways last year after she was
refused permission to cut her hours in order to look after her two-year-old daughter. At the
time, she was one of only 152 women pilots out of 2,932 BA pilots.”
What was in dispute here?
Equal pay for equal work is part of a larger debate about equality for women and girls. During
the 20th Century, there was a shift in attitudes – not only about gender issues but also about
equality in other fields – racial equality, the rights of the disabled, or fixing a minimum wage
to protect low-paid workers. Especially since 1970, there has been general support for
government intervention to promote equality.
There has been a great deal of legislation, both by the European Community and by individual
states, to create the conditions for true equality in the workplace. But the principle of equal
pay for equal work has proved to be extremely difficult to achieve in practice. In 1957, the EC
Treaty enshrined the principle of equal pay for equal work – in the 50 years since then, there
has been a stream of directives and amendments to make the principle a reality. Many
national governments have passed Equal Pay laws. And yet, despite all the legislation,
average earnings of women across Europe remain persistently lower than those of men.
There are two broad schools of thought about why this should be so: the “Choice Theory” and
the “Discrimination Theory”. According to the Choice Theory, pay equality has been
achieved wherever male and female workers have similar levels of availability, experience
and educational qualifications. This suggests that the disparities in pay between men and
women are the result of different life choices. Men choose more adventurous (and more
highly-rewarded) careers; women choose careers in the caring professions, or organise their
careers to fit in with being a mother, or caring for elderly relatives. This is why many women
choose part-time work; or interrupt their full-time jobs to bring up pre-school children.
The Discrimination Theory claims women have no genuine choice at all because artificial
barriers prevent true equality of opportunity. There is gender segregation at work - in
hospitals, most (highly-paid) doctors are men; most (less well-paid) nurses and cleaners are
women. Jobs predominantly done by women are undervalued. Few firms give the same levels
of pay or the same employment and pension rights to part-time employees as to their full-time
workers. This disproportionately affects women. Structural factors and social attitudes make it
much more difficult for women to gain promotion than for their male colleagues.
During the past 50 years, therefore, it has become clear that the question of equal pay is far
more straightforward than the complex question of equal work. If the Choice Theory is
correct, how can social and cultural attitudes be changed so that women make (and are able to
make) different choices? If the Discrimination Theory is correct, how can modern societies
remove the various barriers that block the path to true equality of opportunity?
This involves many factors, such as what is taught in schools, what the social and religious
influences are within families, what images and stereotypes are projected by the entertainment
industry and mass advertising. There is strong evidence that there has been a significant shift
in attitudes over recent decades. Statistical research supports the view that the biggest single
factor determining women’s pay levels is the time when they were born. For women born
before the Second World War, income averaged about 60% of male earnings; for the
generation born after 1945, it averaged about 70-75%; for those born in the mid-1960s, it was
about 80-85%; for those born after the mid-1980s, it was 90-95%. It can be argued that this
upward trend has been brought about by the changes in the choices made by women as
successive generations reflect new cultural attitudes.
Supporters of the Discrimination Theory are less optimistic. They see a never-ending struggle
against structural factors blocking women from being able to make a free career choice. One
such structural factor is the state of the economy. In times of full employment and shortages
of specific skills, many job opportunities open up to women that were not previously
accessible, as in wartime, when women were widely employed in jobs traditionally thought of
as “men’s work”. Conversely, in times of economic depression, women’s job opportunities
have been disproportionately reduced. This suggests that choice plays little part -
circumstances are all-important. So eliminating the adverse factors holding back
women’s pay and rights requires intervention, both by detailed legislation and by education.
It is argued that intervention has to include both action against negative discrimination and
positive discrimination in favour of women. One key factor is motherhood and childrearing.
For women, this is not really a choice – only women can have babies, not men. To achieve
true equality, it is necessary to protect women against falling behind in their career while
pregnant or staying at home with small children. Such protection includes paid maternity
leave, and measures to ensure that women do not miss out on pension rights or promotion –
such as equal rights for part-time and full-time workers, or the possibility of job-sharing.
Legislation to outlaw discrimination has addressed numerous interrelated issues. One is the
“closed shop” – restrictive practices by a trade union or a professional association to control
entry to certain jobs and reserve them for males only. Another concerns the advertising of
posts and the interviewing of candidates for them. Legal measures have attempted to ensure
that women are not excluded because of the unequal way the job specification is drawn up.
Interviewing panels can no longer ask women applicants such questions as “are you
married?”; or “do you intend to become pregnant in the near future?”.
The concept of positive discrimination is perhaps the key factor in the debates about equality.
Many campaigners for women’s rights argue that it is not enough to eliminate overt
discrimination. They argue that the entrenched negative attitudes in key parts of the economy
and the professions can only be overcome by legislation and regulation; that “equality of
opportunity is not enough”. Against this, some women claim that it actually prevents them
from achieving equality on their own merits, while libertarians argue that the state has no
business interfering in personal matters like attitudes to family life. Others argue that positive
discrimination is expensive and inefficient - the problem should be left to market forces.
This leads to a debate about the controversial issue of “interference” by the state. Is legislation
essential to compel reluctant employers to accept the principles of true equality? Should
employers have the right to pay their employees whatever market forces will allow? Many
employers argue that regulations designed to promote equality are burdensome and expensive,
imposing unfair extra costs and making their enterprises uncompetitive. As a result, there are
reduced employment prospects for everybody, women as well as men.
It is notable how much more far-reaching the interventions of government became over time.
The original assertion of principle by the EC Treaty of 1957 was relatively simple and limited
– “equal pay for equal work”. Over the next 50 years, the aims became much more ambitious
– “equal treatment”, rather than merely pay; “gender mainstreaming”, “changing sexist
stereotypes” and so on. The position of women in employment has changed greatly since the
1950s. It is not fanciful to talk about a “gender revolution”. But there is continuing debate
about the impact of positive discrimination and government regulation.
Is it true that state intervention has successfully accelerated change and brought about a vastly
improved situation for women in employment? Or has such intervention been a costly waste
of resources – both because it has helped to make businesses uncompetitive and because it has
had less impact on actual working practices than market forces would have done if left alone?
A variety of viewpoints about the issues associated with equal pay
Comments about one of his female colleagues made by an experienced secondary school
teacher in the north of England after the Equal Pay Act came into force in 1975:
“She always takes the day off when her children are sick. I don’t. I regularly stay
behind to help with after school activities and I come in every Saturday morning to
run team games. She doesn’t. Why should she be paid the same as me?”
The reply by his female colleague when she was told what he had said:
“That’s because he has a little wife at home looking after his children while he’s
running around the football field.”
“France Tries Again To Give Women Equal Pay”, a special report for The Guardian by
the British journalist Jon Henley in May 2005:
“The French national assembly launched a campaign yesterday to raise pay for
women, who, despite laws dating back to the 1970s, still earn 25% less than men.
‘This gulf is unacceptable morally and economically’ said Nicole Ameline, the state
secretary for equality. She claimed her bill, to achieve wage parity in five years, was
‘modern’ because it relied on cooperation with employers, not coercion. Critics said it
was toothless and did not address real issues such as part-time working. In part, the
lowly place of women in French life is a legacy of the legal code drawn up by
Napoleon, who once said: ‘Women belong to men just as trees belong to gardeners’.”
An article by the American journalist, Rana Foroohar, in the 27 February 2006 issue of
the current affairs magazine Newsweek International:
“Here’s a pop quiz on gender equality. Where are women most likely to reach the
highest rungs of business power? Choice A offers new mothers only 12 weeks
maternity leave, little subsidised childcare, no paid paternity leave - and has a
notoriously hard-driving business culture. Choice B gives new mothers up to three
years paid time off work after having kids. Government agencies protect workers at
the expense of business and favour a kinder, gentler corporate culture. So which place
is better for women who want to make it to the top? If you chose A, the US, you’d be
right. If you chose B, Europe, think again. Women are 45% of high-level decision-makers in the US. In the UK, women hold 33% of the top jobs. In Sweden –
supposedly the very model of gender equality – it’s 29%. In Germany, it is 27%; in
Italy, it’s a pathetic 18%. Europe is killing its women with kindness.”
Comments made in October 2006 by Leena Linnainmaa, President of the European
Women’s Association:
“The situation will only become fairer for women executives when more men take
paternity leave. The fact that women take maternity leave is a great burden on their
careers. We strongly urge men to take paternity leave; and we urge those countries
that have no specific legislation on the right to paternity leave to amend their laws.”
A letter to the Editor of the evangelical American newspaper Today’s Christian Woman,
January 2007:
“The crusty old codgers who run the Wimbledon tennis tournament have been
criticised because they don’t pay the women players as much as the men. It’s
outrageous. And I think they are absolutely right. Don’t get me wrong. I’m all in
favour of equal pay for both genders – as long as it’s for equal work. But it’s not
equal work. The men play best-three-out-of-five sets, and matches often last four
hours. The women play best-two-out-of-three sets, and matches rarely last two hours.
So it’s simple. Less time on the job, less pay.”
Comments by a feminist British journalist in 2006:
“Leave it to market forces? The most efficient thing market forces ever produced was
the slave trade. If we’d left it to market forces, slave ships would still be sailing
merrily along.”
What Do You Think?
• Are there any jobs that in your opinion should be regarded as “men-only” or
“women-only” careers?
• Are women generally less well paid than men because they have chosen jobs for
different motives than simply how much money they might earn?
• What would be your reaction if you applied for a job and discovered that the
interview process would be influenced by positive discrimination in favour of
female candidates?
• Do you think that the issues about equal pay should be decided by governments –
or by business?
CASE STUDY 13: Should women and girls have the same right
to education as men and boys?
Timeline of the issue
1787: The feminist writer Mary Wollstonecraft published Thoughts on the Education of
Daughters. The book’s subtitle was “Unfortunate Situation of Females, Fashionably Educated
and Left Without a Fortune”.
1865: The English feminist Emily Davies, together with Elizabeth Garrett Anderson and
Millicent Fawcett, formed the Kensington Society, campaigning for women’s rights. Emily
Davies was instrumental in the founding of the first all-women’s college, Girton, in 1869.
March 1870: Marianne Hainisch, founder of the Austrian women’s movement, wrote an
article on women’s right to education but found that no newspaper was willing to publish
it. She then held a meeting in Vienna, demanding parallel school classes for girls. In response,
the First Austrian Savings Bank donated 40,000 gulden for the foundation of a girls’ school.
1872: Newnham College was established at Cambridge University – the second all-women’s
college since the beginnings of the university in the 13th Century – but women were not
awarded full degrees by the university until 1947.
1888: Marianne Hainisch initiated the League for Extended Women’s Education, agitating for
women to be permitted to enrol in higher education.
1952: The International Conference on Public Education at Geneva recommended ministers
of education to promote the access of women to education at all levels.
1960: The Convention against Discrimination in Education was established by the
UNESCO General Conference held in Paris.
3 September 1981: CEDAW (Convention on Elimination of All Forms of Discrimination
against Women) came into force in accordance with the 1979 decision by the United Nations
General Assembly.
September 1989: The first case in France of girls being excluded from school for refusing to
remove the hijab. This began a lengthy controversy over the wearing of religious symbols in
French schools – it reached a climax of public debate in France in 2004. Similar conflicts over
headscarves in schools arose in Italy in November 2006; in Smolyan, Bulgaria, in December
2006; and in Germany in January 2007.
1995: The Fourth World Conference on Women, held in Beijing, issued its declaration on
education of women and girls as a key factor in the elimination of poverty.
February 2004: In France, a law prohibiting the wearing of “conspicuous religious symbols”
in schools was passed by 494 votes to 36.
June 2006: The European Ministerial Conference on Equality Between Women and Men was
held in Stockholm. The main theme of the Conference was “Human Rights and Economic
Challenges in Europe – Gender Equality”. The conference placed special emphasis on
achieving greater equality of educational opportunities for women and girls.
What is in dispute here?
There are two chief aspects of the controversy about the rights of women and girls to equality
in education. The first is the question whether women should have access to education; the
second question is whether that education should be identical to the education of men and
boys – or should allow for a variety of approaches based on gender differences. Over and
above this is the issue of religious and cultural traditions – should state education be uniform
for all, or should it reflect religious and cultural preferences? Further, is attendance at school a
matter of choice for children and parents – or should it be compulsory, enforced by the law?
In western societies, the first important battle – for the principle of equal rights to education
for women and girls – has effectively been won. Few people any longer resist the idea of girls
being educated. In the majority of schools, girls and boys are educated together – schools for
boys or girls only have become the exception, not the norm. In recent years, the main focus
has been on education rights for women and girls in the developing world.
The issue of what women and girls should be taught, as opposed to whether they should be
taught at all, has proved more difficult to resolve. Traditional attitudes tended to regard certain parts of
the school curriculum as very gender-specific – cooking and domestic science exclusively for
girls, engineering and craft subjects exclusively for boys, and so on. These attitudes also
influenced vocational training – nursing and other caring professions for girls, industrial
apprenticeships for boys. In recent years, many educational initiatives have been launched to
change perceptions and to overcome “sexist” stereotypes.
The question of legal compulsion to attend school is controversial. Some parents demand the
right to educate children at home. Some parents wish to withdraw their children from certain
lessons at school – sex education, for example, or physical education and team games, or
lessons touching on issues such as “creationism” and science. The question of who decides – whether education is an entitlement or a legally enforceable responsibility – remains intractable.
Perhaps the greatest challenge to the idea of equality for all in education comes from efforts to
promote multiculturalism. In France, for example, all pupils, whatever their cultural, religious
or ethnic background, were provided with the same secular education. This concept came
under strain, especially from demands for religious tolerance. It was argued that “the equality
of uniformity” leads to prejudicial treatment of minorities. The most controversial example of
this concerned the issues of religious symbols being worn at school – Catholic crucifixes,
Sikh turbans and the forms of dress worn at school by Muslim women and girls.
From 1989, the question whether Muslim girls had the right to wear a headscarf (hijab) at
school became a very contentious public issue. Many girls were excluded from their schools.
There were divided opinions among teachers and administrators. It caused particularly intense
debate in France in 2004. Similar concerns arose in other countries in 2006 and 2007. In Italy,
the government proposed banning the veil not only in schools but anywhere in public. In
Britain, a Muslim classroom assistant lost her job because she refused to remove her veil after
pupils had complained it was difficult to understand what she said in class. Two Muslim girls
were excluded from school in the Bulgarian town of Smolyan. In Germany, a Munich court
upheld a ban on Muslim teachers wearing headscarves - leading to protests against nuns
wearing head-covering habits in Catholic schools.
The headscarf controversy forms part of a wider debate about what constitutes equal rights to
education. On the one hand, equal access to education is vital to overcome discrimination and
to remove stereotypical images of females “belonging in the kitchen and the nursery”. On the
other hand, forcing complete uniformity of educational provision may be seen as conflicting
with other essential freedoms.
A variety of viewpoints
The pioneering English feminist Emily Davies, writing in 1896:
“Let it be understood that the choice for women in education is not between a life wholly
given over to study, or a life entirely spent on domestic duty. The aim of these new all-women
colleges will not be to change the occupations of women but to ensure that whatever they do
is done well and makes use of their abilities. Whether as mistresses of households, mothers,
teachers, or workers in the fields of art, science and literature, their work is presently held
back by lack of training.”
Women’s Liberty and Man’s Fear, an article written by Teresa Billington-Greig in 1907:
“Man is afraid of women. He proves it every day. History proves it to him. Man is afraid of
Woman because he has oppressed her. Men fear that the end of domination may come; and
that women’s rebellion may mean not merely the throwing off of the yoke but vengeful
retaliation against men’s tyranny.”
Adolf Hitler, speaking to the National Socialist Women’s Organisation (NS-Frauenschaft) in 1934:
“We do not consider it correct for the woman to interfere in the world of the man. We
consider it natural that these two worlds, the spheres of women and of men, remain distinct.
To the one belongs strength of feeling, the strength of the soul. To the other belongs the
strength of toughness, of decision, of willingness to act. Education must prepare girls and
boys for their separate roles, mutually valuing and respecting each other.”
Comments by Jaime Torres Bodet, Director-General of UNESCO, to the Conference on
Obstacles to Equality of Educational Opportunities for Women, December 1949:
“I cannot over-emphasise the truly fundamental importance of women’s education. The man
who said that a child’s education begins with the education of his mother was not being
funny. More than half the population of the world is female. The constant association of
children with their mothers during their early years gives women a decisive part in the
upbringing of the human race. The campaign to secure educational opportunities for women is
a precondition for all other efforts to achieve a just and lasting peace in the world.”
The “Amman Declaration” of the Middle East/North Africa Summit, 1995:
“Education is empowerment. It is the key to establishing and reinforcing democracy. In a
world where creativity and knowledge play an ever-greater role, the right to education is the
right to participate in the modern world. The priority of all priorities must be the education of
women and girls. There can be no enduring success in basic education until the gender gap is
closed.”
WomenWatch: Information and Resources on Gender Equality and the Empowerment
of Women, 2005:
“Education and training for women and girls is a human right and an essential element for the
full enjoyment of all other social, economic, cultural and political rights. It is not enough just
to enrol girls and women in education and training. Education must challenge existing power
relationships and be the basis for attitudinal and behavioural changes in girls and boys, and
women and men.”
The Hijab and the Republic: Headscarves in France by David Macey, June 2004:
“A modern democracy has, probably for the first time, ruled by law on what certain girls can
wear to school. Very few commentators on France have any doubt as to the real target.”
A report on Freedom of Religion and Religious Symbols in the Public Sphere, produced
for the Bibliothèque du Parlement (Library of Parliament) of Canada, March 2006:
“Different countries apply varying interpretations to the balance between religious freedom
and other freedoms. While some governments attempt to accommodate all forms of religious
expression in a neutral manner, others often apply a more restrictive and formally secular
approach. In particular, France applies its historical policy of “laïcité” in a way that enforces
strict secularism in the public sphere, relegating overt forms of religious expression to the
private sphere. This has important implications for equality in education.”
What Do You Think?
Are there any subjects that, in your opinion, should be studied at school only by either girls or
boys?
Is attending school a right, a privilege, or an obligation? Should young people be compelled
by law to attend school?
What would be your reaction if you applied for a place at university and discovered that
places would be awarded on a quota system to ensure gender equality?
KEY QUESTION SEVEN: Why do human beings seem to find it
so difficult to look after their environment?
Zosia Archibald and Robert Stradling
What is the environment?
When we think about “the environment”, we may be considering a number of different
subjects, although they are interrelated. The word derives from an old French term for the
things and places that “surround” us. In the 19th century, with the systematic study of plants
and animals in their indigenous settings, the term acquired a much more specific and technical
meaning as the habitat or eco-system which is not only inhabited by a living organism but
which also sustains it. In the scientific study of ecology, the environment comprises the sum
total of the biological, chemical and physical factors which sustain particular living
organisms. In practice, of course, most environments, even microscopic ones, will sustain
different kinds of living organism, and most living things are themselves environments for
other organisms, e.g. the parasites that live off other insects and animals. Or, as the satirist
and poet, Jonathan Swift, put it more graphically in the 18th century:
So, naturalists observe, a flea
Hath smaller fleas that on him prey,
And these have smaller fleas to bite ‘em,
And so proceed ad infinitum.
In the 20th century the term was extended to include the built as well as the natural
environment, particularly as concern grew about the impact of the urban environment on the
health and quality of life of the people living in it.
Is concern about the environment a new phenomenon?
Some of the oldest surviving written texts from Eurasia suggest that people have had a very
acute awareness of their environment for thousands of years. We can see from the
mythologies of early societies that, to a greater or lesser extent, people have always
speculated about their relationship to and dependence on the rest of the natural world. From
the moment that humans started to record their thoughts and memories we find anxieties
about their environment. For example, the oldest written story in the world, the Epic of
Gilgamesh, the story of a Sumerian hero, was written more than 4,000 years ago. It tells of the coming
of a great flood which destroys humankind except for one family who survive by building an
ark and from them springs a new race who populate the world once the flood has subsided.
This myth subsequently spread throughout the so-called Fertile Crescent around the great
rivers – the Nile, Tigris and Euphrates - re-emerging, of course, in the Hebrew Bible with the
story of Noah and his family. The histories of these early civilisations abound with stories
about the human disasters resulting from floods, droughts, severe storms, plagues of locusts
and longer-term climatic changes.
At the same time the gradual evolution from hunter-gatherer nomadic tribes to settled
communities planting and harvesting crops and keeping sheep and cattle highlights that, even
in pre-historic times, humans were not just at the mercy of the climate and other natural
phenomena; they were also exploiting and cultivating natural resources and consciously
adapting their environment to meet their needs.
While it is probably the case that the majority of environmental crises until modern times had
natural causes, it is also the case that humans have been depleting natural resources ever since
they started to form settlements. However, until recent times the earth was not heavily
populated so people could always move on to uncultivated, undeveloped environments when
they had depleted local resources and left the land infertile. That option is no longer
available.
Today many of our environmental crises are the results of the actions of human beings. Our
use of land and the over-exploitation of natural resources that cannot be renewed, the waste
we create and what we do with it, our wasteful material lifestyles, the impact of toxins on
local and global eco-systems and the impact of the emission of so-called ‘greenhouse gases’
on the climate, have all had a negative impact on our environment. The consequences are
familiar:
• Melting icecaps and rising sea levels
• Floods, droughts, hurricanes and other climatic extremes
• Disappearing forests
• Expanding deserts
• Polluted air and water
• Depleted fish stocks
• Radiation contamination of areas where nuclear weapons were tested, nuclear waste is stored or there have been explosions and leaks from nuclear power plants
Scientific journals have been running stories about these environmental catastrophes since the
Second World War. But, in the last few decades, the mood has changed. James Lovelock, a
British scientist, developed the “Gaia Theory” in 1972, according to which the planet earth is
a self-regulating, living system. In 2006, he added a new chapter to this theory in a book
called The Revenge of Gaia, in which he argues that:
“Humanity, wholly unprepared by its humanist traditions, faces its greatest trial. The
acceleration of the climate change now under way will sweep away the comfortable
environment to which we are adapted. Change is a normal part of geological history;
the most recent was the Earth’s move from the long period of glaciation to the
present warmish interglacial. What is unusual about the coming crisis is that we are
the cause of it, and nothing so severe has happened since the long hot period at the
start of the Eocene, 55 million years ago, when the change was larger than that
between the ice age and the 19th Century, and lasted for 200,000 years.”
James Lovelock, The Revenge of Gaia – Why the Earth is Fighting Back and How We
Can Still Save Humanity (London, Allen Lane, 2006).
A growing body of evidence of environmental change
During the last half century, environmental scientists have developed more accurate methods
of analysing environmental changes. Rachel Carson’s Silent Spring (1962) was the first
extended study of the long-term effects of chemical pollution. “Environmental impact
assessments” have become a normal part of any process of development, not just in countries
with highly developed industrial strategies, but equally in those where industrialisation has
been less intense. It has become increasingly clear that changes have not been limited to local
situations, such as the pollution of rivers and lakes by mining and manufacturing plants.
Traces of toxic chemicals are present in the bodies of all living things but in comparatively
low concentrations, which do not affect our daily lives.
Much more serious is the changing proportion of those gases in the atmosphere which affect global
temperatures, and thus the overall conditions for life. Scientists worldwide have long been
divided about the evidence for climate change and equally equivocal about the remedies.
Controversy has focused on the “greenhouse gases”, particularly carbon compounds, such as
methane and carbon dioxide. These gases act rather like the glass panels of a greenhouse: they
let sunlight through to warm the earth’s surface, but trap much of the infra-red radiation that
would otherwise escape back into space. The huge increase in carbon dioxide levels, from
280 ppm (parts per million) in 1750 to 375 ppm today – a concentration not seen for at least 420,000 years –
can be attributed in large part to the burning of fossil fuels since the beginnings of the
Industrial Revolution.
Among the less pleasant side effects of rising temperatures are the positive feedback
loops they create. So long as lands covered by ice sheets remain large, the
light that falls on them is reflected back into space, keeping temperatures cold. But glaciers at
the ice caps are melting at an increasing rate, which reduces this effect. Methane held in ice
crystals is also released in the process. Increasing temperatures in the oceans reduce the
amount of algae growing in their waters, which also reduces the capacity of these microorganisms to absorb carbon dioxide down into the ocean depths. Higher temperatures also
reduce the rate at which forests reproduce themselves. Trees absorb carbon dioxide and
methane but release them at night, and when they begin to decompose.
It is becoming difficult to deny the effects of climate change. Critics today are more likely to
question the pace of these changes than to be sceptical that changes are occurring. The only
way in which increasing levels of greenhouse gases can be reduced is by cutting back on the
ways in which humans and animals contribute to this process, as well as by reducing the gases
by physical or chemical means.
What can be done?
From the 1960s onwards many national governments began to develop environmental
protection policies and passed laws designed to reduce pollution. International Aid Agencies
introduced support programmes to assist the victims of environmental catastrophes,
particularly in developing countries in Africa and Asia. International non-governmental
organisations also emerged around this time to raise public awareness and support for
programmes designed to protect endangered species. Then from the 1970s onwards a number
of international organisations, such as the United Nations, the Council of Europe and the
European Union, through various international declarations and conventions, have taken a
lead in trying to persuade governments to adopt policies aimed at promoting environmentally sensitive economic development, controlling pollution and ensuring that future generations
will live in an environment that does not threaten their health, wellbeing and human rights.
In 1972, for example the UN held a conference in Stockholm which led to the Declaration on
the Human Environment. Seven years later the Council of Europe agreed the Convention on
the Conservation of European Wildlife and Natural Habitats (1979). In the 1990s the Council
of Europe ratified additional Conventions aimed at the use of civil and criminal law to prevent
any activities by manufacturing and other corporate bodies that might threaten the natural
environment. Other declarations, conventions and recommendations have been issued over
the last ten years by both the UN and the Council of Europe that have called on governments
to take action to protect each individual’s right to a clean and healthy natural and man-made
environment.14
Human Rights and Responsibilities
Although environmental campaigners have for many years been calling for a special category
of “environmental rights”, this has tended to be regarded as problematic.
Essentially the problem lies with the fact that human rights, by definition, are about the
entitlements and protections that individual people should expect from their governments.
With regard to the environment and environmental protection the human rights agenda over
the last 60 years or so has tended to focus on two broad themes.
The first of these has been how to protect the individual from the consequences of the actions
and policies of governments and multi-national corporations. As such, the emphasis here has
been on the right to adequate health, food, water, air and shelter; the right to a healthy and
working environment; the right of access to information about the environment; the right to be
consulted about environmental decisions and developments, and so on.
The second broad theme has been about what might be described as environmental justice.
This stems from a recognition that the weak, the poor, the indigenous communities and some
cultural and ethnic minorities have often been the ones who have suffered disproportionately
from environmental catastrophes. They are most likely to live in unhealthy urban
environments or rural environments contaminated by toxins and other pollutants. The issue
here has focused mainly on equality of treatment. They have also sometimes suffered from
the environmental policies of governments and international agencies. In Africa, for instance,
governments, with the support of the World Bank and environmentalist organisations, have
set up over 10,000 protected wildlife reserves. One estimate indicates that over 14 million
people have been displaced from these areas, mostly without any form of compensation.
Those who have been allowed to stay have been banned from using their traditional hunting
grounds. Their cultural rights have tended to be ignored, they have rarely been consulted and
the governments and international wildlife agencies have been unwilling to negotiate a
compromise position or even consider the possibility that these indigenous peoples might
recognise that it is not in their interests to allow endangered species, including those which
they hunt, to become extinct.
Essentially, instances such as this highlight that major environmental problems such as
pollution, global warming, depletion of natural resources and endangered species are highly
complex and are not necessarily resolved by coercive measures by governments and
international agencies. We all share in the responsibility for these crises and we all have a
role to play in finding effective ways to address the problems.
14. See, for example, the Stockholm Declaration of the United Nations Conference on the Human
Environment (1972); the Convention on the conservation of European wildlife and natural habitats
(Bern, 1979); the Convention on civil liability for damage resulting from activities dangerous to the
environment (Lugano, 1993); the Convention on the protection of the environment through criminal
law (Strasbourg, 1998); the UN Convention on access to information, public participation in decision-making
and access to justice in environmental matters (Aarhus, 1998); and the ongoing work within the
Council of Europe to draft a European Charter on general principles for protection of the environment
and sustainable development.
As ordinary people and citizens we are not just victims of the environmental harm caused by
governments, manufacturing firms, unscrupulous landowners and large multinational
corporations. The choices we make, the lifestyles we lead or aspire to, our political
expectations and demands and, above all, our attitudes to the natural world have all helped to
create the environmental crises now facing the world. For most of human history people have
seen themselves as separate from and superior to all other animals, and in a fundamental way
we have dissociated ourselves from the natural world. This has meant that we have looked at
the natural world as something that we can
exploit for our own benefit rather than as something for which we should be responsible and
accountable. Unless those attitudes change it seems unlikely that the problems described here
and in the following case studies will ever be effectively resolved.
CASE STUDY 14: The Kyoto Protocol and the debate on the
speed and impact of climate change
Zosia Archibald and Robert Stradling
Background
Serious international attempts to control global output of greenhouse gases have been under
way since the “Earth Summit” of 1992 in Rio de Janeiro where the United Nations
Framework Convention on Climate Change was adopted. While the Convention set out the
basic principles for action on climate change, it did not establish specific targets for cutting
the greenhouse gas emissions which are thought to be partly responsible for global warming.
After five more years of increasingly intense negotiations, representatives from 189 countries
met in the ancient Japanese city of Kyoto to consider amendments to the Convention which
came to be known as the Kyoto Protocol.
Those countries with developed economies that agreed to ratify the protocol were committing
themselves to reducing their emissions of carbon dioxide and other so-called greenhouse
gases15 by a specified amount within a specified period of time. Using 1990 emission levels
as a baseline, the countries ratifying the protocol were expected to cut global emissions by
5.2% by 2008-2012. The targets varied from country to country. The member states of the
European Union were expected to cut their emissions by 8% and Japan by 5%. But large
developing countries, such as India, China and Brazil were not set targets at this stage.
However, the Kyoto Protocol did not come into force as a legally binding treaty until two
conditions were met:
• It had to be ratified by at least 55 countries;
• It had to be ratified by sufficient nations to account for at least 55% of the world’s emissions of greenhouse gases.
The first condition was met by 2002, but after President Bush took office in 2001 the United
States pulled out of the Kyoto Protocol, and Australia also chose not to ratify. The withdrawal
of the USA was a major blow to the United Nations because the USA was responsible for over a third
of the industrialised world’s emissions of greenhouse gases. This meant that Russia’s decision became
crucial if the second condition of the Protocol was to be met. Russia finally ratified the treaty
on 18 November 2004; the Kyoto Protocol came into force 90 days later and is now binding on
all those countries which have ratified it and have been set targets for reducing their emissions.
15. Although 99% of the earth’s atmosphere is made up of nitrogen (78%) and oxygen (21%), and these
are crucial to supporting life, they do not regulate the climate. That role is performed by some of the gases
that make up the remaining 1% of the earth’s atmosphere: the greenhouse gases. These include
carbon dioxide, methane, hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), chlorofluorocarbons
(CFCs), sulphur hexafluoride, nitrous oxide, ozone and water vapour. All these gases absorb heat.
Without them the earth would be about 30°C colder. However, if we produce too much of them then we get
global warming.
At the time of writing, 141 countries and states have ratified the treaty. Many of them are
developing countries that are likely to suffer most from the effects of global warming. They
do not yet have to commit themselves to specific targets but they are required to report their
emission levels and develop programmes for responding to climate change. Some countries,
including France, Sweden and the UK, have already met their specified targets but it is
anticipated that many others will not meet their targets by 2012 given their current rates of
progress.
Because countries vary so greatly in the levels of their greenhouse gas emissions a system has
been introduced where highly polluting countries can buy unused ‘credits’ from those
countries which are allowed to emit more gases than they actually do. Countries can also gain
credits by large-scale tree planting and soil conservation - to absorb carbon – and by helping
developing countries with similar projects. This process is usually described as emissions
trading or carbon offsetting.
When the leaders of the world’s major industrial economies met for their G8 Summit16 in
Gleneagles, Scotland in July 2005, they recognised their responsibility for a proportion of past
emissions, and agreed to work with developing nations to help build appropriate capacity to
counteract the negative effects that many of these countries are likely to experience in the
near future.
When they met again in Heiligendamm in Germany in June 2007, there was a recognition that
much more needed to be done to reduce gas emissions and the proposal on the table was to
cut greenhouse emissions in half by 2050. However, the Americans had come to the G8
Summit with their own proposal which did not include specific targets and timescales. In the
end, a compromise was negotiated which did not include the 50% reduction target but enabled
the EU members of the G8 group to say that there had been a significant shift in the US
position on global warming.
In the next few decades, greenhouse emissions by the countries of South-East Asia are likely
to rise dramatically as the economies of these nations expand. There are some hopes that
China, India, and Brazil will agree to join a new stage of the Convention after 2012, and many
individual states within the USA have expressed a willingness to comply with similar
regulations.
Timeline
1957 The oceanographer David Keeling established the first continuous monitoring of carbon
dioxide levels in the atmosphere and found that levels were increasing each year.
1979 The First World Climate Conference was held. It called on governments to address the
problem of predicting and preventing human-made changes in the climate.
1985 First major international scientific conference on the greenhouse effect. Scientists
reported that gases other than carbon dioxide contribute to global warming.
1987 The warmest year since records began.
16. The G8 members are Canada, France, Germany, Italy, Japan, Russia, the UK and the USA.
1988 An international meeting in Canada of scientists specialising in climate change called
for 20% cuts in carbon dioxide emissions around the world by 2005. The United Nations set
up the Intergovernmental Panel on Climate Change (IPCC) to analyse and report on the
scientific evidence.
1990 The IPCC reported that the earth had warmed by 0.5°C in the 20th Century and warned of
the need to reduce greenhouse emissions. The UN began negotiations for a Convention on
Climate Change.
1992 The “Earth Summit” was held in Rio de Janeiro, where the UN Framework Convention
on Climate Change was adopted by 154 countries.
1995 The hottest year on record. The IPCC reported that global temperatures would rise by
between 1°C and 3.5°C by 2100. The report also stated that global warming is partly human-made.
1997 At an international conference in Kyoto, Japan, the Kyoto Protocol was agreed, requiring
the industrialised nations who signed up to it to reduce greenhouse gas emissions by an
average of 5.2% by 2012.
1998 The hottest year on record.
2000 The IPCC revised its predictions on future greenhouse emissions and warned that the
earth could warm by 6°C during the 21st Century.
2001 President Bush took office and renounced the Kyoto Protocol because he feared that it
would damage the US economy too greatly. Most other industrialised nations decided to go
ahead and ratify the Kyoto Protocol without the USA.
2002 Australia decided not to ratify the protocol but it was ratified by the member states of
the European Union. Russia delayed ratification.
2003 Europe experienced the hottest summer for over 500 years. Scientists reported an
increase in the annual rate of growth in the levels of greenhouse gases in the atmosphere.
2004 Russia ratified the Kyoto Protocol which meant that the conditions for implementing the
Protocol could come into force in 2005.
2005 At the G8 Summit meeting in Scotland, the leading industrialised nations agreed that
they must do more to counter the effects of global warming particularly in developing
countries. They also agreed to start the process of discussing targets for emission reduction after
the current deadline of 2012.
2007 At the G8 Summit meeting in Germany, the leaders discussed the possibility of reducing
greenhouse emissions by 50% by 2050. The USA agreed to work within the UN framework
rather than introduce a parallel programme with other countries such as China, India and
Brazil.
What is at issue here?
As the timeline shows, the debate about the nature and impact of climate change, and the most
appropriate steps to take to address the problem, has been ongoing since the 1980s. Some of
the debate has been very technical but it is also an issue which has mobilised ordinary people
to join environmental campaigning groups and even to take to the streets to protest about the
actions of their own governments and of the international political community. The thousands
of people who have protested at recent G8 Summits illustrate the fact that many people have
deep anxieties about the environment and feel that these are not being adequately addressed at
the international level.
Essentially, this is neither a single nor a simple issue - although the campaigners for and
against change have often presented it as if it was. There are a number of related issues here.
First there is the inter-governmental debate. In the United States, the Clinton administration
took an active role in the discussions in Kyoto and expressed a commitment to reduce US
emissions by 7%. The US team was led by Vice President Al Gore, who has subsequently
been an active critic of the Bush Administration’s policies on climate change and won an
Oscar for his documentary film, An Inconvenient Truth, which addressed the challenge facing
the world. However, even under Clinton the USA seemed to be “dragging its heels” about
ratifying the Kyoto Protocol. When the Republican, George W. Bush, became US President
in 2001, he made it clear that the United States would not ratify the Kyoto Protocol, even
though it was the world’s largest emitter of greenhouse gases. He believed that the changes
necessary to meet the Kyoto targets would damage the US economy and cost millions of
American jobs.
Like the government of Australia, another major industrialised country which did not ratify the
Protocol, the Bush Administration was unwilling to sign a treaty on climate change which exempted two
other major global polluters, China and India, from having to reduce their emissions. President Bush
regarded their exemption as both unfair and fatally flawed. The other industrialised countries,
which had ratified the Protocol and committed themselves to reducing emissions by 2012,
now embarked on a prolonged (five-year) dialogue with the USA and Australia to try to
persuade them to join what one EU adviser described as the “climate change bandwagon”. To
some NGOs and independent observers and journalists, the dialogue seemed to have more to
do with economics and the competitive advantage in global markets of the US, Japan and the
EU rather than with environmental science.
The climate change debate also had a north-south dimension. As we have already noted,
although many developed and developing countries ratified the Kyoto Protocol, it was only
the industrialised nations who were set specific emission reduction targets to be achieved by
2012. Rapidly industrialising nations such as India and China were exempt as were the less
industrialised, developing nations of Asia, Africa and South America. Some of their leaders,
particularly those in countries experiencing some industrial growth, tended to point out that
the developed north expected those countries in the south who were just beginning to
industrialise to show a level of environmental awareness and responsibility which the developed
countries themselves had not shown during their own phase of rapid industrialisation.
The developing nations who were most vulnerable to catastrophe as a result of climate change
– the low-lying countries of South-East Asia, including the Philippines, many Pacific islands,
and Bangladesh; the semi-arid tropical regions of East Africa – were likely to be the least well
prepared for these eventualities. The Alliance of Small Island States – many of whom fear
that they might disappear beneath the waves as sea levels rise as a result of global warming -
organised themselves in the mid-1990s to present a unified position at Kyoto and subsequent
international climate change conferences. They called for a 20% cut in global emissions by
2005 and for them the Kyoto agreement - 5.2% cuts by 2012 – was too little too late. By
contrast, there was not much evidence of cooperation amongst the other developing states and
after Kyoto and the later international climate change conference in Montreal, it was not
unusual to find environmentalists and the mass media in Asian and African countries
complaining about the absence of an agreed Asian or African agenda on climate change.
Since the early 1980s, there have also been a number of ongoing debates within the
international scientific community. Scientists disagree about the rate at which change is
taking place, how best to counteract it and the exact causes of climate change - particularly
the extent to which climate change is man-made. At the same time, the scientific debate has
often been politicised by the politicians, the mass media and some of the scientists
themselves. The English-speaking mass media, for example, particularly in the United States,
has tended to give a great deal of coverage to any environmental scientist or economist or
social scientist who has been sceptical about the evidence of climate change or its likely
impact on people’s lives.
Nevertheless, when President Bush announced in 2001 that the United States was not going to
ratify the Kyoto Protocol because “there was still scientific disagreement over the issue”, the
national science academies of 17 countries – representing the elites within their scientific
communities – published a joint statement in the US journal, Science, which concluded that
“Doubts have been expressed recently about the need to mitigate risks posed by global
climate change. We do not consider such doubts justified… It is at least 90% certain that
temperatures will continue to rise by up to 5.8°C in the 21st century”.
Now that the majority of environmental scientists around the world agree that global warming
is a problem, that temperatures are increasing and that this is partly due to human activities,
the debate has shifted to a new issue: the “tipping point” at which the level of greenhouse gas
emissions would be so high that it would trigger irreversible changes. The
particular concerns here are the melting ice caps, desertification and the destruction of sea life
as a result of a temperature rise in the oceans and seas, especially around the coral reefs. At
present there is a lot of uncertainty about when the “tipping point” might occur but the debate
has influenced the decision of the European Union to press for a 50% cut in greenhouse gas
emissions by 2050. At the same time, some scientists are arguing that the tipping point has
already occurred for some of the small island nations in the South Pacific, such as Kiribati.
Finally, there is also an ongoing debate about what should be done. On the one hand, we
have the radical conservationists, who favour taxes on transport, restrictions on travel (since
air travel accounts for a substantial component of greenhouse gas emissions), an emphasis on
the recycling of waste and a personal commitment to reducing one’s “carbon footprint”. On
the other hand, we have the government planners who have introduced emissions trading and
argued that new technologies, such as zero-emission power stations and hydrogen-powered
vehicles, will be an effective way of combating climate change, without
necessarily having to change our lifestyles dramatically.
A variety of viewpoints
According to the United Nations the Kyoto Protocol is:
“An agreement under which industrialised countries will reduce their collective emissions of
greenhouse gases by 5.2% compared to the year 1990 (but note that, compared to the
emission levels that would be expected by 2010 without the Protocol, this limitation
represents a 29% cut). The goal is to lower overall emissions of six greenhouse gases –
carbon dioxide, methane, nitrous oxide, sulphur hexafluoride, HFCs and PFCs – calculated
as an average over the five-year period of 2008-2012. National limitations range from 8%
reductions for the European Union and some others to 7% for the US, 6% for Japan, 0% for
Russia, and permitted increases of 8% for Australia and 10% for Iceland.” Press Release 10
July 2006.
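The UN explanation above packs two percentages into one sentence: a 5.2% cut relative to 1990 emissions that, it says, amounts to a 29% cut relative to the levels projected for 2010. The arithmetic connecting the two can be checked with a minimal sketch; the 34% business-as-usual growth figure used here is a back-solved illustration, not a number from the press release:

```python
# Index 1990 emissions at 100 so that percentages read directly.
BASELINE_1990 = 100.0
KYOTO_CUT = 0.052        # the Protocol's 5.2% collective cut below 1990
BAU_GROWTH = 0.34        # assumed business-as-usual growth by 2010
                         # (back-solved illustration, not a source figure)

kyoto_level = BASELINE_1990 * (1 - KYOTO_CUT)   # permitted level: 94.8
bau_level = BASELINE_1990 * (1 + BAU_GROWTH)    # projected level without the Protocol

cut_vs_bau = 1 - kyoto_level / bau_level
print(f"A 5.2% cut below 1990 is a {cut_vs_bau:.0%} cut below business-as-usual.")
```

In other words, the 29% figure only makes sense against a projection that emissions would otherwise have grown by roughly a third between 1990 and 2010.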
President Bush in a press release issued on 13 March 2001 said:
“I oppose the Kyoto Protocol because it exempts 80% of the world, including major
population centers such as China and India, from compliance, and would cause serious harm
to the U.S. economy… I support a comprehensive and balanced national energy policy that
takes into account the importance of improving air quality… Any such strategy would
include phasing in reductions over a reasonable period of time, providing regulatory certainty,
and offering market-based incentives to help industry meet the targets. I do not believe,
however, that the government should impose on power plants mandatory emissions reductions
for carbon dioxide, which is not a ‘pollutant’ under the Clean Air Act. Coal generates more
than half of America's electricity supply. At a time when California has already experienced
energy shortages, and other Western states are worried about price and availability of energy
this summer, we must be very careful not to take actions that could harm consumers. This is
especially true given the incomplete state of scientific knowledge of the causes of, and
solutions to, global climate change and the lack of commercially available technologies for
removing and storing carbon dioxide.”
The Indian environmentalist, Neelam Singh, criticised the US Administration for
objecting to the Kyoto Protocol on the grounds that it exempted India and China:
“Bush can attack India and China all he likes. But there is no getting away from the fact that
the United States which is highly industrialised is also one of the highest emitters of carbon
dioxide.”
President Bush’s chief science adviser, John H. Marburger said:
“There’s no agreement on what it is that constitutes a dangerous climate change…We know
things like this are possible but we don’t have enough information to quantify the level of
risk.”
A US environmental lobby group, the Natural Resources Defense Council (NRDC),
criticised the arguments raised by the Bush administration against the Kyoto Protocol:
“The Bush administration has done absolutely no analysis to substantiate its claim that the
Kyoto Protocol or domestic policies to reduce carbon dioxide pollution from power plants
would seriously harm the U.S. economy. While industry trade associations have published
many misleading claims of economic harm, two comprehensive government analyses have
shown that it is possible to reduce greenhouse pollution to levels called for in the Kyoto
agreement without harming the U.S. economy. In 1998, the White House Council of
Economic Advisers concluded that the costs of implementing the Kyoto Protocol would be
‘modest’ -- no more than a few tenths of 1% of gross domestic product in 2010, equivalent to
adding no more than a month or two to a 10-year forecast for achieving a vastly increased
level of wealth in this country. A subsequent and more detailed study by five Department of
Energy national laboratories found that policies to promote increases in energy efficiency
would allow the United States to make most of the emission reductions required to comply
with the Kyoto Protocol through domestic measures that have the potential to improve
economic performance over the long run.”
Al Gore, writing about his documentary film, An Inconvenient Truth:
“Humanity is sitting on a ticking time bomb. If the vast majority of the world's scientists are
right, we have just 10 years to avert a major catastrophe that could send our entire planet into
a tail-spin of epic destruction involving extreme weather, floods, droughts, epidemics and
killer heat waves beyond anything we have ever experienced.”
Sir Robert May, the UK government's chief scientific adviser in the 1990s, said:
“Global climate change is a worrying reality and nobody can afford to delay action in
tackling it. Some people have unjustifiably sought to undermine the work of the IPCC, but
governments should be left in no doubt that it offers the best source of expertise on climate
change. It has brought together scientists from all over the world, and their deliberations
transcend national boundaries and the interests of individual countries.” Guardian, 18 May
2001.
Although Canada was one of the first developed nations to ratify the Kyoto Protocol, not
all Canadians welcomed the treaty:
“Implementing the Kyoto Protocol would force us to pay a higher price than we would have
to pay to cover any damage that might be caused by global warming. Kyoto is purported to be
an agreement about the environment. But if you take a closer look, it is, in fact, all about
economics and all about policy that would benefit certain countries (mostly in Europe) over
others (primarily the United States).” The Toronto Star, 4 December 2005.
The Natural Environment Research Council in the United Kingdom emphasises the high
degree of agreement amongst environmental scientists about global warming:
“The overwhelming consensus among climate change scientists is that human activities,
particularly those producing greenhouse gases, are responsible for much of the climate change
we’re seeing. The climate also changes naturally over time. This may account for some of the
warming, but not all. This consensus is apparent from work that climate researchers
(including many NERC scientists) have submitted to the United Nations Intergovernmental
Panel on Climate Change (IPCC). The IPCC is recognised worldwide as the definitive source
of information on climate change. In 1995, the IPCC reported that the balance of evidence
suggests that humans have a noticeable influence on global climate. In a further report in
2001, the IPCC concluded that most of the warming seen over the last 50 years can probably
be attributed to human activities. Computer-based climate models and actual observations
from the last 140 years match most closely when the models include emissions from human
activities.”
But Philip Stott, Professor of Biogeography at London University, raises a note of
caution about the use of computer models:
“Climate is one of the most complex systems known, yet we claim we can manage it by trying
to control a small set of factors, namely greenhouse gas emissions.”
What Do You Think?
Do you agree with those people, like former US Vice President Al Gore, who believe that
global warming is a “ticking time bomb” and we all have to change our lifestyles now, or do
you agree with those who believe that the problem can be solved by developing new
technologies?
CASE STUDY 15: How are we going to meet our increasing
energy needs in the 21st century?
Zosia Archibald and Robert Stradling
Timeline
1763
James Watt developed the steam engine.
1760–1850
The first wave of the industrial revolution, fuelled by coal, took place in Britain, then Belgium and the northern states of the USA.
1800–1821
The first electric battery was invented by Alessandro Volta in 1800. In 1820, André-Marie Ampère discovered that a coil of wire acted like a magnet when a current was passed through it. A year later, Faraday invented the first electric motor.
1837
The first industrial electric motors were produced.
1850–1900
The second wave of the industrial revolution took place in Germany, Northern Italy, Scandinavia, France, the Netherlands and parts of Central Europe. Again, coal was the main energy source for the emerging heavy industries.
1859
Edwin Drake sank the first oil well in Pennsylvania, USA.
1860s
The rapid expansion of the steel industry in the industrialised world led to a major growth in the demand for coal.
1870s
By now, US oil wells were producing 11 million barrels of oil per year. This was mainly used as kerosene for domestic oil lamps. John D. Rockefeller gained virtual control of the entire industry through his Standard Oil Company.
1878
Thomas Edison invented the electric light bulb and the Edison Electric Light Company was founded.
1879
The first commercial electric power station was opened in San Francisco.
1892
General Electric Company was formed in the USA.
1895–1914
A series of major breakthroughs in physics (by Einstein, Röntgen, Becquerel, Thomson, the Curies and Rutherford) changed the way we see the physical world: no longer as lumps of matter but as aggregates of atoms which could be split because of their structure as systems of particles. This also paved the way for nuclear fission.
1909
The first electricity storage plant was built, in Switzerland.
1903–1919
The first automobiles had emerged in the late 1880s but it was not until Henry Ford established the first moving assembly lines at his Detroit plant that mass production really became possible. By 1918, Ford was producing 500,000 cars a year.
1914–1920
New battleships built in the period leading up to the Great War (1914-18) were designed to run on oil rather than coal. This decision had long-term implications for international relations in the 20th century. The Middle East became a central focus for international politics as the major powers sought to secure their oil supplies.
1927
Mass production of automobiles meant that the demand for petroleum increased rapidly. By this date, the Ford motor company alone had produced over 15 million cars.
1942
Scientists produced the first controlled, self-sustaining nuclear chain reaction in an experimental reactor in Chicago.
August 1945
The US air force dropped atomic bombs on Hiroshima and Nagasaki to end the war with Japan. Over 100,000 people died instantly; many more died within a year from injuries and radiation.
1952
The world’s first nuclear reactor for developing commercial energy was opened in the USA.
1954
The first silicon solar power collectors were developed.
1956
The UK passed the first Clean Air Act, which required industry and households to burn smokeless fuels. This came after 4,000 people were killed by the London smog – a mixture of smoke, fog and chemical fumes. Oil was now rapidly replacing coal as the main energy source in industrialised countries.
1961
The International Clean Air Congress was held in London.
1973
By the early 1970s, the major industrial economies were heavily dependent on cheap oil supplies from the Gulf States and Saudi Arabia. In October 1973, Egypt and Syria declared war on Israel. Diplomatic support for Israel from the US, Japan and most of Western Europe led the oil-producing Arab states to restrict oil supplies. The price of oil rose dramatically and caused an economic recession.
1979
An accident occurred at the Three Mile Island nuclear generating plant in the USA when the nuclear core suffered a partial meltdown. Nobody was killed.
1980s
The first wind farms were developed in the United States and Europe.
1986
An explosion at the Chernobyl nuclear reactor plant in Ukraine led to massive radiation fallout in Ukraine, Belarus and Russia, later spreading on wind currents to much of Europe. 300,000 people were re-settled. Statistics vary as to how many died at the time or have died since from exposure to radiation.
1980–2007
The Iran-Iraq war of 1980-1988, the two Persian Gulf wars between Saddam Hussein’s Iraq and the US-led coalition, the post-war occupation and insurgency in Iraq, and unrest elsewhere in the region have highlighted the vulnerability of a large part of the world’s oil supplies. The security of Europe’s natural gas supplies also came into question when supplies from Russia to the rest of Europe were disrupted by a price dispute between Russia and Ukraine in 2005-06.
What is at issue here?
The leading industrial economies of today bear very little resemblance to their predecessors at
the beginning of the 20th Century. Many of the coal mines have been closed. In 1900, coal
was the main source of energy and, while it is still an important source, it is increasingly
supplemented by a wide range of other energy sources, including oil, natural gas, nuclear
power and hydroelectricity. Many of the iron and steel works have been closed because it is
cheaper to import steel from other countries. Many of the large automobile producers have
moved their factories to countries where labour costs are lower and the workforce is not
unionised.
More generally, there has been a gradual switch from heavy industries to electronics and
plastics, and from manufacturing to the provision of financial and retail services.
Some things did not change much during the 20th Century. The richest countries in 1900 and
in 1960 are more or less still the richest although their group has been enlarged to include the
so-called tiger economies of South East Asia. The gap between the rich north and the poor
south continues to grow as does the gap between rich and poor within the northern
hemisphere.
Although methods of industrial production may have changed over the last 100 years scarce
resources continue to be used in wasteful and inefficient ways and people have become
increasingly concerned about the long-term effects of environmental pollution, including
global warming, acid rain, ozone depletion and deforestation.
Concerns about the environmental pollution caused by some fuels, about the security of
supplies of oil and natural gas, about the possible rate of depletion of fossil fuels, and
widespread opposition to the building of new nuclear reactors have prompted the
governments of most industrial nations to start thinking about how best to invest in
alternatives.
These include ways of generating electricity from solar power, wind and waves; more use of
hydro-electric and geothermal power stations where the local environment permits; generating
gas from burning household and agricultural waste on a large scale (known as biogas); and
growing certain crops for processing into oils and gas (known as biomass).
Most recently, some of the leading industrialised countries have been investing in the
development of nuclear fusion. It is thought that many of the disadvantages of current nuclear
power stations might be avoided if it were possible to create energy by fusing hydrogen atoms
to make helium, rather than by the fission (splitting) of heavy radioactive nuclei. The isotopes
plutonium-239 and uranium-235 are particularly suitable for nuclear reactors because their
fissile properties can be controlled to release energy for industrial and commercial use. But
such radioactive materials are nevertheless potentially dangerous and could be misused. Until
now, scientists have not had much success with fusion experiments.
A new $12 billion project, the International Thermonuclear Experimental Reactor (ITER),
sponsored by the EU, USA, Russia, Japan, South Korea, and China, was launched in June
2005. Originally conceived in 1985, this project will create a research facility at Cadarache,
in Southern France, where laboratory experiments to generate energy by fusion will be carried
out with the aim of constructing a demonstration plant by the 2030s. If successful, such a
plant could produce energy on a commercial scale by the 2050s.
Every method of generating energy on a large scale has its advocates and critics. The
supporters often over-state the potential benefits of their preferred energy source while the
critics often over-emphasise the problems and disadvantages. What we have tried to do in the
grid below is identify some of the main advantages and disadvantages that can usually be
found in the literature on energy supplies. We leave you, the reader, to make up your own
mind on this important issue.
The advantages and disadvantages of different sources of energy
FOSSIL FUELS: These are hydrocarbons such as coal, oil and natural gas, which are formed
from the fossilised remains of dead plants and animals. Together, fossil fuels produce around
70% of the world’s energy.
Advantages
- It is still cheaper to extract and process fossil fuels on a large scale than any of the alternatives.
- In most industrialised countries, networks of power stations using fossil fuels are linked up to electricity distribution grids. A major switch to other fuel sources would be very expensive.
- Global stocks of coal are still abundant and, although experts disagree on whether or not extraction and production of oil and gas have now peaked, supplies are not yet seriously threatened.
- The power generators for fossil fuels are relatively compact.
- Oil and natural gas do not produce as much carbon dioxide as coal.
Disadvantages
- None of the fossil fuels are sustainable. That is to say, we consume them at a much faster rate than nature can produce new stocks for extraction.
- Recent scientific estimates suggest that there are 250 years of coal deposits, 70 years of gas and 45 years of oil left – at present rates of extraction. As more developing countries industrialise, the demand for oil and gas will increase, supplies will deplete at a faster rate and the price of these fuels will increase.
- Coal produces more CO2 emissions than any other fuel. Oil and natural gas also produce higher levels of CO2 than nuclear and renewable fuels.
- Some coal deposits are also high in sulphur which, after burning, causes acid rain.
- Governments in countries that are dependent on imported oil and natural gas are becoming increasingly concerned about the possibility of the supplier countries withholding supplies as a political or economic bargaining tool.
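The lifetimes quoted in the grid (around 250 years of coal, 70 of gas and 45 of oil) hold only at present rates of extraction, as the grid itself warns. A minimal sketch of the arithmetic, assuming an illustrative 2% annual growth in extraction (our assumption, not a figure from the grid), shows how sharply growing demand shortens these horizons:

```python
import math

def depletion_years(rp_ratio, growth):
    """Years until reserves run out if extraction grows at a fixed annual
    rate. rp_ratio is the reserves-to-production ratio, i.e. the lifetime
    in years at today's constant rate of extraction."""
    if growth == 0:
        return rp_ratio
    # Total extraction up to year T is (P0/g) * (e^(g*T) - 1); setting this
    # equal to reserves R gives T = ln(1 + g * R/P0) / g.
    return math.log(1 + growth * rp_ratio) / growth

for fuel, years_at_current_rate in [("coal", 250), ("gas", 70), ("oil", 45)]:
    t = depletion_years(years_at_current_rate, 0.02)  # assumed 2%/yr growth
    print(f"{fuel}: {years_at_current_rate} yr at constant demand, "
          f"about {t:.0f} yr at 2% annual growth")
```

On this assumption the 45-year oil horizon falls to roughly 32 years and the 250-year coal horizon to roughly 90 years, which is the point the grid makes about industrialising countries pushing up demand.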
NUCLEAR ENERGY: This can be obtained in two main ways. Nuclear fission obtains
energy from the breaking apart of very large atomic nuclei. This creates heat which is used to
boil water that then produces steam to drive a steam turbine which generates electricity.
Nuclear fusion releases energy by joining very small nuclei together. The fusion reaction
produces a lot of “fast” neutrons which heat up the reactor and this produces steam which
turns the turbines to generate the electricity. As yet, there are no nuclear fusion power
stations. Electricity from nuclear power is either produced by conventional reactors or from
what are known as “fast breeder” reactors.
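The scale of the energy released by fission, described above, can be illustrated with a rough back-of-envelope calculation. This is a minimal sketch assuming the standard textbook figures of about 200 MeV released per U-235 fission and about 30 MJ of heat per kilogram of coal; both are approximations, not figures from this handbook:

```python
AVOGADRO = 6.022e23           # atoms per mole
MEV_TO_J = 1.602e-13          # joules per MeV
ENERGY_PER_FISSION_MEV = 200  # assumed: typical energy per U-235 fission
COAL_J_PER_KG = 30e6          # assumed: rough heat content of coal (~30 MJ/kg)

# U-235 has a molar mass of about 235 g/mol, so 1 kg contains:
atoms_per_kg = AVOGADRO / 0.235

# Total heat released if every nucleus in 1 kg were fissioned:
fission_j_per_kg = atoms_per_kg * ENERGY_PER_FISSION_MEV * MEV_TO_J

ratio = fission_j_per_kg / COAL_J_PER_KG
print(f"1 kg of U-235 fully fissioned releases about {fission_j_per_kg:.1e} J,")
print(f"roughly {ratio/1e6:.1f} million times the heat from burning 1 kg of coal")
```

On these assumptions the ratio comes out at a few million to one, which helps to explain an entry in the grid below: nuclear fuel costs are relatively low even though the power stations themselves are very expensive to build.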
Conventional nuclear reactors: these are fuelled by U235 which has been extracted from natural uranium. There are nearly 450 nuclear reactors in the world producing around 16% of global electricity.
Advantages
- Large amounts of electricity can be produced by moderate-sized nuclear power stations.
- Conventional nuclear power stations normally produce very little atmospheric pollution (unless there is an accident).
- Very few accidents have occurred since nuclear power plants were first built.
- The waste produced by nuclear power plants is much smaller than for fossil fuel power stations.
- The fuel costs are relatively low.
Disadvantages
- The construction of nuclear power stations is very expensive.
- It can take over 10 years from the decision to build a nuclear power plant to getting it into operation.
- The maintenance costs are high compared with other kinds of power generation.
- Scientists disagree about stocks of uranium but agree that a significant increase in the number of reactors around the world would deplete stocks in 50-75 years if these reactors were conventional and still needed to extract U235.
- Nuclear waste from conventional reactors needs to be stored out of contact with the biosphere for thousands of years. This is difficult and expensive.
- De-commissioning ageing nuclear reactors is also difficult and expensive.
- There have been very few accidents, but the fear is that the consequences of an accident in the future could be globally catastrophic.
- There is also a widespread fear that nuclear reactors could be a target for terrorists or wartime bombing, and here too the radioactive contamination could be catastrophic.
Fast breeder reactors
Advantages
- More efficient and cheaper use of fuel: fast breeder reactors use almost all of the uranium fuel rather than just the 1% which is U235.
- By using most of the energy in uranium, fast breeder reactors should continue to generate electricity for a much longer time than conventional reactors.
- The fast breeder process means that there is less nuclear waste than from conventional reactors.
- The nuclear waste still has to be stored away from the biosphere, but only for hundreds rather than thousands of years.
- Normally there is hardly any atmospheric pollution.
Disadvantages
- The disadvantages of fast breeder reactors are similar to those listed above for conventional reactors: high construction, maintenance and de-commissioning costs; non-sustainable uranium stocks; widespread concerns about accidents; and a potential target for terrorists and wartime bombing.
- In addition, fast breeder reactors have not yet proven themselves commercially viable. Most of those that became operational in the US, UK, Japan, France, Germany and Russia have been closed down, either because of popular opposition to them or because cheaper forms of energy were available.
- There is also concern that the fast breeder reaction process can produce weapons-grade plutonium so, if developed globally, this could greatly increase the risk of nuclear weapon proliferation.
Nuclear fusion
Advantages
- Fusion reactors cannot melt down, so the risks associated with accidents are greatly reduced.
- Commercial fusion reactors would probably use atoms of lithium and deuterium rather than uranium, and there are large stocks of both.
- Unlike with fossil fuels, there would be negligible pollution and greenhouse gases.
- Fusion reactors could not only generate large amounts of electricity but also hydrogen, which could also be used as an alternative fuel for vehicles.
- A small nuclear fusion experimental reactor (JET) was built in the UK in 1983. ITER is the next step for testing the potential of fusion power reactors to generate electricity on a large scale.
Disadvantages
- Nuclear fusion is not yet a proven commercial option.
- So far, the fusion reactor devices which have been developed have not created significantly more energy than they use. That is why ITER has been developed.
- We will not know for another 25 years if it will be viable, and will not gain any benefits until after 2050.
ENERGY FROM RENEWABLE FUELS: resources that cannot be depleted or are self-generating. These include hydropower through dammed rivers, wave power, wind power, geothermal power from volcanic steam, solar power, biogas, which is obtained from burning organic waste, and biomass, which involves extracting energy from crops such as corn or sugarcane either as an oil or through burning. Renewable fuels currently account for about 14% of the world’s energy supplies.
Advantages
- These are renewable, sustainable sources of energy.
- Minimal levels of atmospheric pollution.
- They do not produce greenhouse gases.
- A variety of different means of extracting energy from natural resources means that each country can choose those which are most abundant locally.
- For many countries, the use of renewable fuels would reduce their dependence on imported fuel from other countries.
- The technology is developing rapidly, not only for extraction on a large scale but also for large-scale storage and distribution.
- Some renewable fuels, particularly hydroelectric power, wind power and, to a lesser extent, wave power and geothermal power, are already producing large amounts of electricity for distribution in some countries (e.g. hydropower in Austria, Norway and Scotland).
- The technology already exists to enable individual households to obtain some of their electricity from solar panels or small wind turbines.
Disadvantages
- The most advanced forms of generating electricity from renewable sources – hydro, wind, wave and geothermal – are only suitable in certain climatic and environmental conditions, i.e. where there are fast-flowing rivers, a lot of wind, a coastline, volcanic geysers, etc.
- Some of these sources of energy are intermittent, e.g. not generating electricity when there is no wind or sun.
- Other sources, particularly biogas and biomass, require a great deal of fuel to produce fairly small amounts of energy.
- Some critics argue that it would be unethical to convert large areas of agricultural land to grow biomass crops to sustain the way of life of people in developed countries when many people in developing countries in Africa and parts of Asia are still starving.
- Some of the power plants, e.g. the dams for generating hydroelectricity, can destroy local ecosystems and habitats, including people’s homes and way of life.
- Some of these sources are controversial because people who live near them protest that they are “visually polluting”, e.g. lots of wind turbines across the countryside, or the large solar “chimneys” which would be necessary to convert large amounts of solar radiation into electricity.
- Critics argue that, in many countries, renewable fuels would only ever contribute small amounts of energy to the distribution grids and they would still have to rely on fossil fuels and nuclear power for much of their energy in the foreseeable future.
What Do You Think?
Imagine that the government of the day has decided to carry out a public consultation exercise
on its future energy policy. You have been invited to take part in a focused group discussion
with a sample of other young people from your locality. You have also been sent a short
questionnaire and been asked to fill it in and bring it to the discussion group meeting.
You can make use of the information in the grid above and, if you need further information to
help you make up your mind, you will find a lot on the Internet – but bear in mind that a lot of
this information will be provided by people and organisations who want to persuade you to
opt for one particular energy source, whether it be nuclear power, biomass crops, solar
energy, oil, natural gas or coal. Please bring your completed questionnaire to the
discussion group.
Which of the following energy policy options would you like the government to promote
and invest in? [You can tick more than one option].
Re-open coal mines which still have sustainable seams of coal and invest in the technology to
extract and safely dispose of pollutants including carbon dioxide and other greenhouse gases.
Provide tax incentives to automobile companies to develop vehicles that do not rely on petrol
and diesel (e.g. that use hydrogen or biofuels such as ethanol)
Build a new generation of safe, more efficient fast breeder nuclear reactors to be operational
by 2020.
Invest in the development of power stations that can generate electricity from those renewable
fuels that are locally available. Which ones would you give priority to:
Build more Solar power plants
Construct more Hydroelectricity plants
Locate more wind turbine farms in rural areas
Locate more wave turbine power stations on the coast
Encourage farmers to switch from food crops to biomass crops
Convert all sewage works and waste disposal landfill sites to generate biogas
Build geothermal plants in those areas where there are hot springs
Provide grants to individual people who wish to install solar panels or small wind turbines on
their properties to generate some of their electricity.
Launch a nation-wide campaign to encourage people to use energy more efficiently, e.g. use
public transport, fit long-life light bulbs, have holidays at home, etc.
At present, most of our energy is generated from fossil fuels, while the amount generated by
nuclear power is less than 10% and the amount of energy generated by renewable fuels is about
15-20%. In the grid below, fill in the percentages from the main fuel sources that you would like
the government to aim for by 2030.
Percent of energy generated by 2030
From fossil fuels (coal, oil, natural gas)
From nuclear power
From renewable fuels
KEY QUESTION EIGHT: Is democracy enough?
Robert Stradling
Textbooks often define democracy in terms of its distinctive political institutions and
processes: a constitutional framework, free, fair and regular elections, a multi-party system,
representative assemblies or parliaments, separation of powers between legislature, executive
and judiciary, and so on. But a state can have all of these institutions and yet the political
party in government can still behave in an authoritarian and undemocratic way.
It should not be forgotten, for example, that the National Socialists in Germany came to
power through the democratic process. At the Reichstag elections in March 1933, the
National Socialist German Workers’ Party won the largest number of seats (288) which
represented 43.9% of the vote. The second largest party was the Social Democrats with 120
seats, followed by the Communist Party with 81 seats. The total number of seats in the Reichstag was
647 so the National Socialists were 36 seats short of the majority needed to form a
government. However, they were able to form a coalition with the nationalist German
National People’s Party (DNVP), which had won 52 seats at the election. Virtually the first
act of the coalition government was to ban the German Communist Party. This then gave the
National Socialists a majority without needing the support of the DNVP.
The National Socialists then set about using the constitutional process to establish a
totalitarian one-party state. This was possible because the German Republic, which had been
created in 1918 after the abdication of the Kaiser, emerged during a period of great social
unrest and revolution throughout Germany. In drafting the new constitution - the Weimar
Constitution – it was thought to be sensible to include an emergency clause that would enable
the President to ask the government to enact whatever laws were necessary to maintain public
order without needing to consult the Reichstag.
The National Socialists in 1933 used this clause in order to gain total control. However, to
bring this about, they needed to get two-thirds of the deputies in the Reichstag to vote for an
Enabling Act. They succeeded in doing this with the support of the DNVP and the Catholic
Centre Party, because the Communist deputies were not allowed to vote, and because the
remaining deputies were intimidated by the Brown Shirts who were present in force in and around the
assembly. The Act was passed on 23 March 1933 (just 18 days after the elections) by 441
votes to 94, with the only opposition coming from the Social Democrats (SPD). By May, the
DNVP had dissolved, the SPD had been banned and so had the trade unions. In July, the
Catholic parties were declared illegal and, on 14 July, the National Socialists declared
themselves to be the only legal political party in Germany.
This is an extreme case of a political party using the constitution and the democratic political
institutions and processes to gain power in order to destroy democracy and create a
dictatorship. However, J.L. Talmon has coined the phrase “totalitarian democracy” to refer to
a system of government where citizens have the right to vote and the political representatives
are lawfully elected but, in between elections, the citizens have little or no influence over the
decision-making process and the main organs of the mass media are controlled or heavily
influenced by the government. Now it is possible to detect some anti-democratic trends in
most political democracies. There is not a clear-cut distinction that can be made between
liberal democracies, totalitarian democracies and authoritarian dictatorships. It is probably
more sensible to think of these “concepts” on a single spectrum and then to examine each
state and identify the tendencies within each towards liberal or totalitarian or authoritarian
practices.
When a political system moves from being a dictatorship or autocracy to a democracy, the
first thing that usually happens is that the provisional or transitional government drafts a
constitution, establishes an assembly and organises elections. Meanwhile, people with
different political views form political parties and begin to campaign for support from the
electorate. After the fall of communism in Central and Eastern Europe in 1989-90 and the
break up of the Soviet Union and Yugoslavia in the 1990s, we saw many countries going
through this transitional process.
However, experienced political observers have argued that transitional democracies have not
fully demonstrated their commitment to democracy until they have passed the “two turnover
test”. This means not simply that they have held two elections, but that they have experienced
at least two elections in which the ruling party (or coalition) has been defeated and replaced
by another party (or coalition) without any violence or resistance to the change of
government. The introduction of the appropriate
institutions and procedures is only the first step - the means to an end. It is still necessary to
demonstrate how deeply the electorate is committed to the idea of democracy.
So what is this democratic idea? It revolves around certain basic principles. The first of these
is popular sovereignty or popular control. That is, that people have the right to influence the
process by which public decisions are made and a right to influence the people who make
those decisions. Not only that, they also have a right to hold those decision-makers
accountable for their actions and decisions. The second key principle is political equality - that every citizen is of equal worth. Their votes count the same regardless of their social
status, position, wealth, age, gender, religion, etc.
The third main principle here could be called “reciprocity”. That is, that people will accept
decisions which they may disagree with and which may not be in their interests because they
believe that the decision-making process was fair and not intentionally biased against them.
They also believe that, on some other issue, the decision might well be in their favour and
then other people who disagree with this decision will also accept it because they too believe
that there is no built-in bias against them. Similarly people will accept the election of a
government for four or five years, even though they disagree fundamentally with its policies,
on the basis that the election was fair and that they have a chance of electing an alternative
government at the next election.
In other words, the essence of democracy lies not just with its institutions and processes but
also with its citizens. They authorise democratic governments to act for them and those
governments, in turn, must remain accountable to the citizens and responsive to their
opinions, wishes and needs. This means that citizens need to be ready, willing and able to
play an active part in the political process, to respect the civil and political rights of other
citizens and to treat them as political equals.
This is not to say that the political institutions which we commonly find in a modern political
democracy are not important. They are the mechanisms by which these principles can be put
into practice. Free and fair elections, a multi-party system, parliaments, separation of powers,
along with an independent mass media, have proved to be effective means of ensuring some
level of popular control and governments which are responsive to public opinion. But it is
also worth noting that, in the older western liberal democracies, some of these institutions
predated the emergence of political democracy as we know it today. In most countries, mass
political parties did not emerge until the late 19th or early 20th Century. Elections were often
riotous affairs, with intimidation, bribery and vote-buying quite common, and it was not
until the 20th Century that most western democracies introduced universal adult suffrage.
By now, you may be wondering why the idea of “majority rule” has not been included in this
list of democratic principles. Is this not the essence of the democratic process? Not
necessarily. The political process is all about trying to find a way of reaching decisions that
will be binding on everyone regardless of whether or not those decisions were in everybody’s
interests or reflected everybody’s wishes and demands. If agreement can be reached through
discussion, persuasion, negotiation and compromise, then so much the better.
The decision to abide by the will of the majority is, in many respects, the last resort when all
other means of reaching agreement have been exhausted. If the same group or groups in a
democracy always seem to be in a minority, then there is a risk that we may have what some
observers have called “the tyranny of the majority”. This is why the principle of “reciprocity”
is so important. It is far more likely that people who seem to be in the minority will willingly
accept the decisions of a majority if they think, firstly, that there is a chance that sometimes
they might be in the majority instead and, secondly, that, if they were in the majority, then
others would be willing to abide by their decisions as well.
Now, in everyday politics in most liberal democracies, this reciprocity is possible. At election
time, the supporters of the larger political parties hope to secure enough votes to form a
government while the supporters of the smaller parties hope that they will win enough votes
to influence the decision-making process and even to be invited to join a coalition government
with other parties. Then, during the lifetime of the democratically-elected government, most
people will anticipate that they will agree with some of the decisions taken and disagree with
others and that some of the decisions will be in their interests while others will not. That is
the core of democratic politics. However, for this process to work effectively and not lead to
the “tyranny of the majority”, it is essential that the majorities which support a particular
opinion or policy or action will not always be the same.
The problem with majority rule emerges when politics revolve around issues of identity - race, nationality, language or religion. Identity is far less changeable (and less responsive to
argument and persuasion) than opinions about the best way to finance and organise the health
service, education or support for the elderly. Then there is the danger that one section of
society, representing a minority identity group, will be permanently excluded from any share
in the governmental process. In such circumstances, the principle of reciprocity can break
down and, for that minority, the rule of the majority ceases to be legitimate and the minority
can become disillusioned with democratic politics and may even turn to violent methods
instead.
Since modern societies are characterised increasingly by their diversity – not only people with
different languages, religions, cultures and ethnicities but also with different beliefs, life
styles and identities - the resulting potential for conflict between the different groups and
interests is why we need politics. As the political philosopher Hanna Pitkin has pointed out:
“What characterises political life is precisely the problem of continually creating unity, a
public, in a context of diversity, rival claims and conflicting interests. In the absence of
rival claims and conflicting interests, a topic never enters the political realm; no political
decision needs to be made. But for the political collectivity, the ‘we’, to act, those
continuing claims and interests must be resolved in a way that continues to preserve the
collectivity.”
How can modern societies preserve some sense of unity if conflicts emerge that prove
difficult to manage democratically? Pluralist democracies have survived largely because
moral and religious differences have seldom become politicised. A fairly clear line has been
drawn between the public sphere and the private sphere. This is not to say that this line is
never crossed but the real problem arises when a totalitarian government comes to power and
seeks to extend its control into almost every aspect of people’s lives. It can also happen if the
governing elite sees no distinction between the political and religious spheres or when the
governing elite only allows people to be citizens – to be politically equal - if they meet
certain criteria such as the same nationality, ethnicity, religion or culture. Those who fail to
meet these criteria are permanently excluded from the political process and can become
classified, as in the Third Reich, as “non-persons”.
So what does unite people from diverse backgrounds who do not necessarily share the same
political opinions and allegiances? We have already discussed the basic principles that are
central to the democratic process and these are underpinned by values that are concerned with
respecting the dignity of the individual person: treating each person with the same degree of
respect and believing that each person is of equal worth. However, there is also a second set
of ethical values around which a pluralist democracy can unite: universal human rights. It is
clear that the democratic process cannot function effectively unless all citizens enjoy freedom
of speech, freedom of association, freedom of assembly, freedom from torture and unjust
imprisonment, and freedom of movement. Without these freedoms, you cannot have a
multi-party system, an independent media, free and fair elections, a government that is
controlled by, and accountable to, the public, or active political participation.
Until the 20th Century, many democrats believed that civil and political rights, such as those
described above, would be sufficient to ensure that citizens could freely and actively
participate in the democratic political process. However, it is now more widely accepted that
if people lack education, good health and a basic standard of living they will probably not
have the capacity to exercise those civil and political rights. This is why so many states have
also signed Conventions that commit them to seeking to protect the social, economic and
cultural rights of their citizens as well.
So, to return to our key question. Democracy is not enough if what we mean by that is the
existence of a state which has democratic institutions and procedures. One of the case studies
attached to this section of the booklet explores the problems of transplanting democratic
institutions to Iraq when there is no democratic tradition and democratic political culture.
Even when steps are taken to ensure greater popular control of the democratic process – for
example, through enhancing the potential for using the Internet to engage more people
directly in the political process (the second case study) - there is still a risk that this will
decline into the tyranny of the majority unless there are also safeguards that will ensure that
everyone has an equal opportunity to influence the decision-making process.
Democracy is a project that we can never fully complete. We can hope that we are always
moving in the right direction but, in most democracies, there are anti-democratic tendencies
and citizens always need to be vigilant to ensure that the institutions of the state do not
infringe our democratic rights even when the state appears to be doing this for the best of
motives.
CASE STUDY 16: Can democracy take root when it is transplanted? The example of Iraq
Robert Stradling
Timeline
16th Century – 1918: The territory now called Iraq was part of the Ottoman Empire but it was
three provinces rather than one: Baghdad, where the majority were Sunni Muslims, the
Kurdish area of Mosul in the north and Basra in the south, where the majority were (and still
are) Shiite Muslims.
25 April 1920: After the defeat of Turkey and the end of World War I, the victors met at San
Remo and agreed that Iraq should be placed under a League of Nations Mandate to be
administered by Britain.
23 August 1921: Faisal, a member of Syria’s Hashemite royal family, was brought in by the
British to be crowned as Iraq’s first king, Faisal I.
3 October 1932: Iraq became an independent state.
14 July 1958: King Faisal II was overthrown and killed in a military coup.
February – November 1963: A series of coups, sometimes by the Arab Socialist Baath Party,
and at other times by the military, meant that the country remained politically unstable. A
final coup on 18 November 1963 led to the Baathists taking control of the country.
16 July 1979: President Ahmad Hasan al-Bakr, who had been in power for 11 years, resigned
and was succeeded by Vice-President Saddam Hussein.
4 September 1980: An eight-year war between Iraq and Iran began in which almost a million
people died before the ceasefire on 20 August 1988.
August 1990: Iraq invaded Kuwait. UN Security Council Resolution 660 called on Iraq
to withdraw. When Iraq did not withdraw its forces, UN Resolution 661 imposed economic
sanctions on Iraq.
November 1990 – March 1991: UN Resolution 678 authorised a coalition of member states
led by the United States to “use all necessary means” to enforce UN Resolution 660. On 16
January 1991, coalition forces began aerial bombing and, on 24 February, their forces entered
Kuwait. On 3 March, Iraq accepted terms for a ceasefire and withdrew its remaining forces.
March-April 1991: Rebellions broke out against the rule of Saddam Hussein in the north
and south of Iraq. These were suppressed. The United States called on Iraq to end all military
activity in Northern Iraq to protect the Kurdish population. A year later, the US and Britain set
up a no-fly zone in Southern Iraq to protect the Marsh Arabs from bombing attacks by the
Iraqi air force.
16-19 December 1998: After Iraq refused to cooperate with the United Nations Special
Commission to Oversee the Destruction of Iraq’s Weapons of Mass Destruction, the US and
UK launched an aerial bombing campaign to destroy Iraq’s nuclear, chemical and biological
weapons programmes.
February 2001: After more than two years during which Iraq refused to cooperate with UN
inspections for weapons of mass destruction, the US and Britain carried out aerial
bombing to disable Iraq’s air defences. This action had little international support.
11 September 2001: The al-Qaeda aerial attack on the twin towers of the World Trade Center
in New York.
September 2002: US President George W. Bush told the UN General Assembly either to
confront the “grave and gathering danger of Iraq” or stand aside and leave the United States to
act. The British Government published a dossier that claimed to be based on available
evidence of Iraq’s military capability. In November, UN weapons inspectors returned to Iraq.
February 2003: Donald Rumsfeld, the US Secretary of Defense, claimed that the
Americans had “bullet-proof evidence” of links between the Iraqi leadership and al-Qaeda.
President Bush said that Iraq was the new frontline in “the war on terrorism” and went on to
say: “In Iraq, a dictator is building and hiding weapons that could enable him to dominate the
Middle East and intimidate the civilised world – and we will not allow it”.
20 March – 9 April 2003: War began with American missiles bombing targets in Baghdad.
US forces entered central Baghdad and coalition forces gained control in the north and the
south. Most of the coalition forces came from the US, the UK and Poland but, in all, 29
countries sent troops.
1 May 2003: President Bush officially declared the end of major combat operations but
coalition forces now faced intensified guerrilla activity. The Coalition Provisional Authority
(CPA) was set up to run the country.
14 December 2003: Saddam Hussein was captured in Tikrit.
June 2004: The US transferred sovereignty to an interim Iraqi Government led by Prime
Minister Iyad Allawi. The Americans’ preferred leader was Ahmed Chalabi but he was not
acceptable to other Iraqis on the Council.
30 January 2005: Nearly eight million Iraqis voted in elections for the Transitional National
Assembly, although few Sunnis took part. The Shia United Iraqi Alliance won a majority of seats in
the Assembly. Negotiations then began within the elected Assembly to form a
government: Ibrahim al-Jaafari, leader of the largest party, the United Iraqi Alliance, became
Prime Minister, and Jalal Talabani of the Patriotic Union of Kurdistan was appointed President.
August 2005: A draft constitution was approved by Shia and Kurdish negotiators but rejected
by the Sunni representatives.
October 2005: In a referendum, 79% of the voters approved the new Constitution to create
an Islamic federal democracy although many Sunnis abstained or voted against.
15 December 2005: Iraqis voted for the first government and parliament since the invasion
by coalition forces. The Shia-led United Iraqi Alliance emerged as the largest party in the new
Assembly but did not have a majority. A coalition government was formed.
30 December 2006: After a year-long trial, Saddam Hussein was found guilty of crimes
against humanity and executed.
January 2007: President Bush announced a new strategy for Iraq and sent more troops to
improve security. A UN report said that more than 34,000 Iraqi civilians had been killed in
violence during 2006.
What was in dispute here?
There are a number of highly contentious issues surrounding events in Iraq, not least the
decision by the US-led coalition to invade Iraq in 2003. However, the issue which concerns us
here arises from the stated intention by the US Government to introduce a democratic
constitution and democratic institutions and processes in Iraq as quickly as possible. Speaking
on 27 February 2003, President George W. Bush declared “All Iraqis must have a voice in the
new government, and all citizens must have their rights protected”.
But, is it realistic to believe or assume that liberal democracy, with genuinely representative
parliamentary institutions and effective guarantees of human rights, can be introduced from
scratch in a country which is deeply divided on ethnic and religious grounds, where law and
order has not been restored in some areas after a devastating war and where insurgents are
still killing security forces and ordinary civilians?
The critics argue that liberal democracies emerge once the right conditions exist within a
country. Two academic experts on the history and politics of Iraq quote Thomas Jefferson,
author of the American Declaration of Independence and the third US President, who once
said: “Democracy must be rooted in the soil if it is to grow”. The experts go on to observe that
“very few plants, and certainly not democracy, grow from the top down”. [W. Polk & J. Lund,
Understanding Iraq, 2006, p.197].
Others have picked up this theme and argued that the Anglo-Saxon and European traditions of
democracy emerged in countries where the people were trying to remove or limit the powers
of the rulers who had exercised absolute power over them rather than find a means of creating
order out of chaos and anarchy, even if the period of revolution when power was transferred
from monarchs or dictators to the people often seemed to be chaotic and anarchic. According
to this argument, if the Iraqi people, rather than the coalition forces, had overthrown Saddam
Hussein, then they might have chosen to introduce a democratic constitution, create
democratic institutions and hold free and fair elections.
Even then, according to some observers, a short period of democratisation is likely to descend
into chaos, disorder and conflict when the people’s political allegiances are divided on ethnic
and religious grounds rather than on the basis of alternative political and economic policies.
To support their case, they point to the situation in Rwanda and Sudan. Rwanda had been a
Belgian colony in central Africa until independence in 1962. During the colonial period,
the minority Tutsi tribe filled most of the senior posts in the administration and the army.
After the first democratic elections, the majority Hutu tribe formed the first government.
Many Tutsis left the country but the exiles formed the Rwandese Patriotic Front and the
Rwandese Patriotic Army (RPA) which then invaded Rwanda in 1990 leading to a long civil
war. During this period, over 800,000 Tutsis living in Rwanda were massacred. However,
the RPA succeeded in taking the capital city in 1994 and this was followed by a massacre of
Hutus. In all, over a million people have died since independence and another two million
have fled the country - out of a population of around six million.
Sudan gained its independence from the British in 1956, but the roots of ethnic and religious
conflict were already there. The majority living in the north of the country were Arab and
Muslim; the majority living in the south were African and Christian. In 1958, there was a
military coup by Muslim forces from the north. Five years later, civil war broke out between
north and south which lasted for nine years. Civil war resumed when the government
attempted to impose Sharia law on the whole country. In 1989, another coup brought the
National Islamic Front to power. Conflict has continued since then and has even broken out
between the anti-government forces in the south. Meanwhile, many ordinary people have
continued to suffer famine, poverty and the brutality of life during a civil war.
The critics highlight the parallels between Iraq and Rwanda and Sudan. They point out that,
after independence and democratisation, political parties and movements emerged in Rwanda
and Sudan around ethnic and religious divisions. They were not willing to negotiate or
compromise or accept decisions with which they disagreed - all of which are essential if
pluralist, democratic politics are to work.
They point to the geographical divisions in Iraq
with the Sunnis living mainly in the centre and west of the country, the Kurds in the north and
the Shia - the largest group in the population – concentrated in the south.
Whilst the critics accept that similar geographical divisions have existed elsewhere without
necessarily leading to the conflicts experienced in Rwanda and Sudan, often because some
kind of federal system has been introduced with a lot of autonomy for each region, they doubt
whether this would work in Iraq. First, they argue, the three groupings are not completely
separate geographically. In fact, the population of Baghdad, the capital city, is mixed.
Second, the potential wealth of Iraq lies in its oil but the oil wells are not evenly distributed
across the whole country and the Sunni, in particular, fear that they would lose out if the oil
was mainly controlled by the Shia and the Kurds.
Some of the critics argued that the US strategy in Afghanistan would be more appropriate for
Iraq. Here was another country with ethnic and religious divisions: several large ethnic
groups, such as the Pashtuns, the Tajiks and the Uzbeks, divided in turn into
different, sometimes warring, tribes. In this case, the US Government had attempted to set up a
national unity government, which included representatives of most of the factions and
warlords who had opposed the Taliban, and build support around a strong leader, Hamid
Karzai, a Pashtun war leader, with a base in the south who was also acceptable to some of the
other regional leaders.
Finally, the critics usually also note that the only way in which democracy could be
established in Iraq is if the US Government and/or the United Nations was prepared to stay
there for a very long time, establish order and stability, ensure economic development and
help to create a democratic political culture. However, as they point out, many voters in the
United States and in the other coalition countries want their troops to be withdrawn from Iraq
as quickly as possible and therefore, according to the critics, it is unlikely that any future US
Government would be prepared to adopt such a long-term strategy.
So how have those who support the attempt to introduce democracy to Iraq as quickly as
possible addressed the issues and concerns raised by the critics? In response to the view that
democracy cannot grow in a country where there is no history of democratic traditions or a
liberal, democratic political culture, they point to what has happened over the last 20 years in
Eastern Europe after the end of communism.
To those who point to events in Rwanda and Sudan and argue that a country which is divided
along ethnic, religious and geographical lines is inherently unstable and consequently infertile
soil for “the democratic plant”, they point to developments in Bosnia and Herzegovina,
Kosovo and East Timor. Whilst not arguing that the transition to stable, pluralist democracies
has already taken place, they point out that the United Nations has put in place a long-term development programme that is creating the conditions necessary for democratic
institutions and processes to flourish.
The supporters of the US Government’s position on Iraq also argue that Iraq may have ethnic
and religious divisions like Afghanistan, Rwanda and Sudan but differs from them in other
important respects which could make the transition to democracy easier. They point to the
fact that 75% of the Iraqi population lives in towns and they do not have any ties to rural
tribal leaders and warlords; that many Iraqis are secular and do not want to be ruled by
religious leaders; that revenues from the oil, once production is back to normal, will finance
rapid economic development and that literacy and educational levels are much higher in Iraq
than in those other countries mentioned by the critics.
To those who suggest that the Afghanistan model might be more appropriate for Iraq than
immediate democratisation, they argue that the national unity government strategy worked in
Afghanistan precisely because there were several leaders who had sufficient legitimacy to
represent and speak for their tribes and communities. By comparison, Saddam, during his
dictatorship, had eliminated any local or regional leaders who posed a threat to him while
others had fled the country and no longer had a power base in Iraq.
To those who argue that the US electorate and voters in other coalition countries want a quick
military withdrawal from Iraq, the supporters of democratisation point out that critics were
saying the same thing after World War II but the United States sustained a presence in Japan
and Germany and then South Korea for many years and has now been actively involved in the
political and economic development of Bosnia for 10 years.
Finally, some of the supporters of the US Government have argued that President Bush and
his advisers had no choice but to push for democratisation as quickly as possible. Having
opted for war and regime change, and having deposed Saddam, they would
have been universally criticised if they had then replaced one dictator with another who was
more favourable to them.
A variety of viewpoints
James Woolsey, former director of the CIA and a strong supporter of the decision to
invade Iraq:
“Everybody can say, ‘Oh sure, you’re going to democratise the Middle East…’ But if you look
at what we and our allies have done with the three world wars – two hot, one cold – … we’ve
already achieved this for two-thirds of the world. Eighty-five years ago, when we went into
World War I, there were eight or 10 democracies at the time. Now it’s around 120 – some free,
some partly free. An order of magnitude! The compromises we made along the way, whether
allying with Stalin or Franco or Pinochet, we have gotten around to fixing, and their successor
regimes are democracies. Around half of the states of sub-Saharan Africa are democratic.
Half of the 20-plus non-Arab Muslim states. We have all of Europe except Belarus and
occasionally parts of the Balkans. If you look back, what has happened in less than a century,
then getting the Arab world plus Iran moving in the same direction looks a lot less awesome.
It’s not Americanising the world. It’s Athenising it. And it is do-able.”
Quoted by James Fallows, The Fifty-First State?, The Atlantic Monthly, November 2002
A US State Department Report on Human Rights published in March 2006, citing as
evidence the elections in January of that year and the growth of non-governmental
organisations, claimed that:
“Last year [2005], was marked by major progress for democracy, democratic rights and
freedom in Iraq.”
Daniel L. Byman and Kenneth Pollack, from the Saban Center for Middle East Policy at
the Brookings Institution, Washington DC, USA suggest that:
“Failure to establish democracy in Iraq…would be disastrous. Civil war, massive refugee
flows, and even renewed interstate fighting would likely resurface to plague this long-cursed
region. Moreover, should democracy fail to take root, this would add credence to charges that
the United States cares little for Muslim and Arab peoples… The failure to transform Iraq’s
Government tarnished the 1991 military victory over Iraq; more than 10 years later, the
United States must not make the same mistake.”
On the other hand, US Marine General Anthony Zinni, who had retired by the start of
the Iraq War, doubted whether there was a short-term means of introducing democracy
in Iraq:
“If we think there is a fast solution to changing the governance of Iraq, then we don’t
understand history, the nature of the country, the divisions, or the underneath suppressed
passions that could rise up. God help us if we think this transition will occur easily. The
attempts I’ve seen to install democracy in short periods of time where there is no history and
no roots have failed [e.g.] … Somalia.”
Journalist Robert Kaplan argued in the Washington Post on 2 March 2006 that:
“Globalization and other dynamic forces will continue to rid the world of dictatorships.
Political change is nothing we need to force upon people; it’s something that will happen
anyway. What we have to work toward – for which peoples with historical experiences
different from ours will be grateful – is not democracy but normality. Stabilizing newly
democratic regimes, and easing the development path of undemocratic ones, should be the
goal… The more cautious we are in a world already in the throes of tumultuous upheaval, the
more we’ll achieve.”
Instability also concerns Iraq’s neighbours. Shibley Telhami, Anwar Sadat Professor of
Peace and Development at the University of Maryland, USA, says that:
“In states like the United Arab Emirates and Qatar, even Saudi Arabia, there is the fear that
the complete demise of Iraq would in the long run play into the hands of Iran, which they see
as even more of a threat…they see instability, at a minimum, for a long period of time, and in
the worst case the disintegration of the Iraqi state.”
Dr Hamid al-Bayati, a political adviser to the Shiite United Iraqi Alliance, emphasises:
“We have to take Iraqi reality into account – we can’t copy any one democratic system in the
world and apply it here.”
Middle East expert N.N. Ayubi argues in his book Overstating the Arab State: Politics
and Society in the Middle East (2001), p.424, that:
“To speak about democratisation in relation to Iraq, however much one may stretch the
meaning of this term, seems almost to border on the ridiculous.”
Not everyone on the right of American politics supports the government’s view on Iraq.
Patrick Basham of the libertarian, Washington-based Cato Institute is pessimistic
about the presence of the pre-conditions necessary for a functioning democracy,
although it is also clear that what he has in mind is a Western-style liberal democracy
with a free market economy:
“In such an environment, most people adopt a political passivity that acts as a brake on the
development of the principles – such as personal responsibility and self-help – central to the
development of economic and political liberalism. Hence political freedom is an alien
concept to most Iraqis… [In addition he notes that many of the educated middle class
who might have been ‘the fertile soil for democracy’ fled the country under Saddam’s rule
and ‘the remnants can contribute to the democratisation of their country but the current
middle class does not constitute a critical mass capable of moderating and channelling
political debate in a secular, liberal fashion’].” Quoted in T. Dodge (2003), Inventing Iraq:
The Failure of Nation Building and a History Denied, p.157.
What Do You Think?
In your opinion, is it realistic to expect that Iraq will have evolved into a western liberal
democracy by 2020?
What conditions will need to be present if that development is to happen?
What do you think of Robert Kaplan’s view that it is more important to bring stability to Iraq
than democracy? Could (or should) the same argument be applied to some of the
countries that had been communist until 1989 or were their circumstances very different from
those in Iraq?
CASE STUDY 17: Can New Technologies help to make
governments more accountable to the people?
Robert Stradling
Timeline: Cyber-dissidents in China
Shi Tao is 39. He is a journalist in China. Until May 2004, he worked for the Dangdai Shang
Bao [Contemporary Business News]; he then became a freelance journalist and writer.
15 May 1989: Thousands of protesters occupied Tiananmen Square in Beijing to call on the
government to introduce democratic reforms.
20 May 1989: The Chinese Government declared martial law and troops and tanks were sent
in to Tiananmen Square.
4 June 1989: Troops fired on the protesters.
September 2000: Qi Yanchen was sentenced to four years in prison for putting subversive
material on to the Internet.
14 January 2002: The Chinese Information and Technology Ministry introduced new rules
about use of the Internet. Internet Service Providers were required to install software to
monitor and copy the content of “sensitive” email messages and report the authors to the
Chinese authorities. They were also required to censor their services to prevent access to any
sites that the authorities held to be subversive.
2002: The Internet search engine Yahoo voluntarily signed the “Public Pledge on Self-Discipline for the China Internet Industry”. In this pledge, it agreed to abide by the
Chinese Government’s censorship regulations. This meant that any search topic which the
Chinese authorities deemed to be sensitive, such as “Taiwan Independence”, would only
produce government-approved results on screen.
July 2002: A group of 18 Chinese intellectuals wrote a “Declaration of the Rights of Chinese
Internet Users” which called for freedom of expression through creating blogs and websites,
freedom of access to online information and freedom of association through networks,
chatrooms and cybercafés. This document was subsequently signed by thousands of Chinese
Internet users.
September 2002: The Chinese authorities blocked access in China to the search engine
Google for 12 days.
15 November 2002: The Chinese Government introduced a law requiring the owners of
cybercafés to be responsible for the websites looked at by their customers or risk being fined
or shut down.
2003: By early 2003, there were 26 cyber-dissidents in prison in China for putting material on
to the Internet which the authorities thought was subversive.
Spring 2004: The Chinese Government wrote to all forms of Chinese media – television,
newspapers, magazines and Internet-based news providers – informing them of the restrictions
that would be imposed on them in the period leading up to the 15th Anniversary of the
Tiananmen Square protests and deaths. Shi Tao emailed a summary of these restrictions to a
dissident online newspaper, Min Zhu Ton Xun.
24 November 2004: Officials from the Changsha Security Bureau arrested Shi near his home
in Taiyuan, in the northern province of Shanxi. They then visited his home and
confiscated his computer.
14 December 2004: Shi was charged with leaking state secrets.
4 March 2005: Shi’s lawyer, Guo Guoting, was informed that he had been banned from
practising as a lawyer for a year. This was just 20 days before Shi’s trial began.
27 April 2005: The Changsha Intermediate People’s Court found Shi guilty of leaking state
secrets.
30 April 2005: Shi was sentenced to 10 years in prison.
2 June 2005: Shi’s appeal against his sentence was rejected by the Hunan Province High
People’s Court without a hearing. Shi Tao was sent to the National Security Bureau prison of
Hunan Province, in Changsha.
14 September 2005: The Dui Hua Foundation released an English translation of the Court’s
verdict on Shi, which revealed that the Hong Kong office of Yahoo, the Internet company,
had provided the Chinese police with detailed information that enabled them to link the email
message containing the alleged state secret to the IP address of his computer, even though he
was using a pseudonym.
19 March 2007: Jian Ling was sentenced to six years in prison by a court in Ningbo in the
province of Zhejiang. Jian, a pro-democracy dissident, had been sent to a re-education camp
for 18 months for counter-propaganda after the massacre in Tiananmen Square in 1989. In
August 2005, he started his own website, which was then closed down by the Chinese
Government in March 2006. Jian joined 61 other people held in detention or awaiting trial
for posting messages or accessing websites that the authorities considered to be subversive.
What was in dispute here?
Elections, even when they are free and fair and virtually the whole adult population is eligible
to vote, do not guarantee democracy if they are merely mechanisms for legitimating
governments which, once elected, are not particularly responsive to the demands or needs of
their citizens. But how can governments be made responsive to the demands of all of their
citizens when some, or even many, of those citizens feel excluded from the political process
most of the time, and feel that power lies in the hands of a political elite consisting of a small
number of mainstream political parties, a bureaucracy which is growing steadily larger and
influencing more and more of people’s lives, a number of powerful political and economic
interest groups, and a mass media which is mostly owned by a small number of rich and
influential tycoons and oligarchs?
It is perhaps not surprising that many people have looked to new technologies, particularly
email and the Internet, as a possible way of redressing the balance of power between the
apparatus of the state and the individual citizen. Some have seen the Internet as a means of
seeking out information that those in power do not want others to have or providing others
with information that challenges the official view presented by the government or by others in
positions of power and authority.
A particularly interesting group here consists of those sometimes described as “whistle-blowers” – people who have inside information about the activities or motives of those in power and wish
to share it with everyone else. Such people have always existed, but now their revelations can
reach a far larger proportion of the population than ever before. Others draw a parallel
between economic consumers and political consumers. In the economic market, they argue,
the consumer is able to use the Internet to compare products in terms of price, value for
money, performance and so forth. They can “shop around” until they find the product that
best suits their needs. They can check the claims in the advertising against the information
provided by existing consumers.
Similarly, there are those who argue that the “political consumer” can do the same with the
claims made by those who are seeking their votes in order to get elected and gain power.
Paraphrasing the sixteenth-century philosopher Francis Bacon, they argue that “knowledge [or
information] is power”.
On the other hand, others have challenged whether the Internet really does fulfil this critical
role in a modern, technocratic society. Jonathan Zittrain and Benjamin Edelman of the
Berkman Center for Internet & Society at the Harvard Law School in the United States have
been monitoring ways in which governments block access to the Internet for more than five
years now. Their work has shown that a number of autocracies and dictatorships such as
China, Cuba, Burma, North Korea, Saudi Arabia, Syria and the United Arab Emirates now
lead the way in blocking public access to websites that they consider to be sensitive or
subversive and also in prosecuting dissidents and pro-democracy activists who use the
Internet to disseminate their opinions through their own blogs and websites and to create
networks with other critics of the regime.
As our case study shows, countries like China have invested substantial human and financial
resources in installing software that enables them to restrict searches on topics such as
Taiwan, Tibet or the pro-democracy movement, all of which are sensitive for the regime; to
censor the material that Internet users can access; and to read people’s email messages and
track down those whom they regard as dissidents and counter-revolutionaries criticising or
even seeking to undermine the security of the state. In doing this, it would
appear that some Internet Service Providers (ISPs) have been willing to agree to censorship in
order to gain a toe-hold in the Chinese Internet market and, in some cases, have been willing
to act as police informers.
At the same time, it should not be assumed that only dictatorships and autocracies seek to
control and censor how people access and use the Internet. For some time now, most liberal
democracies have been putting pressure on ISPs not to offer access to websites containing
child pornography and, in some cases, Internet users who have downloaded this material have
been prosecuted for possessing it. The mass media in those same liberal democracies,
particularly the cultural industries such as film, television and music publishing, have also
been very active in taking action against those who pirate their work and release it through the
Internet.
But perhaps the most significant change that has taken place in recent years is as a direct
result of the terrorist attacks of 11 September 2001. A month later, the US Congress passed
the Patriot Act and then other western states began to introduce their own legislation designed
to give police and other security forces new surveillance powers and the right to obtain
personal data about Internet users. Whilst many people would probably support these kinds
of measures in order to protect the public at large, there is also growing concern that
these new powers give the authorities in liberal democracies the means of extending their
surveillance beyond terrorists, paedophiles and video pirates to include anyone who is
engaged in lawful actions but visits controversial and radical websites or uses the Internet to
disseminate information which might embarrass and politically compromise the government
of the day.
So, one debate which has been ongoing over the last decade has been mainly about the
implications of extending political and civil rights to the Internet to ensure that Internet users
are assured of freedom of speech, freedom of association and freedom from censorship and
that sanctions will not be taken against them if they exercise those rights through the Internet.
However, over the last 10 years or so, another debate has also emerged which focuses on the
potential that the Internet offers for a more direct kind of democracy. In 1994, the term
E-democracy was coined, although some prefer to use terms such as cyber democracy or digital
democracy. Whichever term is used, the aim is broadly the same: to enhance the democratic
process by narrowing the gap between the governed and the government, the electorate and
the elected representatives.
At one level, this technological development opens up the possibility of electronic campaigns,
grassroots campaigns that bypass the usual political channels, interactive political dialogues
between voters and candidates for office and even virtual political associations, parties and
interest groups.
At another level, it can mean the use of the Internet to enable people to be directly involved in
the decision-making process through electronic voting at elections and electronic
referendums. In 2002, a number of experiments were carried out in Switzerland where they
had already introduced postal voting as an alternative to the traditional ballot box. This
required the development of procedures that would ensure that the electronic vote would be as
private and secure and as free of fraudulent voting as the traditional system or the postal
voting system. The card which entitled a person to vote electronically included a scratch-card
strip concealing a password that was exclusive to that person. They could not access the
e-voting system without both the password and some additional personal information.
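The two-factor check described here can be illustrated with a short sketch. This is purely illustrative Python, not the actual Swiss system: the card number, password, date of birth and the shape of the voter register are all invented for the example.

```python
# Illustrative sketch of two-factor access to an e-voting system:
# a voter must present BOTH the scratch-card password and a matching
# personal detail (here, a date of birth) before being allowed to vote.
# All names and data below are invented.

VOTER_REGISTER = {
    "CH-4711": {"password": "K7Q-92X", "date_of_birth": "1970-05-14", "has_voted": False},
}

def may_vote(card_id, password, date_of_birth):
    """Grant access only if both factors match and the voter has not already voted."""
    record = VOTER_REGISTER.get(card_id)
    if record is None:
        return False          # unknown voting card
    if record["has_voted"]:
        return False          # one person, one vote
    return (record["password"] == password
            and record["date_of_birth"] == date_of_birth)

# The password alone is not enough...
print(may_vote("CH-4711", "K7Q-92X", "1980-01-01"))  # False
# ...but both factors together open the system.
print(may_vote("CH-4711", "K7Q-92X", "1970-05-14"))  # True
```

The point of requiring a second, independent piece of information is that a stolen or intercepted voting card on its own grants no access.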
The advantages of electronic voting for elections and referendums are clear to any population
which, like the Swiss, is highly mobile. They do not have to be in their locality when they
vote. It also offers more opportunities for those who are physically handicapped to exercise
their right to vote in secrecy. But perhaps the main attraction for the supporters of direct
democracy is that it increases the possibility of the electorate exercising their vote regularly
on a whole range of referendums and opinion polls between elections. The supporters also
argue that this would encourage people to vote more frequently and to become more
politically active and involved in the important decisions which affect their everyday lives.
However, as yet, in Switzerland, it would appear that the experts are undecided as to whether
or not this will happen. Some other countries have also started to make tentative steps
towards electronic democracy, particularly the United States and some of the EU member
states. Perhaps the strongest argument that has been put by the supporters of electronic
democracy is that most democracies are now confronted by electorates where a growing
number of people are cynical about politics and politicians and where there is widespread
political apathy and unwillingness to engage in the political process. The opportunity to
express their views, engage in electronic campaigns, and organise electronic petitions might
help to reverse what is otherwise a dangerous trend for any democracy.
This move towards electronic democracy is not universally welcomed. There are four main
concerns or objections. The first is both a practical and an ethical consideration. Political
equality is a fundamental principle of democracy. If people are eligible to be citizens, then
they have a fundamental right to take part in the political process and they should not be
excluded from that process simply because they do not own or have access to the necessary
electronic media. This is an important consideration but there is no reason why people could
not continue to vote by post or at the ballot box if they could not access a computer. However,
they might also be less likely to exercise their right to vote as frequently as someone who
simply has to access the Internet.
A second argument that is sometimes made when supporters of e-democracy mention the
Swiss example is that there is a political culture and a tradition of direct democracy in
Switzerland which is not present in most other liberal democracies. On average the Swiss go
to the polls at least four times a year and, when they do, they often vote on several proposals
and referendums at the same time. They will also vote on local, cantonal and federal issues
on that occasion. Even so, it has also been pointed out that the turnout for these votes is
usually only about 40% of the electorate, although it is anticipated that this would probably
increase if people could vote electronically.
A third argument made by some of those who are sceptical about electronic democracy is that
it increases the likelihood of political populism - sometimes described by the critics as a
tendency for some political activists to “pander to people’s lowest instincts”, such as racism,
anti-Semitism, fascism, demands for vengeance for real or imagined wrongs, and so on.
Some of these critics express a fear of what they describe as “the tyranny of the majority” - a
situation where direct democracy enables the largest ethnic, religious, national or other group
always to win every vote leaving minorities feeling powerless and oppressed. This gets to the
heart of any discussion of democracy. Sometimes people claim that democracy is the rule of
the majority. The problem with this definition is that the democratic process itself tends to
break down if the majority in each case is essentially the same group. Then it would mean
that the minority or minorities would never be able to influence any political decision. Their
participation in the democratic process would be a sham.
Majority rule depends on two things. First, that all citizens agree to be bound by the will of
the majority on any particular decision. This depends on the principle of reciprocity. I agree to
abide by your decisions as long as you will abide by my decisions when I am in the majority.
But, for this principle to work, everyone has to have an equal chance of belonging to the
majority on some issues some of the time. A fundamental problem arises when a regime is
split into a permanent majority and permanent minorities based on race, ethnicity, religion,
language or some other characteristic which is part of people’s basic identity. If their
opinions and choices always reflect their identity, then there is no hope that people will make
up their minds on the basis of specific circumstances, information or evidence. This problem
can also arise in representative or parliamentary democracies, but usually the
institutions and the constitutional checks and balances (different parliamentary chambers, an
independent judiciary, etc) are designed to limit the possibility of tyranny by the majority.
The key question for supporters of direct democracy or e-democracy therefore is how to
ensure that there are constitutional safeguards in this system too that will prevent the total
domination by the same majority over the different minorities in society.
The fourth and final argument from the critics of e-democracy is partly a technological one
but it is also rooted in another fundamental principle of liberal democracy. The rise of
electronic commerce led to widespread concerns about online privacy. Indeed, many people
are still reluctant to use the Internet to purchase goods because they are afraid that the
information they provide (financial and personal) will lead to an invasion of their privacy:
information about their credit cards, their bank accounts, their email and home addresses, and
so on. After a transaction, the website they were visiting may also have left cookies and other
files on their personal computer to enable a commercial interest to continue gathering
information about them and their online purchasing patterns. The website may even choose to
make this information available to other commercial enterprises without their permission.
Now, say the critics, translate this typical consumer concern into a political context. Their
view is that citizens would be vulnerable whenever they engaged in political activity via the
Internet. If they participated in an online discussion or debate, wanted to find out more about
a particular campaign or piece of legislation, or whatever, the web servers they were
accessing could create a “cookie” that would contain that person’s unique identification
number which would then allow the people who control the web site to call up information
about that person collected on previous occasions.
Furthermore, any web site that knows a person’s identity and has a cookie for them could then
exchange their data with other agencies who have web sites and synchronise their cookies.
This may seem rather paranoid but these techniques are already being used to a limited degree
in US Presidential, Senate and Congressional elections where political campaigning
consultants are profiling voters in this way so that candidates can target their campaigning on
those who are likely to agree with them and not waste scarce resources on trying to convince
people who are unlikely to vote for them anyway.
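The profiling mechanism the critics describe can be sketched in a few lines. This is a deliberately simplified illustration in Python – a dictionary standing in for a server’s database, with invented identifiers and page names – not the code of any real campaign site.

```python
# Simplified sketch of cookie-based visitor profiling.
# On a first visit the server issues a cookie carrying a unique ID;
# on every later visit that ID lets the server retrieve and extend
# the record of pages the visitor has looked at.

import itertools

_next_id = itertools.count(1)
profiles = {}  # cookie ID -> list of pages visited (stands in for a server database)

def handle_visit(cookie, page):
    """Return the visitor's cookie, creating one on a first visit,
    and record the requested page in that visitor's profile."""
    if cookie is None or cookie not in profiles:
        cookie = f"uid-{next(_next_id)}"
        profiles[cookie] = []
    profiles[cookie].append(page)
    return cookie

# A first visit issues a cookie...
cookie = handle_visit(None, "/campaign/gun-control")
# ...and subsequent visits with the same cookie build up a profile.
handle_visit(cookie, "/campaign/tax-reform")
handle_visit(cookie, "/candidates/smith")
print(profiles[cookie])
# ['/campaign/gun-control', '/campaign/tax-reform', '/candidates/smith']
```

Exchanging such profiles between sites, as described above, amounts to two servers agreeing to treat each other’s IDs as referring to the same person.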
At the same time, the supporters of E-democracy point out that, in an open political
democracy, this kind of activity cannot be kept secret for very long. To support their view
they point to the widespread public outrage in the United States in 1999 when the Internet
advertising network, DoubleClick, proposed to merge with Abacus Direct, an off-line direct
marketing company, which had a large database containing information about people’s credit
cards, incomes, home loan records, etc. In the end, the public objections were so intense and
widespread that DoubleClick did not go through with the deal.
It is of course possible that legal and electronic safeguards can be introduced to protect the
privacy of every citizen who becomes actively engaged in politics through the Internet.
However, it is worth remembering just how important individual privacy is in the modern
democratic political process. Privacy was never an issue in ancient Greece or Rome. Private
interests were not supposed to have any place in the political public sphere. The Latin origin
of the word “privacy” is “privatus”, meaning “withdrawn from public life”.
Of course, in practice, as anyone who has read a history of life in classical Athens or Rome
will know, the distinction between private interest and public life was never that clear-cut.
Nevertheless, when we come to more modern times, with our knowledge of what life can be
like in a totalitarian regime, there is a recognition that privacy gives people greater freedom to
engage in politics without fear of threats designed to force them into supporting a particular
position. So the important question that any advocate of direct democracy, electronic or
otherwise, always has to answer is what safeguards can be put in place to ensure that each
individual’s privacy can be protected.
A variety of viewpoints about the issues
A statement issued by Google in January 2006:
“While censoring search results is inconsistent with Google’s mission, providing no
information (or a heavily degraded user experience that amounts to no information) is more
inconsistent with our mission.”
Sergey Brin, co-founder of Google, when asked if Google had agreed to censorship
regulations set down by the Chinese authorities replied:
“No, and China never demanded such things. However, other search engines have established
local premises there and, as a price of doing so, offer severely restricted information. We have
no sales team in China. Regardless, many Chinese Internet users rely on Google. To be fair
to China, it never made any explicit demands regarding censoring material. That’s not to say
I’m happy about the policies of other portals that have established a presence there.”
Interview in Playboy, September 2004.
US intelligence agencies have recently shown a great deal of interest in Internet
surveillance. One thrust of this is determining geolocation from IP numbers. Currently this
is about 80% effective in fixing the IP number to a major city and over 90% in fixing it to a
country. It is believed that, when geolocation is combined with an analysis of the kinds of
search topics people in that location have most often visited, this would provide an insight
into a society and its subcultures.
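Geolocation of this kind rests on lookup tables that map blocks of IP addresses to the places where those blocks are registered. The toy sketch below shows the principle only; the address ranges and place names are invented, and real databases are far larger and only partly accurate – hence the 80–90% figures above.

```python
import ipaddress

# Toy lookup table: network block -> registered location (invented data).
GEO_TABLE = [
    (ipaddress.ip_network("198.51.100.0/24"), ("Paris", "France")),
    (ipaddress.ip_network("203.0.113.0/24"), ("Beijing", "China")),
]

def geolocate(ip):
    """Return (city, country) for an IP address, or None if the
    address falls in no known block."""
    addr = ipaddress.ip_address(ip)
    for network, location in GEO_TABLE:
        if addr in network:
            return location
    return None

print(geolocate("203.0.113.42"))   # ('Beijing', 'China')
print(geolocate("192.0.2.1"))      # None: not every address can be placed
```

A surveillance agency combining such a table with a log of search queries could, in principle, attribute those queries to a city or country, which is the concern raised above.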
Guo Guoting, lawyer to Shi Tao, explained that:
“The law on state secrets is not very clear. As a result, the interpretation of the concept of so-called secrets is vague. It is therefore very easy for the authorities to use this law against
journalists who speak their mind.” 25 January 2005.
Spokesperson for Yahoo:
“Just like any other global company, Yahoo must ensure that its local country sites
operate within the laws, regulations and customs of the country in which they are based.”
The Prison Committee of International PEN urged Yahoo: “To re-examine its policies to
ensure that they do not have a negative impact on the legitimate practice of the right to
freedom of expression and information, as guaranteed by international human rights
standards, notably Article 19 of the Universal Declaration of Human Rights.”
Reporters sans Frontières, 6 September 2005:
“We already know that Yahoo collaborates enthusiastically with the Chinese regime on
questions of censorship, and now we know it is a Chinese police informant as well… the
company will yet again simply state that they just conform to the laws of the countries in
which they operate. But does the fact that this cooperation operates under Chinese law free it
from all ethical considerations? How far will it go to please Beijing?”
The British journalist Nick Cohen, writing in The Observer on 25 February 2007,
surveyed the steps being taken in a number of countries, particularly China, and
suggested that:
“The net is humbling big business, so it is claimed, as consumers compare the price of
everything from gas to bank interest rates and take their custom to the corporations offering
the best value. Doctors face patients who can find out where the best value-for-money
treatments are offered. Politicians must cope with an electorate that can investigate the claims
of soundbites and manifestos with the click of a mouse. …The globalisation of the net was
meant to challenge censorship and tyranny but dictatorships are tenacious and it has not
happened yet… the net is proving surprisingly easy for dictatorships to control.”
E.J. Bloustein describes life under a totalitarian regime, but the question that might
follow on from this is whether a similar condition might emerge when electronic
means of enabling people to get more information and to participate more widely in
political activity also lead to greater surveillance of their behaviour and opinions:
“The man who is compelled to live every minute of his life among others and whose every
need, thought, fancy or gratification is subject to public scrutiny, has been deprived of his
individuality and human dignity. Such an individual merges with the mass. His opinions,
being public, tend always to be conventionally accepted ones; his feelings, being openly
exhibited, tend to lose their quality of unique personal warmth and to become the feelings of
every man.” “Privacy as an aspect of human dignity”, in F.D. Schoeman (ed.), Philosophical
Dimensions of Privacy, New York (1984).
C.D. Hunter, a political analyst in the United States, suggests that there are a number of
steps that any liberal democracy can take to ensure that every political campaign web
site conforms to the same privacy policy:
“Given the sensitive nature of individuals’ deeply held political convictions, [personal]
information should be afforded a high level of privacy protection. All campaign web sites
collecting personal information should have a posted privacy policy which states:
1. What personal information is collected;
2. How and where it is gathered;
3. Whether personal information will be sold to third parties or shared with other
campaigns, and if so the right to opt-out;
4. The ability to access and correct information held by the campaign;
5. An assurance that personal information will be stored in a secure fashion; and
6. The ability to opt-out of campaign e-mail lists.”
C.D. Hunter, “Political Privacy and Online Politics: How E-Campaigning Threatens Voter
Privacy”, First Monday, Issue 7 (2002).
What Do You Think?
Would it be anti-democratic to prevent a political party from contesting elections and
achieving power if it threatened in its public statements to undermine democracy by banning
opposition parties, trades unions, opposition media, etc?
In your opinion, can new technologies help citizens to participate more directly in the political
process? If you do think this, what steps would need to be taken to ensure that this happens?
Would access to Electronic Democracy encourage more people to take an interest in
European issues and elections to the European Parliament?
CONCLUSION
Today most of us live in societies which are increasingly characterised by their ethnic and
cultural diversity. This is partly due to the enlargement of the European Union since 1989
which has enabled people to move from one European country to another in search of
employment or education. But there are also historical reasons for this diversity: the
continuing links between some West European countries and their former colonies, the desire
of some indigenous linguistic and cultural minorities to maintain their languages and
traditions, and the creation of ethnic and national minorities in some countries as a result of
the break-up of the old European empires after the First World War, the re-drawing of some
national borders after the Second World War, and then the changes which occurred after the
break-up of the Soviet Union and of the Socialist Federal Republic of Yugoslavia in the 1990s.
These are relatively recent changes, but they are part of a much longer historical process of
cultural borrowing and assimilation, driven by trade, wars, invasions and colonisation, that
has helped to create the rich cultural diversity which characterises contemporary Europe.
However, in recent years there has been growing concern that this increased cultural and
ethnic diversity may undermine the cohesion of the societies in which we live: that some
minorities do not seem to have a sense of belonging to society as a whole, are not
appreciated or positively valued, and are consciously excluded from mainstream society or
exclude themselves. This cultural volatility, it is often claimed, creates social tensions
within multicultural communities which can then lead to violence and public disorder.
In the view of the editors of this book, diversity, in itself, does not pose a threat to social
cohesion. The threat lies in how we respond to diversity; how we treat people who are
different from ourselves, whether “we” represent the majority or a minority. As F. Peter
Wagner points out, problems tend to arise in two general situations.17 The first is where one
group abides by certain practices, beliefs or values which are unacceptable to other groups in
that community. It is likely that issues around this conflict of values will become volatile and
divisive in circumstances where an attempt is made to place limitations either on the rights of
that group to propagate and pursue its beliefs and values or on the rights of other groups to
publicly oppose those beliefs and values.
The second situation is where the cultural practices, beliefs or values of a specific group
(often a minority) appear to challenge the cultural practices, beliefs or values of other groups
(often the majority) to the point where the latter question the inclusion and participation of the
former within the community, or the minority virtually excludes itself and opts out of
participation in that community.
When the International Covenant on Civil and Political Rights was being ratified, a number of
member states of the United Nations expressed concern about Article 27 which asserts the
right of minorities “in community with the other members of their group, to enjoy their own
culture, to profess and practise their own religion, or to use their own language”. Several
member states proposed an amendment to Article 27 to the effect that recent immigrants
should not be considered to be minorities because they were not yet assimilated and
17. F. Peter WAGNER, “Defining Citizenship, Common Values and the Cultural Foundations of
Citizenship”, DGIV/CULT/ID(2005)10, Appendix 2.
constituted a potential challenge to the unity of the nation.18 In this sense, “inclusion” and
“participation” seemed to presuppose “loyalty”. Ultimately, the amendment was defeated, but
the tension between “rights” and “loyalty” (or between “rights” and “responsibilities” in the
current debate) has continued to shape discussion about citizenship in the 21st century.
Essentially, two very different interpretations of citizenship have emerged out of this debate.
One view focuses on assimilation into a common national history and cultural traditions; the
other focuses on generating cross-community support for certain fundamental processes
which enable multicultural communities to sustain themselves regardless of whether or not
their members have a common history, cultural tradition and practices. This book is firmly
rooted in the latter tradition.
Through a variety of case studies focused around some of the most important questions about
how we live and how we interact with each other, we have tried to encourage the adoption of
a way of thinking and arguing which:
• Challenges the taken-for-granted assumptions and perspectives of all who engage in a
debate on a particular issue (regardless of whether they represent a majority or a
minority);
• Acknowledges that those who represent “a minority view” may lack the power and
access to the mass media to promote their views as effectively as those who represent
“a majority view”;
• Respects all cultures but will critically examine the specific views held or actions
taken by individual members of different cultures, particularly if their views or
actions would violate the rights of others (both within and outside those cultures);
• Attempts to understand different perspectives and points of view and why people
might hold them, without necessarily agreeing with those views;
• Expects to have one’s own views and taken-for-granted assumptions critically
challenged as well.
In this book, we have suggested that, at one level, the basis for how we treat other people is
set out in documents such as the Universal Declaration of Human Rights and the European
Convention on Human Rights. Although these documents are primarily concerned with the
relationship between the state and the individual, they also have important implications for
how we behave in our everyday lives. We are expected to respect and protect other people’s
rights and, in return, we expect them to respect and protect our rights. However, as we have
seen in each of the case studies, human rights often conflict with each other in everyday
practical situations. In such circumstances, we would argue that there is a second, even more
fundamental, basis for determining how we treat each other: the core procedural values which
underpin human rights. That is, other people, regardless of whether or not we share a
common culture, traditions, faith, lifestyle or political beliefs and ideals, are entitled to be
treated with respect, to receive the same fair and equal treatment as we expect for ourselves,
and to have the same opportunity as we have to express their views or practise their faith or
way of life. In return, we expect the same degree of respect and fair and equal treatment from
them, so that any interaction between us can be based on good faith.
For centuries now, as we have tried to show in the Timeline which accompanies this book,
people around the world have been engaged in a prolonged, difficult and often violent
struggle to promote these core values and the fundamental human rights which represent their
18. See, e.g., Francesco CAPOTORTI, Study on the Rights of Persons belonging to Ethnic,
Religious and Linguistic Minorities, United Nations, New York, 1991.
practical expression in every aspect of our day-to-day lives. Nevertheless, as the heavy
caseload of the European Court of Human Rights and, indeed, the caseloads of the highest
courts in each European nation, clearly demonstrate, states, multinational corporations and
individual people need constant reminders that their actions may be violating other people’s
human rights and treating them in ways which ignore natural justice. These core values need
to be practised and upheld not only in the law courts but in our everyday dealings with each
other. Otherwise they cease to have real meaning and we will cease to have any real sense of
commitment to them. To return to the words of Eleanor Roosevelt which we quoted in the
Preface to this book, human rights begin close to home, where every one of us seeks justice,
equal opportunities and dignity without discrimination. “Unless these rights have meaning
there, they have little meaning anywhere.”