To see other types of publications on this topic, follow the link: Juges – Canada – Attitudes.

Journal articles on the topic 'Juges – Canada – Attitudes'

Consult the top 17 journal articles for your research on the topic 'Juges – Canada – Attitudes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse journal articles across a wide variety of disciplines and organise your bibliography correctly.

1

Ostberg, C. L., Matthew E. Wetstein, and Craig R. Ducat. "Attitudes, Precedents and Cultural Change: Explaining the Citation of Foreign Precedents by the Supreme Court of Canada." Canadian Journal of Political Science 34, no. 2 (June 2001): 377–99. http://dx.doi.org/10.1017/s0008423901777943.

Full text
Abstract:
Policy convergence theory suggests that political leaders of societies will often emulate policy solutions that work in other settings. Yet political leaders can also reject policy alternatives, leading to policy divergence. This study explores the extent to which policy convergence (and/or divergence) takes place in the legal setting of citation practices by the Supreme Court of Canada. The authors examine the Court's practice of citing authorities from other countries, particularly the United States. The findings echo earlier works that have found increasing citation of US case law since the adoption of the Canadian Charter of Rights and Freedoms in 1982. The justices of the Canadian Supreme Court continue to devote considerable attention to the legal doctrines of other countries' courts, particularly when they are confronted with Charter disputes. Thus, convergence theory gets some qualified support when applied to the Canadian Supreme Court's citation practices. The authors provide several complementary explanations for this evidence of policy emulation, suggesting that it stems from the individual attitudes of justices, from the litigation strategies pursued by groups and from broader societal values that the justices adhere to in their rulings. As such, foreign citation patterns of justices on the Supreme Court of Canada should not only be of interest to public law scholars, but to political scientists generally.
APA, Harvard, Vancouver, ISO, and other styles
2

Akman, Dogan D., André Normandeau, Thorsten Sellin, and Marvin E. Wolfgang. "Towards the Measurement of Criminality in Canada." Acta Criminologica 1, no. 1 (January 19, 2006): 135–260. http://dx.doi.org/10.7202/017002ar.

Full text
Abstract:
'Mesure de la délinquance au Canada' presents the final results of a methodological replication of the study by T. Sellin and M. E. Wolfgang, who validated, a few years ago, a crime-seriousness index for the United States. The aim of the present research is to develop a similar index for Canada. The Dominion Bureau of Statistics is responsible for compiling Canadian criminal statistics. These statistics are based on the annual reports of the country's various police forces, and the classification of crimes is broadly similar to the American system commonly known as Uniform Crime Reporting. That system, however, takes no account of the relative seriousness of different violations of the law, a shortcoming that biases any analysis of the extent and nature of criminality over time and across space. This is what led Sellin and Wolfgang, as well as the authors of the present study, to remedy it. The main objective of the research is the quantification of the qualitative elements inherent in criminal events. In the United States, a weighting system derived from analysing the attitudes of samples of university students, police officers and juvenile court judges served this purpose. The strategy of the present study rests on a 'minimum replication model'. This model, justified by the validity of the results, interpretations and conclusions of Sellin and Wolfgang's research, reproduces the final and most essential stage of the original study. Fourteen versions of criminal offences were retained in order to develop the final index. The basic postulates underlying the index are the following: (1) the measurement of criminality and juvenile delinquency must be based on a seriousness scale that reflects the community's feelings about the relative gravity of different criminal offences; (2) the index must be built from detailed information drawn from police reports, not from the legal labels attached to criminal events; (3) with respect to juvenile delinquency, (a) offences committed by young offenders are counted regardless of the type of court or procedure that leads to their adjudication, and (b) the index must take into account only those violations that would be considered criminal if the young offenders were adults; (4) the index must be based on criminal offences of a kind that quickly lead victims or those close to them to report the events to the police; (5) the index must be based on offences that are reported with at least some consistency and that cause explicit harm to members of the community, such as bodily injury, theft and loss of property, or property damage.
The index excludes (a) offences involving the victim's consent, and conspiracy, (b) offences whose discovery depends mainly on police activity, and (c) offences that are mere attempts producing no bodily or material harm; (6) the unit of count must be the 'event' taken as a whole, not any single element of it, however important; (7) a ratio scale is the most appropriate, particularly because of its cumulative quality; (8) supposedly important variables, such as the type of weapon or the lawfulness of the offender's presence, do not increase the seriousness of offences and are therefore not taken into account. The Canadian sample comprises 2,738 subjects; students, judges, police officers and office workers took part in the study. The methods and techniques employed were borrowed from psychophysics, particularly from the work of S. S. Stevens of Harvard University, which establishes a mathematical relation between 'stimulus' and 'perception'. Each subject received the fourteen descriptions of criminal offences and assigned them numerical weights according to his or her particular attitudes. These numerical results were compiled using the geometric mean and analysed by correlation (r) and regression (b). On the basis of these results, Sellin and Wolfgang's major hypotheses were reformulated as follows. Minimum expectation: if the offence-seriousness indices drawn from two populations (sex, culture, country) are compared, the relation between them must be a function of the form Y = aX^b (points plotted on log-log paper fall on a straight line); clearly, this expectation applies only to the offences chosen by Sellin and Wolfgang. Maximum expectation: if the offence-seriousness indices drawn from a large number of populations or sub-populations (especially within a single country) are related, the relation between them is a function of the form Y = aX^b (points plotted on log-log paper fall on a straight line); moreover, as the number of groups in the sample increases, the slope tends toward 1. Again, this applies only to the chosen offences. The results of this study confirm the reliability and stability of the Sellin-Wolfgang index. They permit the development of a Canadian index of offence seriousness that constitutes a refined measure of criminality and juvenile delinquency, capable of advantageously replacing the measure currently used in Canada.
APA, Harvard, Vancouver, ISO, and other styles
3

Macdonald, Scott, and Patricia Erickson. "Factors associated with attitudes toward harm reduction among judges in Ontario, Canada." International Journal of Drug Policy 10, no. 1 (February 1999): 17–24. http://dx.doi.org/10.1016/s0955-3959(98)00074-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Grandpré, Louis-Philippe de. "Faut-il réformer la Cour suprême du Canada ?" La réforme de la Cour suprême 26, no. 1 (April 12, 2005): 189–93. http://dx.doi.org/10.7202/042657ar.

Full text
Abstract:
The author deals with the question of the Supreme Court's entire jurisdiction over all subject matters. Is the amount of work, especially with the arrival of the Charter of Rights and Freedoms, too much for the Court to handle? He also questions the number of judges, their nomination process, the inequity of their remuneration and the problems inherent in a dual judicial system. The thrust of the article is that it is not the Supreme Court that is in need of reform, but rather the attitude that the government has towards it.
APA, Harvard, Vancouver, ISO, and other styles
5

Dagnone, J. Damon, Amandeep Takhar, and Lauren Lacroix. "The Simulation Olympics: a resuscitation-based simulation competition as an educational intervention." CJEM 14, no. 06 (November 2012): 363–68. http://dx.doi.org/10.2310/8000.2012.120767.

Full text
Abstract:
The Department of Emergency Medicine at Queen's University developed, implemented, and evaluated an interprofessional simulation-based competition called the Simulation Olympics with the purpose of encouraging health care providers to practice resuscitation skills and foster strong team-based attitudes. Eleven teams (N = 45) participated in the competition. Teams, composed of nurses, respiratory therapists, and undergraduate and postgraduate medical trainees, completed three standardized resuscitation scenarios in a high-fidelity simulation laboratory. Trained standardized actors and a dedicated technician were used for all scenarios. Judges evaluated team performance using standardized assessment tools. All participants (100%) completed an anonymous two-page questionnaire prior to the competition assessing baseline characteristics and evaluating participant attitudes, motivation, and barriers to participation. The majority of participants (71%) completed an evaluation form following the event focusing on highlights, barriers to participation, and desired future directions. Evaluations were uniformly positive in short-answer feedback and attitudinal scoring measures. To our knowledge, the Simulation Olympics competition is the first of its kind in Canada to be offered at an academic teaching hospital.
APA, Harvard, Vancouver, ISO, and other styles
6

Hudson, Graham. "Neither Here nor There: The (Non-) Impact of International Law on Judicial Reasoning in Canada and South Africa." Canadian Journal of Law & Jurisprudence 21, no. 2 (July 2008): 321–54. http://dx.doi.org/10.1017/s0841820900004446.

Full text
Abstract:
In this paper, the author explores the question of whether formalizing the Canadian law of reception would lead to an increase in the domestic influence of international law. He begins by briefly recounting Canada’s decidedly informal law of reception and, through a review of academic commentary, suggests a relationship between informality and international law’s historically weak influence on judicial reasoning. Tying this commentary to seemingly sociological perspectives on globalization, judges’ international legal personality and the changing forms and functions of law, he forwards the hypothesis that judges’ subjective recognition of the authority of international law can be engendered, modified and/or regulated through the procedural use of more familiar domestic legal authority. This hypothesis is then tested through a comparative analysis of the impact which international law has had in South Africa, where an historically informal law of reception akin to Canada’s has been replaced with clear and robust constitutional rules obligating the judiciary to consider and use international law. The author observes that there are no perceptible differences in the two jurisdictions; in neither country does international law exert a significant, regular or predictable impact on judicial reasoning. He concludes, modestly, that there is no available evidence to support the belief that Canadian judicial practice would change if the Canadian law of reception were formalized. He further concludes, less modestly, that this has significant implications for underlying legal theory and, in particular, that theories concerning how the domestic impact of international law can be augmented, though seemingly sociological, are decidedly positivist in orientation. Given that judges’ subjective attitudes towards international law are not perceptibly linked to domestic legal procedures, international, comparative and transnational legal theorists must, either, find evidence to demonstrate this link, or, recognize that their theoretical allegiances are divided between two, inconsistent traditions: legal positivism and the sociology of law.
APA, Harvard, Vancouver, ISO, and other styles
7

Flanagan, Brian, and Sinéad Ahern. "JUDICIAL DECISION-MAKING AND TRANSNATIONAL LAW: A SURVEY OF COMMON LAW SUPREME COURT JUDGES." International and Comparative Law Quarterly 60, no. 1 (January 2011): 1–28. http://dx.doi.org/10.1017/s0020589310000655.

Full text
Abstract:
This is a survey study of 43 judges from the British House of Lords, the Caribbean Court of Justice, the High Court of Australia, the Constitutional Court of South Africa, and the Supreme Courts of Ireland, India, Israel, Canada, New Zealand and the United States on the use of foreign law in constitutional rights cases. We find that the conception of apex judges citing foreign law as a source of persuasive authority (associated with Anne-Marie Slaughter, Vicki Jackson and Chris McCrudden) is of limited application. Citational opportunism and the aspiration to membership of an emerging international ‘guild’ appear to be equally important strands in judicial attitudes towards foreign law. We argue that their presence is at odds with Ronald Dworkin's theory of legal objectivity, and is revealed in a manner meeting his own methodological standard for attitudinal research.
Wordsworth's words, written about the French Revolution, will, I hope, still ring true: Bliss was it in that dawn to be alive. But to be young was very heaven. – Justice Stephen Breyer's assessment of ‘the global legal enterprise now upon us’ before the American Society of International Law (2003)
APA, Harvard, Vancouver, ISO, and other styles
8

Brown, R. Blake, and Magen Hudak. "‘Have you any recollection of what occurred at all?’: Davis v. Colchester County Hospital and Medical Negligence in Interwar Canada." Journal of the Canadian Historical Association 26, no. 1 (August 8, 2016): 131–62. http://dx.doi.org/10.7202/1037200ar.

Full text
Abstract:
The history of medical malpractice in Canada has received little attention from legal or medical historians. Through a contextualized study of a Nova Scotia case from the 1930s, Davis v. Colchester County Hospital, this article demonstrates how changes in technology and surgical procedures both created situations that spurred malpractice claims, and made it difficult for injured patients to prove medical negligence. In addition, developments in tort law concerning the liability of hospitals, and the doctors and nurses working within them, provided medical defendants ample opportunity to avoid legal liability, even in cases in which the existence of negligent treatment was obvious. The testimony at trial, the legal strategies utilized by the lawyers, and the judicial rulings also shed light on attitudes of the medical profession toward personal responsibility and ethics, and demonstrate how the interests of patients were weighed against those of medical institutions and professionals by lawyers and judges.
APA, Harvard, Vancouver, ISO, and other styles
9

Shain, Martin, and Gillian Higgins. "The intoxication defense and theories of criminal liability: a praxeological approach." Contemporary Drug Problems 24, no. 4 (December 1997): 731–63. http://dx.doi.org/10.1177/009145099702400405.

Full text
Abstract:
This paper applies an emerging method of research, “legal praxeology,” to the study of decisions concerning intoxication as a defense to criminal charges. This method is based on the observation that judges import their own values, attitudes and beliefs into their decisions in identifiable ways. We observed this phenomenon in 40 cases and deduced that judicial views about the intoxication defense are organized around two major constructs that themselves are drawn from the substrate of judicial views concerning the basis of criminal liability in general. The resulting two-dimensional analytic framework was then applied to the leading Canadian case, R. v. Daviault [1994] 3 SCR 63. We observe that majority and minority opinions of the Supreme Court in Daviault fall out along the dimensions extracted from the 40 cases, as does the text of the legislative amendment introduced in the wake of the decision (Bill C-72, now s. 33.1 of the Criminal Code of Canada). In Daviault, the Canadian Charter of Rights and Freedoms plays a significant role in challenging the judges of the Supreme Court to identify their fundamental values and beliefs. We conclude that the Charter is a benign catalyst to the development of legal praxeology in that it calls for a more declarative, and thus public, jurisprudence. Charter-assisted legal praxeology goes some way toward revealing the great social value tensions locked up in what at first appear to be purely legal doctrinal disputes concerning the scope and application of the intoxication defense.
APA, Harvard, Vancouver, ISO, and other styles
10

Hudson, Graham. "Wither International Law? Security Certificates, the Supreme Court, and the Rights of Non-Citizens in Canada." Refuge: Canada's Journal on Refugees 26, no. 1 (October 9, 2010): 172–86. http://dx.doi.org/10.25071/1920-7336.30619.

Full text
Abstract:
In this paper, the author examines the role of international law in the development of Canada’s security certificate regime. On the one hand, international law has had a perceptible impact on judicial reasoning, contributing to judges’ increased willingness to recognize the rights of non-citizens named in certificates and to envision better ways of balancing national security and human rights. On the other hand, the judiciary’s attitudes towards international law as a non-binding source of insight akin to foreign law have reinforced disparities between the levels of rights afforded by the Canadian Charter of Rights and Freedoms and those afforded by international human rights. Viewed skeptically, one might argue that the judiciary’s selective, result-oriented use of international law and foreign law helped it spread a veneer of legality over an otherwise unaltered and discriminatory certificate regime. Reviewing Charkaoui I and II in international context, the author suggests an alternative account. He suggests that the judiciary’s use of international law and foreign law, although highly ambiguous and ambivalent, both was principled and has progressively brought named persons’ Charter rights more closely in step with their international human rights. Although the current balance between national security and human rights is imperfect, the way in which aspects of Canada’s certificate regime have been improved suggests that international law is a valuable resource for protecting the rights of non-citizens in Canada.
APA, Harvard, Vancouver, ISO, and other styles
11

Vandervort, Lucinda. "Sexual Consent as Voluntary Agreement: Tales of “Seduction” or Questions of Law?" New Criminal Law Review 16, no. 1 (January 1, 2013): 143–201. http://dx.doi.org/10.1525/nclr.2013.16.1.143.

Full text
Abstract:
This article proposes a rigorous method to map the law on to the facts in the legal analysis of sexual consent using a series of mandatory questions of law designed to eliminate the legal errors often made by decision makers who routinely rely on personal beliefs about and attitudes toward “normal sexual behavior” in screening and deciding cases. In Canada, sexual consent is affirmative consent, the communication by words or conduct of “voluntary agreement” to a specific sexual activity, with a specific person. As in many jurisdictions, however, the sexual assault laws are often not enforced. Reporting is lowest and non-enforcement highest in cases involving the most common type of assailants, those who are not strangers but instead persons the complainant knows, often quite well—acquaintances, supervisors or coworkers, and family members. Reliance on popular narratives about “seduction” and “stranger-danger” leads complainants, police, prosecutors, lawyers, and trial judges to truncate legal analysis of the facts and leap to erroneous conclusions about consent. Wrongful convictions and perverse acquittals, questionable plea bargains and ill-considered decisions not to charge, result. This proposal is designed to curtail the impact of prejudgments, assumptions, and biases in legal reasoning about voluntariness and affirmative agreement and to produce decisions that are legally sound, based on the application of the rule of law to the material facts. Law has long had better tools than the age-old and popular tales of “ravishment” and “seduction.” Those tools can and should be used.
APA, Harvard, Vancouver, ISO, and other styles
12

Payne, Julien D. "Further Reflections on Spousal and Child Support After Pelech, Caron and Richardson." Revue générale de droit 20, no. 3 (March 28, 2019): 477–98. http://dx.doi.org/10.7202/1058451ar.

Full text
Abstract:
The causal connection thesis espoused in Pelech, Caron and Richardson has provoked more questions than solutions. Although conceived with the objective of providing certainty and predictability as well as national uniformity, its subsequent gestation has been fraught with complications. Professor J. McLeod, whose opinions in this context have been cited by the judiciary on frequent occasions, concludes that a causal connection between the applicant's need and a state of economic dependence engendered by the marriage is a legal prerequisite to the success of any claim for spousal support, whether it be governed by the Divorce Act, 1985 or by provincial statute and whether or not any prior settlement has been reached. This blanket approach is, in the opinion of this writer, an unacceptable extension of Pelech, Caron and Richardson. Many questions have arisen concerning the prospective application of the Supreme Court of Canada trilogy. Answers to these questions by the judiciary have generated divergent and irreconcilable opinions and dispositions. The judicial conflict cannot be rationalized simply on the basis that each case is to be determined on its own facts. Consequently, this paper does not provide a detailed analysis of the cases nor even cite the diverse judicial rulings. There must be well over two hundred cases wherein Pelech, Caron and Richardson have been cited. An examination of these decisions leads to the inescapable conclusion that the attitudes of individual judges towards marriage, divorce, and spousal support obligations constitute a major inarticulate premise that explains the wide diversity of opinions expressed and dispositions reached. In an attempt to bring some semblance of order out of judicial chaos, this writer offers a framework for future decision making that can accommodate the demands of both logic and fairness.
APA, Harvard, Vancouver, ISO, and other styles
13

Peralta, Susana. "Numéro 11 - mai 2003." Regards économiques, October 12, 2018. http://dx.doi.org/10.14428/regardseco.v1i0.16183.

Full text
Abstract:
Abstention is an ever-present subject of debate in most democracies, for two reasons. One is its growing scale: in many democratic countries, an increasing share of the population chooses not to vote, fuelling scientific, political and media debate. Even in Belgium, where voting is compulsory, turnout is far from 100%. In 1995, 9% of the eligible population abstained, compared with only 5% in 1977. The legal framework for enforcing compulsory voting is in fact not very strict: between 1987 and 1990, of the 500,000 people who abstained, only 153 were prosecuted, and 138 were fined a symbolic amount. The other reason is far more worrying: the citizens who choose not to vote are very often the most disadvantaged (less wealthy, less educated, working class). This inequality is far from negligible. For a set of seven European countries and Canada, the turnout gap between the most educated citizens and their less educated fellow citizens has been estimated at 10 percentage points; in Switzerland, for the referenda held between 1981 and 1991, the gap was estimated at 25 percentage points; in the United States, for the 1972 election, it was 40 percentage points. Is this a problem? The political scientist Arend Lijphart argues that the under-representation of the most disadvantaged is the functional equivalent of the censitary (property-based) voting rules in place in many democracies at the end of the nineteenth century, which is intolerable. That position is not universally shared, however. John Stuart Mill, for example, held that the less educated should not vote because they are incapable of judging which policies favour the community's well-being. The data do not bear out that claim, but they do show clearly that the countries with higher abstention are those where income is most unequally distributed, confirming Lijphart's fear that the political views of the least advantaged are under-represented. That fear is also reinforced by the fact that a fall in abstention mainly benefits parties of the left. A phenomenon of this kind may partly explain the positions the various parties take on compulsory voting in Belgium: according to the Belgian political scientists Johan Ackaert and Lieven De Winter, its abolition could sharply inflate or deflate the electoral results of certain parties. What, then, are the factors that influence abstention? Compulsory voting has a decisive impact on the abstention rate. In a survey conducted in Belgium in 1991, 27% of respondents said they would never again vote in parliamentary elections if the compulsory-voting law were abolished. For European Parliament elections, compulsory voting has been estimated to reduce abstention by about 20 to 23 percentage points.
Abstention also varies with the type of election (national, local, European), the electoral system (proportional or majoritarian), the day of the week on which elections are held (weekend or working day), whether or not voters must register in advance (abstention is higher in countries where they must), the number of elections per year (abstention rises when there are many), and the expected outcome (abstention falls when a close result is anticipated). The decision to vote or abstain has interested economists since Downs published "An Economic Theory of Democracy" in 1957. Downs describes the voter as a rational individual who weighs the benefit and the cost of voting. The benefit is the gain from seeing one's preferred party win the election, weighted by the probability that one's own vote will be decisive for that outcome. With millions of voters, an individual's vote has a very small impact on the result, making the benefit of voting almost nil. The costs of voting include travelling to the polling station, waiting there, and gathering information beforehand. The rational voter should therefore abstain. Downs concludes that if citizens vote all the same, it is because they value the democratic system and want to prevent its collapse; this is what he calls the "long-term value" of democracy. These elements help interpret the empirical facts. Voting on a working day and having to register are costs, which increase abstention. The benefit of one's preferred party winning is greater when the stakes of the election are higher, which explains lower abstention in national than in European elections. A very close expected result increases the impact of the individual vote on the outcome, which reduces abstention. Considering the cost of obtaining the information needed to decide how to vote, the higher turnout of the more educated becomes clear: they find it easiest to obtain and interpret that information. Downs also stressed the fundamental paradox of voting. If no one votes because no single vote can influence the result, then any one citizen could decide to vote and thereby elect his preferred party, since all his fellow citizens have abstained. But if everyone reaches the same conclusion, everyone votes and each individual vote loses its value. This reasoning rests on two fundamental aspects of the act of voting. On one side is competition, which pushes people to vote: a party's supporters want to vote so that the other party does not win. On the other is free-riding, which leads people to abstain: supporters of the same party tend to shift the responsibility of voting onto one another, since that spares them the cost of voting while keeping the benefit of seeing their party elected. The message of the economic approach to abstention is that its existence is not surprising, quite the contrary. Still, in order to increase participation, certain institutional features that make voting costly can be removed. Many empirical studies have demonstrated the importance of institutional factors, and the theory helps us understand why such factors influence the decision to vote.
Among the measures that can be put in place to reduce abstention, the most effective but also the most controversial is no doubt compulsory voting, which both brings abstention down to very low levels and eliminates the social bias. Belgium has the oldest and best-established system of compulsory voting, though it is not the only country to have adopted one. The introduction of compulsory voting is not free of criticism, however. The most important objection concerns freedom of choice. Defenders of compulsory voting such as Arend Lijphart argue that the right not to vote remains intact (through a blank or spoiled ballot); what is at issue is the obligation to travel to the polling station. Beyond that, everything depends on one's scale of values: if individual liberty is preferred to equality of representation and opportunity, compulsory voting indeed makes little sense. Finally, not voting is a form of free-riding like many others in economic life, which the State must often eliminate by imposing an obligation.
APA, Harvard, Vancouver, ISO, and other styles
14

Peralta, Susana. "Numéro 11 - mai 2003." Regards économiques, October 12, 2018. http://dx.doi.org/10.14428/regardseco2003.05.01.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Holden, Todd. ""And Now for the Main (Dis)course..."." M/C Journal 2, no. 7 (October 1, 1999). http://dx.doi.org/10.5204/mcj.1794.

Full text
Abstract:
Food is not a trifling matter on Japanese television. More visible than such cultural staples as sumo and enka, food-related talk abounds. Aired year-round and positioned on every channel in every time period throughout the broadcast day, the lenses of food shows are calibrated at a wider angle than heavily-trafficked samurai dramas, beisboru or music shows. Simply, more aspects of everyday life, social history and cultural values pass through food programming. The array of shows works to reproduce traditional Japanese cuisine and cultural mores, educating viewers about regional customs and history. They also teach viewers about the "peculiar" practices of far-away countries. Thus, food shows engage globalisation and assist the integration of outside influences and lifestyles in Japan. However, food-talk is also about nihonjinron -- the uniqueness of Japanese culture1. As such, it tends toward cultural nationalism2. Food-talk is often framed in the context of competition and teaches viewers about planning and aesthetics, imparting class values and a consumption ethic. Food discourse is also inevitably about the reproduction of popular culture. Whether it is Jackie Chan plugging a new movie on a "guess the price" food show or a group of celebs taking a day-trip to a resort town, food-mediated discourse enables the cultural industry and the national economy to persist -- even expand. To offer a taste of the array of cultural discourse that flows through food, this article serves up an ideal week of Japanese TV programming. Competition for Kisses: Over-Cooked Idols and Half-Baked Sexuality Monday, 10:00 p.m.: SMAP x SMAP SMAP is one of the longest-running, most successful male idol groups in Japan. At least one of their members can be found on TV every day. On this variety show, all five appear. One segment is called "Bistro SMAP" where the leader of the group, Nakai-kun, ushers a (almost always) female guest into his establishment and inquires what she would like to eat. She states her preference and the other four SMAP members (in teams of two) begin preparing the meal. Nakai entertains the guest on a dais overlooking the cooking crews. While the food is being prepared he asks standard questions about the talento's career; "how did you get in this business", "what are your favorite memories", "tell us about your recent work" -- the sort of banal banter that fills many cooking shows. Next, Nakai leads the guest into the kitchen and introduces her to the cooks. Finally, she samples both culinary efforts with the camera catching the reactions of anguish or glee from the opposing team. Each team then tastes the other group's dish. Unlike many food shows, the boys eat without savoring the food. The impression conveyed is that these are everyday boys -- not mega CD-selling pop idols with multiple product endorsements, commercials and television commitments. Finally, the moment of truth arrives: which meal is best. The winners jump for joy, the losers stagger in disappointment. The reason: the winners receive a kiss from the judge (on an agreed-upon innocuous body part). Food as entrée into discourse on sexuality. But there is more than mere sex in the works here. For, with each collected kiss, a set of red lips is affixed to the side of the chef's white cap. Conquests. After some months the kisses are tallied and the SMAPster with the most lips wins a prize. Food begets sexuality which begets measures of skill which begets material success. Food is but a prop in managing each idol's image.
Putting a Price-tag on Taste (Or: Food as Leveller) Tuesday 8:00 p.m.: Ninki mono de ikou (Let's Go with the Popular People) An idol's image is an essential aspect of this show. The ostensible purpose is to observe five famous people appraising a series of paired items -- each seemingly identical. Which is authentic and which is a bargain-basement copy? One suspects, though, that the deeper aim is to reveal just how unsophisticated, bumbling and downright stupid "talento" can be. Items include guitars, calligraphy, baseball gloves and photographs. During evaluation, the audience is exposed to the history, use and finer points of each object, as well as the guest's decision-making process (via hidden camera). Every week at least one food item is presented: pasta, cat food, seaweed, steak. During wine week contestants smelled, tasted, swirled and regarded the brew's hue. One compared the sound each glass made, while another poured the wines on a napkin to inspect patterns of dispersion! Guests' reasoning and behaviors are monitored from a control booth by two very opinionated hosts. One effect of the recurrent criticism is a levelling -- stars are no more (and often much less) competent (and sacrosanct) than the audience. Technique, Preparation and Procedure? Old Values Give Way to New Wednesday 9:00: Tonerus no nama de daradara ikasette (Tunnels' Allow Us to Go Aimlessly, as We Are) This is one of two prime time shows featuring the comedy team "Tunnels"3. In this show both members of the duo engage in challenging themselves, one another and select members of their regular "team" to master a craft. Last year it was ballet and flamenco dance. This month: karate, soccer and cooking. Ishibashi Takaaki (or "Taka-san") and his new foil (a ne'er-do-well former Yomiuri Giants baseball player) Sadaoka Hiyoshi, are being taught by a master chef. The emphasis is on technique and process: learning the ki (the aura, the essence) of cooking. After taking copious notes both men are left on their own to prepare a meal, then present it to a young female talento, who selects her favorite. In one segment, the men learned how to prepare croquette -- striving to master the proper procedure for flouring, egg-beating, breading, heating oil, frying and draining. In the most recent episode, Taka prepared his shortcake to perfection, impressing even the sensei. Sadaoka, who is slow on the uptake and tends to be lax, took poor notes and clearly botched his effort. Nonetheless, the talento chose Sadaoka's version because it was different. Certain he was going to win, Taka fell into profound shock. For years a popular host of youth-oriented shows, he concluded: "I guess I just don't understand today's young people". In Japanese television, just as in life, it seems there is no accounting for taste. More, whatever taste once was, it certainly has changed. "We Japanese": Messages of Distinctiveness (Or: Old Values NEVER Die) Thursday, 9:00 p.m.: Douchi no ryori shiou: (Which One? Cooking Show) By contrast, on this night viewers are served procedure, craft and the eternal order of things. Above all, validation of Japanese culinary instincts and traditions. Like many Japanese cooking shows, Douchi involves competition between rival foods to win the hearts of a panel of seven singers, actors, writers and athletes. Douchi's difference is that two hosts front for rival dishes, seeking to sway the panel during the in-studio preparation.
The dishes are prepared by chefs from Tsuji ryori kyoshitsu, a major cooking academy in Osaka, and are generally comparable (for instance, beef curry versus beef stew). On the surface Douchi is a standard infotainment show. Video tours of places and ingredients associated with the dish entertain the audience and assist in making the guests' decisions more agonising. Two seating areas are situated in front of each chef and panellists are given a number of opportunities to switch sides. Much playful bantering, impassioned appeals and mock intimidation transpire throughout the show. It is not uncommon for the show to pit a foreign against a domestic dish; and most often the indigenous food prevails. For, despite the recent "internationalisation" of Japanese society, many Japanese have little changed from the "we-stick-with-what-we-know-best" attitude that is a Japanese hallmark. Ironically, this message came across most clearly in a recent show pitting spaghetti and meat balls against tarako supagetei (spicy fish eggs and flaked seaweed over Italian noodles) -- a Japanese favorite. One guest, former American, now current Japanese Grand Sumo Champion, Akebono, insisted from the outset that he preferred the Italian version because "it's what my momma always cooked for me". Similarly the three Japanese who settled on tarako did so without so much as a sample or qualm. "Nothing could taste better than tarako" one pronounced even before beginning. A clear message in Douchi is that Japanese food is distinct, special, irreplaceable and (if you're not opposed by a 200 kilogram giant) unbeatable. Society as War: Reifying the Strong and Powerful Friday, 11:00 p.m.: Ryori no tetsujin. (The Ironmen of Cooking) Like sumo this show throws the weak into the ring with the strong for the amusement of the audience. The weak in this case being an outsider who runs his own restaurant. Usually the challengers are Japanese or else operate in Japan, though occasionally they come from overseas (Canada, America, France, Italy). Almost without exception they are men. The "ironmen" are four famous Japanese chefs who specialise in a particular cuisine (Japanese, Chinese, French and Italian). The contest has very strict rules. The challenger can choose which chef he will battle. Both are provided with fully-equipped kitchens positioned on a sprawling sound stage. They must prepare a full-course meal for four celebrity judges within a set time frame. Only prior to the start are they informed of which one key ingredient must be used in every course. It could be crab, onion, radish, pears -- just about any food imaginable. The contestants must finish within the time limit and satisfy the judges in terms of planning, creativity, composition, aesthetics and taste. In the event of a tie, a one course playoff results. The show is played like a sports contest, with a reporter and cameras wading into the trenches, conducting interviews and play-by-play commentary. Jump-cut editing quickens the pace of the show and the running clock adds a dimension of suspense and excitement. Consistent with one message encoded in Japanese history, it is very hard to defeat the big power. Although the ironmen are not weekly winners, their consistency in defeating challengers works to perpetuate the deep-seated cultural myth4. Food Makes the Man Saturday 12:00: Merenge no kimochi (Feelings like Meringue) Relative to the full-scale carnage of Friday night, Saturdays are positively quiescent. Two shows -- one at noon, the other at 11:30 p.m.
-- employ food as medium through which intimate glimpses of an idol's life are gleaned. Merenge's title makes no bones about its purpose: it unabashedly promises fluff. In likening mood to food -- and particularly in the day-trip depicted here -- we are reminded of the Puffy's famous ditty about eating crab: "taking the car out for a spin with a caramel spirit ... let's go eat crab!" Merengue treats food as a state of mind, a many-pronged road to inner peace. To keep it fluffy, Merenge is hosted by three attractive women whose job it is to act frivolous and idly chat with idols. The show's centrepiece is a segment where the male guest introduces his favorite (or most cookable) recipe. In-between cutting, beating, grating, simmering, ladling, baking and serving, the audience is entertained and their idol's true inner character is revealed. Continuity Editing Running throughout the day, every day, on all (but the two public) stations, is advertising. Ads are often used as a device to heighten tension or underscore the food show's major themes, for it is always just before the denouement (a judge's decision, the delivery of a story's punch-line or a final tally) that an ad interrupts. Ads, however, are not necessarily departures from the world of food, as a large proportion of them are devoted to edibles. In this way, they underscore food's intimate relationship to economy -- a point that certain cooking shows make with their tie-in goods for sale or maps to, menus of and prices for the featured restaurants. While a considerable amount of primary ad discourse is centred on food (alcoholic and non-alcoholic beverages, coffees, sodas, instant or packaged items), it is ersatz food (vitamin-enriched waters, energy drinks, sugarless gums and food supplements) which has recently come to dominate ad space. Embedded in this commercial discourse are deeper social themes such as health, diet, body, sexuality and even death5. Underscoring the larger point: in Japan, if it is television you are tuned into, food-mediated discourse is inescapable. Food for Conclusion The question remains: "why food?" What is it that qualifies food as a suitable source and medium for filtering the raw material of popular culture? For one, food is something that all Japanese share in common. It is an essential part of daily life. Beyond that, though, the legacy of the not-so-distant past -- embedded in the consciousness of nearly a third of the population -- is food shortages giving rise to overwhelming abundance. Within less than a generation's time Japanese have been transported from famine (when roasted potatoes were considered a meal and chocolate was an unimaginable luxury) to excess (where McDonald's is a common daily meal, scores of canned drink options can be found on every street corner, and yesterday's leftover 7-Eleven bentos are tossed). Because of food's history, its place in Japanese folklore, its ubiquity, its easy availability, and its penetration into many aspects of everyday life, TV's food-talk is of interest to almost all viewers. Moreover, because it is a part of the structure of every viewer's life, it serves as a fathomable conduit for all manner of other talk. To invoke information theory, there is very little noise on the channel when food is involved6. For this reason food is a convenient vehicle for information transmission on Japanese television. Food serves as a comfortable podium from which to educate, entertain, assist social reproduction and further cultural production. Footnotes 1.
For an excellent treatment of this ethic, see P.N. Dale, The Myth of Japanese Uniqueness. London: Routledge, 1986. 2. A predilection I have discerned in other Japanese media, such as commercials. See my "The Color of Difference: Critiquing Cultural Convergence via Television Advertising", Interdisciplinary Information Sciences 5.1 (March 1999): 15-36. 3. The other, also a cooking show which we won't cover here, appears on Thursdays and is called Tunnerusu no minasan no okage deshita. ("Tunnels' Because of Everyone"). It involves two guests -- a male and female -- whose job it is to guess which of 4 prepared dishes includes one item that the other guest absolutely detests. There is more than a bit of sadism in this show as, in-between casual conversation, the guest is forced to continually eat something that turns his or her stomach -- all the while smiling and pretending s/he loves it. In many ways this suits the Japanese cultural value of gaman, of bearing up under intolerable conditions. 4. After 300-plus airings, the tetsujin show is just now being put to bed for good. It closes with the four iron men pairing off and doing battle against one another. Although Chinese food won out over Japanese in the semi-final, the larger message -- that four Japanese cooks will do battle to determine the true iron chef -- goes a certain way toward reifying the notion of "we Japanese" supported in so many other cooking shows. 5. An analysis of such secondary discourse can be found in my "The Commercialized Body: A Comparative Study of Culture and Values". Interdisciplinary Information Sciences 2.2 (September 1996): 199-215. 6. The concept is derived from C. Shannon and W. Weaver, The Mathematical Theory of Communication. Urbana, Ill.: University of Illinois Press, 1949.
APA, Harvard, Vancouver, ISO, and other styles
16

Paull, John. "Beyond Equal: From Same But Different to the Doctrine of Substantial Equivalence." M/C Journal 11, no. 2 (June 1, 2008). http://dx.doi.org/10.5204/mcj.36.

Full text
Abstract:
A same-but-different dichotomy has recently been encapsulated within the US Food and Drug Administration’s ill-defined concept of “substantial equivalence” (USFDA, FDA). By invoking this concept the genetically modified organism (GMO) industry has escaped the rigors of safety testing that might otherwise apply. The curious concept of “substantial equivalence” grants a presumption of safety to GMO food. This presumption has yet to be earned, and has been used to constrain labelling of both GMO and non-GMO food. It is an idea that well serves corporatism. It enables the claim of difference to secure patent protection, while upholding the contrary claim of sameness to avoid labelling and safety scrutiny. It offers the best of both worlds for corporate food entrepreneurs, and delivers the worst of both worlds to consumers. The term “substantial equivalence” has established its currency within the GMO discourse. As the opportunities for patenting food technologies expand, the GMO recruitment of this concept will likely be a dress rehearsal for the developing debates on the labelling and testing of other techno-foods – including nano-foods and clone-foods. “Substantial Equivalence” “Are the Seven Commandments the same as they used to be, Benjamin?” asks Clover in George Orwell’s “Animal Farm”. By way of response, Benjamin “read out to her what was written on the wall. There was nothing there now except a single Commandment. It ran: ALL ANIMALS ARE EQUAL BUT SOME ANIMALS ARE MORE EQUAL THAN OTHERS”. After this reductionist revelation, further novel and curious events at Manor Farm, “did not seem strange” (Orwell, ch. X). Equality is a concept at the very core of mathematics, but beyond the domain of logic, equality becomes a hotly contested notion – and the domain of food is no exception. A novel food has a regulatory advantage if it can claim to be the same as an established food – a food that has proven its worth over centuries, perhaps even millennia – and thus does not trigger new, perhaps costly and onerous, testing, compliance, and even new and burdensome regulations. On the other hand, such a novel food has an intellectual property (IP) advantage only in terms of its difference. And thus there is an entrenched dissonance for newly technologised foods, between claiming sameness, and claiming difference. The same/different dilemma is erased, so some would have it, by appeal to the curious new dualist doctrine of “substantial equivalence” whereby sameness and difference are claimed simultaneously, thereby creating a win/win for corporatism, and a loss/loss for consumerism. This ground has been pioneered, and to some extent conquered, by the GMO industry. The conquest has ramifications for other cryptic food technologies, that is technologies that are invisible to the consumer and that are not evident to the consumer other than via labelling. Cryptic technologies pertaining to food include GMOs, pesticides, hormone treatments, irradiation and, most recently, manufactured nano-particles introduced into the food production and delivery stream. Genetic modification of plants was reported as early as 1984 by Horsch et al. The case of Diamond v. Chakrabarty resulted in a US Supreme Court decision that upheld the prior decision of the US Court of Customs and Patent Appeal that “the fact that micro-organisms are alive is without legal significance for purposes of the patent law”, and ruled that the “respondent’s micro-organism plainly qualifies as patentable subject matter”. 
This was a majority decision of the nine judges, with four dissenting (Burger). It was this Chakrabarty judgement that has seriously opened the Pandora’s box of GMOs because patenting rights make GMOs an attractive corporate proposition by offering potentially unique monopoly rights over food. The rearguard action against GMOs has most often focussed on health repercussions (Smith, Genetic), food security issues, and also the potential for corporate malfeasance to hide behind a cloak of secrecy citing commercial confidentiality (Smith, Seeds). Others have tilted at the foundational plank on which the economics of the GMO industry sits: “I suggest that the main concern is that we do not want a single molecule of anything we eat to contribute to, or be patented and owned by, a reckless, ruthless chemical organisation” (Grist 22). The GMO industry exhibits bipolar behaviour, invoking the concept of “substantial difference” to claim patent rights by way of “novelty”, and then claiming “substantial equivalence” when dealing with other regulatory authorities including food, drug and pesticide agencies; a case of “having their cake and eating it too” (Engdahl 8). This is a clever sleight-of-rhetoric, laying claim to the best of both worlds for corporations, and the worst of both worlds for consumers. Corporations achieve patent protection and no concomitant specific regulatory oversight; while consumers pay the cost of patent monopolization, and are not necessarily apprised, by way of labelling or otherwise, that they are purchasing and eating GMOs, and thereby financing the GMO industry. The lemma of “substantial equivalence” does not bear close scrutiny. It is a fuzzy concept that lacks a tight testable definition. It is exactly this fuzziness that allows lots of wriggle room to keep GMOs out of rigorous testing regimes. Millstone et al. argue that “substantial equivalence is a pseudo-scientific concept because it is a commercial and political judgement masquerading as if it is scientific. It is moreover, inherently anti-scientific because it was created primarily to provide an excuse for not requiring biochemical or toxicological tests. It therefore serves to discourage and inhibit informative scientific research” (526). “Substantial equivalence” grants GMOs the benefit of the doubt regarding safety, and thereby leaves unexamined the ramifications for human consumer health, for farm labourer and food-processor health, for the welfare of farm animals fed a diet of GMO grain, and for the well-being of the ecosystem, both in general and in its particularities. “Substantial equivalence” was introduced into the food discourse by an Organisation for Economic Co-operation and Development (OECD) report, “Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles”. It is from this document that the ongoing mantra of assumed safety of GMOs derives: “modern biotechnology … does not inherently lead to foods that are less safe … . Therefore evaluation of foods and food components obtained from organisms developed by the application of the newer techniques does not necessitate a fundamental change in established principles, nor does it require a different standard of safety” (OECD, “Safety” 10). This was at the time, and remains, an act of faith, a pro-corporatist and a post-cautionary approach. The OECD motto reveals where its priorities lean: “for a better world economy” (OECD, “Better”). 
The term “substantial equivalence” was preceded by the 1992 USFDA concept of “substantial similarity” (Levidow, Murphy and Carr) and was adopted from a prior usage by the US Food and Drug Administration (USFDA) where it was used pertaining to medical devices (Miller). Even GMO proponents accept that “Substantial equivalence is not intended to be a scientific formulation; it is a conceptual tool for food producers and government regulators” (Miller 1043). And there’s the rub – there is no scientific definition of “substantial equivalence”, no scientific test of proof of concept, and nor is there likely to be, since this is a ‘spinmeister’ term. And yet this is the cornerstone on which rests the presumption of safety of GMOs. Absence of evidence is taken to be evidence of absence. History suggests that this is a fraught presumption. By way of contrast, the patenting of GMOs depends on the antithesis of assumed ‘sameness’. Patenting rests on proven, scrutinised, challengeable and robust tests of difference and novelty. Lightfoot et al. report that transgenic plants exhibit “unexpected changes [that] challenge the usual assumptions of GMO equivalence and suggest genomic, proteomic and metanomic characterization of transgenics is advisable” (1). GMO Milk and Contested Labelling Pesticide company Monsanto markets the genetically engineered hormone rBST (recombinant Bovine Somatotropin; also known as rbST; rBGH, recombinant Bovine Growth Hormone; and the brand name Posilac) to dairy farmers who inject it into their cows to increase milk production. This product is not approved for use in many jurisdictions, including Europe, Australia, New Zealand, Canada and Japan. Even Monsanto accepts that rBST leads to mastitis (inflammation and pus in the udder) and other “cow health problems”; however, it maintains that “these problems did not occur at rates that would prohibit the use of Posilac” (Monsanto). A European Union study identified an extensive list of health concerns of rBST use (European Commission). The US Dairy Export Council, however, entertain no doubt. In their background document they ask “is milk from cows treated with rBST safe?” and answer “Absolutely” (USDEC). Meanwhile, Monsanto’s website raises and answers the question: “Is the milk from cows treated with rbST any different from milk from untreated cows? No” (Monsanto). Injecting cows with genetically modified hormones to boost their milk production remains a contested practice, banned in many countries. It is the claimed equivalence that has kept consumers of US dairy products in the dark, shielded rBST dairy farmers from having to declare that their milk production is GMO-enhanced, and has inhibited non-GMO producers from declaring their milk as non-GMO, non-rBST, or not hormone enhanced. This is a battle that has simmered, and sometimes raged, for a decade in the US. Finally there is a modest victory for consumers: the Pennsylvania Department of Agriculture (PDA) requires all labels used on milk products to be approved in advance by the department. The standard issued in October 2007 (PDA, “Standards”) signalled to producers that any milk labels claiming rBST-free status would be rejected. This advice was rescinded in January 2008, allowing new, specific, department-approved textual constructions and ensuring that any “no rBST” style claim was paired with a PDA-prescribed disclaimer (PDA, “Revised Standards”). 
However, parsimonious labelling is prohibited: No labeling may contain references such as ‘No Hormones’, ‘Hormone Free’, ‘Free of Hormones’, ‘No BST’, ‘Free of BST’, ‘BST Free’, ‘No added BST’, or any statement which indicates, implies or could be construed to mean that no natural bovine somatotropin (BST) or synthetic bovine somatotropin (rBST) are contained in or added to the product. (PDA, “Revised Standards” 3) Difference claims are prohibited: In no instance shall any label state or imply that milk from cows not treated with recombinant bovine somatotropin (rBST, rbST, RBST or rbst) differs in composition from milk or products made with milk from treated cows, or that rBST is not contained in or added to the product. If a product is represented as, or intended to be represented to consumers as, containing or produced from milk from cows not treated with rBST any labeling information must convey only a difference in farming practices or dairy herd management methods. (PDA, “Revised Standards” 3) The PDA-approved labelling text for non-GMO dairy farmers is specified as follows: ‘From cows not treated with rBST. No significant difference has been shown between milk derived from rBST-treated and non-rBST-treated cows’ or a substantial equivalent. Hereinafter, the first sentence shall be referred to as the ‘Claim’, and the second sentence shall be referred to as the ‘Disclaimer’. (PDA, “Revised Standards” 4) It is onto the non-GMO dairy farmer alone that the costs of compliance fall. These costs include label preparation and approval, proving non-usage of GMOs, and creating and maintaining an audit trail. In nearby Ohio a similar consumer versus corporatist pantomime is playing out, this time with the Ohio Department of Agriculture (ODA) calling the shots, and again serving the GMO industry. The ODA-prescribed text allowed to non-GMO dairy farmers is “from cows not supplemented with rbST” and this is to be conjoined with the mandatory disclaimer “no significant difference has been shown between milk derived from rbST-supplemented and non-rbST supplemented cows” (Curet). These are “emergency rules”: they apply for 90 days, and are proposed as permanent. Once again, the onus is on the non-GMO dairy farmers to document and prove their claims. GMO dairy farmers face no such governmental requirements, including no disclosure requirement, and thus an asymmetric regulatory impost is placed on the non-GMO farmer, which opens up new opportunities for administrative demands and technocratic harassment. Levidow et al. argue, somewhat Eurocentrically, that from its 1990s adoption “as the basis for a harmonized science-based approach to risk assessment” (26) the concept of “substantial equivalence” has “been recast in at least three ways” (58). It is true that the GMO debate has evolved differently in the US and Europe, with other jurisdictions usually adopting intermediate positions, yet the concept persists. Levidow et al. nominate their three recastings as: firstly, an “implicit redefinition” by the appending of “extra phrases in official documents”; secondly, “it has been reinterpreted, as risk assessment processes have … required more evidence of safety than before, especially in Europe”; and thirdly, “it has been demoted in the European Union regulatory procedures so that it can no longer be used to justify the claim that a risk assessment is unnecessary” (58). Romeis et al. have proposed a decision tree approach to GMO risks based on cascading tiers of risk assessment. 
However, the defects of the concept of “substantial equivalence” persist. Schauzu identified that such decisions are a matter of “opinion”; that there is “no clear definition of the term ‘substantial’”; that because genetic modification “is aimed at introducing new traits into organisms, the result will always be a different combination of genes and proteins”; and that “there is no general checklist that could be followed by those who are responsible for allowing a product to be placed on the market” (2). Benchmark for Further Food Novelties? The discourse, contestation, and debate about “substantial equivalence” have largely focussed on the introduction of GMOs into food production processes. GM can best be regarded as the test case, and proof of concept, for establishing “substantial equivalence” as a benchmark for evaluating new and forthcoming food technologies. This is of concern, because the concept of “substantial equivalence” is scientific hokum, and yet its persistence, even entrenchment, within regulatory agencies may be a harbinger of forthcoming same-but-different debates for nanotechnology and other future bioengineering. The appeal of “substantial equivalence” has been a brake on the creation of GMO-specific regulations and on rigorous GMO testing. The food nanotechnology industry can be expected to look to the precedent of the GMO debate to head off specific nano-regulations and nano-testing. As cloning becomes economically viable, this may be another wave of food innovation that muddies the regulatory waters with the confused – and ultimately self-contradictory – concept of “substantial equivalence”. Nanotechnology engineers particles in the size range 1 to 100 nanometres – a nanometre is one billionth of a metre. This is interesting for manufacturers because at this size chemicals behave differently, or as the Australian Office of Nanotechnology expresses it, “new functionalities are obtained” (AON). Globally, government expenditure on nanotechnology research reached US$4.6 billion in 2006 (Roco 3.12). While there are now many patents (ETC Group; Roco), regulation specific to nanoparticles is lacking (Bowman and Hodge; Miller and Senjen). The USFDA advises that nano-manufacturers “must show a reasonable assurance of safety … or substantial equivalence” (FDA). A recent inventory of nano-products already on the market identified 580 products. Of these, 11.4% were categorised as “Food and Beverage” (WWICS). This is at a time when public confidence in regulatory bodies is declining (HRA). In an Australian consumer survey on nanotechnology, 65% of respondents indicated they were concerned about “unknown and long term side effects”, and 71% agreed that it is important “to know if products are made with nanotechnology” (MARS 22). Cloned animals are currently more expensive to produce than traditional animal progeny. In the course of 678 pages, the USFDA Animal Cloning: A Draft Risk Assessment has not a single mention of “substantial equivalence”. However, the Federation of Animal Science Societies (FASS) in its single-page “Statement in Support of USFDA’s Risk Assessment Conclusion That Food from Cloned Animals Is Safe for Human Consumption” states that “FASS endorses the use of this comparative evaluation process as the foundation of establishing substantial equivalence of any food being evaluated. 
It must be emphasized that it is the food product itself that should be the focus of the evaluation rather than the technology used to generate cloned animals” (FASS 1). Contrary to the FASS derogation of the importance of process in food production, for consumers both the process and provenance of production are important and integral aspects of a food product’s value and identity. Some consumers will legitimately insist that their Kalamata olives are from Greece, or their balsamic vinegar is from Modena. It was the British public’s growing awareness that their sugar was being produced by slave labour that enabled the boycotting of the product, and ultimately the outlawing of slavery (Hochschild). When consumers boycott Nestle, because of past or present marketing practices, or boycott produce of the USA because of, for example, US foreign policy or animal welfare concerns, they are distinguishing the food based on the narrative of the food, the production process and/or production context, which are a part of the identity of the food. Consumers attribute value to food based on production process and provenance information (Paull). Products produced by slave labour, by child labour, by political prisoners, by means of torture, theft, immoral, unethical or unsustainable practices are different from their alternatives. The process of production is a part of the identity of a product and consumers are increasingly interested in food narrative. It requires vigilance to ensure that these narratives are delivered with the product to the consumer, and are neither lost nor suppressed. Throughout the GM debate, the organic sector has successfully skirted the “substantial equivalence” debate by excluding GMOs from the certified organic food production process. This GMO-exclusion from the organic food stream is the one reprieve available to consumers worldwide who are keen to avoid GMOs in their diet. The organic industry carries the expectation of providing food produced without artificial pesticides and fertilizers, and by extension, without GMOs. Most recently, the Soil Association, the leading organic certifier in the UK, claims to be the first organisation in the world to exclude manufactured nanoparticles from their products (Soil Association). There has been a call for engineered nanoparticles to be excluded from organic standards worldwide, given that there is no mandatory safety testing and no compulsory labelling in place (Paull and Lyons). The twisted rhetoric of oxymorons does not make the ideal foundation for policy. Setting food policy on the shifting sands of “substantial equivalence” seems foolhardy when we consider the potentially profound ramifications of globally mass marketing a dysfunctional food. If there is a 2×2 matrix of terms – “substantial equivalence”, substantial difference, insubstantial equivalence, insubstantial difference – yet only one corner of this matrix is engaged for food policy, and the elements remain matters of opinion rather than being testable by science, or by some other regime, then the public is the dupe, and potentially the victim. “Substantial equivalence” has served the GMO corporates well and the public poorly, and this asymmetry is slated to escalate if nano-food and clone-food are also folded into the “substantial equivalence” paradigm. Only in Orwellian Newspeak is war peace, or is same different. 
It is time to jettison the pseudo-scientific doctrine of “substantial equivalence”, as a convenient oxymoron, and embrace full disclosure of provenance, process and difference, so that consumers are not collateral in a continuing asymmetric knowledge war. References Australian Office of Nanotechnology (AON). Department of Industry, Tourism and Resources (DITR) 6 Aug. 2007. 24 Apr. 2008 < http://www.innovation.gov.au/Section/Innovation/Pages/ AustralianOfficeofNanotechnology.aspx >.Bowman, Diana, and Graeme Hodge. “A Small Matter of Regulation: An International Review of Nanotechnology Regulation.” Columbia Science and Technology Law Review 8 (2007): 1-32.Burger, Warren. “Sidney A. Diamond, Commissioner of Patents and Trademarks v. Ananda M. Chakrabarty, et al.” Supreme Court of the United States, decided 16 June 1980. 24 Apr. 2008 < http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=US&vol=447&invol=303 >.Curet, Monique. “New Rules Allow Dairy-Product Labels to Include Hormone Info.” The Columbus Dispatch 7 Feb. 2008. 24 Apr. 2008 < http://www.dispatch.com/live/content/business/stories/2008/02/07/dairy.html >.Engdahl, F. William. Seeds of Destruction. Montréal: Global Research, 2007.ETC Group. Down on the Farm: The Impact of Nano-Scale Technologies on Food and Agriculture. Ottawa: Action Group on Erosion, Technology and Conservation, November, 2004. European Commission. Report on Public Health Aspects of the Use of Bovine Somatotropin. Brussels: European Commission, 15-16 March 1999.Federation of Animal Science Societies (FASS). Statement in Support of FDA’s Risk Assessment Conclusion That Cloned Animals Are Safe for Human Consumption. 2007. 24 Apr. 2008 < http://www.fass.org/page.asp?pageID=191 >.Grist, Stuart. “True Threats to Reason.” New Scientist 197.2643 (16 Feb. 2008): 22-23.Hochschild, Adam. Bury the Chains: The British Struggle to Abolish Slavery. London: Pan Books, 2006.Horsch, Robert, Robert Fraley, Stephen Rogers, Patricia Sanders, Alan Lloyd, and Nancy Hoffman. “Inheritance of Functional Foreign Genes in Plants.” Science 223 (1984): 496-498.HRA. Awareness of and Attitudes toward Nanotechnology and Federal Regulatory Agencies: A Report of Findings. Washington: Peter D. Hart Research Associates, 25 Sep. 2007.Levidow, Les, Joseph Murphy, and Susan Carr. “Recasting ‘Substantial Equivalence’: Transatlantic Governance of GM Food.” Science, Technology, and Human Values 32.1 (Jan. 2007): 26-64.Lightfoot, David, Rajsree Mungur, Rafiqa Ameziane, Anthony Glass, and Karen Berhard. “Transgenic Manipulation of C and N Metabolism: Stretching the GMO Equivalence.” American Society of Plant Biologists Conference: Plant Biology, 2000.MARS. “Final Report: Australian Community Attitudes Held about Nanotechnology – Trends 2005-2007.” Report prepared for Department of Industry, Tourism and Resources (DITR). Miranda, NSW: Market Attitude Research Services, 12 June 2007.Miller, Georgia, and Rye Senjen. “Out of the Laboratory and on to Our Plates: Nanotechnology in Food and Agriculture.” Friends of the Earth, 2008. 24 Apr. 2008 < http://nano.foe.org.au/node/220 >.Miller, Henry. “Substantial Equivalence: Its Uses and Abuses.” Nature Biotechnology 17 (7 Nov. 1999): 1042-1043.Millstone, Erik, Eric Brunner, and Sue Mayer. “Beyond ‘Substantial Equivalence’.” Nature 401 (7 Oct. 1999): 525-526.Monsanto. “Posilac, Bovine Somatotropin by Monsanto: Questions and Answers about bST from the United States Food and Drug Administration.” 2007. 24 Apr. 
2008 < http://www.monsantodairy.com/faqs/fda_safety.html >.Organisation for Economic Co-operation and Development (OECD). “For a Better World Economy.” Paris: OECD, 2008. 24 Apr. 2008 < http://www.oecd.org/ >.———. “Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles.” Paris: OECD, 1993.Orwell, George. Animal Farm. Adelaide: ebooks@Adelaide, 2004 (1945). 30 Apr. 2008 < http://ebooks.adelaide.edu.au/o/orwell/george >.Paull, John. “Provenance, Purity and Price Premiums: Consumer Valuations of Organic and Place-of-Origin Food Labelling.” Research Masters thesis, University of Tasmania, Hobart, 2006. 24 Apr. 2008 < http://eprints.utas.edu.au/690/ >.Paull, John, and Kristen Lyons. “Nanotechnology: The Next Challenge for Organics.” Journal of Organic Systems (in press).Pennsylvania Department of Agriculture (PDA). “Revised Standards and Procedure for Approval of Proposed Labeling of Fluid Milk.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 17 Jan. 2008. ———. “Standards and Procedure for Approval of Proposed Labeling of Fluid Milk, Milk Products and Manufactured Dairy Products.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 22 Oct. 2007.Roco, Mihail. “National Nanotechnology Initiative – Past, Present, Future.” In William Goddard, Donald Brenner, Sergy Lyshevski and Gerald Iafrate, eds. Handbook of Nanoscience, Engineering and Technology. 2nd ed. Boca Raton, FL: CRC Press, 2007.Romeis, Jorg, Detlef Bartsch, Franz Bigler, Marco Candolfi, Marco Gielkins, et al. “Assessment of Risk of Insect-Resistant Transgenic Crops to Nontarget Arthropods.” Nature Biotechnology 26.2 (Feb. 2008): 203-208.Schauzu, Marianna. “The Concept of Substantial Equivalence in Safety Assessment of Food Derived from Genetically Modified Organisms.” AgBiotechNet 2 (Apr. 2000): 1-4.Soil Association. “Soil Association First Organisation in the World to Ban Nanoparticles – Potentially Toxic Beauty Products That Get Right under Your Skin.” London: Soil Association, 17 Jan. 2008. 24 Apr. 2008 < http://www.soilassociation.org/web/sa/saweb.nsf/848d689047 cb466780256a6b00298980/42308d944a3088a6802573d100351790!OpenDocument >.Smith, Jeffrey. Genetic Roulette: The Documented Health Risks of Genetically Engineered Foods. Fairfield, Iowa: Yes! Books, 2007.———. Seeds of Deception. Melbourne: Scribe, 2004.U.S. Dairy Export Council (USDEC). Bovine Somatotropin (BST) Backgrounder. Arlington, VA: U.S. Dairy Export Council, 2006.U.S. Food and Drug Administration (USFDA). Animal Cloning: A Draft Risk Assessment. Rockville, MD: Center for Veterinary Medicine, U.S. Food and Drug Administration, 28 Dec. 2006.———. FDA and Nanotechnology Products. U.S. Department of Health and Human Services, U.S. Food and Drug Administration, 2008. 24 Apr. 2008 < http://www.fda.gov/nanotechnology/faqs.html >.Woodrow Wilson International Center for Scholars (WWICS). “A Nanotechnology Consumer Products Inventory.” Data set as at Sep. 2007. Woodrow Wilson International Center for Scholars, Project on Emerging Technologies, Sep. 2007. 24 Apr. 2008 < http://www.nanotechproject.org/inventories/consumer >.
APA, Harvard, Vancouver, ISO, and other styles
17

McGrath, Shane. "Compassionate Refugee Politics?" M/C Journal 8, no. 6 (December 1, 2005). http://dx.doi.org/10.5204/mcj.2440.

Full text
Abstract:
One of the most distinct places the politics of affect have played out in Australia of late has been in the struggles around the mandatory detention of undocumented migrants; specifically, in arguments about the amount of compassion border control practices should or do entail. Indeed, in 1990 the newly established Joint Standing Committee on Migration (JSCM) published its first report, Illegal Entrants in Australia: Balancing Control and Compassion. Contemporaneous, though not specifically concerned, with the establishment of mandatory detention for asylum seekers, this report helped shape the context in which detention policy developed. As the Bureau of Immigration and Population Research put it in their summary of the report, “the Committee endorsed a tough stance regarding all future illegal entrants but a more compassionate stance regarding those now in Australia” (24). It would be easy now to frame this report in a narrative of decline. Under a Labor government the JSCM had at least some compassion to offer; since the 1996 conservative Coalition victory any such compassion has been in increasingly short supply, if not an outright political liability. This is a popular narrative for those clinging to the belief that Labor is still, in some residual sense, a social-democratic party. I am more interested in the ways the report’s subtitle effectively predicted the framework in which debates about detention have since been constructed: control vs. compassion, with balance as the appropriate mediating term. Control and compassion are presented as the poles of a single governmental project insofar as they can be properly calibrated; but at the same time, compassion is presented as an external balance to the governmental project (control), an extra-political restriction of the political sphere. This is a very formal way to put it, but it reflects a simple, vernacular theory that circulates widely among refugee activists. It is expressed with concision in Peter Mares’ groundbreaking book on detention centres, Borderline, in the chapter title “Compassion as a vice”. Compassion remains one of the major themes and demands of Australian refugee advocates. They thematise compassion not only for the obvious reasons that mandatory detention involves a devastating lack thereof, and that its critics are frequently driven by intense emotional connections both to particular detainees and TPV holders and, more generally, to all who suffer the effects of Australian border control. There is also a historical or conjunctural element: as Ghassan Hage has written, for the last ten years or so many forms of political opposition in Australia have organised their criticisms in terms of “things like compassion or hospitality rather than in the name of a left/right political divide” (7). This tendency is not limited to any one group; it ranges across the spectrum from Liberal Party wets to anarchist collectives, via dozens of organised groups and individuals varying greatly in their political beliefs and intentions. In this context, it would be tendentious to offer any particular example(s) of compassionate activism, so let me instead cite a complaint. In November 2002, the conservative journal Quadrant worried that morality and compassion “have been appropriated as if by right by those who are opposed to the government’s policies” on border protection (“False Refugees” 2). Thus, the right was forced to begin to speak the language of compassion as well. 
The Department of Immigration, often considered the epitome of the lack of compassion in Australian politics, use the phrase “Australia is a compassionate country, but…” so often they might as well inscribe it on their letterhead. Of course this is hypocritical, but it is not enough to say the right are deforming the true meaning of the term. The point is that compassion is a contested term in Australian political discourse; its meanings are not fixed, but constructed and struggled over by competing political interests. This should not be particularly surprising. Stuart Hall, following Ernesto Laclau and others, famously argued that no political term has an intrinsic meaning. Meanings are produced – articulated, and de- or re-articulated – through a dynamic and partisan “suturing together of elements that have no necessary or eternal belongingness” (10). Compassion has many possible political meanings; it can be articulated to diverse social (and antisocial) ends. If I were writing on the politics of compassion in the US, for example, I would be talking about George W. Bush’s slogan of “compassionate conservatism”, and whatever Hannah Arendt meant when she argued that “the passion of compassion has haunted and driven the best men [sic] of all revolutions” (65), I think she meant something very different by the term than do, say, Rural Australians for Refugees. As Lauren Berlant has written, “politicized feeling is a kind of thinking that too often assumes the obviousness of the thought it has” (48). Hage has also opened this assumed obviousness to question, writing that “small-‘l’ liberals often translate the social conditions that allow them to hold certain superior ethical views into a kind of innate moral superiority. They see ethics as a matter of will” (8-9). These social conditions are complex – it isn’t just that, as some on the right like to assert, compassion is a product of middle-class comfort. The actual relations are more dynamic and open. Connections between class and occupational categories on the one hand, and social attitudes and values on the other, are not given but constructed, articulated and struggled over. As Hall put it, the way class functions in the distribution of ideologies is “not as the permanent class-colonization of a discourse, but as the work entailed in articulating these discourses to different political class practices” (139). The point here is to emphasise that the politics of compassion are not straightforward, and that we can recognise and affirm feelings of compassion while questioning the politics that seem to emanate from those feelings. For example, a politics that takes compassion as its basis seems ill-suited to think through issues it can’t put a human face to – that is, the systematic and structural conditions for mandatory detention and border control. Compassion’s political investments accrue to specifiable individuals and groups, and to the harms done to them. This is not, as such, a bad thing, particularly if you happen to be a specifiable individual to whom a substantive harm has been done. But compassion, going one by one, group by group, doesn’t cope well with situations where the form of the one, or the form of the disadvantaged minority, not only constitutes a basis for aid or emancipation, but also violently imposes particular ideas of modern western subjectivity. How does this violence work? I want to answer by way of the story of an Iranian man who applied for asylum in Australia in 2004. 
In the available documents he is referred to as “the Applicant”. The Applicant claimed asylum based on his homosexuality, and his fear of persecution should he return to Iran. His asylum application was rejected by the Refugee Review Tribunal because the Tribunal did not believe he was really gay. In their decision they write that “the Tribunal was surprised to observe such a comprehensive inability on the Applicant’s part to identify any kind of emotion-stirring or dignity-arousing phenomena in the world around him”. The phenomena the Tribunal suggest might have been emotion-stirring for a gay Iranian include Oscar Wilde, Alexander the Great, Andre Gide, Greco-Roman wrestling, Bette Midler, and Madonna. I can personally think of much worse bases for immigration decisions than Madonna fandom, but there is obviously something more at stake here. (All quotes from the hearing are taken from the High Court transcript “WAAG v MIMIA”. I have been unable to locate a transcript of the original RRT decision, and so far as I know it remains unavailable. Thanks to Mark Pendleton for drawing my attention to this case, and for help with references.) Justice Kirby, one of the presiding Justices at the Applicant’s High Court appeal, responded to this with the obvious point, “Madonna, Bette Midler and so on are phenomena of the Western culture. In Iran, where there is death for some people who are homosexuals, these are not in the forefront of the mind”. Indeed, the High Court is repeatedly critical and even scornful of the Tribunal decision. When Mr Bennett, who is appearing for the Minister for Immigration in the appeal, begins his case by saying, “your Honour, the primary attack which seems to be made on the decision of the –”, he is cut off by Justice Gummow, who says, “Well, in lay terms, the primary attack is that it was botched in the Tribunal, Mr Solicitor”. But Mr Bennett replies by saying no, “it was not botched. If one reads the whole of the Tribunal judgement, one sees a consistent line of reasoning and a conclusion being reached”. In a sense this is true; the deep tragicomic weirdness of the Tribunal decision is based very much in the unfolding of a particular form of homophobic rationality specific to border control and refugee determination. There have been hundreds of applications for protection specifically from homophobic persecution since 1994, when the first such application was made in Australia. As of 2002, only 22% of those applications had been successful, with the odds stacked heavily against lesbians – only 7% of lesbian applicants were successful, against a shocking enough 26% of gay men (Millbank, Imagining Otherness 148). There are a number of reasons for this. The Tribunal has routinely decided that even if persecution had occurred on the basis of homosexuality, the Applicant would be able to avoid such persecution if she or he acted ‘discreetly’, that is, hid their sexuality. The High Court ruled out this argument in 2003, but the Tribunal maintains an array of effective techniques of homophobic exclusion. For example, the Tribunal often uses the Spartacus International Gay Guide to find out about local conditions of lesbian and gay life even though it is a tourist guide book aimed at Western gay men with plenty of disposable income (Dauvergne and Millbank 178-9). 
And even in cases which have found in favour of particular lesbian and gay asylum seekers, the Tribunal has often gone out of its way to assert that lesbians and gay men are, nevertheless, not the subjects of human rights. States, that is, violate no rights when they legislate against lesbian and gay identities and practices, and the victims of such legislation have no rights to protection (Millbank, Fear 252-3). To go back to Madonna. Bennett’s basic point with respect to the references to the Material Girl et al is that the Tribunal specifically rules them as irrelevant. Mr Bennett: The criticism which is being made concerns a question which the Tribunal asked and what is very much treated in the Tribunal’s judgement as a passing reference. If one looks, for example, at page 34 – Kirby J: This is where Oscar, Alexander and Bette as well as Madonna turn up? Mr Bennett: Yes. The very paragraph my learned friend relies on, if one reads the sentence, what the Tribunal is saying is, “I am not looking for these things”. Gummow J: Well, why mention it? What sort of training do these people get in decision making before they are appointed to this body, Mr Solicitor? Mr Bennett: I cannot assist your Honour on that. Gummow J: No. Well, whatever it is, what happened here does not speak highly of the results of it. To gloss this, Bennett argues that the High Court are making too much of an irrelevant minor point in the decision. Mr Bennett: One would think [based on the High Court’s questions] that the only things in this judgement were the throwaway references saying, “I wasn’t looking for an understanding of Oscar Wilde”, et cetera. That is simply, when one reads the judgement as a whole, not something which goes to the centre at all… There is a small part of the judgement which could be criticized and which is put, in the judgement itself, as a subsidiary element and prefaced with the word “not”. Kirby J: But the “not” is a bit undone by what follows when I think Marilyn [Monroe] is thrown in. Mr Bennett: Well, your Honour, I am not sure why she is thrown in. Kirby J: Well, that is exactly the point. Mr Bennett holds that, as per Wayne’s World, the word “not” negates any clause to which it is attached. Justice Kirby, on the other hand, feels that this “not” comes undone, and that this undoing – and the uncertainty that accrues to it – is exactly the point. But the Tribunal won’t be tied down on this, and makes use of its “not” to hold gay stereotypes at arm’s length – which is still, of course, to hold them, at a remove that will insulate homophobia against its own illegitimacy. The Tribunal defends itself against accusations of homophobia by announcing specifically and repeatedly, in terms that consciously evoke culturally specific gay stereotypes, that it is not interested in those stereotypes. This unconvincing alibi works to prevent any inconvenient accusations of bias from butting in on the routine business of heteronormativity. Paul Morrison has noted that not many people will refuse to believe you’re gay: “Claims to normativity are characteristically met with scepticism. Only parents doubt confessions of deviance” (5). In this case, it is not a parent but a paternalistic state apparatus. 
The reasons the Tribunal did not believe the applicant [were] (a) because of “inconsistencies about the first sexual experience”, (b) “the uniformity of relationships”, (c) the “absence of a “gay” circle of friends”, (d) “lack of contact with the “gay” underground” and [(e)] “lack of other forms of identification”. Of these the most telling, I think, are the last three: a lack of gay friends, of contact with the gay underground, or of unspecified other forms of identification. What we can see here is that even if the Tribunal isn’t looking for the stereotypical icons of Western gay culture, it is looking for the characteristic forms of Western gay identity which, as we know, are far from universal. The assumptions about the continuities between sex acts and identities that we codify with names like lesbian, gay, homosexual and so on, often very poorly translate the ways in which non-Western populations understand and describe themselves, if they translate them at all. Gayatri Gopinath, for example, uses the term “queer diaspor[a]... in contradistinction to the globalization of “gay” identity that replicates a colonial narrative of development and progress that judges all other sexual cultures, communities, and practices against a model of Euro-American sexual identity” (11). I can’t assess the accuracy of the Tribunal’s claims regarding the Applicant’s social life, although I am inclined to scepticism. But if the Applicant in this case indeed had no gay friends, no contact with the gay underground and no other forms of identification with the big bad world of gaydom, he may obviously, nevertheless, have been a Man Who Has Sex With Men, as they sometimes say in AIDS prevention work. But this would not, either in the terms of Australian law or the UN Convention, qualify him as a refugee. You can only achieve refugee status under the terms of the Convention based on membership of a ‘specific social group’. Lesbians and gay men are held to constitute such groups, but what this means is that there’s a certain forcing of Western identity norms onto the identity and onto the body of the sexual other. This shouldn’t read simply as a moral point about how we should respect diversity. There’s a real sense that our own lives as political and sexual beings are radically impoverished to the extent we fail to foster and affirm non-Western non-heterosexualities. There’s a sustaining enrichment that we miss out on, of course, in addition to the much more serious forms of violence others will be subject to. And these are kinds of violence as well as forms of enrichment that compassionate politics, organised around the good refugee, just does not apprehend. In an essay on “The politics of bad feeling”, Sara Ahmed makes a related argument about national shame and mourning. “Words cannot be separated from bodies, or other signs of life. So the word ‘mourns’ might get attached to some subjects (some more than others represent the nation in mourning), and it might get attached to some objects (some losses more than others may count as losses for this nation)” (73). At one level, these points are often made with regard to compassion, especially as it is racialised in Australian politics; for example, that there would be a public outcry were we to detain hypothetical white boat people. But Ahmed’s point stretches further – in the necessary relation between words and bodies, she asks not only which bodies do the describing and which are described, but which are permitted a relation to language at all? 
If “words cannot be separated from bodies”, what happens to those bodies words fail? The queer diasporic body, so reductively captured in that phrase, is a case in point. How do we honour its singularity, as well as its sociality? How do we understand the systematicity of the forces that degrade and subjugate it? What do the politics of compassion have to offer here? It’s easy for the critic or the cynic to sneer at such politics – so liberal, so sentimental, so wet – or to deconstruct them, expose “the violence of sentimentality” (Berlant 62), show “how compassion towards the other’s suffering might sustain the violence of appropriation” (Ahmed 74). These are not moves I want to make. A guiding assumption of this essay is that there is never a unilinear trajectory between feelings and politics. Any particular affect or set of affects may be progressive, reactionary, apolitical, or a combination thereof, in a given situation; compassionate politics are no more necessarily bad than they are necessarily good. On the other hand, “not necessarily bad” is a weak basis for a political movement, especially one that needs to understand and negotiate the ways the enclosures and borders of late capitalism mass-produce bodies we can’t put names to, people outside familiar and recognisable forms of identity and subjectivity. As Etienne Balibar has put it, “in utter disregard of certain borders – or, in certain cases, under covers of such borders – indefinable and impossible identities emerge in various places, identities which are, as a consequence, regarded as non-identities. However, their existence is, none the less, a life-and-death question for large numbers of human beings” (77). Any answer to that question starts with our compassion – and our rage – at an unacceptable situation. But it doesn’t end there. References Ahmed, Sara. “The Politics of Bad Feeling.” Australian Critical Race and Whiteness Studies Association Journal 1.1 (2005): 72-85. Arendt, Hannah. On Revolution. Harmondsworth: Penguin, 1973. Balibar, Etienne. We, the People of Europe? Reflections on Transnational Citizenship. Trans. James Swenson. Princeton: Princeton UP, 2004. Berlant, Lauren. “The Subject of True Feeling: Pain, Privacy and Politics.” Cultural Studies and Political Theory. Ed. Jodi Dean. Ithaca and Cornell: Cornell UP, 2000. 42-62. Bureau of Immigration and Population Research. Illegal Entrants in Australia: An Annotated Bibliography. Canberra: Australian Government Publishing Service, 1994. Dauvergne, Catherine and Jenni Millbank. “Cruisingforsex.com: An Empirical Critique of the Evidentiary Practices of the Australian Refugee Review Tribunal.” Alternative Law Journal 28 (2003): 176-81. “False Refugees and Misplaced Compassion” Editorial. Quadrant 390 (2002): 2-4. Hage, Ghassan. Against Paranoid Nationalism: Searching for Hope in a Shrinking Society. Annandale: Pluto, 2003. Hall, Stuart. The Hard Road to Renewal: Thatcherism and the Crisis of the Left. London: Verso, 1988. Joint Standing Committee on Migration. Illegal Entrants in Australia: Balancing Control and Compassion. Canberra: The Committee, 1990. Mares, Peter. Borderline: Australia’s Treatment of Refugees and Asylum Seekers. Sydney: UNSW Press, 2001. Millbank, Jenni. “Imagining Otherness: Refugee Claims on the Basis of Sexuality in Canada and Australia.” Melbourne University Law Review 26 (2002): 144-77. ———. “Fear of Persecution or Just a Queer Feeling? Refugee Status and Sexual orientation in Australia.” Alternative Law Journal 20 (1995): 261-65, 299. 
Morrison, Paul. The Explanation for Everything: Essays on Sexual Subjectivity. New York: New York UP, 2001. Pendleton, Mark. “Borderline.” Bite 2 (2004): 3-4. “WAAG v MIMIA [2004]. HCATrans 475 (19 Nov. 2004)” High Court of Australia Transcripts. 2005. 17 Oct. 2005 <http://www.austlii.edu.au/au/other/HCATrans/2004/475.html>. Citation reference for this article MLA Style: McGrath, Shane. "Compassionate Refugee Politics?." M/C Journal 8.6 (2005). [your date of access] <http://journal.media-culture.org.au/0512/02-mcgrath.php>. APA Style: McGrath, S. (Dec. 2005) "Compassionate Refugee Politics?," M/C Journal, 8(6). Retrieved [your date of access] from <http://journal.media-culture.org.au/0512/02-mcgrath.php>.
APA, Harvard, Vancouver, ISO, and other styles
