To see the other types of publications on this topic, follow the link: United States. Federal Energy Regulatory Commission.

Journal articles on the topic 'United States. Federal Energy Regulatory Commission'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 27 journal articles for your research on the topic 'United States. Federal Energy Regulatory Commission.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Ross, Alex. "Federal Pipeline Rate Making: Alternative Approaches of the United States Federal Energy Regulatory Commission." Alberta Law Review 45, no. 3 (March 1, 2008): 735. http://dx.doi.org/10.29173/alr263.

Full text
Abstract:
This article provides an overview of the alternative rate making methodologies adopted by the United States Federal Energy Regulatory Commission (FERC) in its regulation of transportation rates for oil and natural gas pipelines. In 1977, authority over rate making for interstate oil and natural gas pipelines was transferred to the newly created FERC. This article describes the history of interstate pipeline rate making and the transfer of rate making authority to the FERC. The author looks at the innovative pipeline rate making methodologies implemented by the FERC in its regulation of transportation rates for both oil and natural gas pipelines. The article describes the adoption by FERC of market based rates and a generally applicable indexed rate cap methodology for oil pipeline rate setting. In respect of natural gas pipelines, the legislative requirements and practical realities associated with cost-of-service rate making by FERC are described, and FERC’s policies permitting selective discounting, shipper-specific negotiated rates, and market based rates for natural gas pipelines are reviewed. The Commission’s adoption of the alternative rate making methodologies has taken the emphasis off of general rate case litigation as a means of establishing just and reasonable rates for interstate oil and natural gas pipelines and related facilities. The alternative rate making methodologies also represent a significant departure from cost-of-service rate making, with increasing focus on rate flexibility and competition as a means of generating efficiencies for customers of interstate oil and natural gas pipelines.
APA, Harvard, Vancouver, ISO, and other styles
2

John, Douglas F. "Marketing Alberta Natural Gas in the United States after the Free Trade Agreement: Negotiating the U.S. Regulatory Maze." Alberta Law Review 28, no. 1 (January 1, 1990): 94. http://dx.doi.org/10.29173/alr704.

Full text
Abstract:
Although the border between Canada and the United States for natural gas has been open for some time now, the free-market development of natural gas industries is changing from short-term deal-making to long-term industry placement. Here the Canada-United States Free Trade Agreement will take on a critical role in permitting decisions on elements of trade to be made more confidently. This article focuses on key U.S. federal regulatory principles and programs and how Congress's intention in the Natural Gas Act has been carried through so that the federal government will no longer occupy the field of gas regulation, but ensure that where the use of that commodity involves the interests of two or more states, the overall national public interest would be protected. Therefore, producing states would regulate the physical production of gas before it enters the stream of interstate commerce as well as control matters entirely intrastate in nature. The future of contract demand conversions and gas inventory charges will allow customers to purchase gas from a variety of competitive suppliers without suffering a loss of service reliability. In effect, gas inventory charges represent the Federal Energy Regulatory Commission's attempt to prevent pipelines from finding themselves with massive take-or-pay liabilities. Through Order No. 436, the Commission has attempted to streamline the regulatory approval process for pipeline construction projects and in turn to foster market competition. The author argues that rate reform is making its way towards what he feels is its natural conclusion where contract, rather than regulation, will be the principal determinant of right and obligation between industry participants at the interstate level. The Federal Energy Regulatory Commission would become more of a referee than director for questions of anti-competitive behaviour in the use of interstate facilities.
APA, Harvard, Vancouver, ISO, and other styles
3

Hollingworth, Alan S. "California Gas: A Brief History and Recent Events." Alberta Law Review 31, no. 1 (February 1, 1993): 86. http://dx.doi.org/10.29173/alr678.

Full text
Abstract:
The author discusses recent developments and ongoing issues related to regulatory authorities, contracts and pipeline matters affecting the gas industry in California, in comparison to elsewhere in the United States and Canada. Included is a review of some of the more important decisions of the Federal Energy Regulatory Commission, the California Public Utilities Commission and the National Energy Board. This paper is solely the work of the author. The views expressed herein do not necessarily represent the views of the author's firm or any client of that firm.
APA, Harvard, Vancouver, ISO, and other styles
4

Collier, Deirdre M., and Paul J. Miranti. "The Enlightenment’s connections to two US accounting-based regulatory models." Accounting History 24, no. 2 (July 29, 2018): 269–92. http://dx.doi.org/10.1177/1032373218787296.

Full text
Abstract:
Enlightenment ideals relating to individual and group autonomy versus state power have long shaped socioeconomic ordering in the Western world. This article explores how competing Enlightenment ideologies influenced the development of two different accounting-based regulatory models in the United States, the Interstate Commerce Commission (ICC) and the Securities and Exchange Commission (SEC). Both commissions experimented with both models with different outcomes. The ICC, formed in 1887, ultimately followed a Hamiltonian approach involving direct intervention of the federal government to regulate the monopoly power of railroads. Almost half of a century later, after the 1929 Crash, the SEC was formed to re-establish public confidence in the nation’s financial markets. That resulted in reducing investors’ risk perceptions by assuring greater transactional transparency and probity. The SEC settled upon a Jeffersonian approach, which supported the delegation of responsibility for the application of accounting knowledge in regulation to professional groups rather than government officials. This approach characterized the emergent bureaucracy of the United States’ fast-expanding national executive state.
APA, Harvard, Vancouver, ISO, and other styles
5

Abernethy, Avery M., and George R. Franke. "FTC Regulatory Activity and the Information Content of Advertising." Journal of Public Policy & Marketing 17, no. 2 (September 1998): 239–56. http://dx.doi.org/10.1177/074391569801700208.

Full text
Abstract:
Meta-analysis of studies examining more than 66,000 U.S. advertisements indicates that advertisements contained significantly fewer objective information claims during a period of strict advertising regulation by the Federal Trade Commission (1971–1981) than in the subsequent, less stringent period (1982–1992). The results do not appear to be due to spurious effects of atypical studies, other contemporaneous trends in the United States, or global economic factors. An important implication for public policy is that strict advertising regulation may have reduced the amount of advertising information available to consumers.
APA, Harvard, Vancouver, ISO, and other styles
6

Corones, Stephen, and Juliet Davis. "Protecting Consumer Privacy and Data Security: Regulatory Challenges and Potential Future Directions." Federal Law Review 45, no. 1 (March 2017): 65–95. http://dx.doi.org/10.1177/0067205x1704500104.

Full text
Abstract:
This article considers the regulatory problems of online tracking behaviour, lack of consent to data collection, and the security of data collected with or without consent. Since the mid-1990s the United States Federal Trade Commission has been using its power under the United States consumer protection regime to regulate these problems. The Australian Competition and Consumer Commission (ACCC), on the other hand, has yet to bring civil or criminal proceedings for online privacy or data security breaches, which indicates a reluctance to employ the Australian Consumer Law (‘ACL’) in this field.1 Recent legislative action instead points to a greater application of the specifically targeted laws under the Privacy Act 1988 (Cth) (‘Privacy Act’), and the powers of the Office of the Australian Information Commissioner (OAIC), to protect consumer privacy and data security. This article contends that while specific legislation setting out, and publicly enforcing, businesses’ legal obligations with respect to online privacy and data protection is an appropriate regulatory response, the ACL's broad, general protections and public and/or private enforcement mechanisms also have a role to play in protecting consumer privacy and data security.
APA, Harvard, Vancouver, ISO, and other styles
7

Burns, James, and Kimberly Beattie Saunders. "SEC fines non-US entities for unregistered cross-border brokerage and advisory activities." Journal of Investment Compliance 18, no. 1 (May 2, 2017): 75–77. http://dx.doi.org/10.1108/joic-02-2017-0002.

Full text
Abstract:
Purpose: To explain a settlement involving a foreign financial institution, its non-US subsidiaries, and the US Securities and Exchange Commission (“SEC”) that reveals an SEC focus on policing the activities of foreign firms that reach into the United States and helps further define the scope of activities that require registration under the federal securities laws. Design/methodology/approach: Provides insight into a recent area of focus for SEC regulators and introduces the potential regulatory implications for non-US firms with activities that reach into the United States. Findings: Given the SEC’s current enforcement focus, it is critical that financial institutions take care to conduct their activities with an understanding of the regulatory requirements associated with the provision of brokerage and advisory services to US clients and customers – including, for many firms, registration as an investment adviser, broker-dealer, or both. Originality/value: Practical regulatory guidance regarding SEC registration requirements that may reach non-US firms from experienced financial services lawyers specializing in asset management.
APA, Harvard, Vancouver, ISO, and other styles
8

Tajti, Tibor. "What makes the securities criminal law system of the United States work: 'All-embracing' 'blanket' securities crimes and the linked enforcement framework." Pravni zapisi 12, no. 1 (2021): 146–83. http://dx.doi.org/10.5937/pravzap0-30658.

Full text
Abstract:
The article explores the key factors that make the securities criminal law of the United States (US), as one of the integral building blocks of the capital markets and securities regulatory system, efficient. This includes the role and characteristics of sectoral (blanket) all-embracing securities crimes enshrined in the federal securities statutes, their nexus with general crimes, the close cooperation of the Securities and Exchange Commission (SEC) and prosecutorial offices, the applicable evidentiary standards, and the fundamental policies undergirding these laws. The rich repository of US experiences should be instructive not only to the Member States of the European Union (EU) striving to forge deeper capital markets but also to those endeavoring to accede to the EU (e.g., Serbia), or to create deep capital markets for which efficient prosecution of securities crimes is inevitable.
APA, Harvard, Vancouver, ISO, and other styles
9

Aldrovandi, Matthew S. P., Esther S. Parish, and Brenda M. Pracheil. "Understanding the Environmental Study Life Cycle in the United States Hydropower Licensing and Federal Authorization Process." Energies 14, no. 12 (June 10, 2021): 3435. http://dx.doi.org/10.3390/en14123435.

Full text
Abstract:
We analyzed United States Federal Energy Regulatory Commission (FERC) documents prepared for 29 recently licensed hydropower projects and created two novel datasets to improve understanding of the environmental study life cycle, defined here as the process that begins with an environmental study being requested by a hydropower stakeholder or regulator, and ends with the study either being rejected or approved/conducted. Our two datasets consisted of summaries of information taken from (1), study determination letters prepared by FERC for 23 projects that were using the integrated licensing process, and (2), environmental study submittals and issuances tracked and attributed to seven projects using the FERC record. Our objective was to use the two resulting environmental life cycle datasets to understand which types of environmental studies are approved, rejected, and implemented during FERC licensing, and how consistently those types of studies are required across multiple hydropower projects. We matched the requested studies to a set of 61 river function indicators in eight categories and found that studies related to the category of biota and biodiversity were requested most often across all 29 projects. Within that category, studies related to river function indicators of presence, absence, detection of species and habitat/critical habitat were the most important to stakeholders, based on the relative number of studies requested. The study approval, rejection, and request rates were similar within each dataset, although the 23 projects with study determination letters had many rejected studies, whereas the dataset created from the seven projects had very few rejected studies.
APA, Harvard, Vancouver, ISO, and other styles
10

Dwyer, Johanna T., and Paul M. Coates. "Why Americans Need Information on Dietary Supplements." Journal of Nutrition 148, suppl_2 (August 1, 2018): 1401S—1405S. http://dx.doi.org/10.1093/jn/nxy081.

Full text
Abstract:
Until a decade ago, no dietary supplement (DS) databases with open access for public use existed in the United States. They were needed by researchers, since half of American adults use DSs and, without information on supplement use and composition, exposures could not be estimated. These articles on Challenges and Future Directions for Dietary Supplement Databases describe subsequent progress. They begin by describing why information on DSs is needed by the government and how it is used to ensure the health of the public. Current developments include: application of DS information to meet public health needs; research efforts on DS quality, efficacy, and safety (as conducted by the Office of Dietary Supplements and other federal agencies); enhanced regulatory activities implemented by the FDA Office of Dietary Supplement Programs, the FDA Office of Enforcement, and the Federal Trade Commission; and initiatives for broader development and dissemination of DS databases for commercial and public use. Other contributions in this journal supplement describe the challenges of working with DSs and the progress that has been made. Additional articles describe surveys of DS use among the general US population and also among special groups such as high supplement users, illustrating why there is a need in the United States for information on supplements. Likely directions for the future of DS science are summarized.
APA, Harvard, Vancouver, ISO, and other styles
11

Strange, W. E. "Establishment of GPS Strain Monitoring Networks in the Eastern United States." Seismological Research Letters 59, no. 4 (October 1, 1988): 317. http://dx.doi.org/10.1785/gssrl.59.4.317.

Full text
Abstract:
The National Geodetic Survey (NGS), in cooperation with a number of federal agencies, state and local groups, and universities, is establishing GPS networks in the United States, east of the Rocky Mountains, which can be used to monitor strain and vertical deformation. These GPS networks are tied to a framework of some 14 fixed and mobile VLBI sites. In cooperation with the Nuclear Regulatory Commission (NRC), NGS established a 45 station GPS regional network in Nov.–Dec. 1987 which is tied to the VLBI framework. This network is scheduled for reobservation in 1989 and, funds permitting, at regular intervals thereafter. A number of additional, more dense networks have been or are in the process of being established. The Tennessee Department of Transportation has established a 60 station statewide network to act as a reference network for surveying in conjunction with road construction. This network is expected to have an accuracy of a few parts in 10⁷. NGS, in cooperation with the NRC and the University of Maine, established in 1986 a high accuracy GPS network in southeast Maine. In 1987, NGS, in support of the Federal Aviation Administration (FAA), established approximately 100 stations throughout Ohio with an accuracy in the 1:10⁶ to 1:10⁷ range. Toward the end of 1988, NGS, working in conjunction with several state agencies and the University of Florida, will establish a statewide network of about 140 stations with an accuracy in the 1:10⁶ to 1:10⁷ range. NGS, in cooperation with the Department of Energy, has also established a high-accuracy (1:10⁷ to 1:10⁸) GPS traverse from Florida to Maine connecting stations at tide gauge sites. The State of Texas is establishing a number of permanent GPS stations in support of highway surveying. These stations will allow strain monitoring across Texas at the 1:10⁸ level. Additional networks are in the planning stage. It is clear that large numbers of high accuracy GPS networks are being established throughout the eastern United States. Many of these networks are being established for other than geophysical purposes. In many cases the state highway departments and others are interested only in 1:10⁶ accuracy. As a practical matter this means that to assure 1:10⁶ accuracy, a few parts in 10⁷ accuracy (1 to 3 cm over 100 km) is often attained, but this is by no means certain. Also there are normally no plans for systematic resurveys, only replacement of destroyed monuments. A challenge to the geophysical community is to interact with the groups undertaking the high accuracy surveys to assure that, at points of geophysical interest, satisfactory accuracies are achieved during initial epoch measurements. This means that a satisfactory number of observations are obtained and high accuracy reduction methods are used in obtaining differential positions from the data. The geophysical community must also develop plans for resurvey of geophysically interesting network components on a systematic basis.
APA, Harvard, Vancouver, ISO, and other styles
12

Dell'Erba, Marco. "From Inactivity to Full Enforcement: The Implementation of the "Do No Harm" Approach in Initial Coin Offerings." Michigan Technology Law Review, no. 26.2 (2020): 175. http://dx.doi.org/10.36645/mtlr.26.2.from.

Full text
Abstract:
This Article analyzes the way the Securities and Exchange Commission (“SEC”) has enforced securities laws with regard to Initial Coin Offerings (“ICOs”). In a speech held in 2016, the U.S. Commodity Futures Trading Commission (“CFTC”) Chairman Christopher Giancarlo emphasized the similarities between the advent of the blockchain technology and the Internet era. He offered the “do no harm” approach as the best way to regulate blockchain technology. The Clinton administration implemented the “do no harm” approach at the beginning of the Internet Era in the 1990s when regulators sought to support technological innovations without stifling them with burdensome rules. This Article suggests that the SEC adopted a “do no harm approach” and successfully pursued two of its fundamental institutional goals when enforcing securities laws in the context of ICOs: investor protection and preservation of capital formation. After providing a brief description of the basics of ICOs and the way they have evolved in the last two years, this Article examines the transition into a new phase of full enforcement action implemented by the SEC. This shift from inactivity to enforcement was gradual, characterized by clearly identifiable steps. Data on ICOs demonstrates that this rigorous enforcement of securities laws has not damaged the industry in the United States and may suggest that entrepreneurs have adapted to this enforcement approach. By contrast, a lack of enforcement would have probably increased uncertainty to the detriment of investors and entrepreneurs and put the United States at a disadvantage in the international arena. Furthermore, this paper emphasizes the importance of pursuing specific goals in the short-to-medium term, particularly in order to make securities regulation uniform and avoid differences at the state and federal levels, as well as to encourage industry authorities such as Self-Regulatory Organizations (SROs) to develop high standards for self-regulation.
APA, Harvard, Vancouver, ISO, and other styles
13

Weinisch, Kevin, and Paul Brueckner. "The impact of shadow evacuation on evacuation time estimates for nuclear power plants." Journal of Emergency Management 13, no. 2 (March 1, 2015): 145. http://dx.doi.org/10.5055/jem.2015.0227.

Full text
Abstract:
A shadow evacuation is the voluntary evacuation of people from areas outside a declared evacuation area. Shadow evacuees can congest roadways and inhibit the egress of those evacuating from an area at risk. Federal regulations stipulate that nuclear power plant (NPP) licensees in the United States must conduct an Evacuation Time Estimate (ETE) study after each decennial census. The US Nuclear Regulatory Commission (NRC) published federal guidance for conducting ETE studies in November 2011. This guidance document recommends the consideration of a Shadow Region which extends 5 miles radially beyond the existing 10-mile Emergency Planning Zone (EPZ) for NPPs. The federal guidance also suggests the consideration of the evacuation of 20 percent of the permanent resident population in the Shadow Region in addition to 100 percent of the declared evacuation region within the EPZ when conducting ETE studies. The 20 percent recommendation was questioned in a March 2013 report prepared by the US Government Accountability Office. This article discusses the effects on ETE of increasing the shadow evacuation from 20 to 60 percent for 48 NPPs in the United States. Only five (10 percent) of the 48 sites show a significant increase (30 minutes or greater) in 90th percentile ETE (time to evacuate 90 percent of the population in the EPZ), while seven (15 percent) of the 48 sites show a significant increase in 100th percentile ETE (time to evacuate all population in the EPZ). Study areas that are prone to a significant increase in ETE due to shadow evacuation are classified as one of four types; case studies are presented for one plant of each type to explain why the shadow evacuation significantly affects ETE. A matrix of the four case types can be used by emergency management personnel to predict during planning stages whether the evacuated area is prone to a significant increase in ETE due to shadow evacuation. Potential mitigation tactics that reduce demand (public information) or increase capacity (contraflow, traffic control points, specialized intersection treatments) to offset the impact of shadow evacuation are discussed.
APA, Harvard, Vancouver, ISO, and other styles
14

Rogerson, David Alfred, Pedro Seixas, and James Robert Holmes. "Net Neutrality: An Incyte Perspective responding to recent developments in the European Union." Australian Journal of Telecommunications and the Digital Economy 4, no. 4 (January 11, 2017): 17. http://dx.doi.org/10.18080/ajtde.v4n4.79.

Full text
Abstract:
The road to net neutrality within the European Union (EU) has been slow and winding. However, a major milestone was reached in August 2016 through the publication of the BEREC Guidelines on the Implementation by National Regulators of European Net Neutrality Rules. These Guidelines, which must be given the “utmost consideration” by national regulators, provide the EU’s first detailed and unambiguous regulatory commitment to net neutrality, and are carefully crafted to balance the needs of content providers and network operators. This extended article explores the scope of the net neutrality principle as understood and applied in a number of jurisdictions. The approach in the EU is contrasted with the approaches of the Federal Communications Commission (FCC) in the United States (US) and of a number of other countries. Although there are some constants that recur for net neutrality in all of the countries examined, there remain a variety of specific local connotations. The article indicates that the question, “What is net neutrality?” will continue to be asked and continue to be very apposite for some time. The EU approach, with the BEREC Guidelines, will likely be central to a more harmonised approach worldwide. It also argues that the new EU framework is likely to be even more influential than its highly publicized but politically fractious US equivalent. The EU approach to net neutrality will find greater favour among developing countries, as it provides sufficient flexibility to offer network investment incentives whilst retaining appropriate competition and user safeguards. As with the earlier EU ex-ante regulatory frameworks for market analysis and cost-based interconnection, the BEREC paper paves the way for continued export of best practice regulation from the EU to the rest of the world. However, there are issues that demand caution in how the BEREC approach might be implemented. The authors have extensive experience as consultants and commentators on telecommunications industry and regulatory issues in many countries. David Rogerson and Jim Holmes are founding partners of Incyte Consulting, located in Falkirk (Scotland) and Melbourne, respectively. Pedro Seixas is a Principal Associate of Incyte Consulting located in Frankfurt.
APA, Harvard, Vancouver, ISO, and other styles
15

Rogerson, David Alfred, Pedro Seixas, and James Robert Holmes. "Net Neutrality: An Incyte Perspective responding to recent developments in the European Union." Journal of Telecommunications and the Digital Economy 4, no. 4 (January 11, 2017): 17–57. http://dx.doi.org/10.18080/jtde.v4n4.79.

Full text
Abstract:
The road to net neutrality within the European Union (EU) has been slow and winding. However, a major milestone was reached in August 2016 through the publication of the BEREC Guidelines on the Implementation by National Regulators of European Net Neutrality Rules. These Guidelines, which must be given the “utmost consideration” by national regulators, provide the EU’s first detailed and unambiguous regulatory commitment to net neutrality, and are carefully crafted to balance the needs of content providers and network operators. This extended article explores the scope of the net neutrality principle as understood and applied in a number of jurisdictions. The approach in the EU is contrasted with the approaches of the Federal Communications Commission (FCC) in the United States (US) and of a number of other countries. Although there are some constants that recur for net neutrality in all of the countries examined, there remain a variety of specific local connotations. The article indicates that the question, “What is net neutrality?” will continue to be asked and continue to be very apposite for some time. The EU approach, with the BEREC Guidelines, will likely be central to a more harmonised approach worldwide. It also argues that the new EU framework is likely to be even more influential than its highly publicized but politically fractious US equivalent. The EU approach to net neutrality will find greater favour among developing countries, as it provides sufficient flexibility to offer network investment incentives whilst retaining appropriate competition and user safeguards. As with the earlier EU ex-ante regulatory frameworks for market analysis and cost-based interconnection, the BEREC paper paves the way for continued export of best practice regulation from the EU to the rest of the world. However, there are issues that demand caution in how the BEREC approach might be implemented. The authors have extensive experience as consultants and commentators on telecommunications industry and regulatory issues in many countries. David Rogerson and Jim Holmes are founding partners of Incyte Consulting, located in Falkirk (Scotland) and Melbourne, respectively. Pedro Seixas is a Principal Associate of Incyte Consulting located in Frankfurt.
APA, Harvard, Vancouver, ISO, and other styles
16

Rodgman, A., and T. A. Perfetti. "The Composition of Cigarette Smoke: A Chronology of the Studies of Four Polycyclic Aromatic Hydrocarbons." Beiträge zur Tabakforschung International/Contributions to Tobacco Research 22, no. 3 (October 1, 2006): 208–54. http://dx.doi.org/10.2478/cttr-2013-0830.

Full text
Abstract:
Among the polycyclic aromatic hydrocarbons (PAHs), a major class of identified cigarette mainstream smoke (MSS) components, are several shown to be tumorigenic in laboratory animals and suspect as possible tumorigens to humans. To date, nearly 540 PAHs have been completely or partially identified in tobacco smoke [Rodgman and Perfetti (1)]. A detailed chronology is presented of studies on four much discussed PAHs identified in tobacco smoke, namely, benz[a]anthracene (B[a]A), its 7,12-dimethyl derivative (DMB[a]A), dibenz[a, h]anthracene (DB[a, h]A), and benzo[a]pyrene (B[a]P). Of the four, DMB[a]A, DB[a, h]A, and B[a]P are considered to be potently tumorigenic on mouse skin painting and subcutaneous injection. Opinions on the tumorigenicity of B[a]A to mouse skin vary. DMB[a]A is frequently used in tumorigenicity studies as an initiator. Examination of the number of tobacco smoke-related citations listed for these four PAHs reveals the enormous effort devoted since the early 1950s to B[a]P vs. the other three. An annotated chronology from 1886 to date describes the tobacco smoke-related research pertinent to these four PAHs, their discovery, isolation and/or identification, quantitation, and contribution to the observed biological activity of MSS or cigarette smoke condensate (CSC). Much of the major literature on these four PAHs in tobacco smoke is presented in order to permit the reader to decide whether the current evidence is sufficient to classify them as a health risk to smokers. There has certainly been a tremendous effort by researchers to learn about these PAHs over the past several decades. Each of these PAHs when tested individually has been shown to possess the following biological properties: 1) Mutagenicity in certain bacterial situations, 2) tumorigenicity in certain animal species, to varying degrees under various administration modes, and 3) a threshold limit below which no tumorigenesis occurs. For more than five decades, it has been known that some of the PAHs, when co-administered in pairs of a potent tumorigen plus a non-tumorigen or weak tumorigen, show inhibitory effects on the tumorigenicity of the most potent, e.g., B[a]A plus DB[a, h]A; B[a]A plus B[a]P; anthracene plus DB[a, h]A. Over the period studied, some regulatory agencies considered these tobacco smoke PAHs to be serious health concerns, others did not. With respect to cigarette MSS, certainly the ‘danger is in the dose’ for any MSS component tested singularly to be tumorigenic. But is the level of any of these MSS PAHs high enough to be of concern to smokers? The information herein presented indicates that over the last five decades the following has occurred: 1) The per cigarette yields of these four PAHs have decreased substantially, 2) compared to CSC or Federal Trade Commission (FTC) ‘tar’, their per cigarette yields have also decreased to a point that they may be below any significance biologically, and 3) the specific tumorigenicity in mouse skin-painting studies of the CSC has decreased. These are the three criteria originally proposed to define the ‘less hazardous’ cigarette. Actually, criterion 1) was first directed only at B[a]P. Previous studies highlighted the concern that some regulatory bodies had in attempting to understand why lung cancer and other forms of cancer seemed more prevalent in smokers. But cigarette smoking alone could not reconcile the evidence. Social, ethnic, environmental, and economic factors are also very important in understanding the entire biological effect.
In fact, the level of B[a]P in CSC could only explain about 2% of its specific tumorigenicity observed in skin-painted mice and the combination of the levels of all the known tumorigenic PAHs in CSC could only explain about 3% of its tumorigenicity. Despite an 18-month study in the late 1950s, the search for a ‘supercarcinogen’ in MSS and CSC to explain the observed biological effects was unsuccessful. In addition, the exceptional study on MSS PAHs by United States Department of Agriculture (USDA) personnel in the 1970s indicated no ‘supercarcinogen’ was present. Only recently has the concept of complex mixtures in relation to the understanding of the complexity of carcinogenesis taken hold. Perhaps the reason why MSS is less tumorigenic than expected in humans is because of the presence of other MSS components that inhibit or prevent tumorigenesis. For example, it is well known that MSS contains numerous anticarcinogens present in quantities significantly greater than those of the PAHs of concern. When one reviews the history of these four PAHs in MSS or CSC it is clear that many unanswered questions remain.
APA, Harvard, Vancouver, ISO, and other styles
17

Pettingill, Bernard. "Why Orthopedic Surgery for Elderly Indicates that the Maryland Total Cost of Care Model should be Universally Adopted." Journal of Health Care and Research 2, no. 1 (April 26, 2021): 63–69. http://dx.doi.org/10.36502/2021/hcr.6190.

Full text
Abstract:
Arthritis is the disease that kills the fewest but cripples the most. With the aging of the population in the United States and the antiquated DRG reimbursement system for hospital surgical intervention, it is inevitable that the Medicare system will bankrupt itself prior to the projected insolvency date of 2026 if changes are not made. One change would be to insist that the Maryland system for reimbursing hospitals for essential joint replacement surgery for the elderly be adopted nationwide. Medicaid expenditures are driven by a variety of factors, including the demand for care, the complexity of medical services provided, medical inflation, and life expectancy. The Medicare program has two separate trust funds – the Hospital Insurance (HI) Trust Fund and the Supplementary Medical Insurance (SMI) Trust Fund. Under the Hospital Insurance Trust, payroll taxes from workers and their employers go towards paying for the Part A benefits for today’s Medicare beneficiaries. In 2019, Medicare provided benefits to over 60 million elderly patients at an estimated cost of $796 billion [1]. Even excluding the significant decrease in payroll taxes during the COVID-19 pandemic, the latest 2020 projections calculate Medicare Hospital Trust insolvency by 2026 [2]. The 2020 report declared that funds would be sufficient to pay for only 90 percent of Part A expenses at the time of this writing. Since inception, the Hospital Insurance Trust has never been insolvent, and there are no provisions in the Social Security Act that govern what would happen if insolvency were to occur. Ten of the last twelve years have witnessed expenditure outflows outpacing Hospital Insurance Trust inflows, so that total Medicare spending obligations place increasing demands on the federal budget as the number of elderly beneficiaries and per capita health care costs continue to grow [2]. One of the principal goals of the following study is to determine how elderly patients, who often suffer from acute stages of arthritis and other orthopedic diseases, due in part to wear and tear, can continue to demand surgical intervention, in particular joint replacement surgery. Arthritis has been described as the disease that kills the fewest but cripples the most. With that in mind, the demand from an ever-increasing number of elderly patients for joint replacement surgeries will continue to outstrip the hospital system's ability to absorb them. The principal author of this study completed his PhD dissertation at the University of Manchester in 1977 with a cost-benefit analysis of the treatment of chronic rheumatoid arthritis in Great Britain. Therefore, the author of this study aims to show the only reasonable method of payment for the imminent, immeasurable demand for treatment of age-related diseases in the elderly, such as joint replacement surgery [3]. A recent Journal of Rheumatology article projects Medicare will finance approximately 2.67 million joint replacement surgeries by 2035, plus an additional 2.35 million joint replacement surgeries by the year 2040 [4]. The author believes that the current nationwide Diagnostic Related Groups (DRGs) system that helps determine how much Medicare pays the hospital for each “product” needs to be phased out as soon as possible.
Our research shows that prior to Medicare implementing the DRGs payment system, Maryland proved that their total cost model of state-wide rewards and penalties compensated “efficient and effective” hospitals, providing care as defined by metrics set up by the Health Services Cost Review Commission (HSCRC). The Maryland legislature granted this independent government agency the broad powers to insulate the HSCRC from conflicts of interest, regulatory capture, and political meddling in the long term. In exchange, the HSCRC had the freedom to design a system that must deliver on three areas: cost reduction of hospital services, health improvement for all Maryland residents, and quality of life care improvements. Since inception of the HSCRC, all stakeholders are legally required to comply with robust auditing and data-submission requirements that allow the agency to collect data on the costs, patient volume, and financial condition of all inpatient, hospital-based outpatient, and emergency services in Maryland. This level of transparency allows the agency to set prices for hospital services, and hospitals must obey because it is Maryland law. Because of this methodology, HSCRC-approved average Maryland hospital markups rose from 18 percent in 1980 to only 22 percent in 2008. During that same period, the average hospital markup nationally skyrocketed from 20 percent in 1980 to more than 187 percent in 2008 [5]. This strong evidence is the primary reason why the HSCRC has continued to receive a federal waiver from the Centers for Medicare and Medicaid Services, which requires both Medicare and Medicaid to pay the HSCRC-approved rates statewide. No discounts are given because of volume, nor any shifting of costs to other payers. There is a mandate: same price for the same service at the same hospital, no exceptions. Adjustments for uncompensated medical care are automatically bundled into the HSCRC-approved rates, and thus this financial burden is shared by all hospitals in Maryland. This article explores the important milestones taken by the state of Maryland and how the lessons learned are responsible for the impressive results of their program today. This author believes that applying the Maryland Total Cost of Care Model (Maryland TCOC Model) nationwide will yield financial savings of at least $227 billion by 2035, plus another $280 billion by 2040, exclusively from joint replacement surgeries reimbursed at HSCRC-approved rates and not any other method.
APA, Harvard, Vancouver, ISO, and other styles
18

Seelman, Kate D. "Viewpoint: Telecommunications and Internet Broadband Policy: Sorting Out the Pieces for Telerehabilitation." International Journal of Telerehabilitation 2, no. 1 (September 24, 2010). http://dx.doi.org/10.5195/ijt.2010.6051.

Full text
Abstract:
Technological change is accelerating and with it regulatory upheaval. Most of us agree that providing universal telecommunication services to all our citizens is a worthy ideal. Nonetheless, many of us do not agree that regulation should be the means to make broadband Internet services widely available. This Viewpoint begins sorting out pieces of the emerging United States regulatory and policy puzzle for broadband Internet with an eye to the interests of telerehabilitation providers and consumers. Just how might changes in legal authority, regulation and agency jurisdictions impact us? Keywords: telecommunications regulation, telerehabilitation, Federal Communications Commission (FCC), Telecommunications Act of 1996, National Broadband Plan; Comcast vs. FCC; electronic health records (EHR); personal health records (PHR); health information technology (HIT).
APA, Harvard, Vancouver, ISO, and other styles
19

Venger, Olesya. "Internet Research in Online Environments for Children: Readability of Privacy and Terms of Use Policies; The Uses of (Non)Personal Data by Online Environments and Third-Party Advertisers." Journal For Virtual Worlds Research 10, no. 1 (May 31, 2017). http://dx.doi.org/10.4101/jvwr.v10i1.7227.

Full text
Abstract:
Online environments encourage their prospects, including children and teens, to register and provide information about themselves in order to participate in online activities. Many sites' privacy and terms of use policies tend to provide hard-to-understand explanations about their data-using practices, contributing to a widespread confusion regarding the differences between what counts as non-personal versus personal data, and whether this data could be used for behavioral targeting or selling. Little research has been done on online advertising self-regulations and repercussions stemming from privacy-related dilemmas associated with them (Markham & Buchanan, 2012). Given the push of advertising networks to substantiate self-regulatory policies regarding online advertising (Luft, 2008; Lal Bhasin, 2008), this study investigates how privacy and terms of use policies reflect media self-regulations and privacy-related dilemmas worldwide (Federal Trade Commission, 2000; European Commission, 2012). Addressing self-regulatory practices of online media entities and their implications, this study also conducts the readability tests of privacy and terms of use-related policies of Neopets as an example of a popular virtual environment. Finally, it discusses the use of (non)personal data provided by children and teens, while evaluating how marketers' promotional initiatives operate online, and how marketers self-regulate across the United States and the European Union. Implications are discussed and recommendations regarding how marketers in online environments may enhance their reputation by being responsible given their promotional activities in online environments are offered.
APA, Harvard, Vancouver, ISO, and other styles
20

Trautman, Lawrence J. "Governance of the Facebook Privacy Crisis." Pittsburgh Journal of Technology Law & Policy 20, no. 1 (January 7, 2020). http://dx.doi.org/10.5195/tlp.2020.234.

Full text
Abstract:
In November 2018, The New York Times ran a front-page story describing how Facebook concealed knowledge and disclosure of Russian-linked activity and exploitation resulting in Kremlin led disruption of the 2016 and 2018 U.S. elections, through the use of global hate campaigns and propaganda warfare. By mid-December 2018, it became clear that the Russian efforts leading up to the 2016 U.S. elections were much more extensive than previously thought. Two studies conducted for the United States Senate Select Committee on Intelligence (SSCI), by: (1) Oxford University’s Computational Propaganda Project and Graphika; and (2) New Knowledge, provide considerable new information and analysis about the Russian Internet Research Agency (IRA) influence operations targeting American citizens.By early 2019 it became apparent that a number of influential and successful high growth social media platforms had been used by nation states for propaganda purposes. Over two years earlier, Russia was called out by the U.S. intelligence community for their meddling with the 2016 American presidential elections. The extent to which prominent social media platforms have been used, either willingly or without their knowledge, by foreign powers continues to be investigated as this Article goes to press. Reporting by The New York Times suggests that it wasn’t until the Facebook board meeting held September 6, 2017 that board audit committee chairman, Erskin Bowles, became aware of Facebook’s internal awareness of the extent to which Russian operatives had utilized the Facebook and Instagram platforms for influence campaigns in the United States. As this Article goes to press, the degree to which the allure of advertising revenues blinded Facebook to their complicit role in offering the highest bidder access to Facebook users is not yet fully known. This Article can not be a complete chapter in the corporate governance challenge of managing, monitoring, and oversight of individual privacy issues and content integrity on prominent social media platforms. The full extent of Facebook’s experience is just now becoming known, with new revelations yet to come. All interested parties: Facebook users; shareholders; the board of directors at Facebook; government regulatory agencies such as the Federal Trade Commission (FTC) and Securities and Exchange Commission (SEC); and Congress must now figure out what has transpired and what to do about it. These and other revelations have resulted in a crisis for Facebook. American democracy has been and continues to be under attack. This article contributes to the literature by providing background and an account of what is known to date and posits recommendations for corrective action.
APA, Harvard, Vancouver, ISO, and other styles
21

Scott, Tony, and Jacob Hilton. "Cultural Resource Investigations for the Praxair Phillips 66 H2 Pipeline in Brazoria County, Texas." Index of Texas Archaeology Open Access Grey Literature from the Lone Star State, 2020. http://dx.doi.org/10.21112/ita.2020.1.49.

Full text
Abstract:
Gray & Pape, Inc. was contracted to conduct a cultural resources survey for a proposed pipeline project. The project is a 14-inch pipeline from Praxair Freeport Plant to the Phillips 66 Clemens Storage Cavern located near Freeport, Texas. The project route measures approximately 28.0 kilometers (17.4 miles). The project’s Area of Potential Effect is the entire alignment route within a survey corridor of 91.4 meters (300 feet). This amounts to approximately 252 hectares (622 acres). Subsequent workspace revisions resulted in an additional 25.7 hectares (63.4 acres) or 2.6 kilometers (1.6 miles) of workspace, documented in Appendix C of this final report. The pipeline will be collocated with several existing pipelines in a well-maintained corridor for the entire length. The Project is part of a Nationwide 12 permit for which the Lead Federal Agency is the United States Army Corps of Engineers, Galveston District. The procedures to be followed by the United States Army Corps of Engineers to fulfill the requirements set forth in the National Historic Preservation Act, other applicable historic preservation laws, and Presidential directives as they relate to the regulatory program of the United States Army Corps of Engineers (33 CFR Parts 320-334) are articulated in the Regulatory Program of the United States Army Corps of Engineers, Part 325 -Processing of Department of the Army Permits, Appendix C -Procedures for the Protection of Historic Properties. Approximately 3.6 kilometers (2.25 miles) of the project length is located within property owned by the Texas Department of Criminal Justice, Clemens Prison Unit, which necessitated the procurement of a permit subject to the Antiquities Code of Texas. Permit Number 8666 was assigned to the project on December 4, 2018. As required under the provisions of Texas Antiquities Code Permit, all project records are housed at the Center for Archaeological Studies at Texas State University, San Marcos, Texas. The goals of this study were to assist the client, the Texas Historical Commission, and other relevant agencies in determining whether intact cultural resources were present within areas planned for construction, and if so to provide management recommendations for these resources. All work conducted by Gray & Pape, Inc. followed accepted guidelines and standards set forth by the Texas Historical Commission and the Council of Texas Archeologists. Prior to field investigation, site file research was used to develop a cultural context for the study. This research resulted in a listing of all archaeological sites and National Register properties within 1.6 kilometers (1 mile) of the project area, as well as a discussion of archaeological potential within the tract. Previous surveys conducted by HRA Gray & Pape, LLC and other firms overlap approximately 6.1 kilometers (3.8 miles) / 55.4 hectares (137 acres) of the current project’s corridor. These surveys were undertaken from between 2012 to 2013. These areas along with an additional 2.8 kilometers (2 miles) / 28.9 hectares (71.3 acres) of highly disturbed pipeline corridor were subjected to visual reconnaissance survey only. Another 3.0 kilometers (1.9 miles) / 27.5 hectares (68 acres) of the project is located within highly industrial areas of DOW property and was subjected to desktop assessment and determined to be of low potential for containing intact cultural materials. No further work is recommended for these areas. No new cultural resources were discovered during the survey. Gray & Pape, Inc. 
recommends no survey within these portions due to the highly disturbed conditions. Intensive pedestrian survey was completed on those portions of the current project that fall outside of the previous survey coverage or that have potential to impact previously unidentified sites. This amounts to 15.6 kilometers (9.7 miles) / 140 hectares (346 acres). As a result of survey efforts, one previously unrecorded archaeological site was identified during survey efforts. As currently mapped, the site is overlapped by an existing pipeline corridor and does not retain integrity within the project right-of-way. Gray & Pape, Inc. recommends that no further investigation be necessary within the surveyed portions of the project.
APA, Harvard, Vancouver, ISO, and other styles
22

Strosnider, Heather, Patrick Wall, Holly Wilson, Joseph Ralph, and Fuyuen Yip. "Tracking environmental hazards and health outcomes to inform decision-making in the United States." Online Journal of Public Health Informatics 11, no. 1 (May 30, 2019). http://dx.doi.org/10.5210/ojphi.v11i1.9772.

Full text
Abstract:
Objective: To increase the availability and accessibility of standardized environmental health data for public health surveillance and decision-making. Introduction: In 2002, the United States (US) Centers for Disease Control and Prevention (CDC) launched the National Environmental Public Health Tracking Program (Tracking Program) to address the challenges in environmental health surveillance described by the Pew Environmental Commission (1). The report cited gaps in our understanding of how the environment affects our health and attributed these gaps to a dearth of surveillance data for environmental hazards, human exposures, and health effects. The Tracking Program’s mission is to provide information from a nationwide network of integrated health and environmental data that drives actions to improve the health of communities. Accomplishing this mission requires a range of expertise from environmental health scientists to programmers to communicators employing the best practices and latest technical advances of their disciplines. Critical to this mission, the Tracking Program must identify and prioritize what data are needed, address any gaps found, and integrate the data into the network for ongoing surveillance. Methods: The Tracking Program identifies important environmental health topics with data challenges based on the recommendations in the Pew Commission report as well as input from federal, state, territorial, tribal, and local partners. For each topic, the first step is to formulate the key surveillance question, which includes identifying the decision-maker or end user. Next, available data are evaluated to determine if the data can answer the question and, if not, what enhancements or new data are needed. Standards are developed to establish data requirements and to ensure consistency and comparability. Standardized data are then integrated into the network at national, state, and local levels. Standardized measures are calculated to translate the data into the information needed. These measures are then publicly disseminated via national, state, and local web-based portals. Data are updated annually or as they are available and new data are added regularly. All data undergo a multi-step validation process that is semi-automated, routinized, and reproducible. Results: The first set of nationally consistent data and measures (NCDM) was released in 2008 and covered 8 environmental health topics. Since then the NCDM have grown to cover 14 topics. Additional standardized data and measures are integrated into the national network, resulting in 23 topics with 450 standardized measures (Figure). On the national network, measures can be queried via the Data Explorer, viewed in the info-by-location application, or connected to via the network’s Application Program Interface (API). On average, 15,000 and 3300 queries are run every month on the Data Explorer and the API, respectively. Additional locally relevant data are available on state and local tracking networks. Gaps in data have been addressed through standards for new data collections, models to extend available data, new methodologies for using existing data, and expansion of the utility of non-traditional public health data. For example, the program has collaborated with the Environmental Protection Agency to develop daily estimates of fine particulate matter and ozone for every county in the conterminous US and to develop the first national database of standardized radon testing data. 
The program also collaborated with the National Aeronautics and Space Administration and its academic partners to transform satellite data into data products for public health. The Tracking Program has analyzed the data to address important gaps in our understanding of the relationship between negative health outcomes and environmental hazards. Data have been used in epidemiologic studies to better quantify the effects of fine particulate matter, ozone, wildfire smoke, and extreme heat on emergency department visits and hospitalizations. Results are translated into measures of health burden for public dissemination and can be used to inform regulatory standards and public health interventions. Conclusions: The scope of the Tracking Program’s mission and the volume of data within the network require the program to merge traditional public health expertise and practices with current technical and scientific advances. Data integrated into the network can be used to (1) describe temporal and spatial trends in health outcomes and potential environmental exposures, (2) identify populations most affected, (3) generate hypotheses about associations between health and environmental exposures, and (4) develop, guide, and assess the environmental public health policies and interventions aimed at reducing or eliminating health outcomes associated with environmental factors. The program continues to expand the data within the network and the applications deployed for others to access the data. Current data challenges include the need for more temporally and spatially resolved data to better understand the complex relationships between environmental hazards, health outcomes, and risk factors at a local level. National standards are in development for systematically generating, analyzing, and disseminating small area data and real-time data that will allow for comparisons between different datasets over geography and time. References: 1. Pew Environmental Health Tracking Project Team. America’s Environmental Health Gap: Why the Country Needs a Nationwide Health Tracking Network. Johns Hopkins School of Hygiene and Public Health, Department of Health Policy and Management; 2000.
APA, Harvard, Vancouver, ISO, and other styles
23

Stewart, Richard B., Katrina Wyman, and Danielle Spiegel Feld. "Amicus Curiae Brief of the Guarini Center on Environmental, Energy and Land Use Law at New York University School of Law in Support of Petitioners (Federal Energy Regulatory Commission v. Electric Power Supply Association, Supreme Court of the United States)." SSRN Electronic Journal, 2015. http://dx.doi.org/10.2139/ssrn.2639196.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Hosseini, Hossein, and James Wright. "The Potential of Nuclear Reactor Technology to Treat Produced/Brackish Water for Oil & Gas Applications." Journal of Energy Research and Reviews, November 29, 2018, 1–6. http://dx.doi.org/10.9734/jenrr/2019/v2i129721.

Full text
Abstract:
A merger of two mature technologies (the nuclear and petroleum industries) has the potential to process water produced from oil and gas operations to drinking quality standards at a reasonable price of $0.30 to $0.40 per 42-gallon barrel. This “RO” process treats the produced water with the process heat from a small nuclear reactor with ~125 MW of power. This process also improves the efficiency of hydraulic fracturing and directional drilling, and significantly reduces the volume of water disposed into formations, while at the same time increasing public safety by reducing the probability of earthquakes [1]. For at least most of the past 10 years, the oil and gas industry in the United States has struggled to manage the ever-increasing costs of disposing of and handling produced water and other wastewater from oil and gas production in the Permian Basin and the US. This includes trying to develop and maintain the required high-quality fresh water supplies for both horizontal drilling and new production techniques such as hydraulic fracturing. In fact, the current cost of water management for oil and gas production in the region has risen to the point where it has arguably become the industry’s most important cost issue. A successful approach to water management will maximize profit by promoting higher operational efficiency, leading to reduced costs. The nuclear energy industry is well known for being a capable generator of electricity in the US. In the past 10 years, the Department of Energy’s (DOE) Idaho National Laboratory (INL) has verified an innovative nuclear reactor design that has been constructed and tested in the US to treat any water source to “drinking water” quality, plus a “waste” stream. According to DOE/INL reports, this can be accomplished in the cost range of ~$0.38 per 42-gallon “barrel” (or less than a cent per gallon) [2]. This would improve the efficiency of the oil and gas production industry through the utilization of “clean” water sources, and also potentially re-establish the freshwater resources (e.g., the Ogallala Aquifer) that have been both depleted and polluted by both the petroleum industry and agriculture over the past 75 years or so. The “process heat” required to treat this produced water to “drinking water” quality would be supplied by a 25 MW(thermal) “High-Temperature, Gas-cooled (nuclear) Reactor” (HTGR) that would be operated at temperatures up to 1700 °F and cooled by the inert gas helium (He). Further, this facility will never have to be “turned off” for refueling for ~70 years (the estimated life of the facility) since, in this reactor design, that process is automatic and driven simply by gravity, as described below. The nuclear fuel is contained in thousands of small fuel-bearing microspheres that are ~1 mm in diameter and also made of graphite. The fuel-bearing microspheres are then mixed with more graphite and placed in thousands of graphite “pebbles” that are approximately the size of a tennis ball. These tennis-ball-sized pebbles are then placed in the reactor core in a manner analogous to a moving “gum-ball” machine. They enter the core at the top and start their travel to the core bottom. When these tennis-ball-sized pebbles reach the bottom, via gravity, the fuel is completely used and they automatically fall out of the reactor core bottom for disposal. When this occurs, space is made at the top of the reactor core for a new fuel pebble to start its journey to the bottom of the core. 
The cost of a “first” commercial plant with this design, constructed and privately financed in West Texas by the US private nuclear reactor engineering, design, and construction company named X-Energy, is estimated to be ~$1-2 billion. However, this cost is expected to be significantly reduced if X-Energy is (1) successful in financing this facility with municipal bonds and other non-governmental sources, and (2) able to work with the Trump administration to streamline the “construction approval and licensing process” performed by the USNRC (United States Nuclear Regulatory Commission). It is X-Energy’s belief that the current cost estimates by the federal government are inflated, and that by using engineering, design and construction processes currently required and used by other governments around the world, the total cost will be significantly reduced by up to 50%. In addition, this X-Energy facility will be the very first nuclear facility constructed in the US using entirely private equity funds and financing, which should also lower costs. In fact, the Trump administration is currently reviewing other projects such as this, and X-Energy believes that the secret to lowering the facility costs of nuclear reactors in the US is to drastically streamline the regulatory process for the facility design, engineering and construction of all reactors. The attractive economic projections for this facility indicate both a significant cost reduction in treating “produced water” from oil and gas operations and a good path to cleaning up and recharging existing fresh-water aquifers that have been polluted by agriculture and/or the petroleum industry. This marriage of technologies in the petroleum and nuclear industries can truly “make a difference” in improving the quality of drinking water in West Texas and also lead to a significant increase in profit for the oil and gas industry. It is also important to emphasize that the proposed nuclear reactor design to be used for these applications has been proven to be “intrinsically safe” throughout the world. In this case, “intrinsically safe” is defined as “if this reactor starts to have any potentially catastrophic problem (generally caused by fuel “failure”), it will automatically and without human intervention shut itself down.” This ability is due to the unique design of the fuel system. The concepts presented in this paper are transformational since this facility will utilize the technologies and experience of two gigantic and effective energy-producing entities in addressing and developing true “energy security” for the US and the world.
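As a quick check on the per-gallon figure implied by the quoted cost range, the conversion from dollars per 42-gallon barrel to cents per gallon is straightforward; the short sketch below uses only the numbers given in the abstract.

# Convert the quoted per-barrel treatment costs into cents per gallon.
GALLONS_PER_BARREL = 42

for cost_per_barrel in (0.30, 0.38, 0.40):  # dollars per 42-gallon barrel, from the abstract
    cents_per_gallon = cost_per_barrel / GALLONS_PER_BARREL * 100
    print(f"${cost_per_barrel:.2f}/bbl -> {cents_per_gallon:.2f} cents/gallon")

# Prints roughly 0.71, 0.90 and 0.95 cents per gallon, i.e. just under one cent
# per gallon across the whole quoted range.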
APA, Harvard, Vancouver, ISO, and other styles
25

Dwyer, Tim. "Transformations." M/C Journal 7, no. 2 (March 1, 2004). http://dx.doi.org/10.5204/mcj.2339.

Full text
Abstract:
The Australian Government has been actively evaluating how best to merge the functions of the Australian Communications Authority (ACA) and the Australian Broadcasting Authority (ABA) for around two years now. Broadly, the reason for this is an attempt to keep pace with the communications media transformations we reduce to the term “convergence.” Mounting pressure for restructuring is emerging as a site of turf contestation: the possibility of a regulatory “one-stop shop” for governments (and some industry players) is an end game of considerable force. But, from a public interest perspective, the case for a converged regulator needs to make sense to audiences using various media, as well as in terms of arguments about global, industrial, and technological change. This national debate about the institutional reshaping of media regulation is occurring within a wider global context of transformations in social, technological, and politico-economic frameworks of open capital and cultural markets, including the increasing prominence of international economic organisations, corporations, and Free Trade Agreements (FTAs). Although the recently concluded FTA with the US explicitly carves out a right for Australian Governments to make regulatory policy in relation to existing and new media, considerable uncertainty remains as to future regulatory arrangements. A key concern is how a right to intervene in cultural markets will be sustained in the face of cultural, politico-economic, and technological pressures that are reconfiguring creative industries on an international scale. While the right to intervene was retained for the audiovisual sector in the FTA, by contrast, it appears that comparable unilateral rights to intervene will not operate for telecommunications, e-commerce or intellectual property (DFAT). Blurring Boundaries A lack of certainty for audiences is a by-product of industry change, and further blurs regulatory boundaries: new digital media content and overlapping delivery technologies are already a reality for Australia’s media regulators. These hypothetical media usage scenarios indicate how confusion over the appropriate regulatory agency may arise: 1. playing electronic games that use racist language; 2. being subjected to deceptive or misleading pop-up advertising online; 3. receiving messaged imagery on your mobile phone that offends, disturbs, or annoys; 4. watching a program like World Idol with SMS voting that subsequently raises charging or billing issues; or 5. watching a new “reality” TV program where products are being promoted with no explicit acknowledgement of the underlying commercial arrangements either during or at the end of the program. These are all instances where, theoretically, regulatory mechanisms are in place that allow individuals to complain and to seek some kind of redress as consumers and citizens. In the last scenario, in commercial television under the sector code, no clear-cut rules exist as to the precise form of the disclosure—as there is (from 2000) in commercial radio. It’s one of a number of issues the peak TV industry lobby Commercial TV Australia (CTVA) is considering in their review of the industry’s code of practice. CTVA have proposed an amendment to the code that will simply formalise the already existing practice. That is, commercial arrangements that assist in the making of a program should be acknowledged either during programs or in their credits. 
In my view, this amendment doesn’t go far enough in post “cash for comment” mediascapes (Dwyer). Audiences have a right to expect that broadcasters, production companies and program celebrities are open and transparent with the Australian community about these kinds of arrangements. They need to be far more clearly signposted, and people better informed about their role. In the US, the “Commercial Alert” <http://www.commercialalert.org/> organisation has been lobbying the Federal Communications Commission and the Federal Trade Commission to achieve similar in-program “visual acknowledgements.” The ABA’s Commercial Radio Inquiry (“Cash-for-Comment”) found widespread systemic regulatory failure and introduced three new standards. On that basis, how could a “standstill” response by CTVA constitute best practice for such a pervasive and influential medium as contemporary commercial television? The World Idol example may lead to confusion for some audiences, who are unsure whether the issues involved relate to broadcasting or telecommunications. In fact, it could be dealt with as a complaint to the Telecommunication Industry Ombudsman (TIO) under an ACA registered, but Australian Communications Industry Forum (ACIF) developed, code of practice. These kinds of cross-platform issues may become more vexed in future years from an audience’s perspective, especially if reality formats using on-screen premium rate service numbers invite audiences to participate by sending MMS (multimedia messaging services) images or short video grabs over wireless networks. The political and cultural implications of this kind of audience interaction, in terms of access, participation, and more generally the symbolic power of media, may perhaps even indicate a longer-term shift in relations with consumers and citizens. In the Internet example, the Australian Competition and Consumer Commission’s (ACCC) Internet advertising jurisdiction would apply—not the ABA’s “co-regulatory” Internet content regime as some may have thought. Although the ACCC deals with complaints relating to Internet advertising, there won’t be much traction for them in a more complex issue that also includes, say, racist or religious bigotry. The DVD example would probably fall between the remits of the Office of Film and Literature Classification’s (OFLC) new “convergent” Guidelines for the Classification of Film and Computer Games and race discrimination legislation administered by the Human Rights and Equal Opportunity Commission (HREOC). The OFLC’s National Classification Scheme is really geared to provide consumer advice on media products that contain sexual and violent imagery or coarse language, rather than issues of racist language. And it’s unlikely that a single person would have the locus standi to even apply for a reclassification. It may fall within the jurisdiction of the HREOC depending on whether it was played in public or not. Even then it would probably be considered exempt on free speech grounds as an “artistic work.” Unsolicited, potentially illegal, content transmitted via mobile wireless devices, in particular 3G phones, provides another example of content that falls between the media regulation cracks. It illustrates a potential content policy “turf grab” too. Image-enabled mobile phones create a variety of novel issues for content producers, network operators, regulators, parents and viewers. There is no one government media authority or agency with a remit to deal with this issue. 
Although it has elements relating to the regulatory activities of the ACA, the ABA, the OFLC, the TIO, and TISSC, the combination of illegal or potentially prohibited content and its carriage over wireless networks positions it outside their current frameworks. The ACA may argue it should have responsibility for this kind of content since: it now enforces the recently enacted Commonwealth anti-Spam laws; has registered an industry code of practice for unsolicited content delivered over wireless networks; is seeking to include ‘adult’ content within premium rate service numbers, and, has been actively involved in consumer education for mobile telephony. It has also worked with TISSC and the ABA in relation to telephone sex information services over voice networks. On the other hand, the ABA would probably argue that it has the relevant expertise for regulating wirelessly transmitted image-content, arising from its experience of Internet and free and subscription TV industries, under co-regulatory codes of practice. The OFLC can also stake its claim for policy and compliance expertise, since the recently implemented Guidelines for Classification of Film and Computer Games were specifically developed to address issues of industry convergence. These Guidelines now underpin the regulation of content across the film, TV, video, subscription TV, computer games and Internet sectors. Reshaping Institutions Debates around the “merged regulator” concept have occurred on and off for at least a decade, with vested interests in agencies and the executive jockeying to stake claims over new turf. On several occasions the debate has been given renewed impetus in the context of ruling conservative parties’ mooted changes to the ownership and control regime. It’s tended to highlight demarcations of remit, informed as they are by historical and legal developments, and the gradual accretion of regulatory cultures. Now the key pressure points for regulatory change include the mere existence of already converged single regulatory structures in those countries with whom we tend to triangulate our policy comparisons—the US, the UK and Canada—increasingly in a context of debates concerning international trade agreements; and, overlaying this, new media formats and devices are complicating existing institutional arrangements and legal frameworks. The Department of Communications, Information Technology & the Arts’s (DCITA) review brief was initially framed as “options for reform in spectrum management,” but was then widened to include “new institutional arrangements” for a converged regulator, to deal with visual content in the latest generation of mobile telephony, and other image-enabled wireless devices (DCITA). No other regulatory agencies appear, at this point, to be actively on the Government’s radar screen (although they previously have been). Were the review to look more inclusively, the ACCC, the OFLC and the specialist telecommunications bodies, the TIO and the TISSC may also be drawn in. Current regulatory arrangements see the ACA delegate responsibility for broadcasting services bands of the radio frequency spectrum to the ABA. In fact, spectrum management is the turf least contested by the regulatory players themselves, although the “convergent regulator” issue provokes considerable angst among powerful incumbent media players. The consensus that exists at a regulatory level can be linked to the scientific convention that holds the radio frequency spectrum is a continuum of electromagnetic bands. 
In this view, it becomes artificial to sever broadcasting, as “broadcasting services bands” from the other remaining highly diverse communications uses, as occurred from 1992 when the Broadcasting Services Act was introduced. The prospect of new forms of spectrum charging is highly alarming for commercial broadcasters. In a joint submission to the DCITA review, the peak TV and radio industry lobby groups have indicated they will fight tooth and nail to resist new regulatory arrangements that would see a move away from the existing licence fee arrangements. These are paid as a sliding scale percentage of gross earnings that, it has been argued by Julian Thomas and Marion McCutcheon, “do not reflect the amount of spectrum used by a broadcaster, do not reflect the opportunity cost of using the spectrum, and do not provide an incentive for broadcasters to pursue more efficient ways of delivering their services” (6). An economic rationalist logic underpins pressure to modify the spectrum management (and charging) regime, and undoubtedly contributes to the commercial broadcasting industry’s general paranoia about reform. Total revenues collected by the ABA and the ACA between 1997 and 2002 were, respectively, $1423 million and $3644.7 million. Of these sums, using auction mechanisms, the ABA collected $391 million, while the ACA collected some $3 billion. The sale of spectrum that will be returned to the Commonwealth by television broadcasters when analog spectrum is eventually switched off, around the end of the decade, is a salivating prospect for Treasury officials. The large sums that have been successfully raised by the ACA boosts their position in planning discussions for the convergent media regulatory agency. The way in which media outlets and regulators respond to publics is an enduring question for a democratic polity, irrespective of how the product itself has been mediated and accessed. Media regulation and civic responsibility, including frameworks for negotiating consumer and citizen rights, are fundamental democratic rights (Keane; Tambini). The ABA’s Commercial Radio Inquiry (‘cash for comment’) has also reminded us that regulatory frameworks are important at the level of corporate conduct, as well as how they negotiate relations with specific media audiences (Johnson; Turner; Gordon-Smith). Building publicly meaningful regulatory frameworks will be demanding: relationships with audiences are often complex as people are constructed as both consumers and citizens, through marketised media regulation, institutions and more recently, through hybridising program formats (Murdock and Golding; Lumby and Probyn). In TV, we’ve seen the growth of infotainment formats blending entertainment and informational aspects of media consumption. At a deeper level, changes in the regulatory landscape are symptomatic of broader tectonic shifts in the discourses of governance in advanced information economies from the late 1980s onwards, where deregulatory agendas created an increasing reliance on free market, business-oriented solutions to regulation. “Co-regulation” and “self-regulation’ became the preferred mechanisms to more direct state control. Yet, curiously contradicting these market transformations, we continue to witness recurring instances of direct intervention on the basis of censorship rationales (Dwyer and Stockbridge). That digital media content is “converging” between different technologies and modes of delivery is the norm in “new media” regulatory rhetoric. 
Others critique “visions of techno-glory,” arguing instead for a view that sees fundamental continuities in media technologies (Winston). But the socio-cultural impacts of new media developments surround us: the introduction of multichannel digital and interactive TV (in free-to-air and subscription variants); broadband access in the office and home; wirelessly delivered content and mobility, and, as Jock Given notes, around the corner, there’s the possibility of “an Amazon.Com of movies-on-demand, with the local video and DVD store replaced by online access to a distant server” (90). Taking a longer view of media history, these changes can be seen to be embedded in the global (and local) “innovation frontier” of converging digital media content industries and its transforming modes of delivery and access technologies (QUT/CIRAC/Cutler & Co). The activities of regulatory agencies will continue to be a source of policy rivalry and turf contestation until such time as a convergent regulator is established to the satisfaction of key players. However, there are risks that the benefits of institutional reshaping will not be readily available for either audiences or industry. In the past, the idea that media power and responsibility ought to coexist has been recognised in both the regulation of the media by the state, and the field of communications media analysis (Curran and Seaton; Couldry). But for now, as media industries transform, whatever the eventual institutional configuration, the evolution of media power in neo-liberal market mediascapes will challenge the ongoing capacity for interventions by national governments and their agencies. Works Cited Australian Broadcasting Authority. Commercial Radio Inquiry: Final Report of the Australian Broadcasting Authority. Sydney: ABA, 2000. Australian Communications Information Forum. Industry Code: Short Message Service (SMS) Issues. Dec. 2002. 8 Mar. 2004 <http://www.acif.org.au/__data/page/3235/C580_Dec_2002_ACA.pdf >. Commercial Television Australia. Draft Commercial Television Industry Code of Practice. Aug. 2003. 8 Mar. 2004 <http://www.ctva.com.au/control.cfm?page=codereview&pageID=171&menucat=1.2.110.171&Level=3>. Couldry, Nick. The Place of Media Power: Pilgrims and Witnesses of the Media Age. London: Routledge, 2000. Curran, James, and Jean Seaton. Power without Responsibility: The Press, Broadcasting and New Media in Britain. 6th ed. London: Routledge, 2003. Dept. of Communication, Information Technology and the Arts. Options for Structural Reform in Spectrum Management. Canberra: DCITA, Aug. 2002. ---. Proposal for New Institutional Arrangements for the ACA and the ABA. Aug. 2003. 8 Mar. 2004 <http://www.dcita.gov.au/Article/0,,0_1-2_1-4_116552,00.php>. Dept. of Foreign Affairs and Trade. Australia-United States Free Trade Agreement. Feb. 2004. 8 Mar. 2004 <http://www.dfat.gov.au/trade/negotiations/us_fta/outcomes/11_audio_visual.php>. Dwyer, Tim. Submission to Commercial Television Australia’s Review of the Commercial Television Industry’s Code of Practice. Sept. 2003. Dwyer, Tim, and Sally Stockbridge. “Putting Violence to Work in New Media Policies: Trends in Australian Internet, Computer Game and Video Regulation.” New Media and Society 1.2 (1999): 227-49. Given, Jock. America’s Pie: Trade and Culture After 9/11. Sydney: U of NSW P, 2003. Gordon-Smith, Michael. “Media Ethics After Cash-for-Comment.” The Media and Communications in Australia. Ed. Stuart Cunningham and Graeme Turner. Sydney: Allen and Unwin, 2002. Johnson, Rob. 
Cash-for-Comment: The Seduction of Journo Culture. Sydney: Pluto, 2000. Keane, John. The Media and Democracy. Cambridge: Polity, 1991. Lumby, Cathy, and Elspeth Probyn, eds. Remote Control: New Media, New Ethics. Melbourne: Cambridge UP, 2003. Murdock, Graham, and Peter Golding. “Information Poverty and Political Inequality: Citizenship in the Age of Privatized Communications.” Journal of Communication 39.3 (1991): 180-95. QUT, CIRAC, and Cutler & Co. Research and Innovation Systems in the Production of Digital Content and Applications: Report for the National Office for the Information Economy. Canberra: Commonwealth of Australia, Sept. 2003. Tambini, Damian. Universal Access: A Realistic View. IPPR/Citizens Online Research Publication 1. London: IPPR, 2000. Thomas, Julian and Marion McCutcheon. “Is Broadcasting Special? Charging for Spectrum.” Conference paper. ABA conference, Canberra. May 2003. Turner, Graeme. “Talkback, Advertising and Journalism: A cautionary tale of self-regulated radio”. International Journal of Cultural Studies 3.2 (2000): 247-255. ---. “Reshaping Australian Institutions: Popular Culture, the Market and the Public Sphere.” Culture in Australia: Policies, Publics and Programs. Ed. Tony Bennett and David Carter. Melbourne: Cambridge UP, 2001. Winston, Brian. Media, Technology and Society: A History from the Telegraph to the Internet. London: Routledge, 1998. Web Links http://www.aba.gov.au http://www.aca.gov.au http://www.accc.gov.au http://www.acif.org.au http://www.adma.com.au http://www.ctva.com.au http://www.crtc.gc.ca http://www.dcita.com.au http://www.dfat.gov.au http://www.fcc.gov http://www.ippr.org.uk http://www.ofcom.org.uk http://www.oflc.gov.au Links http://www.commercialalert.org/ Citation reference for this article MLA Style Dwyer, Tim. "Transformations" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0403/06-transformations.php>. APA Style Dwyer, T. (2004, Mar17). Transformations. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0403/06-transformations.php>
APA, Harvard, Vancouver, ISO, and other styles
26

Fowles, Jib. "Television Violence and You." M/C Journal 3, no. 1 (March 1, 2000). http://dx.doi.org/10.5204/mcj.1828.

Full text
Abstract:
Introduction Television has become more and more restricted within the past few years. Rating systems and "family programming" have taken over the broadcast networks, relegating violent programming, often some of the most cutting edge work in television, to pay channels. There are very few people willing to stand up and say that viewers -- even young children -- should be able to watch whatever they want, and that viewing acts of violence can actually result in more mature, balanced adults. Jib Fowles is one of those people. His book, The Case For Television Violence, explores the long history of violent content in popular culture, and how its modern incarnation, television, fulfils the same function as epic tragedy and "penny dreadfuls" did -- the diverting of aggressive feelings into the cathartic action of watching. Fowles points out the flaws in studies linking TV violence to actual violence (why, for example, has there been a sharp decline in violent crime in the U.S. during the 1990s when, by all accounts, television violence has increased?), as well as citing overlooked studies that show no correlation between viewing and performing acts of violence. The book also demonstrates how efforts to censor TV violence are not only ineffective, but can lead to the opposite result: an increase in exposure to violent viewing as audiences forsake traditional broadcast programming for private programming through pay TV and videocassettes. The revised excerpt below describes one of the more heated topics of debate -- the V-Chip. Television Violence and You Although the antiviolence fervor crested in the US in the first half of the 1990s, it also continued into the second half. As Sissela Bok comments: "during the 1990s, much larger efforts by citizen advocacy groups, churches, professional organizations, public officials, and media groups have been launched to address the problems posed by media violence" (146). It continues as always. On the one side, the reformist position finds articulation time and again; on the other side, the public's incessant desire for violent entertainment is reluctantly (because there is no prestige or cachet to be had in it) serviced by television companies as they compete against each other for profits. We can contrast these two forces in the following way: the first, the antitelevision violence campaign, is highly focussed in its presentation, calling for the curtailment of violent content, but this concerted effort has underpinnings that are vague and various; the second force is highly diffused on the surface (the public nowhere speaks pointedly in favor of violent content), but its underpinnings are highly concentrated and functional, pertinent to the management of disapproved emotions. To date, neither force has triumphed decisively. The antiviolence advocates can be gratified by the righteousness of their cause and sense of moral superiority, but violent content continues as a mainstay of the medium's offerings and in viewers' attention. Over the longer term, equilibrium has been the result. If the equilibrium were upset, however, unplanned consequences would result. The attack on television violence is not simply unwarranted; it carries the threat of unfortunate dangers should it succeed. In the US, television violence is a successful site for the siphoning off of unwanted emotions. The French critic Michel Mourlet explains: "violence is a major theme in aesthetics. 
Violence is decompression: Arising out of a tension between the individual and the world, it explodes as the tension reaches its pitch, like an abscess burning. It has to be gone through before there can be any repose" (233). The loss or even diminishment of television violence would suggest that surplus psychic energy would have to find other outlets. What these outlets would be is open to question, but the possibility exists that some of them might be retrogressive, involving violence in more outright and vicious forms. It is in the nation's best interest not to curtail the symbolic displays that come in the form of television violence. Policy The official curbing of television violence is not an idle or empty threat. It has happened recently in Canada. In 1993, the Canadian Radio-Television and Telecommunications Commission, the equivalent of the Australian Broadcasting Authority or of the American FCC, banned any "gratuitous" violence, which was defined as violence not playing "an integral role in developing the plot, character, or theme of the material as a whole" (Scully 12). Violence of any sort cannot be broadcast before 9 p.m. Totally forbidden are any programs promoting violence against women, minorities, or animals. Detailed codes regulate violence in children's shows. In addition, the Canadian invention of the V-chip is to be implemented, which would permit parents to block out programming that exceeds preset levels for violence, sexuality, or strong language (DePalma). In the United States, the two houses of Congress have held 28 hearings since 1954 on the topic of television violence (Cooper), but none led to the passage of regulatory legislation until the Telecommunications Act of 1996. According to the Act, "studies have shown that children exposed to violent video programming at a young age have a higher tendency for violent and aggressive behavior later in life than children not so exposed, and that children exposed to violent video programming are prone to assume that acts of violence are acceptable behavior" (Section 551). It then requires that newly manufactured television sets must "be equipped with a feature designed to enable viewers to block display of all programs with a common rating" (Telecommunications Act of 1996, section 551). The V-chip, the only available "feature" to meet the requirements, will therefore be imported from Canada to the United States. Utilising a rating system reluctantly and haltingly developed by the television industry, parents on behalf of their children would be able to black out offensive content. Censorship had passed down to the family level. Although the V-chip represents the first legislated regulation of television violence in the US, that country experienced an earlier episode of violence censorship whose outcome may be telling for the fate of the chip. This occurred in the aftermath of the 1972 Report to the Surgeon General on Television and Social Behavior, which, in highly equivocal language, appeared to give some credence to the notion that violent content can activate violent behavior in some younger viewers. Pressure from influential congressmen and from the FCC and its chairman, Richard Wiley, led the broadcasting industry in 1975 to institute what came to be known as the Family Viewing Hour. Formulated as an amendment to the Television Code of the National Association of Broadcasters, the stipulation decreed that before 9:00 p.m. 
"entertainment programming inappropriate for viewing by a general family audience should not be broadcast" (Cowan 113). The definition of "inappropriate programming" was left to the individual networks, but as the 1975-1976 television season drew near, it became clear to a production company in Los Angeles that the definitions would be strict. The producers of M*A*S*H (which aired at 8:30 p.m.) learned from the CBS censor assigned to them that three of their proposed programs -- dealing with venereal disease, impotence, and adultery -- would not be allowed (Cowan 125). The series Rhoda could not discuss birth control (131) and the series Phyllis would have to cancel a show on virginity (136). Television writers and producers began to rebel, and in late 1975 their Writers Guild brought a lawsuit against the FCC and the networks with regard to the creative impositions of the Family Viewing Hour. Actor Carroll O'Connor (as quoted in Cowan 179) complained, "Congress has no right whatsoever to interfere in the content of the medium", and writer Larry Gelbert voiced dismay (as quoted in Cowan 177): "situation comedies have become the theater of ideas, and those ideas have been very, very restricted". The judge who heard the case in April and May of 1976 took until November to issue his decision, but when it emerged it was polished and clear: the Family Viewing Hour was the result of "backroom bludgeoning" by the FCC and was to be rescinded. According to the judge, "the existence of threats, and the attempted securing of commitments coupled with the promise to publicize noncompliance ... constituted per se violations of the First Amendment" (Corn-Revere 201). The fate of the Family Viewing Hour may have been a sort of premoniton: The American Civil Liberties Union is currently bringing a similar case against proponents of the V-chip -- a case that may produce similar results. Whether or not the V-chip will withstand judicial scrutiny, there are several problematic aspects to the device and any possible successors. Its usage would appear to impinge on the providers of violent content, on the viewers of it, and indeed on the fundamental legal structure of the United States. To confront the first of these three problems, significant use of the V- chip by parents would measurably reduce the audience size for certain programmes containing symbolic violence. Little else could have greater impact on the American television system as it is currently constituted. A decrease in audience numbers quickly translates into a decrease in advertising revenues in an advertising system such as that of the United States. Advertisers may additionally shy away from a shunned programme because of its loss of popularity or because its lowered ratings have clearly stamped it as violent. The decline in revenues would make the programme less valuable in the eyes of network executives and perhaps a candidate for cancellation. The Hollywood production company would quickly take notice and begin tailoring its broadcast content to the new standards. Blander or at least different fare would be certain to result. Broadcast networks may begin losing viewers to bolder content on less fastidious cable networks and in particular to the channels that are not supported and influenced by advertising. Thus, we might anticipate a shift away from the more traditional and responsible channels towards the less so and away from advertising-supported channels to subscriber-supported channels. 
This shift would not transpire according to the traditional governing mechanism of television -- audience preferences. Those to whom the censored content had been destined would have played no role in its neglect. Neglect would have transpired because of the artificial intercession of controls. The second area to be affected by the V-chip, should its implementation prove successful, is viewership, in particular younger viewers. Currently, young viewers have great license in most households to select the content they want to watch; this license would be greatly reduced by the V-chip, which can block out entire genres. Screening for certain levels of violence, the parent would eliminate most cartoons and all action-adventure shows, whether the child desires some of these or not. A New York Times reporter, interviewing a Canadian mother who had been an early tester of a V-chip prototype, heard the mother's 12-year-old son protesting in the background, "we're not getting the V-chip back!" The mother explained to the reporter, "the kids didn't like the fact that they were not in control any longer" (as quoted in DePalma C14) -- with good reason. Children are losing the right to pick the content of which they are in psychological need. The V-chip represents another weapon in the generational war -- a device that allows parents to eradicate the compensational content of which children have learned to make enjoyable use. The consequences of all this for the child and the family would be unpleasant. The chances that the V-chip will increase intergenerational friction are high. Not only will normal levels of tension and animosity be denied their outlet via television fiction but also so will the new superheated levels. It is not a pleasant prospect. Third, the V-chip constitutes a strong challenge to traditional American First Amendment rights of free speech and a free press. Stoutly defended by post-World War II Supreme Courts, First Amendment rights can be voided "only in order to promote a compelling state interest, and then only if the government adopts the least restrictive means to further that interest" (Ballard 211). The few restrictions allowed concern such matters as obscenity, libel, national security, and the sometimes conflicting right to a fair trial. According to legal scholar Ian Ballard, there is no "compelling state interest" involved in the matter of television violence because "the social science evidence used to justify the regulation of televised violence is subject to such strong methodological criticism that the evidence is insufficient to support massive regulatory assault on the television entertainment industry" (185). Even if the goal of restricting television violence were acceptable, the V-chip is hardly "the least restrictive means" because it introduces a "chilling effect" on programme producers and broadcasters that "clearly infringes on fundamental First Amendment rights" (216). Moreover, states Ballard, "fear of a slippery slope is not unfounded" (216). If television violence can be censored, supposedly because it poses a threat to social order, then what topics might be next? It would not be long before challenging themes such as feminism or multiculturalism were deemed unfit for the same reason. 
Taking all these matters into consideration, the best federal policy regarding television violence would be to have no policy -- to leave the extent of violent depictions completely up to the dictates of viewer preferences, as expertly interpreted by the television industry. In this, I am in agreement with Ian Ballard, who finds that the best approach "is for the government to do nothing at all about television violence" (218). Citation reference for this article MLA style: Jib Fowles. "Television Violence and You." M/C: A Journal of Media and Culture 3.1 (2000). [your date of access] <http://www.uq.edu.au/mc/0003/television.php>. Chicago style: Jib Fowles, "Television Violence and You," M/C: A Journal of Media and Culture 3, no. 1 (2000), <http://www.uq.edu.au/mc/0003/television.php> ([your date of access]). APA style: Jib Fowles. (2000) Television Violence and You. M/C: A Journal of Media and Culture 3(1). <http://www.uq.edu.au/mc/0003/television.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
27

Paull, John. "Beyond Equal: From Same But Different to the Doctrine of Substantial Equivalence." M/C Journal 11, no. 2 (June 1, 2008). http://dx.doi.org/10.5204/mcj.36.

Full text
Abstract:
A same-but-different dichotomy has recently been encapsulated within the US Food and Drug Administration’s ill-defined concept of “substantial equivalence” (USFDA, FDA). By invoking this concept the genetically modified organism (GMO) industry has escaped the rigors of safety testing that might otherwise apply. The curious concept of “substantial equivalence” grants a presumption of safety to GMO food. This presumption has yet to be earned, and has been used to constrain labelling of both GMO and non-GMO food. It is an idea that well serves corporatism. It enables the claim of difference to secure patent protection, while upholding the contrary claim of sameness to avoid labelling and safety scrutiny. It offers the best of both worlds for corporate food entrepreneurs, and delivers the worst of both worlds to consumers. The term “substantial equivalence” has established its currency within the GMO discourse. As the opportunities for patenting food technologies expand, the GMO recruitment of this concept will likely be a dress rehearsal for the developing debates on the labelling and testing of other techno-foods – including nano-foods and clone-foods. “Substantial Equivalence” “Are the Seven Commandments the same as they used to be, Benjamin?” asks Clover in George Orwell’s “Animal Farm”. By way of response, Benjamin “read out to her what was written on the wall. There was nothing there now except a single Commandment. It ran: ALL ANIMALS ARE EQUAL BUT SOME ANIMALS ARE MORE EQUAL THAN OTHERS”. After this reductionist revelation, further novel and curious events at Manor Farm, “did not seem strange” (Orwell, ch. X). Equality is a concept at the very core of mathematics, but beyond the domain of logic, equality becomes a hotly contested notion – and the domain of food is no exception. A novel food has a regulatory advantage if it can claim to be the same as an established food – a food that has proven its worth over centuries, perhaps even millennia – and thus does not trigger new, perhaps costly and onerous, testing, compliance, and even new and burdensome regulations. On the other hand, such a novel food has an intellectual property (IP) advantage only in terms of its difference. And thus there is an entrenched dissonance for newly technologised foods, between claiming sameness, and claiming difference. The same/different dilemma is erased, so some would have it, by appeal to the curious new dualist doctrine of “substantial equivalence” whereby sameness and difference are claimed simultaneously, thereby creating a win/win for corporatism, and a loss/loss for consumerism. This ground has been pioneered, and to some extent conquered, by the GMO industry. The conquest has ramifications for other cryptic food technologies, that is technologies that are invisible to the consumer and that are not evident to the consumer other than via labelling. Cryptic technologies pertaining to food include GMOs, pesticides, hormone treatments, irradiation and, most recently, manufactured nano-particles introduced into the food production and delivery stream. Genetic modification of plants was reported as early as 1984 by Horsch et al. The case of Diamond v. Chakrabarty resulted in a US Supreme Court decision that upheld the prior decision of the US Court of Customs and Patent Appeal that “the fact that micro-organisms are alive is without legal significance for purposes of the patent law”, and ruled that the “respondent’s micro-organism plainly qualifies as patentable subject matter”. 
This was a majority decision, with four of the nine judges dissenting (Burger). It was this Chakrabarty judgement that has seriously opened the Pandora’s box of GMOs because patenting rights make GMOs an attractive corporate proposition by offering potentially unique monopoly rights over food. The rear guard action against GMOs has most often focussed on health repercussions (Smith, Genetic), food security issues, and also the potential for corporate malfeasance to hide behind a cloak of secrecy citing commercial confidentiality (Smith, Seeds). Others have tilted at the foundational plank on which the economics of the GMO industry sits: “I suggest that the main concern is that we do not want a single molecule of anything we eat to contribute to, or be patented and owned by, a reckless, ruthless chemical organisation” (Grist 22). The GMO industry exhibits bipolar behaviour, invoking the concept of “substantial difference” to claim patent rights by way of “novelty”, and then claiming “substantial equivalence” when dealing with other regulatory authorities including food, drug and pesticide agencies; a case of “having their cake and eating it too” (Engdahl 8). This is a clever sleight-of-rhetoric, laying claim to the best of both worlds for corporations, and the worst of both worlds for consumers. Corporations achieve patent protection and no concomitant specific regulatory oversight, while consumers pay the cost of patent monopolization, and are not necessarily apprised, by way of labelling or otherwise, that they are purchasing and eating GMOs, and thereby financing the GMO industry. The lemma of “substantial equivalence” does not bear close scrutiny. It is a fuzzy concept that lacks a tight testable definition. It is exactly this fuzziness that allows lots of wriggle room to keep GMOs out of rigorous testing regimes. Millstone et al. argue that “substantial equivalence is a pseudo-scientific concept because it is a commercial and political judgement masquerading as if it is scientific. It is moreover, inherently anti-scientific because it was created primarily to provide an excuse for not requiring biochemical or toxicological tests. It therefore serves to discourage and inhibit informative scientific research” (526). “Substantial equivalence” grants GMOs the benefit of the doubt regarding safety, and thereby leaves unexamined the ramifications for human consumer health, for farm labourer and food-processor health, for the welfare of farm animals fed a diet of GMO grain, and for the well-being of the ecosystem, both in general and in its particularities. “Substantial equivalence” was introduced into the food discourse by an Organisation for Economic Co-operation and Development (OECD) report: “safety evaluation of foods derived by modern biotechnology: concepts and principles”. It is from this document that the ongoing mantra of assumed safety of GMOs derives: “modern biotechnology … does not inherently lead to foods that are less safe … . Therefore evaluation of foods and food components obtained from organisms developed by the application of the newer techniques does not necessitate a fundamental change in established principles, nor does it require a different standard of safety” (OECD, “Safety” 10). This was at the time, and remains, an act of faith, a pro-corporatist and a post-cautionary approach. The OECD motto reveals where their priorities lean: “for a better world economy” (OECD, “Better”). 
The term “substantial equivalence” was preceded by the 1992 USFDA concept of “substantial similarity” (Levidow, Murphy and Carr) and was adopted from a prior usage by the US Food and Drug Agency (USFDA) where it was used pertaining to medical devices (Miller). Even GMO proponents accept that “Substantial equivalence is not intended to be a scientific formulation; it is a conceptual tool for food producers and government regulators” (Miller 1043). And there’s the rub – there is no scientific definition of “substantial equivalence”, no scientific test of proof of concept, and nor is there likely to be, since this is a ‘spinmeister’ term. And yet this is the cornerstone on which rests the presumption of safety of GMOs. Absence of evidence is taken to be evidence of absence. History suggests that this is a fraught presumption. By way of contrast, the patenting of GMOs depends on the antithesis of assumed ‘sameness’. Patenting rests on proven, scrutinised, challengeable and robust tests of difference and novelty. Lightfoot et al. report that transgenic plants exhibit “unexpected changes [that] challenge the usual assumptions of GMO equivalence and suggest genomic, proteomic and metanomic characterization of transgenics is advisable” (1). GMO Milk and Contested Labelling Pesticide company Monsanto markets the genetically engineered hormone rBST (recombinant Bovine Somatotropin; also known as: rbST; rBGH, recombinant Bovine Growth Hormone; and the brand name Prosilac) to dairy farmers who inject it into their cows to increase milk production. This product is not approved for use in many jurisdictions, including Europe, Australia, New Zealand, Canada and Japan. Even Monsanto accepts that rBST leads to mastitis (inflammation and pus in the udder) and other “cow health problems”, however, it maintains that “these problems did not occur at rates that would prohibit the use of Prosilac” (Monsanto). A European Union study identified an extensive list of health concerns of rBST use (European Commission). The US Dairy Export Council however entertain no doubt. In their background document they ask “is milk from cows treated with rBST safe?” and answer “Absolutely” (USDEC). Meanwhile, Monsanto’s website raises and answers the question: “Is the milk from cows treated with rbST any different from milk from untreated cows? No” (Monsanto). Injecting cows with genetically modified hormones to boost their milk production remains a contested practice, banned in many countries. It is the claimed equivalence that has kept consumers of US dairy products in the dark, shielded rBST dairy farmers from having to declare that their milk production is GMO-enhanced, and has inhibited non-GMO producers from declaring their milk as non-GMO, non rBST, or not hormone enhanced. This is a battle that has simmered, and sometimes raged, for a decade in the US. Finally there is a modest victory for consumers: the Pennsylvania Department of Agriculture (PDA) requires all labels used on milk products to be approved in advance by the department. The standard issued in October 2007 (PDA, “Standards”) signalled to producers that any milk labels claiming rBST-free status would be rejected. This advice was rescinded in January 2008 with new, specific, department-approved textual constructions allowed, and ensuring that any “no rBST” style claim was paired with a PDA-prescribed disclaimer (PDA, “Revised Standards”). 
However, parsimonious labelling is prohibited: No labeling may contain references such as ‘No Hormones’, ‘Hormone Free’, ‘Free of Hormones’, ‘No BST’, ‘Free of BST’, ‘BST Free’,’No added BST’, or any statement which indicates, implies or could be construed to mean that no natural bovine somatotropin (BST) or synthetic bovine somatotropin (rBST) are contained in or added to the product. (PDA, “Revised Standards” 3) Difference claims are prohibited: In no instance shall any label state or imply that milk from cows not treated with recombinant bovine somatotropin (rBST, rbST, RBST or rbst) differs in composition from milk or products made with milk from treated cows, or that rBST is not contained in or added to the product. If a product is represented as, or intended to be represented to consumers as, containing or produced from milk from cows not treated with rBST any labeling information must convey only a difference in farming practices or dairy herd management methods. (PDA, “Revised Standards” 3) The PDA-approved labelling text for non-GMO dairy farmers is specified as follows: ‘From cows not treated with rBST. No significant difference has been shown between milk derived from rBST-treated and non-rBST-treated cows’ or a substantial equivalent. Hereinafter, the first sentence shall be referred to as the ‘Claim’, and the second sentence shall be referred to as the ‘Disclaimer’. (PDA, “Revised Standards” 4) It is onto the non-GMO dairy farmer alone, that the costs of compliance fall. These costs include label preparation and approval, proving non-usage of GMOs, and of creating and maintaining an audit trail. In nearby Ohio a similar consumer versus corporatist pantomime is playing out. This time with the Ohio Department of Agriculture (ODA) calling the shots, and again serving the GMO industry. The ODA prescribed text allowed to non-GMO dairy farmers is “from cows not supplemented with rbST” and this is to be conjoined with the mandatory disclaimer “no significant difference has been shown between milk derived from rbST-supplemented and non-rbST supplemented cows” (Curet). These are “emergency rules”: they apply for 90 days, and are proposed as permanent. Once again, the onus is on the non-GMO dairy farmers to document and prove their claims. GMO dairy farmers face no such governmental requirements, including no disclosure requirement, and thus an asymmetric regulatory impost is placed on the non-GMO farmer which opens up new opportunities for administrative demands and technocratic harassment. Levidow et al. argue, somewhat Eurocentrically, that from its 1990s adoption “as the basis for a harmonized science-based approach to risk assessment” (26) the concept of “substantial equivalence” has “been recast in at least three ways” (58). It is true that the GMO debate has evolved differently in the US and Europe, and with other jurisdictions usually adopting intermediate positions, yet the concept persists. Levidow et al. nominate their three recastings as: firstly an “implicit redefinition” by the appending of “extra phrases in official documents”; secondly, “it has been reinterpreted, as risk assessment processes have … required more evidence of safety than before, especially in Europe”; and thirdly, “it has been demoted in the European Union regulatory procedures so that it can no longer be used to justify the claim that a risk assessment is unnecessary” (58). Romeis et al. have proposed a decision tree approach to GMO risks based on cascading tiers of risk assessment. 
However what remains is that the defects of the concept of “substantial equivalence” persist. Schauzu identified that: such decisions are a matter of “opinion”; that there is “no clear definition of the term ‘substantial’”; that because genetic modification “is aimed at introducing new traits into organisms, the result will always be a different combination of genes and proteins”; and that “there is no general checklist that could be followed by those who are responsible for allowing a product to be placed on the market” (2). Benchmark for Further Food Novelties? The discourse, contestation, and debate about “substantial equivalence” have largely focussed on the introduction of GMOs into food production processes. GM can best be regarded as the test case, and proof of concept, for establishing “substantial equivalence” as a benchmark for evaluating new and forthcoming food technologies. This is of concern, because the concept of “substantial equivalence” is scientific hokum, and yet its persistence, even entrenchment, within regulatory agencies may be a harbinger of forthcoming same-but-different debates for nanotechnology and other future bioengineering. The appeal of “substantial equivalence” has been a brake on the creation of GMO-specific regulations and on rigorous GMO testing. The food nanotechnology industry can be expected to look to the precedent of the GMO debate to head off specific nano-regulations and nano-testing. As cloning becomes economically viable, then this may be another wave of food innovation that muddies the regulatory waters with the confused – and ultimately self-contradictory – concept of “substantial equivalence”. Nanotechnology engineers particles in the size range 1 to 100 nanometres – a nanometre is one billionth of a metre. This is interesting for manufacturers because at this size chemicals behave differently, or as the Australian Office of Nanotechnology expresses it, “new functionalities are obtained” (AON). Globally, government expenditure on nanotechnology research reached US$4.6 billion in 2006 (Roco 3.12). While there are now many patents (ETC Group; Roco), regulation specific to nanoparticles is lacking (Bowman and Hodge; Miller and Senjen). The USFDA advises that nano-manufacturers “must show a reasonable assurance of safety … or substantial equivalence” (FDA). A recent inventory of nano-products already on the market identified 580 products. Of these 11.4% were categorised as “Food and Beverage” (WWICS). This is at a time when public confidence in regulatory bodies is declining (HRA). In an Australian consumer survey on nanotechnology, 65% of respondents indicated they were concerned about “unknown and long term side effects”, and 71% agreed that it is important “to know if products are made with nanotechnology” (MARS 22). Cloned animals are currently more expensive to produce than traditional animal progeny. In the course of 678 pages, the USFDA Animal Cloning: A Draft Risk Assessment has not a single mention of “substantial equivalence”. However the Federation of Animal Science Societies (FASS) in its single page “Statement in Support of USFDA’s Risk Assessment Conclusion That Food from Cloned Animals Is Safe for Human Consumption” states that “FASS endorses the use of this comparative evaluation process as the foundation of establishing substantial equivalence of any food being evaluated. 
It must be emphasized that it is the food product itself that should be the focus of the evaluation rather than the technology used to generate cloned animals” (FASS 1).

Contrary to the FASS derogation of the importance of process in food production, for consumers both the process and provenance of production are an important and integral aspect of a food product’s value and identity. Some consumers will legitimately insist that their Kalamata olives are from Greece, or their balsamic vinegar is from Modena. It was the British public’s growing awareness that their sugar was being produced by slave labour that enabled the boycotting of the product, and ultimately the outlawing of slavery (Hochschild). When consumers boycott Nestle because of past or present marketing practices, or boycott produce of the USA because of, for example, US foreign policy or animal welfare concerns, they are distinguishing the food on the basis of its narrative: the production process and/or production context that are part of the identity of the food. Consumers attribute value to food based on production process and provenance information (Paull). Products produced by slave labour, by child labour, by political prisoners, or by means of torture, theft, or immoral, unethical or unsustainable practices are different from their alternatives. The process of production is a part of the identity of a product, and consumers are increasingly interested in food narrative. It requires vigilance to ensure that these narratives are delivered with the product to the consumer, and are neither lost nor suppressed.

Throughout the GM debate, the organic sector has successfully skirted the “substantial equivalence” debate by excluding GMOs from the certified organic food production process. This GMO exclusion from the organic food stream is the one reprieve available to consumers worldwide who are keen to avoid GMOs in their diet. The organic industry carries the expectation of providing food produced without artificial pesticides and fertilizers, and, by extension, without GMOs. Most recently, the Soil Association, the leading organic certifier in the UK, claims to be the first organisation in the world to exclude manufactured nanoparticles from its products (Soil Association). There have been calls for engineered nanoparticles to be excluded from organic standards worldwide, given that there is no mandatory safety testing and no compulsory labelling in place (Paull and Lyons).

The twisted rhetoric of oxymorons does not make the ideal foundation for policy. Setting food policy on the shifting sands of “substantial equivalence” seems foolhardy when we consider the potentially profound ramifications of globally mass-marketing a dysfunctional food. Consider a 2×2 matrix of terms: “substantial equivalence”, substantial difference, insubstantial equivalence, insubstantial difference. While only one corner of this matrix is engaged for food policy, and while its elements remain matters of opinion rather than being testable by science or by some other regime, the public is the dupe, and potentially the victim. “Substantial equivalence” has served the GMO corporates well and the public poorly, and this asymmetry is slated to escalate if nano-food and clone-food are also folded into the “substantial equivalence” paradigm. Only in Orwellian Newspeak is war peace, or is same different.
It is time to jettison the pseudo-scientific doctrine of “substantial equivalence”, as a convenient oxymoron, and embrace full disclosure of provenance, process and difference, so that consumers are not collateral in a continuing asymmetric knowledge war.

References

Australian Office of Nanotechnology (AON). Department of Industry, Tourism and Resources (DITR), 6 Aug. 2007. 24 Apr. 2008 <http://www.innovation.gov.au/Section/Innovation/Pages/AustralianOfficeofNanotechnology.aspx>.
Bowman, Diana, and Graeme Hodge. “A Small Matter of Regulation: An International Review of Nanotechnology Regulation.” Columbia Science and Technology Law Review 8 (2007): 1-32.
Burger, Warren. “Sidney A. Diamond, Commissioner of Patents and Trademarks v. Ananda M. Chakrabarty, et al.” Supreme Court of the United States, decided 16 June 1980. 24 Apr. 2008 <http://caselaw.lp.findlaw.com/cgi-bin/getcase.pl?court=US&vol=447&invol=303>.
Curet, Monique. “New Rules Allow Dairy-Product Labels to Include Hormone Info.” The Columbus Dispatch 7 Feb. 2008. 24 Apr. 2008 <http://www.dispatch.com/live/content/business/stories/2008/02/07/dairy.html>.
Engdahl, F. William. Seeds of Destruction. Montréal: Global Research, 2007.
ETC Group. Down on the Farm: The Impact of Nano-Scale Technologies on Food and Agriculture. Ottawa: Action Group on Erosion, Technology and Conservation, Nov. 2004.
European Commission. Report on Public Health Aspects of the Use of Bovine Somatotropin. Brussels: European Commission, 15-16 Mar. 1999.
Federation of Animal Science Societies (FASS). Statement in Support of FDA’s Risk Assessment Conclusion That Cloned Animals Are Safe for Human Consumption. 2007. 24 Apr. 2008 <http://www.fass.org/page.asp?pageID=191>.
Grist, Stuart. “True Threats to Reason.” New Scientist 197.2643 (16 Feb. 2008): 22-23.
Hochschild, Adam. Bury the Chains: The British Struggle to Abolish Slavery. London: Pan Books, 2006.
Horsch, Robert, Robert Fraley, Stephen Rogers, Patricia Sanders, Alan Lloyd, and Nancy Hoffman. “Inheritance of Functional Foreign Genes in Plants.” Science 223 (1984): 496-498.
HRA. Awareness of and Attitudes toward Nanotechnology and Federal Regulatory Agencies: A Report of Findings. Washington: Peter D. Hart Research Associates, 25 Sep. 2007.
Levidow, Les, Joseph Murphy, and Susan Carr. “Recasting ‘Substantial Equivalence’: Transatlantic Governance of GM Food.” Science, Technology, and Human Values 32.1 (Jan. 2007): 26-64.
Lightfoot, David, Rajsree Mungur, Rafiqa Ameziane, Anthony Glass, and Karen Berhard. “Transgenic Manipulation of C and N Metabolism: Stretching the GMO Equivalence.” American Society of Plant Biologists Conference: Plant Biology, 2000.
MARS. “Final Report: Australian Community Attitudes Held about Nanotechnology – Trends 2005-2007.” Report prepared for the Department of Industry, Tourism and Resources (DITR). Miranda, NSW: Market Attitude Research Services, 12 June 2007.
Miller, Georgia, and Rye Senjen. “Out of the Laboratory and on to Our Plates: Nanotechnology in Food and Agriculture.” Friends of the Earth, 2008. 24 Apr. 2008 <http://nano.foe.org.au/node/220>.
Miller, Henry. “Substantial Equivalence: Its Uses and Abuses.” Nature Biotechnology 17 (7 Nov. 1999): 1042-1043.
Millstone, Erik, Eric Brunner, and Sue Mayer. “Beyond ‘Substantial Equivalence’.” Nature 401 (7 Oct. 1999): 525-526.
Monsanto. “Posilac, Bovine Somatotropin by Monsanto: Questions and Answers about bST from the United States Food and Drug Administration.” 2007. 24 Apr. 2008 <http://www.monsantodairy.com/faqs/fda_safety.html>.
Organisation for Economic Co-operation and Development (OECD). “For a Better World Economy.” Paris: OECD, 2008. 24 Apr. 2008 <http://www.oecd.org/>.
———. “Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles.” Paris: OECD, 1993.
Orwell, George. Animal Farm. Adelaide: ebooks@Adelaide, 2004 (1945). 30 Apr. 2008 <http://ebooks.adelaide.edu.au/o/orwell/george>.
Paull, John. “Provenance, Purity and Price Premiums: Consumer Valuations of Organic and Place-of-Origin Food Labelling.” Research Masters thesis, University of Tasmania, Hobart, 2006. 24 Apr. 2008 <http://eprints.utas.edu.au/690/>.
Paull, John, and Kristen Lyons. “Nanotechnology: The Next Challenge for Organics.” Journal of Organic Systems (in press).
Pennsylvania Department of Agriculture (PDA). “Revised Standards and Procedure for Approval of Proposed Labeling of Fluid Milk.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 17 Jan. 2008.
———. “Standards and Procedure for Approval of Proposed Labeling of Fluid Milk, Milk Products and Manufactured Dairy Products.” Milk Labeling Standards (2.0.1.17.08). Bureau of Food Safety and Laboratory Services, Pennsylvania Department of Agriculture, 22 Oct. 2007.
Roco, Mihail. “National Nanotechnology Initiative – Past, Present, Future.” In William Goddard, Donald Brenner, Sergy Lyshevski and Gerald Iafrate, eds. Handbook of Nanoscience, Engineering and Technology. 2nd ed. Boca Raton, FL: CRC Press, 2007.
Romeis, Jorg, Detlef Bartsch, Franz Bigler, Marco Candolfi, Marco Gielkins, et al. “Assessment of Risk of Insect-Resistant Transgenic Crops to Nontarget Arthropods.” Nature Biotechnology 26.2 (Feb. 2008): 203-208.
Schauzu, Marianna. “The Concept of Substantial Equivalence in Safety Assessment of Food Derived from Genetically Modified Organisms.” AgBiotechNet 2 (Apr. 2000): 1-4.
Soil Association. “Soil Association First Organisation in the World to Ban Nanoparticles – Potentially Toxic Beauty Products That Get Right under Your Skin.” London: Soil Association, 17 Jan. 2008. 24 Apr. 2008 <http://www.soilassociation.org/web/sa/saweb.nsf/848d689047cb466780256a6b00298980/42308d944a3088a6802573d100351790!OpenDocument>.
Smith, Jeffrey. Genetic Roulette: The Documented Health Risks of Genetically Engineered Foods. Fairfield, Iowa: Yes! Books, 2007.
———. Seeds of Deception. Melbourne: Scribe, 2004.
U.S. Dairy Export Council (USDEC). Bovine Somatotropin (BST) Backgrounder. Arlington, VA: U.S. Dairy Export Council, 2006.
U.S. Food and Drug Administration (USFDA). Animal Cloning: A Draft Risk Assessment. Rockville, MD: Center for Veterinary Medicine, U.S. Food and Drug Administration, 28 Dec. 2006.
———. FDA and Nanotechnology Products. U.S. Department of Health and Human Services, U.S. Food and Drug Administration, 2008. 24 Apr. 2008 <http://www.fda.gov/nanotechnology/faqs.html>.
Woodrow Wilson International Center for Scholars (WWICS). “A Nanotechnology Consumer Products Inventory.” Data set as at Sep. 2007. Woodrow Wilson International Center for Scholars, Project on Emerging Technologies, Sep. 2007. 24 Apr. 2008 <http://www.nanotechproject.org/inventories/consumer>.