Academic literature on the topic 'Gutenberg Diagram'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Gutenberg Diagram.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Gutenberg Diagram"

1

Newman, W. I., and D. L. Turcotte. "A simple model for the earthquake cycle combining self-organized complexity with critical point behavior." Nonlinear Processes in Geophysics 9, no. 5/6 (2002): 453–61. http://dx.doi.org/10.5194/npg-9-453-2002.

Full text
Abstract:
We have studied a hybrid model combining the forest-fire model with the site-percolation model in order to better understand the earthquake cycle. We consider a square array of sites. At each time step, a "tree" is dropped on a randomly chosen site and is planted if the site is unoccupied. When a cluster of "trees" spans the array (a percolating cluster), all the trees in the cluster are removed ("burned") in a "fire." The removal of the cluster is analogous to a characteristic earthquake and planting "trees" is analogous to increasing the regional stress. The clusters are analogous to the metastable regions of a fault over which an earthquake rupture can propagate once triggered. We find that the frequency-area statistics of the metastable regions are power-law with a negative exponent of two (as in the forest-fire model). This is analogous to the Gutenberg-Richter distribution of seismicity. This "self-organized critical behavior" can be explained in terms of an inverse cascade of clusters. Small clusters of "trees" coalesce to form larger clusters. Individual trees move from small to larger clusters until they are destroyed. This inverse cascade of clusters is self-similar and the power-law distribution of cluster sizes has been shown to have an exponent of two. We have quantified the forecasting of the spanning fires using error diagrams. The assumption that "fires" (earthquakes) are quasi-periodic has moderate predictability. The density of trees gives an improved degree of predictability, while the size of the largest cluster of trees provides a substantial improvement in forecasting a "fire."
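The abstract describes a concrete lattice algorithm: drop a "tree" on a random site, plant it if the site is unoccupied, and "burn" any cluster that spans the array. A minimal sketch of that dynamic follows; it is not the authors' code, and the spanning criterion (the cluster touches both the left and right edges) is an assumption, as the abstract does not state which edges must be connected.

```python
import random
from collections import deque

def neighbors(r, c, n):
    """Yield the 4-connected neighbors of (r, c) inside an n x n grid."""
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < n and 0 <= cc < n:
            yield rr, cc

def cluster_of(grid, start, n):
    """BFS over occupied sites to collect the cluster containing `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for rr, cc in neighbors(r, c, n):
            if grid[rr][cc] and (rr, cc) not in seen:
                seen.add((rr, cc))
                queue.append((rr, cc))
    return seen

def spans(cluster, n):
    """Assumed spanning rule: the cluster touches both the left and right edges."""
    cols = {c for _, c in cluster}
    return 0 in cols and n - 1 in cols

def run(n=20, steps=5000, seed=0):
    """Drop trees at random; burn (and record) every spanning cluster."""
    rng = random.Random(seed)
    grid = [[False] * n for _ in range(n)]
    fire_sizes = []  # sizes of burned spanning clusters ("earthquakes")
    for _ in range(steps):
        r, c = rng.randrange(n), rng.randrange(n)
        if grid[r][c]:
            continue  # a tree dropped on an occupied site is discarded
        grid[r][c] = True
        cluster = cluster_of(grid, (r, c), n)
        if spans(cluster, n):
            for rr, cc in cluster:  # "burn" the whole spanning cluster
                grid[rr][cc] = False
            fire_sizes.append(len(cluster))
    return fire_sizes
```

Collecting the burned-cluster sizes over a long run is what would let one examine the power-law frequency-area statistics (exponent of two) that the abstract reports.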
APA, Harvard, Vancouver, ISO, and other styles
2

Maras, Steven. "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes." M/C Journal 8, no. 2 (2005). http://dx.doi.org/10.5204/mcj.2338.

Full text
Abstract:

 
 
 In March 2002, I was visiting the University of Southern California. One night, as sometimes happens on a vibrant campus, two interesting but very different public lectures were scheduled against one another. The first was by the co-chairman and co-founder of Adobe Systems Inc., Dr. John E. Warnock, talking about books. The second was a lecture by acclaimed video artist Bill Viola. The first event was clearly designed as a networking forum for faculty and entrepreneurs. The general student population was conspicuously absent. Warnock spoke of the future of Adobe, shared stories of his love of books, and in an embodiment of the democratising potential of Adobe software (and no doubt to the horror of archivists in the room) he invited the audience to handle extremely rare copies of early printed works from his personal library. In the lecture theatre where Viola was to speak the atmosphere was different. Students were everywhere; even at the price of ten dollars a head. Viola spoke of time and memory in the information age, of consciousness and existence, to an enraptured audience—and showed his latest work. The juxtaposition of these two events says something about our cultural moment, caught between a paradigm modelled on reverence toward the page, and a still emergent sense of medium, intensity and experimentation. But, the juxtaposition yields more. At one point in Warnock’s speech, in a demonstration of the ultra-high resolution possible in the next generation of Adobe products, he presented a scan of a manuscript, two pages, two columns per page, overflowing with detail. Fig. 1. Dr John E. Warnock at the Annenberg Symposium. Photo courtesy of http://www.annenberg.edu/symposia/annenberg/2002/photos.php Later, in Viola’s presentation, a fragment of a video work, Silent Mountain (2001) splits the screen in two columns, matching Warnock’s text: inside each a human figure struggles with intense emotion, and the challenges of bridging the relational gap. Fig. 2. 
Images from Bill Viola, Silent Mountain (2001). From Bill Viola, THE PASSIONS. The J. Paul Getty Museum, Los Angeles in Association with The National Gallery, London. Ed. John Walsh. p. 44. Both events are, of course, lectures. And although they are different in style and content, a ‘columnular’ scheme informs and underpins both, as a way of presenting and illustrating the lecture. Here, it is worth thinking about Pierre de la Ramée or Petrus (Peter) Ramus (1515-1572), the 16th century educational reformer who in the words of Frances Yates ‘abolished memory as a part of rhetoric’ (229). Ramus was famous for transforming rhetoric through the introduction of his method or dialectic. For Walter J. Ong, whose discussion of Ramism we are indebted to here, Ramus produced the paradigm of the textbook genre. But it is his method that is more noteworthy for us here, organised through definitions and divisions, the distribution of parts, ‘presented in dichotomized outlines or charts that showed exactly how the material was organised spatially in itself and in the mind’ (Ong, Orality 134-135). Fig. 3. Ramus inspired study of Medicine. Ong, Ramus 301. Ong discusses Ramus in more detail in his book Ramus: Method, and the Decay of Dialogue. Elsewhere, Sutton, Benjamin, and I have tried to capture the sense of Ong’s argument, which goes something like the following. In Ramus, Ong traces the origins of our modern, diagrammatic understanding of argument and structure to the 16th century, and especially the work of Ramus. Ong’s interest in Ramus is not as a great philosopher, nor a great scholar—indeed Ong sees Ramus’s work as a triumph of mediocrity of sorts. Rather, his was a ‘reformation’ in method and pedagogy. The Ramist dialectic ‘represented a drive toward thinking not only of the universe but of thought itself in terms of spatial models apprehended by sight’ (Ong, Ramus 9). 
The world becomes thought of ‘as an assemblage of the sort of things which vision apprehends—objects or surfaces’. Ramus’s teachings and doctrines regarding ‘discoursing’ are distinctive for the way they draw on geometrical figures, diagrams or lecture outlines, and the organization of categories through dichotomies. This sets learning up on a visual paradigm of ‘study’ (Ong, Orality 8-9). Ramus introduces a new organization for discourse. Prior to Ramus, the rhetorical tradition maintained and privileged an auditory understanding of the production of content in speech. Central to this practice was deployment of the ‘seats’, ‘images’ and ‘common places’ (loci communes), stock arguments and structures that had accumulated through centuries of use (Ong, Orality 111). These common places were supported by a complex art of memory: techniques that nourished the practice of rhetoric. By contrast, Ramism sought to map the flow and structure of arguments in tables and diagrams. Localised memory, based on dividing and composing, became crucial (Yates 230). For Ramus, content was structured in a set of visible or sight-oriented relations on the page. Ramism transformed the conditions of visualisation. In our present age, where ‘content’ is supposedly ‘king’, an archaeology of content bears thinking about. In it, Ramism would have a prominent place. With Ramus, content could be mapped within a diagrammatic page-based understanding of meaning. A container understanding of content arises. ‘In the post-Gutenberg age where Ramism flourished, the term “content”, as applied to what is “in” literary productions, acquires a status which it had never known before’ (Ong, Ramus 313). ‘In lieu of merely telling the truth, books would now in common estimation “contain” the truth, like boxes’ (313). For Ramus, ‘analysis opened ideas like boxes’ (315). The Ramist move was, as Ong points out, about privileging the visual over the audible. 
Alongside the rise of the printing press and page-based approaches to the word, the Ramist revolution sought to re-work rhetoric according to a new scheme. Although spatial metaphors had always had a ‘place’ in the arts of memory—other systems were, however, phonetically based—the notion of place changed. Specific figures such as ‘scheme’, ‘plan’, and ‘table’, rose to prominence in the now-textualised imagination. ‘Structure’ became an abstract diagram on the page disconnected from the total performance of the rhetor. This brings us to another key aspect of the Ramist reformation: that alongside a spatialised organisation of thought Ramus re-works style as presentation and embellishment (Brummett 449). A kind of separation of conception and execution is introduced in relation to performance. In Ramus’ separation of reason and rhetoric, arrangement and memory are distinct from style and delivery (Brummett 464). While both dialectic and rhetoric are re-worked by Ramus in light of divisions and definitions (see Ong, Ramus Chs. XI-XII), and dialectic remains a ‘rhetorical instrument’ (Ramus 290), rhetoric becomes a unique site for simplification in the name of classroom practicality. Dialectic circumscribes the space of learning of rhetoric; invention and arrangement (positioning) occur in advance (289). Ong’s work on the technologisation of the word is strongly focused on identifying the impact of literacy on consciousness. What Ong’s work on Ramus shows is that alongside the so-called printing revolution the Ramist reformation enacts an equally if not more powerful transformation of pedagogic space. Any serious consideration of print must not only look at the technologisation of the word, and the shifting patterns of literacy produced alongside it, but also a particular tying together of pedagogy and method that Ong traces back to Ramus. 
If, as is canvassed in the call for papers of this issue of M/C Journal, ‘the transitions in print culture are uneven and incomplete at this point’, then could it be in part due to the way Ramism endures and is extended in electronic and hypermedia contexts? Powerpoint presentations, outlining tools (Heim 139-141), and the scourge of bullet points are the most obvious evidence of greater institutionalization of Ramist knowledge architecture. Communication, and the teaching of communication, is now embedded in a Ramist logic of opening up content like a box. Theories of communication draw on so-called ‘models’ that represent the communication process through boxes that divide and define. Perhaps in a less obvious way, ‘spatialized processes of thought and communication’ (Ong, Ramus 314) are essential to the logic of flowcharting and tracking new information structures, and even teaching hypertext (see the diagram in Nielsen 7): a link that puts the popular notion that hypertext is close to the way we truly think into an interesting perspective. The notion that we are embedded in print culture is not in itself new, even if the forms of our continual reintegration into print culture can be surprising. In the experience of printing, of the act of pressing the ‘Print’ button, we find ourselves re-integrated into page space. A mini-preview of the page re-assures me of an actuality behind the actualizations on the screen, of ink on paper. As I write in my word processing software, the removal of writing from the ‘element of inscription’ (Heim 136) — the frictionless ‘immediacy’ of the flow of text (152) — is conditioned by a representation called the ‘Page Layout’, the dark borders around the page signalling a kind of structured abyss, a no-go zone, a place, beyond ‘Normal’, from which there is no ‘Return’.
At the same time, however, never before has the technological manipulation of the document been so complex, a part of a docuverse that exists in three dimensions. It is a world that is increasingly virtualised by photocopiers that ‘scan to file’ or ‘scan to email’ rather than good old ‘xeroxing’ style copying. Printing gives way to scanning. In a perverse extension of printing (but also residually film and photography), some video software has a function called ‘Print to Video’. That these super-functions of scanning to file or email are disabled on my department photocopier says something about budgets, but also the comfort with which academics inhabit Ramist space. As I stand here printing my lecture plan, the printer stands defiantly separate from the photocopier, resisting its colonizing convergence even though it is dwarfed in size. Meanwhile, the printer demurely dispenses pages, one at a time, face down, in a gesture of discretion or perhaps embarrassment. For in the focus on the pristine page there is a Puritanism surrounding printing: a morality of blemishes, smudges, and stains; of structure, format and order; and a failure to match that immaculate, perfect argument or totality. (Ong suggests that ‘the term “method” was appropriated from the Ramist coffers and used to form the term “methodists” to designate first enthusiastic preachers who made an issue of their adherence to “logic”’ (Ramus 304).) But perhaps this avoidance of multi-functionality is less of a Ludditism than an understanding that the technological assemblage of printing today exists peripherally to the ideality of the Ramist scheme. A change in technological means does not necessarily challenge the visile language that informs our very understanding of our respective ‘fields’, or the ideals of competency embodied in academic performance and expression, or the notions of content we adopt. This is why I would argue some consideration of Ramism and print culture is crucial. 
Any ‘true’ breaking out of print involves, as I suggest, a challenge to some fundamental principles of pedagogy and method, and the link between the two. And of course, the very prospect of breaking out of print raises the issue of its desirability at a time when these forms of academic performance are culturally valued. On the surface, academic culture has been a strange inheritor of the Ramist legacy, radically furthering its ambitions, but also it would seem strongly tempering it with an investment in orality, and other ideas of performance, that resist submission to the Ramist ideal. Ong is pessimistic here, however. Ramism was after all born as a pedagogic movement, central to the purveying of ‘knowledge as a commodity’ (Ong, Ramus 306). Academic discourse remains an odd mixture of ‘dialogue in the give-and-take Socratic form’ and the scheduled lecture (151). The scholastic dispute is at best a ‘manifestation of concern with real dialogue’ (154). As Ong notes, the ideals of dialogue have been difficult to sustain, and the dominant practice leans towards ‘the visile pole with its typical ideals of “clarity”, “precision”, “distinctness”, and “explanation” itself—all best conceivable in terms of some analogy with vision and a spatial field’ (151). Assessing the importance and after-effects of the Ramist reformation today is difficult. Ong describes it as an ‘elusive study’ (Ramus 296). Perhaps Viola’s video, with its figures struggling in a column-like organization of space, structured in a kind of dichotomy, can be read as a glimpse of our existence in or under a Ramist scheme (interestingly, from memory, these figures emote in silence, deprived of auditory expression).
My own view is that while it is possible to explore learning environments in a range of ways, and thus move beyond the enclosed mode of study of Ramism, Ramism nevertheless comprises an important default architecture of pedagogy that also informs some higher-level assumptions about assessment and knowledge of the field. Software training, based on a process of working through or mimicking a linked series of screenshots and commands, is a direct inheritor of what Ong calls Ramism’s ‘corpuscular epistemology’, a ‘one to one correspondence between concept, word and referent’ (Ong, Orality 168). My lecture plan, providing an at-a-glance view of my presentation, is another. The default architecture of the Ramist scheme impacts on our organisation of knowledge, and the place of performance within it. Perhaps this is another area where Ong’s fascinating account of secondary orality—that orality that comes into being with television and radio—becomes important (Orality 136). Not only does secondary orality enable group-mindedness and communal exchange, it also provides a way to resist the closure of print and the Ramist scheme, adapting knowledge to new environments and story frameworks. Ong’s work in Orality and Literacy could thus usefully be taken up to discuss Ramism. But this raises another issue, which has to do with the relationship between Ong’s two books. In Orality and Literacy, Ong is careful to trace distinctions between oral, chirographic, manuscript, and print culture. In Ramus this progression is not as prominent—partly because Ong is tracking Ramus’ numerous influences in detail—and we find a more clear-cut distinction between the visile and audile worlds. Yates seems to support this observation, suggesting contra Ong that it is not the connection between Ramus and print that is important, but between Ramus and manuscript culture (230).
The interconnections but also lack of fit between the two books suggest a range of fascinating questions about the impact of Ramism across different media/technological contexts, beyond print, but also the status of visualisation in both rhetorical and print cultures. References Brummett, Barry. Reading Rhetorical Theory. Fort Worth: Harcourt, 2000. Heim, Michael. Electric Language: A Philosophical Study of Word Processing. New Haven: Yale UP, 1987. Maras, Steven, David Sutton, and Marion Benjamin. “Multimedia Communication: An Interdisciplinary Approach.” Information Technology, Education and Society 2.1 (2001): 25-49. Nielsen, Jakob. Multimedia and Hypertext: The Internet and Beyond. Boston: AP Professional, 1995. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982. —. Ramus: Method, and the Decay of Dialogue. New York: Octagon, 1974. The Second Annual Walter H. Annenberg Symposium. 20 March 2002. http://www.annenberg.edu/symposia/annenberg/2002/photos.php. USC Annenberg Center of Communication and USC Annenberg School for Communication. 22 March 2005. Viola, Bill. Bill Viola: The Passions. Ed. John Walsh. London: The J. Paul Getty Museum, Los Angeles in Association with The National Gallery, 2003. Yates, Frances A. The Art of Memory. Harmondsworth: Penguin, 1969.
 
 
 
 Citation reference for this article
 
 MLA Style
Maras, Steven. "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes." M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/05-maras.php>.

 APA Style
 Maras, S. (Jun. 2005) "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes," M/C Journal, 8(2). Retrieved from <http://journal.media-culture.org.au/0506/05-maras.php>.
3

Raven, Francis. "Copyright and Public Goods." M/C Journal 8, no. 3 (2005). http://dx.doi.org/10.5204/mcj.2366.

Full text
Abstract:

 
 
 The U.S. Constitution charges Congress with promoting ‘the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.’ This is achieved through copyrights. The most common argument in favour of the distribution of exclusive copyrights is that they provide an incentive for artists and scientists to create their works. But, as I will show, the characteristics of intellectual objects (objects that can be copyrighted) can support the contradictory arguments that one, exclusive copyrights are necessary and two, that they should not exist at all. I conclude that the appropriate amount of copyright protection protects the incentive for producers to create while also defending the public’s right to a rich intellectual realm. This is sometimes termed ‘thin’ copyright protection. Thin copyright protection is far weaker than the current copyright regime. For instance, the Sonny Bono Copyright Extension Act of 1998 extended copyright protection to the life of the author plus 70 years, and in the case of works created by corporate entities the act extended protection to 95 years. This is a far cry from copyright’s original duration of 14 years (plus one possible renewal). It would be difficult to argue that these extensions provide any extra incentive for authors to create, while on the other hand they surely attack the public’s right to a robust intellectual realm. Therefore, the current copyright regime needs to be substantially weakened to a ‘thin’ level.
 
 To avoid confusion, I will call works that have the potential to be copyrighted ‘intellectual objects’ before they are copyrighted and use the term ‘copyrighted works’ for intellectual objects after they are copyrighted. Intellectual objects, however, are not objects in the ordinary sense of the word. The particular edition of a book (and the particular copy of a particular edition of a book) is not an intellectual object. It is merely a manifestation (or instance) of the intellectual object. The work, in the broadest sense, is the actual intellectual object. In other words, the manifestation of the work is not the intellectual object, but the work itself is. An individual book is an instantiation of the work, which is the actual intellectual object. Without delving too far into the ontology of artworks it is necessary for this discussion only to see that when talking about intellectual objects we are not talking about physical objects but about objects that can be instantiated in many locations. That is, intellectual objects can be reproduced without losing their intellectual value.
 
 Copyright discussions often begin with the incentive argument mentioned above. An incentive is needed to foster innovation because intellectual objects are non-rivalrous (with regards to consumption) and non-excludable before they are copyrighted. A non-rivalrous good is a good for which enjoyment of it by some agents does not diminish available opportunities for others to enjoy it as well. A non-excludable good, on the other hand, is a good for which it is not possible to prevent individuals (who do not own the good) from consuming it or partaking of the benefits of it (at a relatively low cost). Since intellectual objects are non-rivalrous and non-excludable there is good reason to believe that without copyright protection authors would reduce their production of intellectual objects. This is because without this protection there would be (arguably) no way for authors to receive compensation for their work and to recoup the costs that went into producing the intellectual object at hand.
 
 The fact that intellectual objects are non-rivalrous means that there is no reason why you and I cannot read the same book at the same time. My reading the same work that you are reading (as opposed to reading the same manifestation of the intellectual object) does not decrease your enjoyment in reading that book. That is, the fact that we are both reading Moby Dick in the same period of time does not diminish either of our utilities. This should be contrasted with rivalrous private goods. Take, for instance, a bag of potato chips that you have just bought from your local grocery store. If I eat all of your chips you can no longer derive pleasure from them and if you eat the chips I cannot derive pleasure from them. Rivalrous goods are marked by this relationship. One person’s full enjoyment of such a good disallows another person’s full enjoyment of a rivalrous good. Edwin Hettinger aptly explains the concept of non-rivalrousness in his essay ‘Justifying Intellectual Property’ by writing that intellectual objects are goods which ‘are not consumed by their use’ (34). 
 
 Purely non-excludable goods are goods for which there is no way for one person to exclude another from their use or consumption. An example of a purely non-excludable good is the air. It is absolutely impossible for one to exclude another person from breathing the air (except perhaps by killing them). Yet, intellectual objects are not purely non-excludable but relatively non-excludable. This ‘relative’ non-excludability arises from the fact that a person can exclude another from the physical instantiation of an intellectual object s/he owns (where s/he owns the physical instantiation and not the intellectual object). That is, s/he can prevent another person from taking his/her copy of The Corrections. But s/he cannot exclude another from the intellectual object instantiated in the book. This is because a person’s copy of The Corrections is, in many ways, a piece of physical property and not of intellectual property. What I am concerned with here is intellectual property and thus with intellectual objects (what are later the copyrighted works). Copyrighted works are legally excludable, but it is still difficult to restrict their distribution. This means that they are quasi-non-excludable. 
 
That intellectual objects are non-rivalrous and non-excludable leads to two contradictory conclusions. The first argues that there is a very good justification for having strong copyright laws; namely, that without strong copyright laws, works that originally had great value will be copied by unauthorised entities who will sell the copied works for very little and will give none of it back to the author of the work. This means that the author will eventually have no financial incentive to create his/her works. However, these attributes of intellectual objects also mean that there is a very good reason for having weak (or thin) copyrights (or no copyrights at all), since there is no reason why each person should not be able to possess all of the great works for a very cheap price (which having weak or short copyrights would ensure). This is especially true given the fact that the entire reasoning for having copyrights at all (in this line of argument) is to ensure the progress of science and the arts which presumably are meant to belong to every citizen of the United States. The first branch of this tension could be called the producer’s conclusion and the second could be called the consumer’s conclusion. If we believe the first conclusion we will have to side with producers over consumers, whereas if we believe the second we will have to side with consumers over producers.
 
 These contradictory results both follow from the fact that intellectual works are non-rivalrous and non-excludable. Since they are non-rivalrous and non-excludable there is every reason to leave them that way (that is, not to have copyrights) as it benefits the public but for the same reason there is every reason to have strong copyrights so that authors will create intellectual works in the future. Hettinger notes that the justification for copyright at this level is paradoxical. ‘It establishes a right to restrict the current availability and use of intellectual products for the purpose of increasing the production and thus future availability and use of new intellectual products’ (48). That is, the logic is that you’ll get more intellectual objects if you limit the current availability of intellectual objects.
 
Law Professor and copyright specialist Paul Goldstein summarises this argument in his book Copyright’s Highway when he writes, ‘since copyright allows creators and publishers of literary and artistic works to charge a price for gaining access to these works, the inescapable effect is to withhold the work from people who will not or cannot pay that price, even though giving them free access would harm no one else’ (176). But this is only one side of the tension. To elucidate the other side, to which Goldstein subscribes, he writes that ‘if society withholds property rights from creative work, the price that its producers can charge for access to it will begin to approach zero; their revenues will diminish and, with them, their incentives to produce more’ (177). So we are left with this tension that must be duly dealt with by policy makers.
 
 In light of the tension we should measure copyright protection by both of its poles. These poles correspond in the first case to the author’s rights and in the second to the consumer’s rights. The best copyright protection will accept what both sets of rights demand to the extent that it can, but when it cannot it will side with the user since the set of users more or less corresponds to the public at large. (We are all users of intellectual objects but are not all authors of them.) What this means for enacting copyright policies is that copyright protection should exist, but it should exist no more than is necessary for promoting the arts and sciences. That is, copyrights should be seen as incentives to create, not property rights. The fact that there are incentives will please authors and the fact that they are limited (through broad fair use exemptions, a healthy distinction between ideas and expressions, and having copyright protections for a relatively short period of time) will please users. All in all this is the best way of seeing our way through the tension at the heart of copyright law. 
 
 In terms of the enactment of the law, copyright laws should be limited in duration and scope. First, copyright protection should not last for 70 years plus the life of the author, which is too long to justify in terms of providing an incentive for authors to create. Second, fair use provisions for copying parts of works should be broadened and minor infractions (such as private copying, regardless of the difficulties in defining what ‘private’ means) should not be prosecuted since small amounts of copying do not encroach on the effectiveness of the incentive for authors to create. Third, the idea/expression distinction should be strongly and vigorously maintained. While all of these changes appear on the surface to be siding with the public over authors, the fact that copyright protection exists at all is obviously to the author’s advantage. Thus, these changes constitute a copyright regime that is more beneficial to all, authors and public included.
 
 References
 
Bell, Tom W. “Diagram of ‘The Paths of Intellectual Property’.” http://www.tomwbell.com/teaching/Prop_Paths.pdf. Goldstein, Paul. Copyright’s Highway: From Gutenberg to the Celestial Jukebox. Rev. ed. Stanford, Calif.: Stanford UP, 2003. Hettinger, Edwin. “Justifying Intellectual Property.” Philosophy and Public Affairs 18 (1989): 31-52. Morgan, Scott. “Columbus Farmers Market Contemplates Countersuing Recording Industry Association of America, Which Is Suing Market over Pirated Music.” Packet Online 16 Oct. 2003. http://www.zwire.com/site/news.cfm?BRD=1091&dept_id=425707&newsid=10328460&PAG=461&rfi=9. Samuels, Edward. “The Idea-Expression Dichotomy in Copyright Law.” 56 Tenn. L. Rev. 321 (1989). Vaidhyanathan, Siva. Copyrights and Copywrongs: The Rise of Intellectual Property and How It Threatens Creativity. New York, NY: New York UP, 2001.
 Note
 
A basic discussion of public goods can be found at <http://www.pitt.edu/~upjecon/MCG/MICRO/GOVT/Pubgood.html>.
 
 
 
 
 Citation reference for this article
 
 MLA Style
Raven, Francis. "Copyright and Public Goods: An Argument for Thin Copyright Protection." M/C Journal 8.3 (2005). <http://journal.media-culture.org.au/0507/06-raven.php>.

APA Style
Raven, F. (2005, July). Copyright and public goods: An argument for thin copyright protection. M/C Journal, 8(3). Retrieved from <http://journal.media-culture.org.au/0507/06-raven.php>.
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Gutenberg Diagram"

1

Singh, Ravi Inder. "Is the New Paragraph More Readable than the Traditional Paragraph?" Master's thesis, 2011. http://hdl.handle.net/10048/1699.

Full text
Abstract:
The definition of the traditional paragraph has remained unchanged for generations of readers. Yet today the predominant form of the paragraph on the Web is so new that it can only be called the new paragraph. So the question is which is the more readable of the two paragraph formats? More specifically, how can the new paragraph be defined and how can its readability be measured against the traditional paragraph? A literature review reveals that no attempt has ever been made to define the new paragraph. A novel approach is taken: collect the headline stories from the top 43 English-language online daily newspapers and use them to define the new paragraph. They exclusively use the new paragraph format, and 1200 stories were collected from them over a period of four months. The results indicate a drastic difference between the old and new paragraph, with the new paragraph being on average less than half the size of the old paragraph. White space between paragraphs occupies almost exactly half a given story. Words of less than two syllables are the norm in a new paragraph. To determine the readability of the new paragraph, a test of readability was performed using human subjects. A passage of text was selected and formatted according to the rules for the traditional paragraph and according to the metrics of the new paragraph. The cloze procedure is then used to decide readability. The reading test's data is analyzed, and the results and future directions of the study are discussed in the conclusion.
Software Engineering and Intelligent Systems

Book chapters on the topic "Gutenberg Diagram"

1

Filippov, Alexander E., and Valentin L. Popov. "Study of Dynamics of Block-Media in the Framework of Minimalistic Numerical Models." In Springer Tracts in Mechanical Engineering. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-60124-9_7.

Full text
Abstract:
One of the principal methods of preventing large earthquakes is the stimulation of a large series of small events, which transfers the rapid tectonic dynamics into a creep mode. In this chapter, we discuss possibilities for such a transfer in the framework of simplified models of a subduction zone. The proposed model describes well the basic characteristic features of geo-medium behavior, in particular the statistics of earthquakes (the Gutenberg-Richter and Omori laws). Its analysis shows that local, relatively low-energy impacts can switch block dynamics from stick-slip to creep mode. Thus, it is possible to change the statistics of seismic energy release by means of a series of local, periodic, and relatively low-energy impacts. This means a principal possibility of "suppressing" strong earthquakes. Additionally, a modified version of the Burridge-Knopoff model including a simple model for a state-dependent friction force is derived and studied. The friction model describes a velocity weakening of friction between moving blocks and an increase of static friction during stick periods. It provides a simplified but qualitatively correct stability diagram for the transition from smooth sliding to stick-slip behavior as observed in various tribological systems. Attractor properties of the model's dynamic equations were studied under a broad range of parameters for one- and two-dimensional systems.
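The Gutenberg-Richter statistics referred to in the abstract above say that the number of events with magnitude at least M falls off as 10^(-bM). A minimal sketch of how the b-value is estimated from a catalogue, using Aki's maximum-likelihood formula b = log10(e) / (mean(M) - Mmin) (the estimator choice and function names here are assumptions for illustration, not taken from the chapter):

```python
import math
import random

def gutenberg_richter_b(magnitudes, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter b-value,
    b = log10(e) / (mean(M) - m_min), using only events with M >= m_min."""
    sample = [m for m in magnitudes if m >= m_min]
    mean_m = sum(sample) / len(sample)
    return math.log10(math.e) / (mean_m - m_min)

def synthetic_catalog(b, m_min, n, seed=0):
    """Draw n magnitudes consistent with the G-R law: M - m_min is
    exponentially distributed with rate b * ln(10)."""
    rng = random.Random(seed)
    beta = b * math.log(10)
    return [m_min + rng.expovariate(beta) for _ in range(n)]

# A large synthetic catalogue generated with b = 1.0 should return
# an estimate close to 1.0.
catalog = synthetic_catalog(b=1.0, m_min=2.0, n=200_000)
print(round(gutenberg_richter_b(catalog, m_min=2.0), 2))
```

Changing the statistics of seismic energy release, as the chapter proposes, would show up in exactly this kind of estimate: a catalogue shifted toward many small events yields a larger b-value.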
