
Journal articles on the topic 'WIPO domain name processes'


Consult the top 50 journal articles for your research on the topic 'WIPO domain name processes.'


1

Escudero, Laura Martín. "Quality in WIPO Domain Name Arbitration Decisions." HERMES - Journal of Language and Communication in Business 24, no. 47 (October 30, 2017): 79. http://dx.doi.org/10.7146/hjlcb.v24i47.97568.

Abstract:
Over the last decades, linguists have devoted special attention to defining specialized discourse; the complexity of describing it lies in its multi-dimensional nature. The main purpose of this paper is to characterize specialized discourse in WIPO Domain Name Arbitration as the result of social and institutional conditionings. First, the study characterizes the text-external factors associated with this highly specialized professional practice, focusing in particular on the analysis of ‘Quality’ in WIPO Domain Name Arbitration decisions. Second, the study defines the boundaries of ‘specialized discourse’. Third, it limits its focus to analyzing ‘Quality’ in relation to ‘Objectivity’ and ‘Neutrality’ as factors associated with specialized discourse, and to ‘Impartiality’ and ‘Independence’ as conditionings specifically related to WIPO domain name arbitration professional practice. Following Bhatia (2004), the study conceptualizes specialized discourse as highly dependent on social and institutional conditions.
2

Escudero, Laura Martínez. "A corpus-based insight into genre: The case of WIPO domain name arbitration decisions." Discourse & Communication 5, no. 4 (November 2011): 375–92. http://dx.doi.org/10.1177/1750481311427084.

Abstract:
To protect domains from cyber-piracy, the WIPO offers private and confidential procedures tasked with addressing the legitimate use of a domain name. WIPO domain name arbitration is an alternative dispute resolution process in which one or more panelists make a binding decision on the legitimacy of a domain. This article investigates the structure of the discourse of this professional genre. Following Maley (1987), the study focuses, first, on identifying the generic moves of WIPO domain name arbitration decisions. Second, the analysis unveils the patterns shaping those decisions, exploring how discursive features work within this specialized discourse. Third, it examines whether the corpus data reveal that this type of professional discourse has entered a process of standardization. The study is based on Bhatia’s multi-perspective four-space model of discourse (2004), an analytical framework that emphasizes a multidisciplinary and multidimensional perspective and highlights that discourse is indistinguishable from the construction of reality.
3

Blackshaw, Ian. "FIFA wins its latest domain name dispute filed with WIPO." Entertainment and Sports Law Journal 9, no. 1 (June 1, 2011): 4. http://dx.doi.org/10.16997/eslj.29.

4

Rimmer, Matthew. "Virtual Countries: Internet Domain Names and Geographical Terms." Media International Australia 106, no. 1 (February 2003): 124–36. http://dx.doi.org/10.1177/1329878x0310600113.

Abstract:
This paper examines the dispute between the Seattle company Virtual Countries Inc. and the Republic of South Africa over ownership of the domain name southafrica.com. The first part of the paper deals with the pre-emptive litigation taken by Virtual Countries Inc. in a District Court of the United States. The second part considers the possible arbitration of the dispute under the Uniform Domain Name Dispute Resolution Policy of the Internet Corporation for Assigned Names and Numbers (ICANN) and examines the wider implications of this dispute for the jurisdiction and governance of ICANN. The final section of the paper evaluates the Final Report of the Second WIPO Internet Domain Name Process.
5

Putri, Dheka Ermelia. "APPLICATION OF ONLINE DISPUTE RESOLUTION (ODR) IN INTERNATIONAL AND INDONESIA DOMAIN NAMES DISPUTES." Lampung Journal of International Law 1, no. 1 (August 11, 2020): 19. http://dx.doi.org/10.25041/lajil.v1i1.2021.

Abstract:
Online Dispute Resolution (ODR) has become a breakthrough in the world of law, especially the law of dispute settlement. ODR is used in several kinds of disputes, such as e-commerce and domain name disputes. Technically, part of ODR has already been used by Indonesia’s Constitutional Court, which utilizes video-conferencing facilities to hear witness testimony and expert opinions. Moreover, PANDI (Pengelola Nama Domain Internet Indonesia) has implemented most of the functions of ODR in resolving domain name disputes in Indonesia. One resolved case is the “Netflix.id” case: the Netflix Company, which has used the Netflix name for its merchandise since 1977, found that a newly registered domain name using that name, “Netflix.id”, interfered with its trade, and therefore filed a complaint seeking removal of the “Netflix.id” domain name. The case was resolved without a face-to-face process under the PPND (Pedoman Perselisihan Nama Domain), the legal basis under current Indonesian legislation. ODR has been used by various world organizations, including UNCITRAL, the European Commission, and the WIPO Arbitration and Mediation Center. PANDI, as one of the parties utilizing online dispute resolution, has policies established under existing international regulations. Still, ODR has been applied in several cases and has resulted in decisions binding on the parties.
6

Zafar, Rimsha, Shayan Khan, and Ankur Rajput. "Electronic Commerce - Electronic Data Interchange (EDI) and Alternative Dispute Resolution (ADR)." International Journal of Advanced Research in Computer Science and Software Engineering 7, no. 7 (July 30, 2017): 294. http://dx.doi.org/10.23956/ijarcsse/v7i7/0191.

Abstract:
E-commerce applies various communication technologies to the automated exchange of business information. The present scenario requires an approach in which paper is replaced with technology, giving rise to EDI (Electronic Data Interchange). Alternative Dispute Resolution (ADR) is a mode of resolving legal disputes in an online medium. This mode is needed both in the business world and by ordinary people who require timely justice. To solve the problem of delayed justice and help people in all domains, ADR plays a very important role. The report discusses the concept of EDI and the various processes and steps involved in it. The standardized formats and types of EDI are also discussed with a case study, along with their drawbacks and benefits and how the business world looks forward to growth in EDI. Finally, ADR is explained by comparing two models, the Virtual Magistrate project and WIPO.
7

Plotkin, James. "THE MODEL FOR A PATH FORWARD. A PROPOSAL FOR A MODEL LAW DEALING WITH CYBER-SQUATTING AND OTHER ABUSIVE DOMAIN NAME PRACTICES." Denning Law Journal 27 (November 16, 2015): 204–40. http://dx.doi.org/10.5750/dlj.v27i0.989.

Abstract:
The internet has revolutionized the way we interact with information and each other. Among the internet’s many applications, e-commerce ranks at the top. Businesses derive significant value from a robust online presence, which arguably begins with a strong domain name. Websites are identified by internet protocol (IP) addresses, which consist of sets of numbers. The Domain Name System (DNS) is the internet’s address book: its function is to allow internet users to identify websites with more memorable indicia than a set of numbers, such as words, phrases and acronyms. Given that businesses often devote significant resources to growing brand recognition and the goodwill associated with their trademarks, many of them tend to register domain names under those trademarks. Domain names (unlike trademarks) are unique, which further increases a trademark holder’s interest in securing ones that consumers would likely associate with its goods or services. Cyber-squatters seek to profit from the DNS by engaging in a form of “online speculation”: they register domain names that are either identical or confusingly similar to trademarks and then attempt to sell the domain name(s) to a legitimate trademark holder for a profit. The current regulatory framework dealing with cyber-squatting comprises: 1) the Internet Corporation for Assigned Names and Numbers (ICANN) Uniform Dispute Resolution Policy (UDRP) and variants thereof; 2) the American Anticybersquatting Consumer Protection Act (ACPA); and 3) national trademark laws. This paper argues that while partially effective, the current framework is lacking. A review of UDRP panel statistics reveals a steady flow of complaints since 2000, with a marked upswing from 2005 forward. The WIPO Arbitration and Mediation Center, the largest UDRP resolution provider, receives between 1,700 and 2,600 complaints per year relating to cyber-squatting. Cyber-squatting is therefore clearly an issue that requires further or better regulation. The UDRP, ACPA and trademark statutes all suffer from significant shortcomings. This paper seeks to identify those shortcomings and propose a potential solution: a model law relating to cyber-squatting and other abusive domain name practices. The model law would create specific causes of action for cyber-squatting and the abusive practice known as “reverse domain name hijacking”. It would also comport certain key provisions to aid in the harmonization of an internationally accepted body of “domain name law”. While a model law approach itself suffers from certain shortcomings (most notably the requirement that it be adopted in a significant number of states to become effective), this paper demonstrates that those shortcomings are far outweighed by its benefits.
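The name-to-address mapping this abstract describes (DNS as the internet's "address book") can be illustrated with a short Python sketch that uses only the standard library's system resolver; the domain passed in is whatever the caller supplies, not anything taken from the paper:

```python
import socket

def resolve(domain: str) -> str:
    """Resolve a human-readable domain name to an IPv4 address
    using the operating system's stock DNS resolver."""
    return socket.gethostbyname(domain)

# The resolver turns a memorable name into the numeric address
# that actually identifies the website's server.
address = resolve("localhost")  # loopback name, resolvable offline
```

`socket.gethostbyname` returns IPv4 addresses only; real applications typically use `socket.getaddrinfo`, which also handles IPv6.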
8

Prentoulis, N. "Freedom of political speech prevails over common law mark rights in a WIPO domain name decision inspired by US constitutional principles." Journal of Intellectual Property Law & Practice 4, no. 11 (October 16, 2009): 781–82. http://dx.doi.org/10.1093/jiplp/jpp165.

9

Golubkova, Ekaterina, and Anastasia Zakharova. "Meaning-Making Processes in Derivatives from Precedent Names." Lege Artis 1, no. 2 (December 1, 2016): 37–79. http://dx.doi.org/10.1515/lart-2016-0010.

Abstract:
The paper addresses the issue of meaning-making in a highly prolific and sprawling segment of English vocabulary – derivatives from precedent names (DPNs). The combination of cognitive linguistics methods applied to analyze the semantics of DPNs (the Robin Hood cluster, Cinderella-based blends and the Dorian Gray effect) made it possible to account for their bias towards polysemy, which seems to be basically grounded in the process of metonymic zoom-in on the selected content in the event frame that describes the precedent name, oftentimes leading to domain extension and indeterminacy.
10

Liang, SiYu, ZhiHong Tian, XinDa Cheng, Yu Jiang, Le Wang, and Shen Su. "A Quantitative Method for the DNS Isolation Management Risk Estimation." Electronics 9, no. 6 (June 1, 2020): 922. http://dx.doi.org/10.3390/electronics9060922.

Abstract:
The domain name system (DNS) is an important infrastructure of the Internet, providing domain name resolution services for almost all Internet communication systems. However, the current DNS is centrally managed, leading to unfair sovereignty of the Internet among countries. A domestic DNS may be unable to work normally when the national network is isolated from the rest of the Internet, a situation noted here as isolated management risk (IMR). To improve understanding of DNS isolated management risk for better DNS resource deployment, it is critical to determine how serious the IMR is in various countries. To quantify it, this paper proposes an effective approach to collect DNS resolution demand data from the networks used by various intelligent devices and to conduct data analysis estimating the isolated management risk of a country's domestic DNSs. Our idea is to quantify domain name resolution demand and its relationship with overseas resolution processes. We further used our quantitative method to compare the IMR of the USA and China and analyzed the difference between them.
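The abstract does not give the authors' actual formula, but a demand-weighted score of the kind it describes can be sketched as follows; the function name and the input layout (per-domain demand counts and overseas-resolution fractions) are illustrative assumptions, not the paper's method:

```python
def isolated_management_risk(demand, overseas_fraction):
    """Hypothetical IMR score: the demand-weighted share of domain
    resolutions that depend on overseas DNS infrastructure.
    0.0 means fully domestic resolution; 1.0 means fully overseas."""
    total = sum(demand.values())
    if total == 0:
        return 0.0
    weighted = sum(count * overseas_fraction.get(domain, 0.0)
                   for domain, count in demand.items())
    return weighted / total

# Example: 90% of demand resolves domestically, 10% only overseas.
risk = isolated_management_risk(
    {"news.example": 900, "bank.example": 100},
    {"news.example": 0.0, "bank.example": 1.0},
)
# risk == 0.1
```

Weighting by demand captures the intuition that an isolated network suffers more when its most-queried domains depend on overseas resolvers.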
11

Han, Luchao, Zhichuan Guo, and Xuewen Zeng. "Research on Multicore Key-Value Storage System for Domain Name Storage." Applied Sciences 11, no. 16 (August 12, 2021): 7425. http://dx.doi.org/10.3390/app11167425.

Abstract:
This article proposes a domain name caching method for a multicore network-traffic capture system, which significantly improves insert latency, throughput and hit rate. The caching method is composed of a cache replacement algorithm and a cache set method. The method is easy to implement, low in deployment cost, and suitable for various multicore caching systems. Moreover, it reduces the use of locks by changing data structures and algorithms. Experimental results show that, compared with other caching systems, the proposed method reaches the highest throughput under multiple cores, indicating that it is well suited for domain name caching.
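The abstract does not specify the replacement algorithm, so as a hedged illustration of what a domain-name cache replacement policy looks like, here is a minimal single-threaded LRU sketch in Python; the paper's multicore, lock-reducing design is more involved:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache for domain -> address entries.
    Single-threaded illustration only; it has none of the multicore,
    lock-reducing machinery of the system described above."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()

    def get(self, domain):
        if domain not in self._data:
            return None                     # miss: caller re-resolves
        self._data.move_to_end(domain)      # mark as recently used
        return self._data[domain]

    def put(self, domain, address):
        if domain in self._data:
            self._data.move_to_end(domain)
        self._data[domain] = address
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used
```

`OrderedDict` keeps insertion order, so moving an entry to the end on every hit makes the front of the dict the eviction candidate.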
12

Lindeman, Marjaana, and Annika M. Svedholm. "What's in a Term? Paranormal, Superstitious, Magical and Supernatural Beliefs by Any Other Name Would Mean the Same." Review of General Psychology 16, no. 3 (September 2012): 241–55. http://dx.doi.org/10.1037/a0027158.

Abstract:
What are paranormal, superstitious, magical, and supernatural (PSMS) beliefs and what, if anything, separates them? Currently, use of the concepts is ambiguous and agreement is weak. A literature search for articles dealing with PSMS beliefs during the last two decades produced conceptual definitions that we grouped into seven groups. Five groups consisted of definitions that were domain-general, namely false beliefs, belief in scientifically impossible phenomena, and associative biases (covariation bias, laws of sympathetic magic, and irrational acts). Two sets of definitions were domain-specific (content-dependent), namely counterintuitive and intuitive beliefs about physical, biological, and psychological phenomena. Empirical methods were reviewed to show what kinds of beliefs have been examined under the rubrics paranormal, supernatural, magical, or superstitious. We concluded that the concepts paranormal, superstitious, magical, and supernatural denote the same thing and that domain-general definitions are inadequately precise. Defining PSMS beliefs as category mistakes that confuse the distinctive attributes of mental phenomena, material objects, living, and animate organisms, and the processes these engage in, fared best in distinguishing PSMS beliefs from other beliefs and covering relevant beliefs. We suggest researchers sharpen the content of the measures and strive to integrate lines of research that have so far remained separate.
13

Voelzmann, André, and Reinhard Bauer. "Ceramide synthases in mammalians, worms, and insects: emerging schemes." BioMolecular Concepts 1, no. 5-6 (December 1, 2010): 411–22. http://dx.doi.org/10.1515/bmc.2010.028.

Abstract:
The ceramide synthase (CerS) gene family comprises a group of highly conserved transmembrane proteins, which are found in all studied eukaryotes. The key feature of the CerS proteins is their role in ceramide synthase activity. Therefore, their original name ‘longevity assurance gene (Lass) homologs’, after the founding member, the yeast longevity assurance gene lag1, was altered to ‘CerS’. All CerS have high sequence similarity in a domain called the LAG1 motif, and a subset of CerS proteins is predicted to contain a Homeobox (Hox) domain. These domains could be the key to the multiple roles CerS have. CerS proteins play a role in diverse biological processes such as proliferation, differentiation, apoptosis, stress response, cancer, and neurodegeneration. In this review, we focus on CerS structure and biological function, with emphasis on biological functions in the widely used model systems Caenorhabditis elegans and Drosophila melanogaster. We also focus on the accumulating data suggesting a role for CerS in lipid homeostasis.
14

Zhivago, N. A. "SEMANTICS AND PRAGMATICS OF VERBAL METAPHOR OF FOOD (THE SEMANTIC GROUP «FOOD CONSUMPTION WITH THE AID OF TEETH»)." Bulletin of Kemerovo State University, no. 4 (January 10, 2018): 167–74. http://dx.doi.org/10.21603/2078-8975-2017-4-167-174.

Abstract:
The article features an analysis of semantics, pragmatics and text-based functioning of figurative vocabulary and expressions, which are metaphorically motivated by such verbs as bite, gnaw, nibble and chew. These verbs name various activities, processes and phenomena because of their analogy to food consumption with the aid of teeth. The analysis allowed the author to describe a number of cognitive metaphorical models which, in their turn, reflect the image projection from the source-domain “Food Consumption” into the conceptual spheres of physiological, mental, psychological and social phenomena. The given system of images expresses metaphorically the concept of various processes and notions related to deformation, destruction and disappearance of material objects; destructive effects on the human body and psyche; social and moral pressure (e. g. competitive struggle, repressive political system), loss of physical resources and values. The pragmatic potential of the image-bearing vocabulary and expressions is predominantly related to negative evaluative connotation since they reveal disapproving, condemning, contemptuous emotional attitude and high level of expressivity.
15

Budiarta, I. Wayan, and Ni Wayan Kasni. "The Concept of Animals in Balinese Proverbs." International Journal of Linguistics, Literature and Culture 3, no. 1 (January 25, 2017): 87. http://dx.doi.org/10.21744/ijllc.v3i1.371.

Abstract:
This research aims to figure out the syntactic structure of Balinese proverbs, the relation of meaning between the names of the animals and the meanings of the proverbs, and how the meanings are constructed in a logical dimension. This is qualitative research, as the data are qualitative data taken from a book entitled Basita Paribahasa written by Simpen (1993) and a book of Balinese short stories written by Sewamara (1977). The analysis shows that the animals used in Balinese proverbs reveal similar characteristics, whether in their form, their nature, or their condition. Moreover, the cognitive process that produces a proverb conceptualizes experience felt by the body together with the nature and characteristics of the target, with the purpose of describing an event or experience of the Balinese speech community. Analogically, similarity of characteristics in the shape of the source domain can be proved visually, while characteristics of nature and condition can be proved through bodily and empirical experience. Ecolinguistic parameters are used in the construction of Balinese proverbs, which happens through a cross-mapping process caused by the close or biological characteristics shared by the source domain and the target domain, especially between the Balinese and animals, which are then verbally recorded and patterned in ideological, biological, and sociological dimensions.
16

Helmy, Tarek, and Saeed Al-Bukhitan. "Framework for Automatic Semantic Annotation of Arabic Websites." International Journal of Cooperative Information Systems 25, no. 01 (March 2016): 1650001. http://dx.doi.org/10.1142/s0218843016500015.

Abstract:
In order to achieve the vision of the semantic Web, a sufficient amount of semantic content must be available on Web sources. To produce semantic content on the existing Web, semantic annotation of Web sources is required: semantic annotation adds machine-readable content to them. Because the Web is growing at an exponential rate, semantic annotation by hand is not feasible. In this paper, we present an Automatic Semantic Annotation Framework (ASAF) for the semantic annotation of Arabic Web sources based on domain ontologies. We present a learning approach that utilizes public Arabic resources, such as Wikipedia and WordNet, for building Arabic ontologies. Moreover, we present different approaches for extracting named entities and relationships from Arabic Web sources. As a case study, we have developed and expanded a set of Arabic ontologies related to food, health, and nutrition through a set of processes. We have also developed the ASAF prototype and showed how it can utilize these ontologies to extract health- and food-related named entities and relationships from Web sources in order to annotate and store them in the knowledge base. We conducted several experiments to test the capability of ASAF in recognizing named entities and relationships using different approaches. Empirical evaluations of ASAF show promising performance in terms of precision, recall, and F-measure. The outcome of the presented framework could be utilized by semantic Web search applications to retrieve precise answers to smarter end-user queries. An important feature of ASAF is that it could be ported to other domains with minimal extension. ASAF thus contributes to the vision of the semantic Web for Arabic Web sources in the target domains.
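The evaluation metrics named at the end of the abstract (precision, recall and F-measure) combine in the standard way; this small sketch is generic textbook arithmetic, not code from the ASAF framework:

```python
def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall; beta=1 gives
    the balanced F1 score commonly reported in annotation papers."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# With precision 0.8 and recall 0.6: 2*0.8*0.6 / (0.8+0.6) ≈ 0.686
```

Setting `beta` above 1 weights recall more heavily, below 1 weights precision more heavily.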
17

Flechsler, Jennifer, Thomas Heimerl, Harald Huber, Reinhard Rachel, and Ivan A. Berg. "Functional compartmentalization and metabolic separation in a prokaryotic cell." Proceedings of the National Academy of Sciences 118, no. 25 (June 14, 2021): e2022114118. http://dx.doi.org/10.1073/pnas.2022114118.

Abstract:
The prokaryotic cell is traditionally seen as a “bag of enzymes,” yet its organization is much more complex than in this simplified view. By now, various microcompartments encapsulating metabolic enzymes or pathways are known for Bacteria. These microcompartments are usually small, encapsulating and concentrating only a few enzymes, thus protecting the cell from toxic intermediates or preventing unwanted side reactions. The hyperthermophilic, strictly anaerobic Crenarchaeon Ignicoccus hospitalis is an extraordinary organism possessing two membranes, an inner and an energized outer membrane. The outer membrane (termed here outer cytoplasmic membrane) harbors enzymes involved in proton gradient generation and ATP synthesis. These two membranes are separated by an intermembrane compartment, whose function is unknown. Major information processes like DNA replication, RNA synthesis, and protein biosynthesis are located inside the “cytoplasm” or central cytoplasmic compartment. Here, we show by immunogold labeling of ultrathin sections that enzymes involved in autotrophic CO2 assimilation are located in the intermembrane compartment that we name (now) a peripheric cytoplasmic compartment. This separation may protect DNA and RNA from reactive aldehydes arising in the I. hospitalis carbon metabolism. This compartmentalization of metabolic pathways and information processes is unprecedented in the prokaryotic world, representing a unique example of spatiofunctional compartmentalization in the second domain of life.
18

Herzog né Hoffmann, Christian. "On formal ethics versus inclusive moral deliberation." AI and Ethics 1, no. 3 (March 5, 2021): 313–29. http://dx.doi.org/10.1007/s43681-021-00045-4.

Abstract:
In this article, I will advocate caution against a formalization of ethics by showing that it may produce and perpetuate unjustified power imbalances, disadvantaging those without a proper command of the formalisms, and those not in a position to decide on the formalisms’ use. My focus rests mostly on ethics formalized for the purpose of implementing ethical evaluations in computer science—artificial intelligence, in particular—but partly also extends to the project of applying mathematical rigor to moral argumentation with no direct intention to automate moral deliberation. Formal ethics of the latter kind can, however, also be seen as a facilitator of automated ethical evaluation. I will argue that either form of formal ethics presents an obstacle to inclusive and fair processes for arriving at a society-wide moral consensus. This impediment to inclusive moral deliberation may prevent a significant portion of society from acquiring a deeper understanding of moral issues. However, I will defend the view that such understanding supports genuine and sustained moral progress. From this, it follows that formal ethics is not per se supportive of moral progress. I will illustrate these arguments with practical examples of manifest asymmetric relationships of power, primarily from the domain of autonomous vehicles, as well as with more visionary concepts, such as artificial moral advisors. As a result, I will show that in these particular proposed use-cases of formal ethics, machine ethics risks running contrary to its proponents’ proclaimed promises of increasing the rigor of moral deliberation and even improving human morality on the whole. Instead, I will propose that inclusive discourse about automating ethical evaluations, e.g., in autonomous vehicles, should be conducted with unrelenting transparency about the limitations of implementations of ethics.
As an outlook, I will briefly discuss uses of formal ethics that are more likely to avoid discrepancies between the ideal of inclusion and the challenge from power asymmetries.
19

Hörster, Maria António, and Cornelia Plag. "The first translation of Freud in Portugal." Translation Matters 3, no. 1 (2021): 41–57. http://dx.doi.org/10.21747/21844585/tm3_1a3.

Abstract:
The first translation of Freud published in Portugal appears to have been a version of the 1905 text Drei Abhandlungen zur Sexualtheorie, which came out in November 1932 under the name Sexualidade. Published by the Ática Press in a collection entitled Scientia Vitæ, the translator's name – Osório de Oliveira – was, surprisingly, displayed in a prominent position on the title page. A comparison between this translation, Freud's original and a French version by Blanche Reverchon that had come out shortly before shows that it was a case of indirect translation, which reproduced many of the characteristics of the intermediary version. For example, while Freud's original enables the reader to follow the thought processes behind his hypotheses and scientific conclusions, both of the translated texts are much less tentative. This paper explores the circumstances surrounding the production of this Portuguese translation at that moment, the translational options made, and the effect of both on the text's reception. Particular attention is given to the domains of lexis – creation of neologisms, terminological consistency and coherence – and modalization, and to whether the terminological options caught on and were reproduced in subsequent translations and commentaries.
20

Williams, Christopher R., Allen B. White, Kenneth S. Gage, and F. Martin Ralph. "Vertical Structure of Precipitation and Related Microphysics Observed by NOAA Profilers and TRMM during NAME 2004." Journal of Climate 20, no. 9 (May 1, 2007): 1693–712. http://dx.doi.org/10.1175/jcli4102.1.

Abstract:
In support of the 2004 North American Monsoon Experiment (NAME) field campaign, NOAA established and maintained a field site about 100 km north of Mazatlán, Mexico, consisting of wind profilers, precipitation profilers, surface upward–downward-looking radiometers, and a 10-m meteorological tower to observe the environment within the North American monsoon. Three objectives of this NOAA project are discussed in this paper: 1) to observe the vertical structure of precipitating cloud systems as they passed over the NOAA profiler site, 2) to estimate the vertical air motion and the raindrop size distribution from near the surface to just below the melting layer, and 3) to better understand the microphysical processes associated with stratiform rain containing well-defined radar bright bands. To provide a climatological context for the profiler observations at the field site, the profiler reflectivity distributions were compared with Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) reflectivity distributions from the 2004 season over the NAME domain as well as from the 1998–2005 seasons. This analysis places the NAME 2004 observations into the context of other monsoon seasons. It also provides a basis for evaluating the representativeness of the structure of the precipitation systems sampled at this location. The number of rain events observed by the TRMM PR is dependent on geography; the land region, which includes portions of the Sierra Madre Occidental, has more events than the coast and gulf regions. Conversely, from this study it is found that the frequencies of occurrence of stratiform rain and reflectivity profiles with radar bright bands are mostly independent of region. The analysis also revealed that the reflectivity distribution at each height has more year-to-year variability than region-to-region variability. These findings suggest that in cases with a well-defined bright band, the vertical profile of the reflectivity relative to the height of the bright band is similar over the gulf, coast, and land regions.
21

Buchanan, Frances, Niccolo Capanni, and Horacio González-Vélez. "Distributed aggregation of heterogeneous Web-based Fine Art Information: enabling multi-source accessibility and curation." Knowledge Engineering Review 30, no. 2 (March 2015): 220–36. http://dx.doi.org/10.1017/s0269888914000319.

Full text
Abstract:
The sources of information on the Web relating to Fine Art, and in particular to Fine Artists, are numerous, heterogeneous and distributed. Data relating to the biography of an artist, images of their artworks, the location of the artworks and exhibition reviews invariably reside in distinct and seemingly unrelated, or at least unlinked, sources. While communication and exchange exist, there is a great deal of independence between major repositories, such as museums, often owing to their ownership or heritage. This increases the individuality of each repository’s own processes and dissemination. It is currently necessary to browse through numerous different websites to obtain information about any one artist, and at this time there is little aggregation of Fine Art Information. This is in contrast to the domain of books and music, where the aggregation and re-grouping of information (usually by author or artist/band name) has become the norm. A Museum API (Application Programming Interface), however, is a tool that can facilitate a similar information service for the domain of Fine Art, by allowing the retrieval and aggregation of Web-based Fine Art Information, whilst at the same time increasing public access to the content of a museum’s collection. In this paper, we present the case for a pragmatic solution to the problems of heterogeneity and distribution of Fine Art Data, and this is the first step towards the comprehensive re-presentation of Fine Art Information in a more ‘artist-centric’ way, via accessible Web applications. This paper examines the domain of Fine Art Information on the Web, putting forward the case for more Web services such as generic Museum APIs, highlighting this via a prototype Web application known as the ArtBridge. The generic Museum API is the standardisation mechanism to enable interfacing with specific Museum APIs.
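The ‘artist-centric’ aggregation the paper argues for can be sketched as a merge step over per-museum record lists. The field names (artist, title, museum) below are illustrative assumptions, since each real Museum API exposes its own schema; this is a sketch of the idea, not the ArtBridge implementation.

```python
from collections import defaultdict

def aggregate_by_artist(sources):
    # Merge heterogeneous per-museum record lists into one
    # artist-centric view: all artworks and holding museums per artist.
    merged = defaultdict(lambda: {"artworks": [], "museums": set()})
    for records in sources:
        for rec in records:
            entry = merged[rec["artist"]]
            entry["artworks"].append(rec["title"])
            entry["museums"].add(rec["museum"])
    return {artist: {"artworks": sorted(e["artworks"]),
                     "museums": sorted(e["museums"])}
            for artist, e in merged.items()}
```

In practice each element of `sources` would be fetched from a different Museum API and normalized into this common shape first.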
APA, Harvard, Vancouver, ISO, and other styles
22

Minchik, Sergey S. "Reception of A.S. Griboyedov on the Internet: based on the blog example about the connections of the writer's name with the Crimea." Neophilology, no. 24 (2020): 765–75. http://dx.doi.org/10.20310/2587-6953-2020-6-24-765-775.

Full text
Abstract:
Almost two centuries after his death, Alexander Sergeyevich Griboyedov (1794-1829) continues to influence social processes. To unravel the mystery of this phenomenon, it is necessary to answer the question about the character of perception of the writer and diplomat and to find out what features of his image appeal to modern people. The subject of this work is the perception of Griboyedov in the electronic environment. The work shows how search engines index the blogosphere dedicated to “the author of Famusov and Skalozub”, what topics are popular among its readers and what kind of audience it is, what influences the attendance of the only online diary about the writer’s discourse, its history of creation, purpose, structure, genre and style of material presentation, what factors determine the demand for the content of this resource, in which way it differs from similar media, and what the Web borrows from this project and in what manner. The narrative is based on information that is not available in the public domain; observations of the media space are commented on in light of the state of affairs in Griboyedov studies. The demand for new data about the writer and diplomat in the virtual space is correlated with the attitudes of researchers and relevant publications of 2012–2020, and the prospect of overcoming the identified discrepancies is associated with the changing agenda in modern science and disclosed through the characteristics of its tasks.
APA, Harvard, Vancouver, ISO, and other styles
23

Olivia, Caunday, Agulles Odette, McGrath Eoin, Empereur Fabienne, Stoltz Jean Francois, and Chabannon Christian. "Identification, categorization, and mapping of indicators used by JACIE-accredited stem cell transplantation programs to assess distribution and coverage of processes." Journal of Clinical Oncology 30, no. 34_suppl (December 1, 2012): 255. http://dx.doi.org/10.1200/jco.2012.30.34_suppl.255.

Full text
Abstract:
255 Background: More than 145 European hematopoietic stem cell transplantation (HSCT) programs received JACIE accreditation since 2000, demonstrating compliance with FACT-JACIE international standards. The association of JACIE with improved patient outcome was recently documented. The conditions in which quality management systems were introduced, and their actual benefits, remain to be fully evaluated. Methods: The study explores one aspect of quality management: the introduction and use of indicators. Through a questionnaire sent to JACIE accredited centers, we aimed at identifying indicators (name, domain of application, category, longevity, and general description), understanding how they were set in place, whether or not similar indicators were used by different programs, and whether all of the HSCT processes were monitored. The survey was first sent to 14 French accredited HSCT centers and next to 68 other programs in 11 European countries. Categorization was double-checked against published criteria. Results: The response rate was 40% (32 programs). 293 indicators were collected, including 224 (76%) that were introduced during the preparatory phase of JACIE accreditation. Indicators were associated with the following processes: measurement, analysis and improvement (54/293 or 18%); donor collection (49/293 or 16%); processing and storage of cell therapy products (37/293 or 12.5%); administration of HPC (67/293 or 23%). Mapping reveals an uneven distribution of indicators across the different sub-processes that contribute to this highly specialized medical procedure. Moreover, we found that only 101/293 indicators (34%) comply with the rules for implementation of a quality indicator, as defined by the FDX 50-171 standard. Conclusions: This suggests that risks to donors/recipients are unevenly monitored, leaving critical medical steps with low levels of monitoring.
APA, Harvard, Vancouver, ISO, and other styles
24

Couckuyt, Dries, and Amy Van Looy. "A systematic review of Green Business Process Management." Business Process Management Journal 26, no. 2 (November 18, 2019): 421–46. http://dx.doi.org/10.1108/bpmj-03-2019-0106.

Full text
Abstract:
Purpose Green Business Process Management (BPM) focusses on the ecological impact of business processes. Although it is an emerging field, different attitudes exist towards the discipline’s name, the objectives and the approaches to realise them. By means of a systematic literature review, the purpose of this paper is to arrive at a common understanding of the discipline for successful development. Design/methodology/approach The review methodology relies on a hermeneutic framework which integrates the search, analysis and interpretation of the literature. The sample is used in a text analysis to find an appropriate definition (RQ1), a bibliometric analysis to give insights in current Green BPM contributions (RQ2) and a content analysis to present differences with conventional BPM (RQ3). Findings Green BPM follows a similar development as conventional BPM, namely from a more technical perspective to also including the managerial perspective. More research is required that goes beyond the traditional business process lifecycle. Originality/value The research questions generated a comprehensive overview about application domains and research topics, which in turn can deliver benefits for both research and practitioner-related communities. Researchers identify future research avenues, while practitioners find appropriate Green BPM techniques for their domain.
APA, Harvard, Vancouver, ISO, and other styles
25

Ragothaman, Srinivasan, Thomas Davies, and DeVee Dykstra. "Legal aspects of electronic commerce and their implications for the accounting profession." Human Systems Management 19, no. 4 (October 12, 2000): 245–54. http://dx.doi.org/10.3233/hsm-2000-19404.

Full text
Abstract:
The electronic commerce (e-commerce) revolution is changing the business processes dramatically. It permits new kinds of interactions among business firms, their customers and suppliers, as well as internally within the firms. E-commerce is shaking the foundations of many industries and is leading to new types of e-business models. The vast potential of this exciting way of doing business has led many universities to offer courses, options, majors, and even degrees in e-commerce. The objectives of this paper are to describe some of the legal issues that impact e-commerce activities and to explore their implications for the accounting profession. While both business-to-business e-commerce and business-to-consumer e-commerce are expanding at a brisk pace, laws dealing with e-commerce are lagging behind. This paper provides a brief overview of several legal issues that have emerged in the arena of e-commerce including the following: jurisdictional issues, web linking practices, intellectual property and copyrights, libel laws, sales and use tax issues, encryption regulation, privacy rights, domain name disputes, electronic agreements, and digital signatures. Implications of e-commerce related legal issues for the accounting profession and accounting students are also discussed.
APA, Harvard, Vancouver, ISO, and other styles
26

Gabbar, Hossam A., Sk Sami Al Jabar, Hassan A. Hassan, and Jing Ren. "Development of Knowledge Base Using Human Experience Semantic Network for Instructive Texts." Applied Sciences 11, no. 17 (August 31, 2021): 8072. http://dx.doi.org/10.3390/app11178072.

Full text
Abstract:
An organized knowledge structure, or knowledge base, plays a vital role in retaining knowledge: data are processed and organized so that machines can understand them. Instructive text (iText) consists of a set of instructions to accomplish a task or operation. Hence, an iText comprises a group of texts having a title or name of the task or operation and step-by-step instructions on how to accomplish it. Storing only entities and their relationships with other entities does not always capture the knowledge in iTexts, because an iText contains parameters and attributes of different entities and their actions under different operations or procedures, and the values differ for every individual operation or procedure for the same entity. This research gap has limited the ability to learn about different operations, capture human experience, and dynamically update knowledge for every individual operation or instruction. This research presents a knowledge base for capturing and retaining knowledge from iTexts existing in operational documents. From each iText, small pieces of knowledge are extracted and represented as nodes linked to one another in the form of a knowledge network called the human experience semantic network (HESN). The HESN is the crucial component of our proposed knowledge base. The knowledge base also contains domain knowledge, with classified terms and key phrases of the specific domain.
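The iText structure described in this abstract (a task title plus ordered, linked step nodes) can be approximated with a minimal sketch. The class and field names are assumptions, since the abstract does not give the HESN schema.

```python
class StepNode:
    # One instruction step, linked to the next step of the operation.
    def __init__(self, text):
        self.text = text
        self.next = None

def build_itext(title, instructions):
    # Chain the instruction steps of one task into linked nodes,
    # mirroring the idea that an iText is a named task plus ordered steps.
    head = prev = None
    for text in instructions:
        node = StepNode(text)
        if prev is None:
            head = node
        else:
            prev.next = node
        prev = node
    return {"task": title, "first_step": head}

def steps(itext):
    # Walk the linked steps in their original order.
    node = itext["first_step"]
    while node is not None:
        yield node.text
        node = node.next
```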
APA, Harvard, Vancouver, ISO, and other styles
27

Kehrein, Philipp, Mark van Loosdrecht, Patricia Osseweijer, John Posada, and Jo Dewulf. "The SPPD-WRF Framework: A Novel and Holistic Methodology for Strategical Planning and Process Design of Water Resource Factories." Sustainability 12, no. 10 (May 20, 2020): 4168. http://dx.doi.org/10.3390/su12104168.

Full text
Abstract:
This paper guides decision making in more sustainable urban water management practices that feed into a circular economy by presenting a novel framework for conceptually designing and strategically planning wastewater treatment processes from a resource recovery perspective. Municipal wastewater can no longer be perceived as a waste stream, because a great variety of technologies are available to recover water, energy, fertilizer, and other valuable products from it. Despite the vast technological recovery possibilities, only a few processes have yet been implemented that deserve the name water resource factory instead of wastewater treatment plant. This transition relies on process designs that are not only technically feasible but also overcome various non-technical bottlenecks. A multidimensional and multidisciplinary approach is needed to design water resource factories (WRFs) in the future that are technically feasible, cost effective, show low environmental impacts, and successfully market recovered resources. To achieve that, the wastewater treatment plant (WWTP) design space needs to be opened up for a variety of expertise that complements the traditional wastewater engineering domain. Implementable WRF processes can only be designed if the current design perspective, which is dominated by the fulfilment of legal effluent qualities and process costs, is extended to include resource recovery as an assessable design objective from an early stage on. Therefore, the framework combines insights and methodologies from fields and disciplines beyond WWTP design, such as circular economy, industrial process engineering, project management, value chain development, and environmental impact assessment. It supports the transfer of the end-of-waste concept into the wastewater sector as it structures possible resource recovery activities according to clear criteria.
This makes recovered resources more likely to fulfil the conditions of the end-of-waste concept and allows the change in their definition from wastes to full-fledged products.
APA, Harvard, Vancouver, ISO, and other styles
28

Gutzler, D. S., L. N. Long, J. Schemm, S. Baidya Roy, M. Bosilovich, J. C. Collier, M. Kanamitsu, et al. "Simulations of the 2004 North American Monsoon: NAMAP2." Journal of Climate 22, no. 24 (December 15, 2009): 6716–40. http://dx.doi.org/10.1175/2009jcli3138.1.

Full text
Abstract:
The second phase of the North American Monsoon Experiment (NAME) Model Assessment Project (NAMAP2) was carried out to provide a coordinated set of simulations from global and regional models of the 2004 warm season across the North American monsoon domain. This project follows an earlier assessment, called NAMAP, that preceded the 2004 field season of the North American Monsoon Experiment. Six global and four regional models are all forced with prescribed, time-varying ocean surface temperatures. Metrics for model simulation of warm season precipitation processes developed in NAMAP are examined that pertain to the seasonal progression and diurnal cycle of precipitation, monsoon onset, surface turbulent fluxes, and simulation of the low-level jet circulation over the Gulf of California. Assessment of the metrics is shown to be limited by continuing uncertainties in spatially averaged observations, demonstrating that modeling and observational analysis capabilities need to be developed concurrently. Simulations of the core subregion (CORE) of monsoonal precipitation in global models have improved since NAMAP, despite the lack of a proper low-level jet circulation in these simulations. Some regional models run at higher resolution still exhibit the tendency observed in NAMAP to overestimate precipitation in the CORE subregion; this is shown to involve both convective and resolved components of the total precipitation. The variability of precipitation in the Arizona/New Mexico (AZNM) subregion is simulated much better by the regional models compared with the global models, illustrating the importance of transient circulation anomalies (prescribed as lateral boundary conditions) for simulating precipitation in the northern part of the monsoon domain. This suggests that seasonal predictability derivable from lower boundary conditions may be limited in the AZNM subregion.
APA, Harvard, Vancouver, ISO, and other styles
29

Klots, Y. P., I. V. Muliar, V. M. Cheshun, and O. V. Burdyug. "USE OF DISTRIBUTED HASH TABLES TO PROVIDE ACCESS TO CLOUD SERVICES." Collection of scientific works of the Military Institute of Kyiv National Taras Shevchenko University, no. 67 (2020): 85–95. http://dx.doi.org/10.17721/2519-481x/2020/67-09.

Full text
Abstract:
The article discusses the urgency of the problem of granting access to the services of a distributed cloud system, and in particular characterizes the peer-to-peer distributed cloud system. The interaction of the main components in accessing a web resource by domain name is described. The distribution of resources between nodes of a peer-to-peer distributed cloud system, with the subsequent provision of services on request, is implemented using the Kademlia protocol over a local network or the Internet, and comprises processes for publishing a resource by its owner at the initial stage, replicating it, and directly providing access to it. The application of modern adaptive information security technologies does not allow full control over the information flows of the cloud computing environment, since they function at the upper levels of the hierarchy. Therefore, to create effective mechanisms for protecting software in a cloud computing environment, it is necessary to develop new threat models and methods for representing computer attacks that make it possible to promptly identify hidden and potentially dangerous processes of information interaction. Access rules form the basis of the security policy and include restrictions on the mechanisms for initializing access processes. Under the developed operations model, the formalized description of hidden threats is reduced to the emergence of context-dependent transitions in the transaction multigraph. The method of granting access to the services of the distributed cloud system is substantiated. The Distributed Hash Table (DHT) infrastructure is used to find a replication node that holds a replica of the requested resource or part of it. The study identifies the stages of node validation.
The processes of adding a new node, validating its authenticity, publishing a resource, and accessing a resource are described as step-by-step sequences of actions within the method of granting access to the services of a distributed cloud system, using a graphical description of information flows and of the interaction between information-processing processes and objects.
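The Kademlia lookup this abstract relies on can be illustrated with a minimal sketch: node and resource IDs are compared by XOR distance, and a lookup keeps the k nodes closest to the resource. The function names and the flat list of node IDs are illustrative assumptions, not the authors' implementation.

```python
def xor_distance(a, b):
    # Kademlia measures closeness between integer node/resource IDs as XOR.
    return a ^ b

def closest_nodes(node_ids, resource_id, k=2):
    # Return the k node IDs closest (by XOR distance) to the resource ID.
    # A real Kademlia lookup iteratively queries these nodes for even
    # closer peers until a node holding the replica (or its DHT entry)
    # is reached.
    return sorted(node_ids, key=lambda n: xor_distance(n, resource_id))[:k]
```

In a full DHT, the same metric also decides which nodes store the replication pointers when a resource is published.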
APA, Harvard, Vancouver, ISO, and other styles
30

Ungurean, Ioan, and Nicoleta Cristina Gaitan. "A Software Architecture for the Industrial Internet of Things—A Conceptual Model." Sensors 20, no. 19 (September 30, 2020): 5603. http://dx.doi.org/10.3390/s20195603.

Full text
Abstract:
The Internet of Things (IoT) is an emerging concept that has revolutionized the use of new technologies in everyday life. The economic impact of IoT is becoming very important, and it has begun to be used in the industrial environment under the name of the Industrial Internet of Things (IIoT), a sub-domain of IoT. The IIoT changes the way industrial processes are controlled and monitored, increasing operating efficiency. This article proposes a software architecture for the IIoT that has a lower degree of abstraction than the reference architectures presented in the literature. The architecture is organized into four layers and integrates the latest concepts related to fog and edge computing. These concepts are realized through the use of fog/edge/gateway nodes, where the processing of data acquired from things is performed and where things interact with each other in the virtual environment. The main contributions of this paper are the proposal and description of a complete IIoT software architecture, the use of a unified address space, and the use of a computing platform based on SoC (System on Chip) with specialized co-processors in order to execute certain time-critical operations specific to the industrial environment in real time.
APA, Harvard, Vancouver, ISO, and other styles
31

Gomes, José-Eduardo, Sandra E. Encalada, Kathryn A. Swan, Christopher A. Shelton, J. Clayton Carter, and Bruce Bowerman. "The maternal genespn-4encodes a predicted RRM protein required for mitotic spindle orientation and cell fate patterning in earlyC. elegansembryos." Development 128, no. 21 (November 1, 2001): 4301–14. http://dx.doi.org/10.1242/dev.128.21.4301.

Full text
Abstract:
C. elegans embryogenesis begins with a stereotyped sequence of asymmetric cell divisions that are largely responsible for establishing the nematode body plan. These early asymmetries are specified after fertilization by the widely conserved, cortically enriched PAR and PKC-3 proteins, which include three kinases and two PDZ domain proteins. During asymmetric cell divisions in the early embryo, centrosome pairs initially are positioned on transverse axes but then rotate to align with the anteroposterior embryonic axis. We show that rotation of the centrosomal/nuclear complex in an embryonic cell called P1 requires a maternally expressed gene we name spn-4. The predicted SPN-4 protein contains a single RNA recognition motif (RRM), and belongs to a small subfamily of RRM proteins that includes one Drosophila and two human family members. Remarkably, in mutant embryos lacking spn-4 function the transversely oriented ‘P1’ mitotic spindle appears to re-specify the axis of cell polarity, and the division remains asymmetric. spn-4 also is required for other developmental processes, including the specification of mesendoderm, the restriction of mesectoderm fate to P1 descendants, and germline quiescence during embryogenesis. We suggest that SPN-4 post-transcriptionally regulates the expression of multiple developmental regulators. Such SPN-4 targets might then act more specifically to generate a subset of the anterior-posterior asymmetries initially specified after fertilization by the more generally required PAR and PKC-3 proteins.
APA, Harvard, Vancouver, ISO, and other styles
32

MARCHENKO, George N., Boris I. RATNIKOV, Dmitry V. ROZANOV, Adam GODZIK, Elena I. DERYUGINA, and Alex Y. STRONGIN. "Characterization of matrix metalloproteinase-26, a novel metalloproteinase widely expressed in cancer cells of epithelial origin." Biochemical Journal 356, no. 3 (June 8, 2001): 705–18. http://dx.doi.org/10.1042/bj3560705.

Full text
Abstract:
Identification of expanding roles for matrix metalloproteinases (MMPs) in complex regulatory processes of tissue remodelling has stimulated the search for genes encoding proteinases with unique functions, regulation and expression patterns. By using a novel cloning strategy, we identified three previously unknown human MMPs, i.e. MMP-21, MMP-26 and MMP-28, in comprehensive gene libraries. The present study is focused on the gene and the protein of a novel MMP, MMP-26. Our findings show that MMP-26 is specifically expressed in cancer cells of epithelial origin, including carcinomas of lung, prostate and breast. Several unique structural and regulatory features, including an unusual ‘cysteine-switch’ motif, discriminate broad-spectrum MMP-26 from most other MMPs. MMP-26 efficiently cleaves fibrinogen and extracellular matrix proteins, including fibronectin, vitronectin and denatured collagen. Protein sequence, minimal modular domain structure, exon–intron mapping and computer modelling demonstrate similarity between MMP-26 and MMP-7 (matrilysin). However, substrate specificity and transcriptional regulation, as well as the functional role of MMP-26 and MMP-7 in cancer, are likely to be distinct. Despite these differences, matrilysin-2 may be a suitable trivial name for MMP-26. Our observations suggest an important specific function for MMP-26 in tumour progression and angiogenesis, and confirm and extend the recent findings of other authors [Park, Ni, Gerkema, Liu, Belozerov and Sang (2000) J. Biol. Chem. 275, 20540–20544; Uría and López-Otín (2000) Cancer Res. 60, 4745–4751; de Coignac, Elson, Delneste, Magistrelli, Jeannin, Aubry, Berthier, Schmitt, Bonnefoy and Gauchat (2000) Eur. J. Biochem. 267, 3323–3329].
APA, Harvard, Vancouver, ISO, and other styles
33

Sullivan, Karen. "Genre-dependent metonymy in Norse skaldic poetry." Language and Literature: International Journal of Stylistics 17, no. 1 (February 2008): 21–36. http://dx.doi.org/10.1177/0963947007085051.

Full text
Abstract:
This article describes a metonymic process which is common in skaldic verse, but rare in everyday language. This process allows one member of a category to stand for another (for example, SEA is referred to by the name of another member of BODIES OF WATER, such as `river' or `fjord'). This process has previously been called `metaphor' (cf. Fidjestøl, 1997). However, I show that the process lacks several characteristics of metaphor as defined in cognitive linguistics, including multiple mappings and the creation of target-domain inferences. I suggest that the process is more similar to metonymies such as Category for Member (cf. Radden and Kövecses, 1999), and should be called `Member for Member' metonymy. I argue that Member for Member metonymy is rare in conversational language because it fails to generate the inferences and cognitive benefits provided by most metaphors and metonymies. However, Member for Member is abundant in skaldic verse, because the aesthetic and sociolinguistic goals of this genre outweigh the considerations of clarity and efficiency imposed on conversation by the Gricean Maxims. I furthermore propose that Member for Member metonymy is a defining feature of classical skaldic poetry, and one that distinguishes this genre from later, more naturalistic styles such as hrynhent. The observation that Member for Member occurs in a specific literary genre like skaldic poetry — even though it is normally barred from conversational language — indicates that cognitive linguists must study the full range of linguistic genres in order to document the cognitive processes that underlie language use.
APA, Harvard, Vancouver, ISO, and other styles
34

Maxwell, Deborah, Chris Speed, and Larissa Pschetz. "Story Blocks." Convergence: The International Journal of Research into New Media Technologies 23, no. 1 (January 24, 2017): 79–97. http://dx.doi.org/10.1177/1354856516675263.

Full text
Abstract:
Digital technology is changing, and has changed the ways we create and consume narratives, from moving images and immersive storyworlds to digital long-form and multi-branched story experiences. At the same time, blockchain, the technology that underpins cryptocurrencies such as Bitcoin, is revolutionizing the way that transactions and exchanges occur. As a globally stored and collaboratively written list of all transactions that have ever taken place within a given system, the blockchain decentralizes money and offers a platform for its creative use. There are already examples of blockchain technologies extending beyond the realm of currency, including the decentralization of domain name servers that are not subject to government takedown and identity management and governance. By framing key blockchain concepts with past and present storytelling practices, this article raises questions as to how the principles and implementation of such distributed ledger technologies might be used within contemporary writing practices – that is, can we imagine stories as a currency or value system? We present three experiments that draw on some of the fundamental principles of blockchain and Bitcoin, as an instantiation of a blockchain implemented application, namely, (1) the ledger, (2) the blocks and (3) the mining process. Each low-fi experiment was intentionally designed to be very accessible to take part in and understand and all were conducted as discrete workshops with different sets of participants. Participants included a cohort of design students, technology industry and design professionals and writing and interaction design academics. Each experiment raised a different set of reflections and subsequent questions on the nature of digital, the linearity (or not) of narratives and collaborative processes.
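The three blockchain principles the workshops drew on (the ledger, the blocks, and the mining process) can be sketched in a few lines. This is a toy illustration under assumed names, not the authors' workshop material or a real Bitcoin implementation.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form, so any change to its contents
    # (including the link to the previous block) is detectable.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, entries, difficulty=2):
    # "Mining": search for a nonce that makes the block hash start with
    # `difficulty` zeros, a toy stand-in for Bitcoin-style proof-of-work.
    nonce = 0
    while True:
        block = {"prev": prev_hash, "entries": entries, "nonce": nonce}
        digest = block_hash(block)
        if digest.startswith("0" * difficulty):
            return block, digest
        nonce += 1
```

Chaining calls (feeding each returned digest into the next mine_block) yields the append-only ledger: tampering with an earlier block changes its hash and breaks every later link, which is what makes a collaboratively written story-as-ledger tamper-evident.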
APA, Harvard, Vancouver, ISO, and other styles
35

Teichert, Thorsten, Alexander Graf, Sajad Rezaei, Philipp Wörfel, and Helen Duh. "Measures of Implicit Cognition for Marketing Research." Marketing ZFP 41, no. 3 (2019): 48–76. http://dx.doi.org/10.15358/0344-1369-2019-3-48.

Full text
Abstract:
Academics and managers need to know that key mental processes occur below the conscious awareness threshold. While unconscious processes largely influence consumer decision-making processes, self-report measures do not reveal these processes adequately. Consequently, marketers need to utilise psychologists’ indirect measures that infer unconscious mental content from reaction-time tasks. Three well-known tools are explicated in the present article: the Emotional Stroop Task, the Implicit Association Test (IAT), and the Approach-Avoidance Task (AAT). Each test taps into a different facet of implicit cognition. This research describes these test instruments’ experimental setups and alternative procedures to guide academics and practitioners when they apply implicit measures. The Ask Your Brain (AYB) survey software is presented as an online research platform for executing all three test types and provides a cost-efficient alternative to lab experiments. In this paper’s conceptual part, we outline the three test instruments’ research paradigms and describe their past applications in the marketing domain. We describe each implicit measurement instrument’s conceptual background, summarize its standard test procedures, and briefly discuss relevant methodological criticisms. We describe how the obtained measurement data should be prepared, condensed, and analysed. Subsequently, we present an empirical case to illustrate the concrete application of the different measurement instruments, utilising empirical data gained from a consumer protection study of 104 South African students. These young adults were confronted with alcohol stimuli in the Emotional Stroop Task, IAT, and AAT. They subsequently performed a discrete choice task related to alcoholic drinks and soft drinks. Based on their drink choices, we explore the extent to which the implicit measures relate to their choice behaviour. 
The Emotional Stroop Task is based on the premise that emotional stimuli attract more visual attention than neutral stimuli. This distraction causes a delay in response when participants are asked to name a displayed word's colour as fast as possible. Although our study could not directly support this premise, alcohol-inclined participants generally reacted more slowly to alcohol and neutral stimuli. The IAT confronts participants with combinations of a bipolar target category and a bipolar attribute category. Category combinations corresponding to the respondent's intuition (compatible) facilitate task performance and result in shorter reaction times. In our study, those individuals who chose significantly more drinks containing alcohol reacted faster to combinations of “alcohol” and “active” (rather than “alcohol” and “miserable”). This finding shows that the IAT can indeed predict choice behaviour. Finally, the AAT postulates that individuals move faster to a desired object and away from an undesired object. Both the reaction times and the error rates indicated this pattern. Individuals were slower and produced more errors during incongruent tasks (push positive items; pull negative items) than congruent tasks (pull liked items, push disliked items). This finding shows that implicit measures can identify consumers' approach and avoidance tendencies. This paper provides methodological insights into three prominent implicit cognition measures, as well as practical suggestions for practitioners and academics. We exemplify each method's usage for research questions in marketing and consumer research. We particularly suggest using the Emotional Stroop Task for studies on attention-based processes, for example, advertisement exposure. The IAT is recommended for assessing richer cognitive processes, such as product or brand images, and the AAT when studying visceral and biological influences on impulsive consumption.
Overall, we encourage marketing researchers to add implicit measures to their toolbox and to explore their contributions further for a better understanding of consumer decision making.
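By way of illustration of the data-condensing step the abstract mentions, IAT reaction times are commonly reduced to a D-score (Greenwald et al., 2003). The sketch below is a simplified, hypothetical version of that scoring step, not the AYB platform's actual pipeline; the function name and the simplifications (only the over-10-second trial cut, one block pair, no error penalty) are ours:

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts, max_rt=10_000):
    """Condense IAT reaction times (in ms) into a D-score.

    Simplified variant of the conventional scoring algorithm:
    trials slower than 10 s are discarded, and the mean latency
    difference between blocks is divided by the pooled
    ("inclusive") standard deviation of both blocks.
    """
    comp = [rt for rt in compatible_rts if rt <= max_rt]
    incomp = [rt for rt in incompatible_rts if rt <= max_rt]
    pooled_sd = statistics.stdev(comp + incomp)
    # Positive D: faster responding in the compatible block, i.e. an
    # implicit association in the direction of the compatible pairing.
    return (statistics.mean(incomp) - statistics.mean(comp)) / pooled_sd
```

In the study described above, a positive D-score for the “alcohol”/“active” pairing would align with participants who chose more alcoholic drinks.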
APA, Harvard, Vancouver, ISO, and other styles
36

Lah, Nataša. "Prilog širenju teorijske domene u povijesnom prostoru povijesti umjetnosti." Ars Adriatica, no. 3 (January 1, 2013): 243. http://dx.doi.org/10.15291/ars.472.

Full text
Abstract:
In the European cultural tradition of the second half of the nineteenth century, the framework of the discipline of art history was outlined through a clearly defined set of boundaries of its research into objects, space and time. By identifying itself as a history of European architecture, painting, sculpture and the applied arts, art history excluded the art of the primitive, Oriental, American and Asian, both early and more developed civilizations from the remit of its research and study (Dilly). However, a scholarly paradigm which was postulated like this could not be applied to the study and assessment of numerous twentieth-century artistic practices which were based on the exploration of cultures as systems of discourse and ideology. In other words, a shattering shift within the discipline was caused by the epochal change of what a paradigm is: as suggested by T. S. Kuhn, it is understood as the normative content of the topic under discussion. Such an understanding of a paradigm indirectly influences scholarly processes because it dictates what is to be researched, which questions are to be asked and how they are to be formulated, and how research findings are to be interpreted. Scholarly interest has turned from a chronological study of the development of artistic styles, schools and movements in the history of European art towards contextual research into the same topics which are set within a spatial and chronological framework of a series of discontinued revolutions in world views. The difficulty of applying a traditional scholarly apparatus to new models was also transferred to the field of aesthetics, which resulted in a complete rejection of the evaluation of art as judgement of taste, as it was specifically perceived in this philosophical (sub)discipline from Baumgarten (1750) onwards.
To some degree, aesthetics was replaced by an interdisciplinary understanding of art theory which developed from various autonomous disciplines which are nonetheless mutually interconnected through their research processes, that is, the social sciences and humanities such as history of art, art criticism, sociology of art, psychology of art, semiotics and semiology of art, philosophy of art and aesthetics. In such a context, our interest is directed towards the understanding of a theoretical field which has been defined as the history of art history, since it outlines the journey of a discipline, in Udo Kultermann’s book of the same name which is on the reading list for the course in art theory in Croatian academic art-historical circles. The study of that section of the book which describes the history of art history in the classical period has demonstrated that the explanations and conclusions contained in it are in contrast to the explanations and conclusions of prominent art theorists, especially those who studied the history of aesthetics and classical philology. We can note the differences on two levels. The first is the methodology of scholarly research, while the second is based on a different perception of the boundaries of the domain of art-historical theory. Kultermann relies on a strict division with regard to content and methodology between art history, philosophy (aesthetics) and historiography, and so, following from this, it appears that classical art history almost did not even exist. On the other hand, the theory of art takes into consideration the nature of classical historiographic standards, the aim of which was to provide examples of the normative content of philosophy, that is, the testimonies of its credibility and manifestation.
Such an approach takes into account the content norms of the preserved classical sources about art, and through it, our perception of the position of art in that period focuses on the theoretical insights which are more encompassing than those encountered in the aforementioned section of Kultermann’s book. Based on this, we suggest that the evaluation of material should follow the methodological standards of art theory in such a way that individual artistic eras are understood and interpreted as historical periods which were unified through invariable paradigms which were always new and which integrated a large number of artistic concepts and ideas but which, nonetheless, possessed a general value in a specific period. According to Bihalji-Merin, we act like this out of gratitude towards an academic discipline which creates an orderly knowledge since the “images which lead us, constructed from a mythical tradition, disperse slowly and instead of them, a critical, human system of thought is formed.” Such a process focuses primarily on the revision of a number of hitherto unrevised prejudices towards theory. However, this is not done on the ruins of the historical legacy of art history but on its foundations.
APA, Harvard, Vancouver, ISO, and other styles
37

Mugarura, Norman. "Law as an equalizer." International Journal of Law and Management 58, no. 6 (November 14, 2016): 602–17. http://dx.doi.org/10.1108/ijlma-07-2015-0043.

Full text
Abstract:
Purpose The purpose of the paper is to examine the law and how it has been utilised in fostering the proper functioning of global markets within member countries and globally. The term “law” in this context refers to international law, whose primary function is to regulate the activities of sovereign states and of organisations created by groups of states. The Statute of the International Court of Justice, which has been ratified as a treaty by all UN nations, provides the most authoritative definition of the sources of international law to date (Schachter, 1991). Under Article 38 of the Statute, there are four main sources of international law: treaties, international custom, general principles of law recognised by civilised nations, and the judicial decisions of the International Court of Justice and other internationally accepted tribunals. These are the materials and processes out of which the rules and principles regulating the international community are developed and sustained. The term “global village” was coined by the Canadian scholar Marshall McLuhan to describe the contraction of the globe into a village because of advances in internet communication technology, increased consciousness and enhanced transport systems (McLuhan, 2003). The current “global village” is manifested in the growing interconnectedness of economies, which has enhanced the ability of states to interact economically, politically and socially. It operates in a way that seems to defy common delimitations such as national borders and states. The global system has created shared synergies, such as the free movement of workers, capital, goods and services. However, it has created varied challenges for individual states, given that challenges in one part of the globe can easily navigate the system to infect other countries, including those that have nothing to do with their causes.
This dichotomy is highlighted by the debt crisis in the Eurozone member countries, which has been simmering since 2009 but recently bubbled to the surface with the crisis in Greece. The challenges in Greece, as in other deeply integrated countries, have not been confined within individual countries or regions but have had a domino effect farther afield due to the growing interconnectedness of economies. There are dualities in the global system, manifested by the fact that developed countries are endowed with the means and therefore have the requisite capacity to harness the law and markets easily, as opposed to their counterparts in the least developed countries (LDCs), where this leverage is non-existent. Less-developed economies are so described because they lack the requisite capacity and cannot compete as efficiently as their counterparts in developed countries. This has translated into ambivalence and half-heartedness in some states’ attitudes towards embracing market discipline. The foregoing challenges have been exacerbated by tenuous legal systems, a lack of robust infrastructure and oversight institutions, and corruption, especially in the LDC cohort. The paper utilises empirical data to evaluate the role of law in fostering the relationship between states and markets. In other words, are the rules governing global markets effectively working to ensure a harmonious co-existence of markets, states and various stakeholders? Do the recent global crises, such as the debt crisis in Greece, mean that the global village is in a quandary? Is there any village that is devoid of challenges, or are they part and parcel of life? The paper utilises empirical examples in both developed and developing countries to evaluate the current state of the contemporary global village in search of answers to the foregoing nagging questions.
Design/methodology/approach The paper adopts a selective review approach in choosing the most appropriate materials for inclusion in its analysis. It is an empirical study based on recent global developments, such as the global financial crisis and the debt crisis in the European Union (EU), undertaken to gain insights into the interplay between law and markets and the occasional disharmony between these two regulatory domains. Findings The issues examined in this paper provide significant insights into the dynamics of the global village, law and markets. The paper delineates that for markets to work effectively, the state needs to remain in the loop and to keep an arm’s-length relationship with the market, because it will have to come in to pick up the pieces when things go wrong. The law cannot be pushed to the sidelines, because it has to provide the instruments for states and markets to operate efficiently within their respective regulatory domains. There is no state, including North Korea (not as open as other economies in Asia), which can close its doors entirely to markets. Experience has demonstrated that law is more than the rules which govern societies; it is a way of life, such that a society is as developed as its legal system. The state needs to use the leverage of the law and take centre stage for markets to remain viable and relevant. Recent crises, such as the debt crisis in Greece and the global financial crisis before it, provide lessons for proponents of the global market system, so that it can proportionately distribute benefits and not just challenges. Research limitations/implications The global market system has imposed varied challenges on states at a scale never envisaged before. Some of the theoretical premises of the paper were based on secondary data sources and were evaluated on a small sample of cases.
The author, therefore, extrapolates that the law seems to have been relegated to the sidelines so as not to interfere with markets. The paper has evaluated the current global market system in the context of contemporary challenges in Europe and in other regions; it would have been better to explore examples from further regions as well. It is evident that the state and the market are two sides of the same coin: they are embedded in each other, their relationship is complementary, and they will have to co-exist. They need to work in tandem, because the market needs the state and the state needs the market. Meanwhile, both the state and the market need the law as an equalizer to ensure they are regulated according to engendered rules. It appears that the disharmony between the state and the market results from the fusion of law and politics, which often produces overlapping interests. The recent global financial crisis and the frantic efforts of EU governments to bail out debt-distressed countries like Greece imply that governments will need to maintain an arm’s-length relationship with markets. When the state takes its hands off entirely, in the author’s view, markets will veer off course. Practical implications The global system has created shared synergies, such as the free movement of workers, capital, goods and services. However, it has created varied challenges for individual states, given that challenges in one part of the globe can easily navigate the system to infect other countries, including those that have nothing to do with their causes. States and stakeholders will need to carefully evaluate the impact of global regulatory initiatives to make sure that, in adopting them, they are not debased or undermined by those initiatives. Social implications For markets to work properly, the state must remain in the loop and keep an arm’s-length relationship with the market, because it will have to come in to pick up the pieces when things go wrong.
The law cannot be pushed to the sidelines, because it has to provide the instruments for states and markets to operate efficiently within their respective regulatory domains. There is no state, including North Korea (not as open as other economies in Asia), which can close its doors entirely to markets. Experience has demonstrated that law is more than the rules which govern societies; it is a way of life, such that a society is as developed as its legal system. The state needs to use the leverage of the law in providing effective regulatory oversight of markets, both domestically and globally. Originality/value The paper was written on the basis of recent global crises, such as the debt crisis in Greece, which were evaluated in the narrow context of the paper’s objectives.
APA, Harvard, Vancouver, ISO, and other styles
38

Xing, Fei, Yi Ping Yao, Zhi Wen Jiang, and Bing Wang. "Fine-Grained Parallel and Distributed Spatial Stochastic Simulation of Biological Reactions." Advanced Materials Research 345 (September 2011): 104–12. http://dx.doi.org/10.4028/www.scientific.net/amr.345.104.

Full text
Abstract:
To date, discrete event stochastic simulations of large-scale biological reaction systems are extremely compute-intensive and time-consuming. Besides, it has been widely accepted that spatial factors play a critical role in the dynamics of most biological reaction systems. The NSM (Next Sub-Volume Method), a spatial variation of Gillespie’s stochastic simulation algorithm (SSA), has been proposed for spatially stochastic simulation of those systems. While it exposes a high degree of parallelism in such systems, the NSM is inherently sequential and still suffers from low simulation speed. Fine-grained parallel execution is an elegant way to speed up sequential simulations. Thus, based on the discrete event simulation framework JAMES II, we design and implement a PDES (Parallel Discrete Event Simulation) time warp (TW) simulator to enable the fine-grained parallel execution of spatial stochastic simulations of biological reaction systems using the ANSM (Abstract NSM), a parallel variation of the NSM. The simulation results of the classical Lotka-Volterra biological reaction system show that our time warp simulator obtains a remarkable parallel speed-up against sequential execution of the NSM.

I. Introduction

The goal of systems biology is to obtain system-level investigations of the structure and behavior of biological reaction systems by integrating biology with systems theory, mathematics and computer science [1][3], since isolated knowledge of the parts cannot explain the dynamics of a whole system. As the complement of “wet-lab” experiments, stochastic simulation, the so-called “dry-computational” experiment, plays an ever more important role in computational systems biology [2].
Among the many methods explored in systems biology, discrete event stochastic simulation is of great importance [4][5][6], since numerous studies have shown that stochasticity, or “noise”, has a crucial effect on the dynamics of small-population biological reaction systems [4][7]. Furthermore, recent research shows that stochasticity is important not only in biological reaction systems with small populations but also in some moderate/large-population systems [7]. To date, Gillespie’s SSA [8] is widely considered the most accurate way to capture the dynamics of biological reaction systems, rather than traditional mathematical methods [5][9]. However, SSA-based stochastic simulation is confronted with two main challenges. Firstly, this type of simulation is extremely time-consuming: when the number of species types and reactions in the biological system is large, the SSA requires a huge number of steps to sample these reactions. Secondly, the assumption that the systems are spatially homogeneous, or well-stirred, is hardly met in most real biological systems, where spatial factors play a key role in behavior [19][20][21][22][23][24]. The next sub-volume method (NSM) [18] presents an elegant way to address the spatial problem via domain partitioning. Unfortunately, sequential stochastic simulation with the NSM is still very time-consuming, and the additionally introduced diffusion among neighboring sub-volumes makes things worse. However, the NSM exposes a very high degree of parallelism among sub-volumes, and parallelization has been widely accepted as the most meaningful way to tackle the performance bottleneck of sequential simulations [26][27]. Thus, adapting parallel discrete event simulation (PDES) techniques to discrete event stochastic simulation is particularly promising.
Although a few attempts have been made [29][30][31], research in this field is still in its infancy and many issues need further discussion. The next section of the paper presents the background and related work in this domain. In Section III, we give the details of the design and implementation of the model interface of the LP paradigm and, more importantly, the time warp simulator based on the discrete event simulation framework JAMES II; the benchmark model and experiment results are shown in Section IV; in the last section, we conclude the paper with some future work.

II. Background and Related Work

A. Parallel Discrete Event Simulation (PDES)

The notion of a logical process (LP) is introduced to PDES as an abstraction of a physical process [26]: a system consisting of many physical processes is usually modeled by a set of LPs. An LP is regarded as the smallest unit that can be executed in PDES, and each LP holds a sub-partition of the whole system’s state variables as its private ones. When an LP processes an event, it can only modify its own state variables. If one LP needs to modify one of its neighbors’ state variables, it has to schedule an event to the target neighbor. That is to say, event message exchange is the only way LPs interact with each other. Because of the data dependences and interactions among LPs, synchronization protocols have to be introduced to PDES to guarantee the so-called local causality constraint (LCC) [26]. By now, a large number of synchronization algorithms have been proposed, e.g. null-message [26], time warp (TW) [32], breathing time warp (BTW) [33], etc. According to whether events of LPs can be processed optimistically, they are generally divided into two types: conservative algorithms and optimistic algorithms. However, Dematté and Mazza have theoretically pointed out the disadvantages of purely conservative parallel simulation for biochemical reaction systems [31].

B. NSM and ANSM

The NSM is a spatial variation of Gillespie’s SSA, which integrates the direct method (DM) [8] with the next reaction method (NRM) [25]. The NSM presents a pretty good way to tackle the aspect of space in biological systems by partitioning a spatially inhomogeneous system into many much smaller “homogeneous” ones, which can be simulated by the SSA separately. However, the NSM is inherently bound to sequential semantics, and all sub-volumes share one common data structure for events or messages. Thus, direct parallelization of the NSM may be confronted with the so-called boundary problem and the high cost of synchronously accessing the common data structure [29]. In order to obtain higher parallel simulation efficiency, a parallelization of the NSM has to first free the NSM from its sequential semantics and second partition the shared data structure into many “parallel” ones. One such approach is the abstract next sub-volume method (ANSM) [30]. In the ANSM, each sub-volume is modeled by a logical process (LP) based on the LP paradigm of PDES, where each LP holds its own event queue and state variables (see Fig. 1). In addition, the so-called retraction mechanism was introduced in the ANSM too (see Algorithm 1). Besides, based on the ANSM, Wang et al. [30] have experimentally tested the performance of several PDES algorithms on the platform called YH-SUPE [27]. However, their platform is designed for general simulation applications, and thus sacrifices some performance by not being able to take into account the characteristics of biological reaction systems. Using ideas similar to the ANSM, Dematté and Mazza have designed and realized an optimistic simulator. However, they processed events in a time-stepped manner, which loses a certain degree of precision compared with the discrete event manner, and it is very hard to transfer a time-stepped simulation to a discrete event one.
In addition, Jeschke et al. [29] have designed and implemented a dynamic time-window simulator to execute the NSM in parallel in a grid computing environment; however, they paid main attention to the analysis of communication costs and to determining a better size of the time window.

Fig. 1: the variations from SSA to NSM and from NSM to ANSM

C. JAMES II

JAMES II is an open source discrete event simulation experiment framework developed by the University of Rostock in Germany. It focuses on high flexibility and scalability [11][13]. Based on the plug-in scheme [12], each function of JAMES II is defined as a specific plug-in type, and all plug-in types and plug-ins are declared in XML files [13]. Combined with the factory method pattern, JAMES II innovatively splits up the model and the simulator, which makes JAMES II very flexible in adding and reusing both models and simulators. In addition, JAMES II supports various types of modelling formalisms, e.g. cellular automata, discrete event system specification (DEVS), SpacePi, StochasticPi, etc. [14]. Besides, a well-defined simulator selection mechanism is designed and developed in JAMES II, which can not only automatically choose the proper simulators according to the modeling formalism but also pick out a specific simulator from a series of simulators supporting the same modeling formalism according to the user settings [15].

III. The Model Interface and Simulator

As we have mentioned in Section II (part C), model and simulator are split up into two separate parts. Thus, in this section, we introduce the design and implementation of the model interface of the LP paradigm and, more importantly, the time warp simulator.

A. The Model Interface of the LP Paradigm

JAMES II provides abstract model interfaces for different modeling formalisms, based on which Wang et al. have designed and implemented a model interface of the LP paradigm [16]. However, this interface does not scale well for parallel and distributed simulation of larger scale systems.
In our implementation, we adapt the interface to parallel and distributed settings. Firstly, the neighbor LP’s reference is replaced by its name in the LP’s neighbor queue, because it is improper, even dangerous, for a local LP to hold references to other LPs in remote memory spaces. In addition, (pseudo-)random numbers play a crucial role in obtaining valid and meaningful results in stochastic simulations. However, it is still very challenging work to find a good random number generator (RNG) [34]. Thus, in order to focus on our problems, we introduce one of the uniform RNGs of JAMES II into this model interface, where each LP holds a private RNG so that the random number streams of different LPs are stochastically independent.

B. The Time Warp Simulator

Based on the simulator interface provided by JAMES II, we design and implement the time warp simulator, which contains the (master-)simulator and the (LP-)simulator. The simulator works strictly in the master/worker(s) paradigm for fine-grained parallel and distributed stochastic simulations. Communication costs are crucial to the performance of a fine-grained parallel and distributed simulation. Based on the Java remote method invocation (RMI) mechanism, P2P (peer-to-peer) communication is implemented among all (master- and LP-)simulators, where a simulator holds the proxies of all targeted simulators that work on remote workers. One advantage of this communication approach is that PDES code can be transferred to various hardware environments, such as clusters, grids and distributed computing environments, with only a little modification; another is that the RMI mechanism is easy to realize and independent of any non-Java libraries. Because of the straggler event problem, states have to be saved in order to roll back events that have been pre-processed optimistically. Each time it is modified, the state is cloned to a queue by the Java clone mechanism.
The problem with this copy state-saving approach is that it consumes a lot of memory. However, the problem can be mitigated by a suitable GVT calculation mechanism. The GVT reduction scheme also has a significant impact on the performance of parallel simulators, since it marks the highest time boundary of events that can be committed, so that the memory of fossils (processed events and states) older than GVT can be reallocated. GVT calculation is very knotty because of the notorious simultaneous reporting problem and the transient message problem. For our problem, another GVT algorithm, called Twice Notification (TN-GVT) (see Algorithm 2), is contributed to this already rich repository instead of implementing one of the GVT algorithms in references [26] and [28]. This algorithm looks like the synchronous algorithm described in reference [26] (pp. 114); however, they are essentially different from each other. Our algorithm never stops the simulators from processing events during GVT reduction, while the algorithm in reference [26] blocks all simulators for GVT calculation. As for the transient message problem, it can be neglected in our implementation, because the RMI-based remote communication approach is synchronous, which means a simulator will not continue its processing until the remote message reaches its destination. And because of this, the high-cost message acknowledgement, prevalent in many classical asynchronous GVT algorithms, is no longer needed either, which should be beneficial to the overall performance of the time warp simulator.

IV. Benchmark Model and Experiment Results

A. The Lotka-Volterra Predator-Prey System

In our experiment, the spatial version of the Lotka-Volterra predator-prey system is introduced as the benchmark model (see Fig. 2).
We choose this system for two reasons: 1) it is a classical experimental model that has been used in many related studies [8][30][31], so it is credible and the simulation results are comparable; 2) it is simple but helpful enough to test the issues we are interested in. The space of the predator-prey system is partitioned into a 2D N×N grid, where N denotes the edge size of the grid. Initially, the populations of Grass, Prey and Predators are set to 1000 in each single sub-volume (LP). In Fig. 2, r1, r2 and r3 stand for the reaction constants of reactions 1, 2 and 3, respectively. We use dGrass, dPrey and dPredator to stand for the diffusion rates of Grass, Prey and Predator, respectively. Similarly to reference [8], we also assume that the population of the grass remains stable, and thus dGrass is set to zero.

R1: Grass + Prey -> 2 Prey (1)
R2: Predator + Prey -> 2 Predator (2)
R3: Predator -> NULL (3)
r1 = 0.01; r2 = 0.01; r3 = 10 (4)
dGrass = 0.0; dPrey = 2.5; dPredator = 5.0 (5)

Fig. 2: predator-prey system

B. Experiment Results

The simulation runs have been executed on a Linux cluster with 40 computing nodes. Each computing node is equipped with two 64-bit 2.53 GHz Intel Xeon quad-core processors and 24 GB RAM, and the nodes are interconnected via Gigabit Ethernet. The operating system is Kylin Server 3.5, with kernel 2.6.18. Experiments have been conducted on benchmark models of different sizes to investigate the execution time and speedup of the time warp simulator. As shown in Fig. 3, the execution times of simulation on a single processor with 8 cores are compared. The result shows that it takes more wall-clock time to simulate larger-scale systems for the same simulation time. This testifies to the fact that larger-scale systems lead to more events in the same time interval. More importantly, the blue line shows that the sequential simulation performance declines very fast when the model scale becomes large.
The bottleneck of the sequential simulator is the cost of accessing a long event queue to choose the next events. Besides, from the comparison between group 1 and group 2 in this experiment, we can also conclude that a high diffusion rate greatly increases the simulation time in both sequential and parallel simulations. This is because the LP paradigm has to split diffusion into two events (diffusion-in and diffusion-out) for the two interacting LPs involved, and a high diffusion rate leads to a high proportion of diffusion relative to reaction. In the second step, shown in Fig. 4, the relationship between the speedups of time warp for two different model sizes and the number of worker cores involved is demonstrated. The speedup is calculated against the sequential execution of the spatial reaction-diffusion system model with the same model size and parameters using the NSM. Fig. 4 shows the comparison of the speedup of time warp on a 64×64 grid and a 100×100 grid. In the case of the 64×64 grid, under the condition that only one node is used, the lowest speedup (a little above 1) is achieved with two cores involved, and the highest speedup (about 6) is achieved with 8 cores. The influence of the number of cores used in parallel simulation is investigated. In most cases, a larger number of cores brings considerable improvements in the performance of parallel simulation. Also, comparing the two results in Fig. 4, the simulation of the larger model achieves a better speedup. Combined with the time tests (Fig. 3), we find that the sequential simulator’s performance declines sharply when the model scale becomes very large, which makes the time warp simulator obtain a correspondingly better speed-up.

Fig. 3: Execution time (wall clock time) of sequential and time warp simulation with respect to different model sizes (N = 32, 64, 100, and 128) and model parameters, based on a single computing node with 8 cores.
Results of the test are grouped by the diffusion rates (Group 1: Sequential 1 and Time Warp 1, dPrey = 2.5, dPredator = 5.0; Group 2: Sequential 2 and Time Warp 2, dPrey = 0.25, dPredator = 0.5).

Fig. 4: Speedup of time warp with respect to the number of worker cores and the model size (N = 64 and 100). Worker cores are chosen from one computing node. Diffusion rates are dPrey = 2.5, dPredator = 5.0 and dGrass = 0.0.

V. Conclusion and Future Work

In this paper, a time warp simulator based on the discrete event simulation framework JAMES II is designed and implemented for fine-grained parallel and distributed discrete event spatial stochastic simulation of biological reaction systems. Several challenges have been overcome, such as state saving, rollback and especially GVT reduction in the parallel execution of simulations. The Lotka-Volterra predator-prey system is chosen as the benchmark model to test the performance of our time warp simulator, and the best experiment results show that it can obtain about a 6-fold speed-up against sequential simulation. The domain this paper concerns is in its infancy, and many interesting issues are worthy of further investigation; for example, there are many other excellent PDES optimistic synchronization algorithms (such as the BTW) as well. As a next step, we would like to add some of them to JAMES II. In addition, Gillespie approximation methods (tau-leap [10] etc.) sacrifice some degree of precision for higher simulation speed, but still do not address the spatial aspect of biological reaction systems. The combination of the spatial element with approximation methods would be very interesting and promising; however, the parallel execution of tau-leap methods will have to overcome many obstacles on the road ahead.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (NSF) Grant (No. 60773019) and the Ph.D. Programs Foundation of the Ministry of Education of China (No. 200899980004).
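For reference, the direct-method SSA (Gillespie 1977) that the NSM applies within each sub-volume can be sketched for the predator-prey reactions above. This is a hypothetical minimal illustration in plain Python for a single well-mixed sub-volume with diffusion omitted, not the authors' JAMES II/ANSM implementation; the function name and parameters are ours, with the reaction constants taken from the paper:

```python
import math
import random

def ssa_predator_prey(grass, prey, predator, t_end,
                      r1=0.01, r2=0.01, r3=10.0, seed=42):
    """Direct-method SSA for one well-mixed sub-volume of the
    Grass/Prey/Predator system; grass is held constant, as in the paper."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        # Propensities of R1: Grass+Prey->2Prey, R2: Predator+Prey->2Predator,
        # R3: Predator->NULL.
        a = [r1 * grass * prey, r2 * predator * prey, r3 * predator]
        a0 = sum(a)
        if a0 == 0.0:          # no reaction can fire any more
            break
        # Exponentially distributed waiting time to the next reaction.
        t += -math.log(1.0 - rng.random()) / a0
        # Choose which reaction fires, proportionally to its propensity.
        pick = rng.random() * a0
        if pick < a[0]:
            prey += 1          # grass population remains stable (dGrass = 0)
        elif pick < a[0] + a[1]:
            prey -= 1
            predator += 1
        else:
            predator -= 1
    return t, prey, predator
```

In the NSM proper, each sub-volume runs such a loop with additional diffusion events between neighbors, and the ANSM maps each sub-volume onto an LP with its own event queue and private RNG.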
The authors would like to express their great gratitude to Dr. Jan Himmelspach and Dr. Roland Ewald at the University of Rostock, Germany, for their invaluable advice and kind help with JAMES II.

References

[1] H. Kitano, "Computational systems biology," Nature, vol. 420, no. 6912, pp. 206-210, November 2002.
[2] H. Kitano, "Systems biology: a brief overview," Science, vol. 295, no. 5560, pp. 1662-1664, March 2002.
[3] A. Aderem, "Systems biology: its practice and challenges," Cell, vol. 121, no. 4, pp. 511-513, May 2005. [Online]. Available: http://dx.doi.org/10.1016/j.cell.2005.04.020
[4] H. de Jong, "Modeling and simulation of genetic regulatory systems: a literature review," Journal of Computational Biology, vol. 9, no. 1, pp. 67-103, January 2002.
[5] C. W. Gardiner, Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics), 3rd ed. Springer, April 2004.
[6] D. T. Gillespie, "Simulation methods in systems biology," in Formal Methods for Computational Systems Biology, ser. Lecture Notes in Computer Science, M. Bernardo, P. Degano, and G. Zavattaro, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5016, ch. 5, pp. 125-167.
[7] Y. Tao, Y. Jia, and T. G. Dewey, "Stochastic fluctuations in gene expression far from equilibrium: Omega expansion and linear noise approximation," The Journal of Chemical Physics, vol. 122, no. 12, 2005.
[8] D. T. Gillespie, "Exact stochastic simulation of coupled chemical reactions," Journal of Physical Chemistry, vol. 81, no. 25, pp. 2340-2361, December 1977.
[9] D. T. Gillespie, "Stochastic simulation of chemical kinetics," Annual Review of Physical Chemistry, vol. 58, no. 1, pp. 35-55, 2007.
[10] D. T. Gillespie, "Approximate accelerated stochastic simulation of chemically reacting systems," The Journal of Chemical Physics, vol. 115, no. 4, pp. 1716-1733, 2001.
[11] J. Himmelspach, R. Ewald, and A. M. Uhrmacher, "A flexible and scalable experimentation layer," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 827-835.
[12] J. Himmelspach and A. M. Uhrmacher, "Plug'n simulate," in 40th Annual Simulation Symposium (ANSS'07). Washington, DC, USA: IEEE, March 2007, pp. 137-143.
[13] R. Ewald, J. Himmelspach, M. Jeschke, S. Leye, and A. M. Uhrmacher, "Flexible experimentation in the modeling and simulation framework JAMES II - implications for computational systems biology," Briefings in Bioinformatics, vol. 11, no. 3, pp. bbp067-300, January 2010.
[14] A. Uhrmacher, J. Himmelspach, M. Jeschke, M. John, S. Leye, C. Maus, M. Röhl, and R. Ewald, "One modelling formalism & simulator is not enough! A perspective for computational biology based on JAMES II," in Formal Methods in Systems Biology, ser. Lecture Notes in Computer Science, J. Fisher, Ed. Berlin, Heidelberg: Springer, 2008, vol. 5054, ch. 9, pp. 123-138. [Online]. Available: http://dx.doi.org/10.1007/978-3-540-68413-8_9
[15] R. Ewald, J. Himmelspach, and A. M. Uhrmacher, "An algorithm selection approach for simulation systems," in Proceedings of PADS, 2008, pp. 91-98.
[16] B. Wang, J. Himmelspach, R. Ewald, Y. Yao, and A. M. Uhrmacher, "Experimental analysis of logical process simulation algorithms in JAMES II," in M. D. Rossetti, R. R. Hill, B. Johansson, A. Dunkin, and R. G. Ingalls, Eds., Proceedings of the Winter Simulation Conference. IEEE, 2009, pp. 1167-1179.
[17] R. Ewald, J. Rössel, J. Himmelspach, and A. M. Uhrmacher, "A plug-in-based architecture for random number generation in simulation systems," in WSC '08: Proceedings of the 40th Conference on Winter Simulation. Winter Simulation Conference, 2008, pp. 836-844.
[18] J. Elf and M. Ehrenberg, "Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases," Systems Biology, vol. 1, no. 2, pp. 230-236, December 2004.
[19] K. Takahashi, S. Arjunan, and M. Tomita, "Space in systems biology of signaling pathways - towards intracellular molecular crowding in silico," FEBS Letters, vol. 579, no. 8, pp. 1783-1788, March 2005.
[20] J. V. Rodriguez, J. A. Kaandorp, M. Dobrzynski, and J. G. Blom, "Spatial stochastic modelling of the phosphoenolpyruvate-dependent phosphotransferase (PTS) pathway in Escherichia coli," Bioinformatics, vol. 22, no. 15, pp. 1895-1901, August 2006.
[21] D. Ridgway, G. Broderick, and M. Ellison, "Accommodating space, time and randomness in network simulation," Current Opinion in Biotechnology, vol. 17, no. 5, pp. 493-498, October 2006.
[22] W. G. Wilson, A. M. Deroos, and E. Mccauley, "Spatial instabilities within the diffusive Lotka-Volterra system: individual-based simulation results," Theoretical Population Biology, vol. 43, no. 1, pp. 91-127, February 1993.
[23] K. Kruse and J. Elf, "Kinetics in spatially extended systems," in System Modeling in Cellular Biology: From Concepts to Nuts and Bolts, Z. Szallasi, J. Stelling, and V. Periwal, Eds. Cambridge, MA: MIT Press, 2006, pp. 177-198.
[24] M. A. Gibson and J. Bruck, "Efficient exact stochastic simulation of chemical systems with many species and many channels," The Journal of Physical Chemistry A, vol. 104, no. 9, pp. 1876-1889, March 2000.
[25] R. M. Fujimoto, Parallel and Distributed Simulation Systems (Wiley Series on Parallel and Distributed Computing). Wiley-Interscience, January 2000.
[26] Y. Yao and Y. Zhang, "Solution for analytic simulation based on parallel processing," Journal of System Simulation, vol. 20, no. 24, pp. 6617-6621, 2008.
[27] G. Chen and B. K. Szymanski, "DSIM: scaling time warp to 1,033 processors," in WSC '05: Proceedings of the 37th Conference on Winter Simulation. Winter Simulation Conference, 2005, pp. 346-355.
[28] M. Jeschke, A. Park, R. Ewald, R. Fujimoto, and A. M. Uhrmacher, "Parallel and distributed spatial simulation of chemical reactions," in 2008 22nd Workshop on Principles of Advanced and Distributed Simulation. Washington, DC, USA: IEEE, June 2008, pp. 51-59.
[29] B. Wang, Y. Yao, Y. Zhao, B. Hou, and S. Peng, "Experimental analysis of optimistic synchronization algorithms for parallel simulation of reaction-diffusion systems," in International Workshop on High Performance Computational Systems Biology, October 2009, pp. 91-100.
[30] L. Dematté and T. Mazza, "On parallel stochastic simulation of diffusive systems," in Computational Methods in Systems Biology, M. Heiner and A. M. Uhrmacher, Eds. Berlin, Heidelberg: Springer, 2008, vol. 5307, ch. 16, pp. 191-210.
[31] D. R. Jefferson, "Virtual time," ACM Transactions on Programming Languages and Systems, vol. 7, no. 3, pp. 404-425, July 1985.
[32] J. S. Steinman, "Breathing time warp," SIGSIM Simulation Digest, vol. 23, no. 1, pp. 109-118, July 1993. [Online]. Available: http://dx.doi.org/10.1145/174134.158473
[33] S. K. Park and K. W. Miller, "Random number generators: good ones are hard to find," Communications of the ACM, vol. 31, no. 10, pp. 1192-1201, October 1988.
APA, Harvard, Vancouver, ISO, and other styles
39

Calliess, Gralf-Peter. "Reflexive Transnational Law." Zeitschrift für Rechtssoziologie 23, no. 2 (January 1, 2002). http://dx.doi.org/10.1515/zfrs-2002-0206.

Full text
Abstract:
Summary: The author examines the emergence of a transnational private law in alternative dispute resolution bodies and private norm formulating agencies from a reflexive law perspective. After introducing the concept of reflexive law, he applies the idea of law as a communicative system to the ongoing debate on the existence of a New Law Merchant or lex mercatoria. He then discusses some features of international commercial arbitration (e.g. the lack of transparency) which hinder self-reference (autopoiesis) and thus the production of legal certainty in lex mercatoria as an autonomous legal system. He then contrasts these findings with the Domain Name Dispute Resolution System, which, as opposed to lex mercatoria, was rationally planned and highly formally organised by WIPO and ICANN, and which allows for self-reference and thus is designed as an autopoietic legal system, albeit with a very limited scope, i.e. the interference of abusive domain name registrations with trademarks (cybersquatting). From the comparison of both examples the author derives some preliminary ideas regarding a theory of reflexive transnational law, suggesting that the established general trend of privatisation of civil law needs to be accompanied by a civilisation of private law, i.e. the constitutionalization of transnational private regimes by embedding them in a procedural constitution of freedom.
APA, Harvard, Vancouver, ISO, and other styles
40

Winskel, Glynn, and Francesco Zappa Nardelli. "New-HOPLA--A Higher-Order Process Language with Name Generation." BRICS Report Series 11, no. 21 (October 11, 2004). http://dx.doi.org/10.7146/brics.v11i21.21846.

Full text
Abstract:
This paper introduces new-HOPLA, a concise but powerful language for higher-order nondeterministic processes with name generation. Its origins as a metalanguage for domain theory are sketched but for the most part the paper concentrates on its operational semantics. The language is typed, the type of a process describing the shape of the computation paths it can perform. Its transition semantics, bisimulation, congruence properties and expressive power are explored. Encodings are given of well-known process algebras, including pi-calculus, Higher-Order pi-calculus and Mobile Ambients.
APA, Harvard, Vancouver, ISO, and other styles
41

Yahashiri, Atsushi, Matthew A. Jorgenson, and David S. Weiss. "The SPOR Domain, a Widely Conserved Peptidoglycan Binding Domain That Targets Proteins to the Site of Cell Division." Journal of Bacteriology 199, no. 14 (April 10, 2017). http://dx.doi.org/10.1128/jb.00118-17.

Full text
Abstract:
ABSTRACT Sporulation-related repeat (SPOR) domains are small peptidoglycan (PG) binding domains found in thousands of bacterial proteins. The name “SPOR domain” stems from the fact that several early examples came from proteins involved in sporulation, but SPOR domain proteins are quite diverse and contribute to a variety of processes that involve remodeling of the PG sacculus, especially with respect to cell division. SPOR domains target proteins to the division site by binding to regions of PG devoid of stem peptides (“denuded” glycans), which in turn are enriched in septal PG by the intense, localized activity of cell wall amidases involved in daughter cell separation. This targeting mechanism sets SPOR domain proteins apart from most other septal ring proteins, which localize via protein-protein interactions. In addition to SPOR domains, bacteria contain several other PG-binding domains that can exploit features of the cell wall to target proteins to specific subcellular sites.
APA, Harvard, Vancouver, ISO, and other styles
42

Ye, Huanpeng, Zhen Fan, Guohong Chai, Guangye Li, Zixuan Wei, Jie Hu, Xinjun Sheng, Liang Chen, and Xiangyang Zhu. "Self-Related Stimuli Decoding With Auditory and Visual Modalities Using Stereo-Electroencephalography." Frontiers in Neuroscience 15 (May 4, 2021). http://dx.doi.org/10.3389/fnins.2021.653965.

Full text
Abstract:
Name recognition plays an important role in self-related cognitive processes and also contributes to a variety of clinical applications, such as autism spectrum disorder diagnosis and consciousness disorder analysis. However, most previous name-related studies adopted noninvasive EEG or fMRI recordings, which were limited by low spatial resolution and temporal resolution, respectively, and thus millisecond-level response latencies in precise brain regions could not be measured using these noninvasive recordings. Using invasive stereo-electroencephalography (SEEG) recordings, which have high resolution in both the spatial and temporal domains, the current study distinguished the neural response to one's own name from that to a stranger's name, and explored common active brain regions in both auditory and visual modalities. The neural activities were classified using spatiotemporal features of the high-gamma, beta, and alpha bands. Results showed that different names could be decoded using multi-region SEEG signals, and the best classification performance was achieved in the high-gamma (60–145 Hz) band. In this case, auditory and visual modality-based name classification accuracies were 84.5 ± 8.3% and 79.9 ± 4.6%, respectively. Additionally, some single regions such as the supramarginal gyrus, middle temporal gyrus, and insula could also achieve remarkable accuracies for both modalities, supporting their roles in the processing of self-related information. The average latency of the difference between the two responses in these regions was 354 ± 63 ms in the auditory modality and 285 ± 59 ms in the visual modality. This study suggests that name recognition is attributed to a distributed brain network, and the subsets with decoding capabilities might be potential implantation regions for awareness detection and cognition evaluation.
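The decoding pipeline summarized in the abstract above (band-limited power per region as spatiotemporal features, fed to a classifier) can be sketched schematically. The filter band, window, and array shapes here are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np

def bandpower(x: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of a 1-D signal within [lo, hi] Hz."""
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    band = (freqs >= lo) & (freqs <= hi)
    return float(psd[band].mean())

def features(trial: np.ndarray, fs: float) -> np.ndarray:
    """High-gamma (60-145 Hz) power per channel for one trial,
    given a (channels x samples) array; one feature per channel."""
    return np.array([bandpower(ch, fs, 60.0, 145.0) for ch in trial])

# Illustrative: 4 channels, 1 s of synthetic data at 1000 Hz.
rng = np.random.default_rng(0)
trial = rng.standard_normal((4, 1000))
print(features(trial, 1000.0).shape)  # -> (4,)
```

A classifier (e.g. a linear model) would then be trained on such feature vectors, one per trial, labelled by which name was presented.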
APA, Harvard, Vancouver, ISO, and other styles
43

Alkan, Mehmet, and Elif Taş Arslan. "DESIGN AND DEVELOPMENT OF AN LADM-BASED EXTERNAL DATA MODEL FOR LAND REGISTRY AND CADASTRE TRANSACTIONS IN TURKEY: A CASE STUDY OF TREASURY REAL PROPERTIES." Boletim de Ciências Geodésicas 27, no. 1 (2021). http://dx.doi.org/10.1590/s1982-21702021000100004.

Full text
Abstract:
Abstract: The processes starting with the identification and registration of treasury properties have an essential place in cadastral systems. Spatial data modelling studies were conducted from 2002 to establish a common standard structure based on the fundamental similarities of land management systems. These studies began under the name Core Cadastral Domain Model (CCDM); since 2006, the work has continued under the name LADM. This model was accepted in 2012 as a standard model in the field of land administration by the International Organization for Standardization (ISO). In this study, an external model class is proposed for LADM transactions related to the Treasury's real properties, which are related to the National Property Automation Project (MEOP). In order to determine the deficiencies of the current external model, databases containing records of spatial data and property rights were examined, and the deficiencies related to transactions on treasury properties were identified. The created external class is associated with LADM's LA_Party, LA_RRR, LA_SpatialUnit and LA_BAUnit master classes. Herewith the standardization of the external data model is ensured. If the external model is implemented by the responsible institutions, the standardization of the archiving processes will make registration easier and faster.
APA, Harvard, Vancouver, ISO, and other styles
44

Heikkinen, Mikko, Anniina Kuusijärvi, Ville-Matti Riihikoski, and Leif Schulman. "Multi-domain Collection Management Simplified — the Finnish National Collection Management System Kotka." Biodiversity Information Science and Standards 4 (October 9, 2020). http://dx.doi.org/10.3897/biss.4.59119.

Full text
Abstract:
Many natural history museums share a common problem: a multitude of legacy collection management systems (CMS) and the difficulty of finding a new system to replace them. Kotka is a CMS developed starting in 2011 at the Finnish Museum of Natural History (Luomus) and Finnish Biodiversity Information Facility (FinBIF) (Heikkinen et al. 2019, Schulman et al. 2019) to solve this problem. It has grown into a national system used by all natural history museums in Finland, and currently contains over two million specimens from several domains (zoological, botanical, paleontological, microbial, tissue sample and botanic garden collections). Kotka is a web application where data can be entered, edited, searched and exported through a browser-based user interface. It supports designing and printing specimen labels, handling collection metadata and specimen transactions, and helps support Nagoya protocol compliance. Creating a shared system for multiple institutions and collection types is difficult due to differences in their current processes, data formats, future needs and opinions. The more independent actors there are involved, the more complicated the development becomes. Successful development requires some trade-offs. Kotka has chosen features and development principles that emphasize fast development into a multitude of different purposes. Kotka was developed using agile methods with a single person (a product owner) making development decisions, based on e.g., strategic objectives, customer value and user feedback. Technical design emphasizes efficient development and usage over completeness and formal structure of the data. It applies simple and pragmatic approaches and improves collection management by providing practical tools for the users. In these regards, Kotka differs in many ways from a traditional CMS. Kotka stores data in a mostly denormalized free text format and uses a simple hierarchical data model. 
This allows greater flexibility and makes it easy to add new data fields and structures based on user feedback. Data harmonization and quality assurance is a continuous process, instead of being done before data enter the system. For example, specimen data with a taxon name can be entered into Kotka before the taxon name has been entered into the accompanying FinBIF taxonomy database. Example: simplified data about two specimens in Kotka, which have not been fully harmonized yet:

Specimen 1 - Taxon: Corvus corone cornix; Country: FI; Collector: Doe, John; Coordinates: 668, 338; Coordinate system: Finnish uniform coordinate system

Specimen 2 - Taxon: Corvus cornix; Country: Finland; Collector: Doe, J.; Coordinates: 60.2442, 25.7201; Coordinate system: WGS84

Kotka's data model does not follow standards, but has grown organically to reflect practical needs of the users. This is true particularly of data collected in research projects, which are often unique and complicated (e.g. complex relationships between species), requiring new data fields and/or storing data as free text. The majority of the data can be converted into simplified standard formats (e.g. Darwin Core) for sharing. The main challenge with this has been the vague definitions of many data sharing formats (e.g. Darwin Core, CETAF Specimen Preview Profile (CETAF 2020)), which allow different interpretations. Kotka trusts its users: it places very few limitations on what users can do, and has very simple user role management. Kotka stores the full history of all data, which allows fixing any possible errors and prevents data loss. Kotka is open source software, but is tightly coupled with the infrastructure of the Finnish Biodiversity Information Facility (FinBIF).
Currently, it is only offered as an online service (Software as a Service) hosted by FinBIF. However, it could be developed into a more modular system that could, for example, utilize multiple different database backends and taxonomy data sources.
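The continuous-harmonization idea described in the abstract above (raw records entered first, normalized later) can be sketched as a post-hoc cleanup pass. The field names and mapping tables below are hypothetical, not Kotka's actual schema:

```python
# Hypothetical normalization pass over raw specimen records.
COUNTRY_CODES = {"FI": "Finland"}                       # illustrative mapping
SYNONYMS = {"Corvus corone cornix": "Corvus cornix"}    # illustrative taxonomy

def harmonize(record: dict) -> dict:
    """Return a copy of a raw record with country codes expanded and
    taxon names mapped to their accepted form; unknown values pass through."""
    out = dict(record)
    out["country"] = COUNTRY_CODES.get(record["country"], record["country"])
    out["taxon"] = SYNONYMS.get(record["taxon"], record["taxon"])
    return out

raw = {"taxon": "Corvus corone cornix", "country": "FI"}
print(harmonize(raw))  # -> {'taxon': 'Corvus cornix', 'country': 'Finland'}
```

Because the full history of all data is stored, such a pass can run repeatedly as the mapping tables grow, without risk of losing the originally entered values.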
APA, Harvard, Vancouver, ISO, and other styles
45

Kehlen, Astrid, Monique Haegele, Livia Böhme, Holger Cynis, Torsten Hoffmann, and Hans-Ulrich Demuth. "N-terminal pyroglutamate formation in CX3CL1 is essential for its full biologic activity." Bioscience Reports 37, no. 4 (August 24, 2017). http://dx.doi.org/10.1042/bsr20170712.

Full text
Abstract:
CX3CL1 (fractalkine) is a unique member of the CX3C chemokine family and mediates both adhesion and cell migration in inflammatory processes. Frequently, the activity of chemokines depends on a modified N-terminus, as described for the N-terminus of CCL2, which is modified to a pGlu- (pyroglutamate) residue by QC (glutaminyl cyclase) activity. Here, we assess the role of the pGlu-modified residue of the CX3CL1 chemokine domain in human endothelial and smooth muscle cells. For the first time, we demonstrated using MS that QC (QPCT, gene name of QC) or its isoenzyme isoQC (iso-glutaminyl cyclase; QPCTL, gene name of isoQC) catalyses the formation of N-terminally modified pGlu-CX3CL1. Expression of QPCT is co-regulated with its substrates CCL2 and CX3CL1 in HUVECs (human umbilical vein endothelial cells) and HCASMCs (human coronary artery smooth muscle cells) upon stimulation with TNF-α and IL-1β, whereas QPCTL expression is not affected. By contrast, inhibition of the NF-κB pathway using an IKK2 inhibitor decreased the expression of the co-regulated targets QPCT, CCL2, and CX3CL1. Furthermore, RNAi-mediated inhibition of QPCT expression resulted in a reduction in CCL2 and CX3CL1 mRNA. In HCASMCs, N-terminally modified pGlu1-CX3CL1 induced a significantly stronger effect on phosphorylation of ERK (extracellular signal regulated kinase) 1/2, Akt (protein kinase B), and p38 (p38 mitogen-activated protein kinase) kinases than the immature Gln1-CX3CL1 in a time- and concentration-dependent manner. Furthermore, pGlu1-CX3CL1 affected the expression of CCL2, CX3CL1, and the adhesion molecule ICAM1/CD54 (intercellular adhesion molecule-1), inducing higher expression levels than its Gln1-variant in both HCASMCs and HUVECs. These results strongly suggest that QC-catalysed N-terminal pGlu formation of CX3CL1 is important for its stability or the interaction with its receptor, and they open new insights into the function of QC in inflammation.
APA, Harvard, Vancouver, ISO, and other styles
46

Jagiello, Robert, Ulrich Pomper, Makoto Yoneya, Sijia Zhao, and Maria Chait. "Rapid Brain Responses to Familiar vs. Unfamiliar Music – an EEG and Pupillometry study." Scientific Reports 9, no. 1 (October 30, 2019). http://dx.doi.org/10.1038/s41598-019-51759-9.

Full text
Abstract:
Abstract: Human listeners exhibit marked sensitivity to familiar music, perhaps most readily revealed by popular "name that tune" games, in which listeners often succeed in recognizing a familiar song based on an extremely brief presentation. In this work, we used electroencephalography (EEG) and pupillometry to reveal the temporal signatures of the brain processes that allow differentiation between a familiar, well-liked piece of music and an unfamiliar one. In contrast to previous work, which has quantified gradual changes in pupil diameter (the so-called "pupil dilation response"), here we focus on the occurrence of pupil dilation events. This approach is substantially more sensitive in the temporal domain and allowed us to tap early activity within the putative salience network. Participants (N = 10) passively listened to snippets (750 ms) of a familiar, personally relevant song and of an acoustically matched, unfamiliar song, presented in random order. A group of control participants (N = 12), who were unfamiliar with all of the songs, was also tested. We reveal a rapid differentiation between snippets from familiar and unfamiliar songs: pupil responses showed a greater dilation rate to familiar music from 100–300 ms post-stimulus-onset, consistent with a faster activation of the autonomic salience network. Brain responses measured with EEG showed a later differentiation between familiar and unfamiliar music from 350 ms post onset. Remarkably, the cluster pattern identified in the EEG response is very similar to that commonly found in classic old/new memory retrieval paradigms, suggesting that the recognition of brief, randomly presented music snippets draws on similar processes.
APA, Harvard, Vancouver, ISO, and other styles
47

Moreno-Martínez, Francisco Javier, and Iván Moratilla-Pérez. "Naming and Categorization in Healthy Participants: Crowded Domains and Blurred Effects of Gender." Spanish Journal of Psychology 19 (2016). http://dx.doi.org/10.1017/sjp.2016.59.

Full text
Abstract:
Abstract: The study of category-specific effects has produced compelling insights into the structure, organization and functioning of cognitive processes. According to some accounts, the greater intra-category structural similarity of living things (LT) contributes to faster access to superordinate pictorial information, making LT easier to classify than structurally dissimilar items (i.e., nonliving things: NLT). Conversely, LT would be harder to name than NLT, as they must compete with within-domain structurally similar items in order to be properly discriminated. Additionally, it has been reported that men perform better with NLT than women, whereas women surpass men with LT, but the reasons for this remain unclear. In the current study, we explored both the visual crowding hypothesis and the effects of gender by testing the performance of 40 healthy participants in classification and naming tasks. Analyses revealed that LT were classified significantly faster than NLT (ηp² = .11), but named significantly slower (ηp² = .25). Interestingly, the same results persisted after removing from the analyses atypical categories that are known to distort the interpretation of data. Moreover, we did not find the expected effects of gender. Men were more accurate than women at naming NLT (ηp² = .13), and women did not surpass men in any task.
APA, Harvard, Vancouver, ISO, and other styles
48

Yoder, Matthew, and Dmitry Dmitriev. "Nomenclature over 5 years in TaxonWorks: Approach, implementation, limitations and outcomes." Biodiversity Information Science and Standards 5 (September 20, 2021). http://dx.doi.org/10.3897/biss.5.75441.

Full text
Abstract:
We are now over four decades into digitally managing the names of Earth's species. As the number of federating (i.e., software that brings together previously disparate projects under a common infrastructure, for example TaxonWorks) and aggregating (e.g., International Plant Name Index, Catalog of Life (CoL)) efforts increase, there remains an unmet need for both the migration forward of old data, and for the production of new, precise and comprehensive nomenclatural catalogs. Given this context, we provide an overview of how TaxonWorks seeks to contribute to this effort, and where it might evolve in the future. In TaxonWorks, when we talk about governed names and relationships, we mean it in the sense of existing international codes of nomenclature (e.g., the International Code of Zoological Nomenclature (ICZN)). More technically, nomenclature is defined as a set of objective assertions that describe the relationships between the names given to biological taxa and the rules that determine how those names are governed. It is critical to note that this is not the same thing as the relationship between a name and a biological entity, but rather nomenclature in TaxonWorks represents the details of the (governed) relationships between names. Rather than thinking of nomenclature as changing (a verb commonly used to express frustration with biological nomenclature), it is useful to think of nomenclature as a set of data points, which grows over time. For example, when synonymy happens, we do not erase the past, but rather record a new context for the name(s) in question. The biological concept changes, but the nomenclature (names) simply keeps adding up. Behind the scenes, nomenclature in TaxonWorks is represented by a set of nodes and edges, i.e., a mathematical graph, or network (e.g., Fig. 1). 
Most names (i.e., nodes in the network) are what TaxonWorks calls "protonyms": monomial epithets that are used to construct, for example, binomial names (not to be confused with "protonym" sensu the ICZN). Protonyms are linked to other protonyms via relationships defined in NOMEN, an ontology that encodes the governed rules of nomenclature. Within the system, all data, nodes and edges, can be cited, i.e., linked to a source and therefore anchored in time and tied to authorship, and annotated with a variety of annotation types (e.g., notes, confidence levels, tags). The actual building of the graphs is greatly simplified by multiple user interfaces that allow scientists to review (e.g., Fig. 2), create, filter, and add to (again, not "change") the nomenclatural history. As in any complex knowledge-representation model, there are outlying scenarios, or edge cases, that emerge, making certain human tasks more complex than others. TaxonWorks is no exception; it has limitations in terms of what and how some things can be represented. While many complex representations are hidden by simplified user interfaces, some tasks, for example the handling of the ICZN's family-group names, batch-loading of invalid relationships, and comparative syncing against external resources, need more work to simplify the processes presently required to meet catalogers' needs. The depth at which TaxonWorks can capture nomenclature is only really valuable if it can be used by others. This is facilitated by the application programming interface (API) serving its data (https://api.taxonworks.org), by serving text files, and by exports to standards like the emerging Catalog of Life Data Package. With reference to real-world problems, we illustrate different ways in which the API can be used, for example, integrated into spreadsheets, through the use of command line scripts, and in the generation of public-facing websites.
Behind all this effort are an increasing number of people recording help videos, developing documentation, and troubleshooting software and technical issues. Major contributions have come from developers at many skill levels, from high school to senior software engineers, illustrating that TaxonWorks leads in enabling both technical and domain-based contributions. The health and growth of this community is a key factor in TaxonWorks' potential long-term impact in the effort to unify the names of Earth's species.
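The abstract above models nomenclature as an append-only graph: protonym nodes connected by cited relationship edges, where synonymy adds a new edge rather than erasing anything. A minimal sketch of that idea (the relationship name, citation, and class layout are illustrative assumptions, not TaxonWorks internals):

```python
from dataclasses import dataclass, field

@dataclass
class NomenclatureGraph:
    """Append-only record of names (nodes) and governed relationships (edges)."""
    nodes: set = field(default_factory=set)    # protonym strings
    edges: list = field(default_factory=list)  # (subject, relation, object, source)

    def add_protonym(self, name: str) -> None:
        self.nodes.add(name)

    def relate(self, subject: str, relation: str, obj: str, source: str) -> None:
        # A new assertion (e.g. synonymy) is appended with its citation;
        # earlier assertions are never removed, so history only grows.
        self.edges.append((subject, relation, obj, source))

g = NomenclatureGraph()
g.add_protonym("aus")
g.add_protonym("bus")
g.relate("aus", "synonym_of", "bus", "Smith 1900")  # hypothetical citation
print(len(g.edges))  # -> 1
```

Querying the current accepted name then means reading the latest cited edges, while the full set of edges preserves the nomenclatural history described in the abstract.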
APA, Harvard, Vancouver, ISO, and other styles
49

Sorli, Massimo. "LA MECCANICA DELLE MACCHINE NELL’INNOVAZIONE DEI PRODOTTI E DEI PROCESSI." Istituto Lombardo - Accademia di Scienze e Lettere - Incontri di Studio, October 5, 2018. http://dx.doi.org/10.4081/incontri.2018.390.

Full text
Abstract:
The present report highlights the current and future role of the techniques and methodologies of the Mechanics of Machines, both in the design of devices and systems and in university training courses. The underlying theme of the presentation lies in the interpretation of the physical phenomena that govern the operation of machines. This is the foundation that allows an input-output interaction to be defined between the physical quantities operating on the machine. The cause-effect relation offers the possibility to determine a set of analytical relations for predicting the operation of the machine and to simulate theoretical and/or numerical trends, in the time or frequency domain, of the significant mechanical quantities. The breadth of the physical phenomena that arise in the operation of a machine is evident, resulting in a broad variety of related Mechanics of Machines topics: from the analysis of contact between bodies to tribological aspects, from body geometry to kinematics, from rigid to deformable body dynamics, from the interaction between mechanical bodies to man-machine interaction, from the kinematic and dynamic behavior of a mechanical system to its interface with actuators, sensing and control, to name just some of them. It should also be considered that the interpretation of the physical phenomena in machine elements has to be supported by significant experimental campaigns, specifically reproduced in the laboratory or related to data from real applications in the different application domains.
The evolution of Mechanics of Machines has proved over the years to be able to respond to two interacting and converging demands: on the one hand, the need to identify analytical relations, possibly not based solely on mappings of data but rather on relations representative of physical quantities; on the other hand, the need, even at the university level, to conduct appropriate laboratory test campaigns related to the real field operation of the machine. With reference to the first objective, the need to determine algorithms, typically non-linear, and the consequent simulation setups has turned Mechanics of Machines from a theoretical subject into a subject with strong computational valences, creating tools for predicting the behavior of devices and systems in relation to their diagnosis and health state. The second objective has required achieving competence also in the field of test rigs, sensing and measuring/data acquisition systems. The paper identifies and presents the different areas related to Mechanics of Machines, exposing in a matrix the enabling technologies on the one hand and the application domains to which they apply on the other. The enabling technologies traditionally belong to the topics of kinematics, statics, dynamics (linear and nonlinear), the interactions with the environment (force fields, interactions with fluids) and between surfaces (lubrication), control, automation and system identification, as well as the study and identification of vibratory, vibro-acoustic and tribological phenomena, mechatronics, fluid-structure interactions, monitoring, diagnostics and prognostics of mechanical systems, fluid automation and robotics, fluidics and microfluidics, the implementation of pneumatic, hydraulic, electric and non-conventional technologies, and environmentally friendly and renewable energy systems.
The application domains relate to mechanical systems such as driving and operating machinery, mechanical devices, mechanisms, transmissions and drives, automatic and robotic systems, road and rail vehicles, fixed-wing aircraft and rotorcraft, transportation and lifting systems, systems for the production of energy, and biomechanical systems. A summary of the ongoing activities in the different research groups of the Italian universities is then presented, which also highlights the methodology of the studies addressed, strongly aimed at a unifying approach through the use of fundamental methods of theoretical, applied and experimental mechanics, with attention to environmental and energy sustainability, and significantly connected on one side with the state of international research and on the other with the industrial and manufacturing reality of the country. At the end of the paper, sectors of the Mechanics of Machines that, in the opinion of the writer, need to be investigated further are discussed. Some technological challenges, such as prognostic models applied to servo systems in primary flight controls for aircraft applications, are outlined. The state of the art in that domain highlights the contribution to the innovation of processes and products, a challenge that requires going back to the input-output interactions at the base-mechanics layer. Without those aspects it is impossible to predict the evolution of degradation in actuation systems and to determine the remaining life of a mechanical device.
APA, Harvard, Vancouver, ISO, and other styles
50

Glover, Stuart. "Failed Fantasies of Cohesion: Retrieving Positives from the Stalled Dream of Whole-of-Government Cultural Policy." M/C Journal 13, no. 1 (March 21, 2010). http://dx.doi.org/10.5204/mcj.213.

Full text
Abstract:
In mid-2001, in a cultural policy discussion at Arts Queensland, an Australian state government arts policy and funding apparatus, a senior arts bureaucrat seeking to draw a funding client’s gaze back to the bigger picture of what the state government was trying to achieve through its cultural policy settings excused his own abstracting comments with the phrase, “but then I might just be a policy ‘wank’”. There was some awkward laughter before one of his colleagues asked, “did you mean a policy ‘wonk’”? The incident was a misstatement of a term adopted in the 1990s to characterise the policy workers in the Clinton White House (Cunningham). This was not its exclusive use, but many saw Clinton as an exemplary wonk: less a pragmatic politician than one entertained by the elaboration of policy. The policy work of Clinton’s kitchen cabinet was, in part, driven by a pervasive rationalist belief in the usefulness of ordered policy processes as a method of producing social and economic outcomes, and, in part, by the seductions of policy-play: its ambivalences, its conundrums, and, in some sense, its aesthetics (Klein 193-94). There, far from being characterised as unproductive “self-abuse” of the body-politic, policy processes were alive as a pragmatic technology, an operationalisation of ideology, as an aestheticised field of play, but more than anything as a central rationalist tenet of government action. This final idea—the possibilities of policy for effecting change, promoting development, meeting government objectives—is at the centre of the bureaucratic imagination. Policy is effective. And a concomitant belief is that ordered or organised policy processes result in the best policy and the best outcomes. 
Starting with Harold Lasswell, policy theorists extended the general rationalist suppositions of Western representative democracies into executive government by arguing for the value of information/knowledge and the usefulness of ordered process in addressing thus identified policy problems. In the post-war period particularly, a case can be made for the usefulness of policy processes to government—although, in a paradox, these rationalist conceptions of the policy process were strangely irrational, even Utopian, in their view of the transformational capacities of policy. The early policy scientists often moved beyond a view of policy science as a useful tool, to the advocacy of policy science and the policy scientist as panaceas for public ills (Parsons 18-19). The Utopian ambitions of policy science find one of their extremes in the contemporary interest in whole-of-government approaches to policy making. Whole-of-governmentalism, concern with the co-ordination of policy and delivery across all areas of the state, can be seen as produced out of Western governments’ paradoxical concern with (on one hand) order, totality, and consistency, and (on the other) deconstructing existing mechanisms of public administration. Whole-of-governmentalism requires a horizontal purview of government goals, programs, outputs, processes, politics, and outcomes, alongside—and perhaps in tension with—the long-standing vertical purview that is fundamental to ministerial responsibility. This often presents a set of public management problems largely internal to government. Policy discussion and decision-making, while affecting community outcomes and stakeholder utility, are, in this circumstance, largely inter-agency in focus. Any eventual policy document may well have bureaucrats rather than citizens as its target readers—or at least as its closest readers. Internally, cohesion of objective, discourse, tool and delivery are pursued as prime interests of policy making. 
Failing at Policy So what happens when whole-of-government policy processes, particularly cultural policy processes, break down or fail? Is there anything productive to be retrieved from a failed fantasy of policy cohesion? This paper examines the utility of a failure to cohere and order in cultural policy processes. I argue that the conditions of contemporary cultural policy-making, particularly the tension between the “boutique” scale of cultural policy-making bodies and the revised, near universal, remit of cultural policy, require policy work to be undertaken in an environment and in such a way that failure is almost inevitable. Coherence and cohesion are fundamental principles of whole-of-government policy, but cultural policy ambitions are necessarily too comprehensive to be achievable. This is especially so for the small arts or cultural offices of government that normally act as lead agencies for cultural policy development within government. Yet these failed processes can still give rise to positive outcomes, or positive intermediate outputs that can be taken up in a productive way in the ongoing cycle of policy work that characterises contemporary cultural governance. Herein, I detail the development of Building the Future, a cultural policy planning paper (and the name of a policy planning process) undertaken within Arts Queensland in 1999 and 2000. (While this process is now ten years in the past, it is only with a decade past that as a consultant I am in a position to write about the material.) The abandonment of this process before the production of a public policy program allows something to be said about the utility and role of failure in cultural policy-making. 
The working draft of Building the Future never became a public document, but the eight months of its development helped produce a series of shifts in the discourse of Queensland Government cultural policy: from “arts” to “creative industries”; and from arts bureaucracy-centred cultural policy to whole-of-government policy frameworks. These concepts were then taken up and elaborated in the Creative Queensland policy statement published by Arts Queensland in October 2002, particularly the concern with creative industries; whole-of-government cultural policy; and the repositioning of Arts Queensland as a service agency to other potential cultural funding-bodies within government. Despite the failure of the Building the Future process, it had a role in the production of the policy document and policy processes that superseded it. This critique of cultural policy-making, rather than of cultural policy texts, announcements and settings, is offered as part of a project to bring to cultural policy studies material and theoretical accounts of the particularities of making cultural policy. While directions in cultural policy have much to do with the overall directions of government—which might over the past decade be categorised as a focus on de-regulation and the out-sourcing of services—there are developments in cultural policy settings and in cultural policy processes that are particular to cultural policy and cultural policy-making. Central to the development of cultural policy studies and to cultural policy is a transformational broadening of the operant definition of culture within government (O'Regan). Following Raymond Williams, the domain of culture is broadened to include high culture, popular culture, folk culture and the culture of everyday life. Accordingly, in some sense, every issue of governance is deemed to have a cultural dimension—be it policy questions around urban space, tourism, community building and so on. 
Contemporary governments are required to act with a concern for cultural questions both within and across a number of long-persisting and otherwise discrete policy silos. This has implications for cultural policy makers and for program delivery. The definition of culture as “everyday life”, while truistically defendable, becomes unwieldy as an imprimatur or a container for administrative activity. Transforming cultural policy into a domain incorporating most social policy and significant elements of economic policy makes the domain titanically large. Potentially, it compromises usual government efforts to order policy activity through the division or apportionment of responsibility (Glover and Cunningham 19). The problem has given rise to a new mode of policy-making which attends to the co-ordination of policy across and between levels of government, known as whole-of-government policy-making (see O’Regan). Within the domain of cultural policy the task of whole-of-government cultural policy is complicated by the position of, and the limits upon, arts and cultural bureaux within state and federal governments. Dedicated cultural planning bureaux often operate as “boutique” agencies. They are usually discrete line agencies or line departments within government—only rarely are they part of the core policy function of departments of a Premier or a Prime Minister. Instead, like most line agencies, they lack the leverage within the bureaucracy or policy apparatus to deliver whole-of-government cultural policy change. In some sense, failure is the inevitable outcome of all policy processes, particularly when held up against the mechanistic representation of policy processes typical of policy handbooks (see Bridgman and Davis 42). Against such models, which describe policy as a series of discrete linear steps, all policy efforts fail. 
The rationalist assumptions of early policy models—and the rigid templates for policy process that arise from their assumptions—in retrospect condemn every policy process to failure or at least profound shortcoming. This is particularly so with whole-of-government cultural policy making. To re-think this, it can be argued that the error then is not really in the failure of the process, which is invariably brought about by the difficulty for a coherent policy process to survive exogenous complexity, but instead the error rests with the simplicity of policy models and assumptions about the possibility of cohesion. In some sense, mechanistic policy processes make failure endogenous. The contemporary experience of making policy has tended to erode any fantasies of order, clear process, or, even, clear-sightedness within government. Achieving coherence in the policy message is nigh on impossible—likewise, cohesion of the policy framework is unlikely. Yet, importantly, failed policy is not without value. The churn of policy work—the exercise of attempting coherent policy-making—constitutes, in some sense, the deliberative function of government, and potentially operates as a force (and site) of change. Policy briefings, reports, and draft policies—the constitution of ideas in the policy process and the mechanism for their dissemination within the body of government and perhaps to other stakeholders—are discursive acts in the process of extending the discourse of government and forming its later actions. For arts and cultural policy agencies in particular, which act without the leverage or resources of central agencies, the expansive ambitions of whole-of-government cultural policy make failure inevitable. In such a circumstance, retrieving some benefits at the margins of policy processes, through the churn of policy work towards cohesion, is an important consolation. Case study: Cultural Policy 2000 The policy process I wish to examine is now complete. 
It ran over the period 1999–2002, although I wish to concentrate on my involvement in the process in early 2000, during which, as a consultant to Arts Queensland, I generated a draft policy document, Building the Future: A policy framework for the next five years (working draft). The imperative to develop a new state cultural policy followed the election of the first Beattie Labor government in July 1998. By 1999, senior Arts Queensland staff began to argue (within government at least) for the development of a new state cultural policy. The bureaucrats perceived policy development as one way of establishing “traction” in the process of bidding for new funds for the portfolio. Arts Minister Matt Foley was initially reluctant to “green-light” the policy process, but eventually in early 1999 he acceded to it on the advice of Arts Queensland, the industry, his own policy advisors and the Department of Premier. As stated above, this case study is offered now because the passing of time makes the analysis of relatively sensitive material possible. From the outset, an abbreviated timeframe for consultation and drafting seemed to guarantee a difficult birth for the policy document. This was compounded by a failure to clarify the aims and process of the project. In presenting the draft policy to the advisory group, it became clear that there was no agreed strategic purpose to the document: was it to be an advertisement, a framework for policy ideas, an audit, or a report on achievements? Tied to this were questions about the audience for the policy statement. Was it aimed at the public, the arts industry, bureaucrats inside Arts Queensland, or, in keeping with the whole-of-government inflection to the document and its putative use in bidding for funds inside government, bureaucrats outside of Arts Queensland? My own conception of the document was as a cultural policy framework for the whole-of-government for the coming five years. 
It would concentrate on cultural policy in three realms: Arts Queensland; the arts instrumentalities; and other departments (particularly the cultural initiatives undertaken by the Department of Premier and the Department of State Development). In order to do this I articulated (for myself) a series of goals for the document. It needed to provide the philosophical underpinnings for a new arts and cultural policy, discuss the cultural significance of “community” in the context of the arts, outline expansion plans for the arts infrastructure throughout Queensland, advance ideas for increased employment in the arts and cultural industries, explore the development of new audiences and markets, address contemporary issues of technology, globalisation and culture commodification, promote a whole-of-government approach to the arts and cultural industries, address social justice and equity concerns associated with cultural diversity, and present examples of current and new arts and cultural practices. Five key strategies were identified: i) building strong communities and supporting diversity; ii) building the creative industries and the cultural economy; iii) developing audiences and telling Queensland’s stories; iv) delivering to the world; and v) a new role for government. While the second aim of building the creative industries and the cultural economy was an addition to the existing Australian arts policy discourse, it is the articulation of a new role for government that is most radical here. 
The document went to the length of explicitly suggesting a series of actions to enable Arts Queensland to re-position itself inside government: develop an ongoing policy cycle; position Arts Queensland as a lead agency for cultural policy development; establish a mechanism for joint policy planning across the arts portfolio; adopt a whole-of-government approach to policy-making and program delivery; use arts and cultural strategies to deliver on social and economic policy agendas; centralise some cultural policy functions and projects; maintain and develop mechanisms for peer assessment; establish long-term strategic relationships with the Commonwealth and local government; investigate new vehicles for arts and cultural investment; investigate partnerships between industry, community and government; and develop appropriate performance measures for the cultural industries. In short, the scope of the document was titanically large, and prohibitively expansive as a basis for policy change. A chief limitation of these aims is that they seem to place the cohesion and coherence of the policy discourse at the centre of the project—when it might have better privileged a concern with policy outputs and industry/community outcomes. The subsequent dismal fortunes of the document are instructive. The policy document went through several drafts over the first half of 2000. By August 2000, I had removed myself from the process and handed the drafting back to Arts Queensland, which then produced a shorter version, less discursive than my initial draft. However, by November 2000, it is reasonable to say that the policy document had been abandoned. Significantly, after May 2000 the working drafts began to be used as internal discussion documents within government. Thus, despite the abandonment of the policy process, largely due to the unworkable breadth of its ambition, the document had a continued policy utility. 
The subsequent discussions helped organise future policy statements and structural adjustments by government. After the re-election of the Beattie government in January 2001, a more substantial policy process was commenced with the earlier policy documents as a starting point. By early 2002 the document was in substantial draft. The eventual policy, Creative Queensland, was released in October 2002. Significantly, this document sought to advance two ideas that I believe the earlier process did much to mobilise: a whole-of-government approach to culture; and a broader operant definition of culture. It is important not to see these as ideas merely existing “textually” in the earlier policy draft of Building the Future, but instead to see them as ideas that had begun to adhere themselves to the cultural policy mechanism of government, and begun to be deployed in internal policy discussions and in program design, before finding an eventual home in a published policy text. Analysis The productive effects of the aborted policy process in which I participated are difficult to quantify. They are difficult, in fact, to separate out from governments’ ongoing processes of producing and circulating policy ideas. What is clear is that the effects of Building the Future were not entirely negated by it never becoming public. Instead, despite only circulating to a readership of bureaucrats, it represented the ideas of part of the bureaucracy at a point in time. In this instance, a “failed” policy process, and its intermediate outcomes, the draft policy, through the churn of policy work, assisted government towards an eventual policy statement and a new form of governmental organisation. This suggests that processes of cultural policy discussion, or policy churn, can be as productive as the public “enunciation” of formal policy in helping to organise ideas within government and determine programs and the allocation of resources. 
This is even so where the Utopian idealism of the policy process is abandoned for something more graspable or politic. For the small arts or cultural policy bureau this is an important incremental benefit. Two final implications should be noted. The first is for models of policy process. Bridgman and Davis’s model of the Australian policy cycle, despite its mechanistic qualities, is ambiguous about where the policy process begins and ends. In one instance they represent it as linear but strictly circular, always coming back to its own starting point (27). Elsewhere, however, they represent it as linear, but not necessarily circular, passing through eight stages with a defined beginning and end: identification of issues; policy analysis; choosing policy instruments; consultation; co-ordination; decision; implementation; and evaluation (28–29). What is clear from the 1999–2002 policy process—if we take the full period between when Arts Queensland began to organise the development of a new arts policy and its publication as Creative Queensland in October 2002—is that the policy process was not a linear one progressing in an orderly fashion towards policy outcomes. Instead, Building the Future is a snapshot in time (namely early to mid-2000) of a fragmenting policy process; it reveals policy-making as involving a concurrency of policy activity rather than a progression through linear steps. Following Mark Considine’s conception of policy work as the state’s effort at “system-wide information exchange and policy transfer” (271), the document is concerned less with the ordering of resources than with the organisation of policy discourse. The churn of policy is the mobilisation of information, or, for Considine: policy-making, when considered as an innovation system among linked or interdependent actors, becomes a learning and regulating web based upon continuous exchanges of information and skill. 
Learning occurs through regulated exchange, rather than through heroic insight or special legislative feats of the kind regularly described in newspapers. (269) The acceptance of this underpins a turn in contemporary accounts of policy (Considine 252-72) where policy processes become contingent and incomplete. The ordering of policy is something to be attempted rather than achieved. Policy becomes pragmatic and ad hoc. It is only coherent in as much as a policy statement represents a bringing together of elements of an agency or government’s objectives and program. The order, in some sense, arrives through the act of collection, narrativisation and representation. The second implication is more directly for cultural policy makers facing the prospect of whole-of-government cultural policy making. While it is reasonable for government to wish to make coherent totalising statements about its cultural interests, such ambitions bring the near certainty of failure for the small agency. Yet these failures of coherence and cohesion should be viewed as delivering incremental benefits through the effort and process of this policy “churn”. As was the case with the Building the Future policy process: while aborted, it was not a totally wasted effort. Instead, Building the Future mobilised a set of ideas within Arts Queensland and within government. For the small arts or cultural bureaux approaching the enormous task of whole-of-government cultural policy making, such marginal benefits are important. References Arts Queensland. Creative Queensland: The Queensland Government Cultural Policy 2002. Brisbane: Arts Queensland, 2002. Bridgman, Peter, and Glyn Davis. Australian Policy Handbook. St Leonards: Allen & Unwin, 1998. Considine, Mark. Public Policy: A Critical Approach. South Melbourne: Palgrave Macmillan, 1996. Cunningham, Stuart. "Willing Wonkers at the Policy Factory." Media Information Australia 73 (1994): 4-7. Glover, Stuart, and Stuart Cunningham. 
"The New Brisbane." Artlink 23.2 (2003): 16-23. Glover, Stuart, and Gillian Gardiner. Building the Future: A Policy Framework for the Next Five Years (Working Draft). Brisbane: Arts Queensland, 2000. Klein, Joe. "Eight Years." New Yorker 16 & 23 Oct. 2000: 188-217. O'Regan, Tom. "Cultural Policy: Rejuvenate or Wither". 2001. rtf.file. (26 July): AKCCMP. 9 Aug. 2001. ‹http://www.gu.edu.au/centre/cmp>. Parsons, Wayne. Public Policy: An Introduction to the Theory and Practice of Policy Analysis. Aldershot: Edward Edgar, 1995.Williams, Raymond. Key Words: A Vocabulary of Culture and Society. London: Fontana, 1976.
APA, Harvard, Vancouver, ISO, and other styles
