
Journal articles on the topic "Expertise (Computer file)"

Consult the top 50 journal articles for your research on the topic "Expertise (Computer file)".


1

Haislip, Jacob Z., Khondkar E. Karim, Karen Jingrong Lin, and Robert E. Pinsker. "The Influences of CEO IT Expertise and Board-Level Technology Committees on Form 8-K Disclosure Timeliness". Journal of Information Systems 34, no. 2 (August 2, 2019): 167–85. http://dx.doi.org/10.2308/isys-52530.

Abstract:
Recent research documents the improvement of Form 8-K disclosure timeliness in the post-Sarbanes-Oxley Act (SOX) era. However, it remains unclear why disclosure timeliness overall has improved, but disclosure timeliness for certain events has not. We examine firms' information technology (IT) management and IT governance in order to investigate their potential positive impacts on 8-K reporting timeliness. We find that, on average, IT-expert Chief Executive Officers (CEOs) and firms with board-level technology committees file Form 8-Ks in a timelier manner. Specifically, firms with IT-expert CEOs file a half-day sooner and firms with technology committees file a full day sooner. Additional analyses show that firms with technology committees file 8-Ks in a timelier manner than firms without technology committees, even when the events are complicated or surprising. In aggregate, our evidence suggests that IT-expert CEOs and IT expertise on the board facilitate efficient IT utilization and are associated with timely disclosure. Data Availability: The data used are publicly available from the sources cited in the text.
2

Musfika, Puput Nada, and Depi Rusda. "Sistem Informasi Lowongan Kerja di Kota Sampit Berbasis Web". Building of Informatics, Technology and Science (BITS) 2, no. 2 (December 10, 2020): 84–90. http://dx.doi.org/10.47065/bits.v2i2.498.

Abstract:
Information needs are growing in line with rapid technological advances, especially the spread of computerized systems into all fields, so every human resource (HR) is expected to have the expertise, skills and ability to operate computer hardware and software. In the delivery of information to companies or agencies that need labor, such as those in Sampit City, Central Borneo, information is currently conveyed only through a few media such as newspapers, magazines, and word of mouth. This makes it difficult for job applicants to find accurate information, and for companies to find prospective employees in a time-efficient manner, because the process still runs without an information system. With the development of information technology, an information system is needed to solve these problems by delivering job information directly from companies. With this information system, applicants can easily receive information, submit job application files, and take written tests online, and companies can recruit employees effectively and time-efficiently, from the delivery of information through file selection, written tests, and the announcement of written-test results, all processed directly through the system.
3

Mejía, Jezreel, Rafael Valencia-García, Giner Alor-Hernández, and José A. Calvo-Manzano. "Knowledge Intensive Software Engineering Applications". JUCS - Journal of Universal Computer Science 27, no. 2 (February 28, 2021): 87–90. http://dx.doi.org/10.3897/jucs.65078.

Abstract:
The use of Information and Communication Technologies (ICTs) has become a competitive strategy that allows organizations to position themselves within their market of action. In addition, the evolution, advancement and use of ICTs within any type of organization have created new domains of interest. In this context, knowledge-intensive software engineering applications are becoming crucial in organizations to support their performance. Knowledge-based technologies provide a consistent and reliable basis for facing the challenges of organizing, manipulating and visualizing data and knowledge, playing a crucial role as the technological basis of a large number of information systems. In software engineering, this involves the integration of various knowledge sources that are in constant change. Knowledge-intensive software applications are becoming more significant because the domains of many software applications are inherently knowledge-intensive and this knowledge is often not explicitly dealt with in software development, which impedes maintenance and reuse. Moreover, it is generally known that developing software requires expertise and experience, which are currently also implicit and could be made more tangible and reusable using knowledge-based or related techniques. Furthermore, organizations have recognized that software engineering applications are an optimal way of providing solutions, because it is a field that is constantly evolving due to new challenges. Examples of approaches directly related to this tendency are data analysis, software architectures, knowledge engineering, ontologies, conceptual modelling, domain analysis and domain engineering, business rules, workflow management, and human and cultural factors, to mention but a few.
Therefore, tools and techniques are necessary to capture and process knowledge in order to facilitate subsequent development efforts, especially in the domain of software engineering.
4

Qur'ana, Tri Wahyu, Al Fath Riza Kholdani, and Hayati Noor. "Pelatihan Merakit dan Instalasi Laptop/Komputer pada Santri Yayasan Pendidikan Islam Pondok Pesantren Wali Songo Banjarbaru". PengabdianMu: Jurnal Ilmiah Pengabdian kepada Masyarakat 5, no. 4 (September 26, 2020): 383–87. http://dx.doi.org/10.33084/pengabdianmu.v5i4.1270.

Abstract:
Assembling a computer means bringing together the components it needs to run correctly, which requires an understanding of computer hardware both logically and physically. For the computer to be operated according to its function, installation is then necessary: computer installation is the process of installing software on a computer. Installing software (e.g., Microsoft Windows, Microsoft Office, and others) also configures the program to work with the devices attached to the computer and unpacks compressed files. Expertise in assembling and installing laptops/computers can open up a creative-economy business field based on information technology. The Islamic Education Foundation of the Wali Songo Islamic Boarding School is an Islamic religious educational institution. Santri, as the nation's next generation of youth, need to understand and have skills in assembling and installing laptops/computers, not only as passive users but as active ones; on the other side, these skills can also help drive the creative economy.
5

Hidayat, Salfiko, and Malta Nelisa. "Kemas Ulang Informasi Randai bagi Siswa di SMA 1 Koto Timur Kabupaten Padang Pariaman". Ilmu Informasi Perpustakaan dan Kearsipan 8, no. 1 (October 29, 2019): 527. http://dx.doi.org/10.24036/107484-0934.

Abstract:
This paper discusses the need for randai information and the process of repackaging randai information for students at SMAN 1 V Koto Timur. The study aims to determine the need for randai information and how to repackage it for these students, using a descriptive method based on direct interviews with a number of students at SMAN 1 V Koto Timur, Padang Pariaman Regency. First, regarding the need for randai information at SMAN 1 V Koto Timur, some students still want their general information needs met both from randai activities themselves and from other sources, such as reading materials in the library. Second, repackaging the information involves several stages: (1) identifying user needs by collecting and examining what will be contained in the information package; (2) finding the needed sources, searching for information by collecting selected books and articles/journals from the internet; (3) storing the information collected from books and articles/journals in a computer file or on a flash drive; (4) packaging the information by selecting data from the various sources and packaging it in printed form; (5) determining the form of packaging to be made, namely print; (6) editing, checking for deficiencies and shaping the packaging to make it more attractive and easily read by information users; (7) printing the packaging as a print publication. Third, repackaging the information faced some obstacles: (1) searching for data and gathering information; (2) lack of expertise in making the packaging; (3) determining the information to be contained in the package.
The efforts made were: (1) gathering as much data as possible from books, journals and other official articles; (2) asking experts for help with the packaging process; (3) finding information from several journals and books. Keywords: information, randai, repackaging
6

Abhilasha, Abhilasha. "A Survey on Cloud Computing and Its Benefits". INTERNATIONAL JOURNAL OF COMPUTERS & TECHNOLOGY 15, no. 2 (November 17, 2015): 6499–503. http://dx.doi.org/10.24297/ijct.v15i2.568.

Abstract:
Cloud computing is Internet-based development and use of computer technology. It is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not have knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them. Cloud computing is a very popular paradigm in the field of computer science in which heterogeneous services are delivered to an organization's computers through the Internet. It allows individuals and businesses to access applications without installation and to access their personal files on any internet-connected computer or laptop. Cloud computing means saving and accessing data over the internet instead of on local storage. In this paper, we conduct a survey of cloud environment models and the benefits and issues related to them.
7

Mitta, Deborah A. "Formulation of Expert System Knowledge". Proceedings of the Human Factors Society Annual Meeting 33, no. 5 (October 1989): 350. http://dx.doi.org/10.1177/154193128903300525.

Abstract:
Expert system knowledge represents expertise obtained through formal education, training, and/or experience. Formal education provides deep knowledge of a particular domain; experience and training result in heuristic knowledge. A knowledge base defines the range of information and understanding with which the system is capable of dealing; therefore, its information must be structured and filed for ready access. The objective of this symposium is to address the challenges associated with establishment of valid expert system knowledge, specifically, knowledge to be used by expert system shells. As expert system knowledge is obtained, structured, and stored, it is formulated. In this symposium, knowledge formulation is addressed as a three-phase process: knowledge acquisition, the mechanics associated with structuring knowledge, and knowledge porting. Knowledge acquisition is the process of extracting expertise from a domain expert. Expertise may be collected through a series of interviews between the expert and a knowledge engineer or through sessions the expert holds with an automated knowledge acquisition tool. Thus, the ultimate outcome of knowledge acquisition is a collection of raw knowledge data. The following human factors issues become apparent: documenting mental models (where mental models are the expert's conceptualization of a problem), recording cognitive problem-solving strategies, and specifying an appropriate interface between the domain expert and the acquisition methodology. The knowledge structuring process involves the refinement of raw knowledge data, where knowledge is organized and assigned a semantic structure. One issue that must be considered is how to interpret knowledge data such that formal definitions, logical relationships, and facts can be established. Finally, formulation involves knowledge porting, that is, the movement of an expert system shell's knowledge base to various other shells. 
The outcome of this process is a portable knowledge base, where the challenges lie in maintaining consistent knowledge, understanding the constraints inherent to a shell (the shell's ability to incorporate all relevant knowledge), and designing an acceptable user-expert system interface. The fundamental component of any expert system is its knowledge base. The issues to be presented in this symposium are important because they address three processes that are critical to the development of a knowledge base. In addition to presenting computer science challenges, knowledge base formulation also presents human factors challenges, for example, understanding cognitive problem-solving processes, representing uncertain information, and defining human-expert system interface problems. This symposium will provide a forum for discussion of both types of challenges.
8

Shi, Tingsheng, Ian K. Duncan, and Michael T. Gastner. "go-cart.io: a web application for generating contiguous cartograms". Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-333-2019.

Abstract:
Cartograms are maps in which the areas of regions (e.g. states, provinces) are rescaled to be proportional to statistical data (e.g. population size, gross domestic product). Cartograms are called "contiguous" if they maintain the topology of the conventional map (i.e. regions are displayed as neighbours on the cartogram if and only if they are geographic neighbours) [1]. An example of a contiguous cartogram, showing the 48 conterminous states of the USA with an area proportional to their population, is shown on the right of Figure 1. Such maps are an invaluable addition to a professional geographer's toolbox. However, producing contiguous cartograms should not be the privilege of only a handful of experts in cartography. Journalists or bloggers, for example, may also benefit from a cartogram as an intriguing illustration of their own data. Similarly, high school students may enrich a term paper with a cartogram that can summarize data more effectively than raw numeric tables.
Until now, the creation of contiguous cartograms has been far from user-friendly, requiring computer skills that even experts in data visualization typically do not possess. In the past, publications that introduced new cartogram algorithms rarely included computer code. Some authors of more recent publications have posted their code online [1,2], but their software usually requires technical knowledge (e.g. about shell scripting, compiling, GIS) that poses insurmountable obstacles for most users. To remove these hurdles, we have recently developed the web application go-cart.io [3] with an interface that is easy to use, even for non-experts.
Over the past 15 years, several other applets have been posted on the worldwide web, but they either offer only a limited number of precomputed cartograms [4,5] or are no longer actively maintained [5–8]. In particular, the shift away from Java applets has made it challenging to run some of these legacy applications. This status quo has been against the current trend towards "citizen cartography", mainly driven by online tools that enable even untrained users to produce maps from their own data. It has been shown that most users perceive contiguous cartograms, though potentially challenging to read, as an effective method to display data [9]. It is therefore timely to develop a new web interface that makes it easier to generate cartograms.
While previous cartogram generators required users to install software (e.g. Java) on their computer, go-cart.io is based on JavaScript that can be run in any contemporary web browser without additional downloads. We decided to simplify the data input as much as possible. We have curated a "library" of topologies so that users do not need GIS expertise to create geospatial vector data. The entries in this library are currently limited to only a few countries split into administrative divisions (e.g. USA by state, China by province), but we will expand the selection over the coming months. We may also, at a later stage of the project, allow users to upload their own map data. Users can select a country from a dropdown menu (highlighted in Figure 1). Afterwards users specify the desired areas and colours for each region on the cartogram either by editing a spreadsheet in the browser or by uploading a CSV file.
After data are transmitted, a remote server calculates the cartogram transformation with the recently developed fast flow-based algorithm [1]. Because the calculation is entirely server-side, we eliminate any dependence on the client's hardware. We tested the application with various countries and input statistics. For typical input, the calculation finishes within 10 to 15 seconds. If the calculation needs substantially longer, the application displays a bar chart instead of a cartogram as a fallback. The cartogram is displayed in the browser window side by side with the conventional (i.e. equal-area) map (Figure 1). The user can explore both maps with various interactive features implemented using the D3.js library [10]:
- Linked brushing: when the mouse hovers over a region on the equal-area map, the corresponding region is highlighted on the cartogram and vice versa.
- Infotip: a text box containing the name and statistical data of the highlighted region appears above the map (Figure 1).
- Map switching: users can smoothly morph the image from equal-area map to cartogram and vice versa by clicking on the cartogram selector (Figure 1).
Users can save all generated equal-area maps and cartograms as SVG vector image files and directly share them on social media (Figure 1). We are currently conducting evaluations to measure how effective the application is in allowing users to easily generate and analyse their own cartograms. Our initial results suggest that these features are well received by users. We believe that, with a user-friendly interface, contiguous cartograms have the potential to gain more popularity as an attractive and engaging method to visualize geographic data.
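The core contract this abstract describes, that each region's drawn area becomes proportional to its statistic, can be sketched in a few lines. This is an illustrative sketch, not go-cart.io code; the region names and numbers are invented.

```python
# Illustrative sketch (not go-cart.io code): a contiguous cartogram rescales
# each region so its drawn area is proportional to its statistic.

def target_areas(current_areas, statistic):
    """Target areas that keep the total map area constant while making each
    region's share proportional to the statistic."""
    total_area = sum(current_areas.values())
    total_stat = sum(statistic.values())
    return {region: total_area * statistic[region] / total_stat
            for region in current_areas}

areas = {"A": 40.0, "B": 40.0, "C": 20.0}        # equal-area map, e.g. km^2
population = {"A": 1000, "B": 3000, "C": 1000}   # statistic to visualize
print(target_areas(areas, population))           # {'A': 20.0, 'B': 60.0, 'C': 20.0}
```

A flow-based algorithm such as the one cited above [1] then deforms region boundaries until every region reaches its target area while neighbours remain adjacent.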
9

Carlicchi, E., S. Harari, A. Caminati, P. Fughelli, and M. Zompatori. "Radiological diagnosis of fibrosing interstitial lung diseases: innovations and controversies". International Journal of Tuberculosis and Lung Disease 24, no. 11 (November 1, 2020): 1156–64. http://dx.doi.org/10.5588/ijtld.19.0743.

Abstract:
Following the introduction of new effective antifibrotic drugs, interest in fibrosing interstitial lung diseases (FILD) has been renewed. In this context, radiological evaluation of FILD plays a cardinal role. Radiological diagnosis is possible in about 50% of the cases, which allows the initiation of effective therapy, thereby avoiding invasive procedures such as surgical lung biopsy. Usual interstitial pneumonia (UIP) pattern may be diagnosed based on clinical, radiological, and pathological data. High-resolution computed tomography features of UIP have been widely described in literature; however, interpreting them remains challenging, even with specific expertise on the subject. Diagnostic difficulties are understandable given the continuous evolution of FILD classifications and their complexity. Both early-stage diseases and advanced or combined patterns are not easily classifiable, and many end up being labelled 'indeterminate' or 'unclassifiable'. Especially in these cases, optimal patient management involves collaboration and communication between different specialists. Here, we discuss the most critical aspects of radiological interpretation in FILD diagnosis based on the most recent classifications. We believe that clinicians' awareness of radiological diagnostic issues of FILD would improve comprehension and dialogue between physicians and radiologists, leading to better clinical practice.
10

Morvan, Hervé P. "Automating CFD for non-experts". Journal of Hydroinformatics 7, no. 1 (January 1, 2005): 17–29. http://dx.doi.org/10.2166/hydro.2005.0003.

Abstract:
The focus of the paper is on demonstrating how it is possible to automate complex CFD simulations using a scripting language around and within the structure of the CFD command files. To illustrate this, the concept of an atmospheric pollution case is used and, more specifically, that of a water treatment plant. The code that is used is CFX-5 with PERL as a scripting 'language'. The simulation of the factory atmospheric environment and its fluctuating conditions is fully automated. The simulation is based on a pre-defined generic CFD model, for which initial conditions, boundary conditions and source terms of atmospheric pollutant release are written automatically by the scripts using data recorded by measuring devices and stored on computers every half an hour as the simulation runs. When the correct amount of time has elapsed, the simulation pauses and the script updates the set-up using the newly recorded data. It then proceeds further, restarting from the appropriate result files. At each pause, an HTML report is also produced, which contains pictures of the area and summary tables. If a suitable criterion is defined in the post-treatment algorithm, such as a critical concentration for example, an alarm can be triggered, so that the technician knows the simulation has found a potential problem within the large domain that is thus monitored. The implications of this work are numerous. Firstly, non-CFD experts can run and use results from a CFD simulation without having to implement the models, run the simulation or fully understand the intricacy of the physics and mathematics that it contains. Going further, it is even possible to parametrize the generic model set-up, e.g. the domain dimensions or the location of emission sources, to make the case more flexible. Running the application remotely is also possible, using a web browser to submit the necessary input to the CFD code.
Secondly, a very wide area can be monitored numerically, which would not be commercially viable with physical devices and field monitoring campaigns. Thirdly, such a simulation can be used to learn the general behaviour of, and the potential problems associated with, the region of interest and eventually set up a response plan to any given situation known to cause discomfort or form a health hazard to the neighbourhood. This feedback can be used to improve the operation of the plant and its safety, but also to enhance the model set-up for future simulations.
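The pause-update-restart loop this abstract describes can be sketched as follows, in Python rather than the paper's PERL. The file name, the sensor reader and the solver call below are placeholder assumptions, not CFX-5 commands.

```python
# Minimal sketch of the automation pattern above (placeholders, not CFX-5).

def write_boundary_conditions(readings, path="boundary_conditions.txt"):
    """Rewrite the solver's input file from the latest recorded data."""
    with open(path, "w") as f:
        for name, value in readings.items():
            f.write(f"{name} = {value}\n")

def automation_cycle(read_sensors, run_solver, cycles=3):
    """Simulate-pause-update loop: each cycle pulls newly recorded data,
    updates the generic model set-up, and restarts from the previous result."""
    for i in range(cycles):
        readings = read_sensors()            # data logged every half hour
        write_boundary_conditions(readings)  # update boundary conditions/sources
        run_solver(restart=(i > 0))          # cold start first, then restarts
        # a report (e.g. HTML) and an alarm check would be produced here

restarts = []
automation_cycle(lambda: {"CO_ppm": 12.5},
                 lambda restart: restarts.append(restart), cycles=2)
print(restarts)  # [False, True]: first run cold-starts, later runs restart
```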
11

Ponnamma Divakaran, Pradeep Kumar, and Sladjana Nørskov. "Are online communities on par with experts in the evaluation of new movies? Evidence from the Fandango community". Information Technology & People 29, no. 1 (March 7, 2016): 120–45. http://dx.doi.org/10.1108/itp-02-2014-0042.

Abstract:
Purpose – The purpose of this paper is to investigate two questions. First, are movie-based online community evaluations (CE) on par with film expert evaluations of new movies? Second, which group makes more reliable and accurate predictions of movie box office revenues: film reviewers or an online community? Design/methodology/approach – Data were collected from the movie-based online community Fandango over a 16-month period and included all movies released during this time (373 movies). The authors compared film reviewers' evaluations with the online CE during the first eight weeks of each movie's release. Findings – The study finds that community members evaluate movies differently than film reviewers. The results also reveal that CE have more predictive power than film reviewers' evaluations, especially during the opening week of a movie. Research limitations/implications – The investigated online community is based in the USA, hence the findings are limited to this geographic context. Practical implications – The main implication is that film studios and movie-goers can rely more on CE than on film reviewers' evaluations for decision making. Online CE can help film studios in negotiating with distributors and theatre owners over the number of screens. Also, future movie-goers look to community reviews rather than film reviewers' reviews when deciding which movies to see. Originality/value – The study makes an original contribution to motion picture performance research as well as to the growing research on online consumer communities by demonstrating the predictive potential of online communities with regard to evaluations of new movies.
12

Lovelace, Samantha, Chantal Trudel, Catherine Dulude, and W. James King. "Cost vs. Benefit: What does NVivo Video Analysis of EMR Simulations Add to Our Understanding of User Experience?" Proceedings of the International Symposium on Human Factors and Ergonomics in Health Care 9, no. 1 (September 2020): 24–32. http://dx.doi.org/10.1177/2327857920091056.

Abstract:
Improving healthcare using phased, iterative and participatory methods requires time and resources to do comprehensively. The reality, particularly for practitioners, is that constraints related to human resources, cost and time may impact the rigor of data collection and analysis. Under such conditions, project teams may rely on tacit knowledge and expertise to fill in potential gaps in understanding and validate design decisions. But what kind of insights might emerge if we were freed from such constraints, and given the time to study a context in more detail? Our research group explored this question by using Computer Assisted Qualitative Data Analysis Software (NVivo) and qualitative research coding methods to analyze a sample of video data collected from a series of electronic medical record (EMR) workflow simulations that were originally used to support EMR implementation in a pediatric hospital. The results from the NVivo video analysis revealed some details not previously captured by initial data analysis methods, but at significant resource cost. A comparison of video analysis methods, findings and respective costs are compared and discussed in the context of design development and implementation.
13

Ohkubo, Tomomasa, Ei-ichi Matsunaga, Junji Kawanaka, Takahisa Jitsuno, Shinji Motokoshi, and Kunio Yoshida. "Recurrent Neural Network for Predicting Dielectric Mirror Reflectivity". Journal of Advanced Computational Intelligence and Intelligent Informatics 23, no. 6 (November 20, 2019): 1012–18. http://dx.doi.org/10.20965/jaciii.2019.p1012.

Abstract:
Optical devices often achieve their maximum effectiveness by using dielectric mirrors; however, their design techniques depend on expert knowledge in specifying the mirror properties. This expertise can also be acquired by machine learning, although it is not clear what kind of neural network would be effective for learning about dielectric mirrors. In this paper, we clarify that the recurrent neural network (RNN) is an effective approach to machine learning of dielectric mirror properties. The relation between the thickness distribution of the mirror's multiple film layers and the average reflectivity in the target wavelength region is used as the indicator in this study. Reflection from the dielectric multilayer film results from the sequence of interfering reflections from the boundaries between film layers. Therefore, the RNN, which is usually used for sequential data, is effective for learning the relationship between average reflectivity and the thickness of individual film layers in a dielectric mirror. We found that an RNN can predict the average reflectivity with a mean squared error (MSE) below 10^-4 from representative thickness distribution data (10 layers with alternating refractive indexes 2.3 and 1.4). Furthermore, we clarified that training data sets generated randomly lead to over-learning. It is necessary to generate training data sets from larger data sets so that the histogram of reflectivity becomes a flat distribution. In the future, we plan to apply this knowledge to design dielectric mirrors using neural network approaches such as generative adversarial networks, which do not require the know-how of experts.
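The sequential intuition in this abstract (one recurrent step per film layer, mirroring how reflections accumulate across boundaries) can be illustrated with a minimal single-unit recurrence. The weights and the ten-layer stack below are invented numbers, not the paper's trained network.

```python
# Illustrative single-unit recurrence (not the paper's trained network):
# the stack is consumed layer by layer, one recurrent step per film boundary.

import math

def rnn_predict(thicknesses, w_in=0.8, w_rec=0.5, w_out=1.0):
    """Elman-style pass: h_t = tanh(w_in * x_t + w_rec * h_{t-1})."""
    h = 0.0
    for x in thicknesses:        # one recurrent step per film layer
        h = math.tanh(w_in * x + w_rec * h)
    return w_out * h             # scalar read-out, e.g. predicted reflectivity

# Ten alternating layers (thicknesses in arbitrary units, invented):
stack = [0.25, 0.30] * 5
print(abs(rnn_predict(stack)) < 1.0)  # True: tanh keeps the state bounded
```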
14

Samuel, Kehinde G., Nourou-Dine M. Bouare, Oumar Maïga, and Mamadou K. Traoré. "A DEVS-based pivotal modeling formalism and its verification and validation framework". SIMULATION 96, no. 12 (September 26, 2020): 969–92. http://dx.doi.org/10.1177/0037549720958056.

Abstract:
System verification is an everlasting systems engineering challenge. The increasing complexity of system simulations requires some level of expertise in handling the idioms of logic and discrete mathematics to correctly drive a full verification process. It is recognized that visual modeling can help to fill the knowledge gap between system experts and analysis experts. However, such an approach has been used on the one hand to specify the behavior of complex systems, and on the other hand to specify complex requirement properties, but not simultaneously. This paper proposes a framework that is unique in supporting a full system verification process based on the graphical modeling of both the system of interest and the requirements to be checked. Patterns are defined to transform the resulting models to formal specifications that a model checker can manipulate. A real-time crossing system is used to illustrate the proposed framework.
15

Pettifer, S. R., J. R. Sinnott, and T. K. Attwood. "UTOPIA—User-Friendly Tools for Operating Informatics Applications". Comparative and Functional Genomics 5, no. 1 (2004): 56–60. http://dx.doi.org/10.1002/cfg.359.

Abstract:
Bioinformaticians routinely analyse vast amounts of information held both in large remote databases and in flat data files hosted on local machines. The contemporary toolkit available for this purpose consists of an ad hoc collection of data manipulation tools, scripting languages and visualization systems; these must often be combined in complex and bespoke ways, the result frequently being an unwieldy artefact capable of one specific task, which cannot easily be exploited or extended by other practitioners. Owing to the sizes of current databases and the scale of the analyses necessary, routine bioinformatics tasks are often automated, but many still require the unique experience and intuition of human researchers: this requires tools that support real-time interaction with complex datasets. Many existing tools have poor user interfaces and limited real-time performance when applied to realistically large datasets; much of the user's cognitive capacity is therefore focused on controlling the tool rather than on performing the research. The UTOPIA project is addressing some of these issues by building reusable software components that can be combined to make useful applications in the field of bioinformatics. Expertise in the fields of human computer interaction, high-performance rendering, and distributed systems is being guided by bioinformaticians and end-user biologists to create a toolkit that is both architecturally sound from a computing point of view, and directly addresses end-user and application-developer requirements.
16

Karaçay, Leyli, Erkay Savaş and Halit Alptekin. "Intrusion Detection Over Encrypted Network Data". Computer Journal 63, no. 4 (November 17, 2019): 604–19. http://dx.doi.org/10.1093/comjnl/bxz111.

Abstract
Effective protection against cyber-attacks requires constant monitoring and analysis of system data in an IT infrastructure, such as log files and network packets, which may contain private and sensitive information. Security operation centers (SOCs), which are established to detect, analyze and respond to cyber-security incidents, often utilize detection models, either for known attack types or for anomalies, and apply them to the system data for detection. SOCs are also motivated to keep their models private, both to capitalize on models that embody their proprietary expertise and to protect their detection strategies against adversarial machine learning. In this paper, we develop a protocol for privately evaluating detection models on the system data, in which the privacy of both the system data and the detection models is protected and information leakage is either prevented altogether or quantifiably decreased. Our main approach is to provide end-to-end encryption for the system data and detection models utilizing lattice-based cryptography that allows homomorphic operations over ciphertext. We employ recent data sets in our experiments, which demonstrate that the proposed privacy-preserving intrusion detection system is feasible in terms of execution times and bandwidth requirements and reliable in terms of accuracy.
17

Gerasimenko, N. I. "Specifics of using the Internet in the investigation of extremist crimes". Penitentiary Science 14, no. 3 (2020): 388–93. http://dx.doi.org/10.46741/2686-9764-2020-14-3-388-393.

Abstract
A fairly large number of criminal acts can be attributed to crimes of an extremist nature, but not all of them can be committed via the Internet. This type of crime has a number of characteristic features that must be established at the initial and subsequent stages of the investigation: 1) the situation in which the crimes were committed, including place and time. A feature of the place is the possibility of committing a crime by a person located anywhere in the world where there is Internet access. The specifics of time include the fact that a crime can be committed for a long time – from the moment extremist information is posted on the network until it is blocked; 2) the methods of committing the considered category of crimes, the features of which are the placement of information containing elements of an extremist nature (texts, pictures, posts, songs, etc.), as well as great opportunities for concealing the crime. In addition to encrypting files, it is possible to quickly delete them from information carriers. Given these circumstances, the priority is the immediate recording of information with an extremist orientation contained in electronic media, and the identification of the persons who posted it. At the same time, the key points in the investigation should be the examination of the Internet resource with the obligatory indication of the email address, name, information about feedback and contacts, as well as the commissioning of computer and linguistic forensic examinations.
18

Butler III, Robert R. and Pablo V. Gejman. "Clinotator: analyzing ClinVar variation reports to prioritize reclassification efforts". F1000Research 7 (April 13, 2018): 462. http://dx.doi.org/10.12688/f1000research.14470.1.

Abstract
While ClinVar has become an indispensable resource for clinical variant interpretation, its sophisticated structure provides it with a daunting learning curve. Often the sheer depth of types of information provided can make it difficult to analyze variant information with high throughput. Clinotator is a fast and lightweight tool to extract important aspects of criteria-based clinical assertions; it uses that information to generate several metrics to assess the strength and consistency of the evidence supporting the variant clinical significance. Clinical assertions are weighted by significance type, age of submission and submitter expertise category to filter outdated or incomplete assertions that otherwise confound interpretation. This can be accomplished in batches: either lists of Variation IDs or dbSNP rsIDs, or with vcf files that are additionally annotated. Using sample sets ranging from 15,000–50,000 variants, we slice out problem variants in minutes without extensive computational effort (using only a personal computer) and corroborate recently reported trends of discordance hiding amongst the curated masses. With the rapidly growing body of variant evidence, most submitters and researchers have limited resources to devote to variant curation. Clinotator provides efficient, systematic prioritization of discordant variants in need of reclassification. The hope is that this tool can inform ClinVar curation and encourage submitters to keep their clinical assertions current by focusing their efforts. Additionally, researchers can utilize new metrics to analyze variants of interest in pursuit of new insights into pathogenicity.
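The weighting scheme this abstract describes (down-weighting assertions by age of submission and submitter expertise category) can be sketched minimally. The categories, weights, ten-year decay, and sample records below are illustrative assumptions, not Clinotator's actual parameters:

```python
# Hypothetical sketch of weighting clinical assertions by submitter category
# and age of submission, to filter outdated or low-confidence assertions.
# All weights, categories, and records here are assumptions for illustration.
from datetime import date

CATEGORY_WEIGHT = {"expert_panel": 3.0, "clinical_lab": 2.0, "literature_only": 1.0}

def assertion_weight(category, submitted, today=date(2018, 4, 13)):
    """Weight one assertion: category weight scaled by a linear recency decay."""
    age_years = (today - submitted).days / 365.25
    recency = max(0.0, 1.0 - age_years / 10.0)  # assertions older than 10 years drop out
    return CATEGORY_WEIGHT.get(category, 1.0) * recency

# Aggregate weighted significance votes for one variant (hypothetical records).
assertions = [
    ("pathogenic", "expert_panel", date(2017, 6, 1)),
    ("benign", "literature_only", date(2009, 1, 15)),
]
score = {}
for significance, category, submitted in assertions:
    score[significance] = score.get(significance, 0.0) + assertion_weight(category, submitted)
```

Under such a scheme, a recent expert-panel assertion dominates a decade-old literature-only one, which is the kind of filtering the abstract attributes to the tool.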
19

Butler III, Robert R. and Pablo V. Gejman. "Clinotator: analyzing ClinVar variation reports to prioritize reclassification efforts". F1000Research 7 (June 20, 2018): 462. http://dx.doi.org/10.12688/f1000research.14470.2.

Abstract
While ClinVar has become an indispensable resource for clinical variant interpretation, its sophisticated structure provides it with a daunting learning curve. Often the sheer depth of types of information provided can make it difficult to analyze variant information with high throughput. Clinotator is a fast and lightweight tool to extract important aspects of criteria-based clinical assertions; it uses that information to generate several metrics to assess the strength and consistency of the evidence supporting the variant clinical significance. Clinical assertions are weighted by significance type, age of submission and submitter expertise category to filter outdated or incomplete assertions that otherwise confound interpretation. This can be accomplished in batches: either lists of Variation IDs or dbSNP rsIDs, or with vcf files that are additionally annotated. Using sample sets ranging from 15,000–50,000 variants, we slice out problem variants in minutes without extensive computational effort (using only a personal computer) and corroborate recently reported trends of discordance hiding amongst the curated masses. With the rapidly growing body of variant evidence, most submitters and researchers have limited resources to devote to variant curation. Clinotator provides efficient, systematic prioritization of discordant variants in need of reclassification. The hope is that this tool can inform ClinVar curation and encourage submitters to keep their clinical assertions current by focusing their efforts. Additionally, researchers can utilize new metrics to analyze variants of interest in pursuit of new insights into pathogenicity.
20

Collins, Nick. "Automatic Composition of Electroacoustic Art Music Utilizing Machine Listening". Computer Music Journal 36, no. 3 (September 2012): 8–23. http://dx.doi.org/10.1162/comj_a_00135.

Abstract
This article presents Autocousmatic, an algorithmic system that creates electroacoustic art music using machine-listening processes within the design cycle. After surveying previous projects in automated mixing and algorithmic composition, the design and implementation of the current system is outlined. An iterative, automatic effects processing system is coupled to machine-listening components, including the assessment of the “worthiness” of intermediate files to continue to a final mixing stage. Generation of the formal structure of output pieces utilizes models derived from a small corpus of exemplar electroacoustic music, and a dynamic time-warping similarity-measure technique drawn from music information retrieval is employed to decide between candidate final mixes. Evaluation of Autocousmatic has involved three main components: the entry of its output works into composition competitions, the public release of the software with an associated questionnaire and sound examples on SoundCloud, and direct feedback from three highly experienced electroacoustic composers. The article concludes with a discussion of the current status of the system, with regards to ideas from the computational creativity literature, among other sources, and suggestions for future work that may advance the compositional ability of the system beyond its current level and towards human-like expertise.
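The dynamic time-warping similarity measure mentioned above for deciding between candidate final mixes can be sketched generically. This is textbook DTW over one-dimensional feature sequences with hypothetical feature values, not Autocousmatic's actual implementation:

```python
# Minimal dynamic time warping (DTW) distance between two feature sequences.
# A generic sketch of the similarity technique named above, not Autocousmatic's
# actual implementation; the 1-D feature vectors below are hypothetical.
def dtw_distance(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])  # local distance between frames
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Choosing between candidate final mixes: smallest distance to an exemplar wins.
candidates = {"mix_a": [0.1, 0.5, 0.9, 0.4], "mix_b": [0.9, 0.8, 0.1, 0.0]}
exemplar = [0.1, 0.6, 0.8, 0.5]
best = min(candidates, key=lambda k: dtw_distance(candidates[k], exemplar))
```

DTW tolerates local timing differences between sequences, which is why it suits comparing candidate mixes of differing internal pacing against exemplar-derived models.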
21

Hupfauf, Sebastian, Mohammad Etemadi, Marina Fernández-Delgado Juárez, María Gómez-Brandón, Heribert Insam and Sabine Marie Podmirseg. "CoMA – an intuitive and user-friendly pipeline for amplicon-sequencing data analysis". PLOS ONE 15, no. 12 (December 2, 2020): e0243241. http://dx.doi.org/10.1371/journal.pone.0243241.

Abstract
In recent years, there has been a veritable boost in next-generation sequencing (NGS) of gene amplicons in biological and medical studies. Huge amounts of data are produced and need to be analyzed adequately. Various online and offline analysis tools are available; however, most of them require extensive expertise in computer science or bioinformatics, and often a Linux-based operating system. Here, we introduce “CoMA–Comparative Microbiome Analysis” as a free and intuitive analysis pipeline for amplicon-sequencing data, compatible with any common operating system. Moreover, the tool offers various useful services including data pre-processing, quality checking, clustering to operational taxonomic units (OTUs), taxonomic assignment, data post-processing, data visualization, and statistical appraisal. The workflow results in highly esthetic and publication-ready graphics, as well as output files in standardized formats (e.g. tab-delimited OTU-table, BIOM, NEWICK tree) that can be used for more sophisticated analyses. The CoMA output was validated by a benchmark test, using three mock communities with different sample characteristics (primer set, amplicon length, diversity). The performance was compared with that of Mothur, QIIME and QIIME2-DADA2, popular packages for NGS data analysis. Furthermore, the functionality of CoMA is demonstrated on a practical example, investigating microbial communities from three different soils (grassland, forest, swamp). All tools performed well in the benchmark test and were able to reveal the majority of all genera in the mock communities. Also for the soil samples, the results of CoMA were congruent to those of the other pipelines, in particular when looking at the key microbial players.
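The standardized output files mentioned above (e.g. a tab-delimited OTU table) lend themselves to downstream scripting. As a sketch, per-sample read counts can be summed with only the standard library; the column layout here (an OTU ID column followed by per-sample counts) is an assumption for illustration, not CoMA's documented format:

```python
# Sketch of consuming a tab-delimited OTU table such as the one CoMA emits.
# The column layout and the counts are assumptions for illustration.
import csv
import io

otu_tsv = (
    "OTU_ID\tgrassland\tforest\tswamp\n"
    "OTU_1\t120\t30\t5\n"
    "OTU_2\t0\t44\t210\n"
)

totals = {}
for row in csv.DictReader(io.StringIO(otu_tsv), delimiter="\t"):
    for sample, count in row.items():
        if sample != "OTU_ID":
            totals[sample] = totals.get(sample, 0) + int(count)
# totals now maps each sample to its total read count across OTUs
```

The same table could equally be loaded into R or a BIOM-aware toolchain; the point is that a standardized flat format keeps more sophisticated analyses decoupled from the pipeline itself.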
22

Giommi, P., Y. L. Chang, S. Turriziani, T. Glauch, C. Leto, F. Verrecchia, P. Padovani et al. "Open Universe survey of Swift-XRT GRB fields: Flux-limited sample of HBL blazars". Astronomy & Astrophysics 642 (October 2020): A141. http://dx.doi.org/10.1051/0004-6361/202037921.

Abstract
Aims. The sample of serendipitous sources detected in all Swift-XRT images pointing at gamma-ray bursts (GRBs) constitutes the largest existing medium-deep survey of the X-ray sky. To build such a dataset we analysed all Swift X-ray images centred on GRBs and observed over a period of 15 years using automatic tools that do not require any expertise in X-ray astronomy. Besides presenting a new large X-ray survey and a complete sample of blazars, this work aims to be a step in the direction of achieving the ultimate goal of the Open Universe Initiative, which is to enable non-expert people to benefit fully from space science data, possibly extending the potential for scientific discovery, which is currently confined within a small number of highly specialised teams, to a much larger population. Methods. We used the Swift_deepsky Docker container encapsulated pipeline to build the largest existing flux-limited and unbiased sample of serendipitous X-ray sources. Swift_deepsky runs on any laptop or desktop computer with a modern operating system. The tool automatically downloads the data and the calibration files from the archives, runs the official Swift analysis software, and produces a number of results including images, the list of detected sources, X-ray fluxes, spectral energy distribution data, and spectral slope estimations. Results. We used our source list to build the LogN-LogS of extra-galactic sources, which perfectly matches that estimated by other satellites. Combining our survey with multi-frequency data, we selected a complete radio-flux-density-limited sample of high energy peaked blazars (HBL). The LogN-LogS built with this data set confirms that previous samples are incomplete below ∼20 mJy.
23

Fromm, Davida, Saketh Katta, Mason Paccione, Sophia Hecht, Joel Greenhouse, Brian MacWhinney and Tatiana T. Schnur. "A Comparison of Manual Versus Automated Quantitative Production Analysis of Connected Speech". Journal of Speech, Language, and Hearing Research 64, no. 4 (April 14, 2021): 1271–82. http://dx.doi.org/10.1044/2020_jslhr-20-00561.

Abstract
Purpose Analysis of connected speech in the field of adult neurogenic communication disorders is essential for research and clinical purposes, yet time and expertise are often cited as limiting factors. The purpose of this project was to create and evaluate an automated program to score and compute the measures from the Quantitative Production Analysis (QPA), an objective and systematic approach for measuring morphological and structural features of connected speech. Method The QPA was used to analyze transcripts of Cinderella stories from 109 individuals with acute–subacute left hemisphere stroke. Regression slopes and residuals were used to compare the results of manual scoring and automated scoring using the newly developed C-QPA command in CLAN, a set of programs for automatic analysis of language samples. Results The C-QPA command produced two spreadsheet outputs: an analysis spreadsheet with scores for each utterance in the language sample, and a summary spreadsheet with 18 score totals from the analysis spreadsheet and an additional 15 measures derived from those totals. Linear regression analysis revealed that 32 of the 33 measures had good agreement; auxiliary complexity index was the one score that did not have good agreement. Conclusions The C-QPA command can be used to perform automated analyses of language transcripts, saving time and training and providing reliable and valid quantification of connected speech. Transcribing in CHAT, the CLAN editor, also streamlined the process of transcript preparation for QPA and allowed for precise linking of media files to language transcripts for temporal analyses.
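The agreement check described here, regressing automated scores on manual scores and inspecting slopes and residuals, can be sketched minimally. The score pairs below are made up for illustration; they are not the study's data:

```python
# Minimal sketch of regression-based agreement between manual and automated
# scoring of the same measure. The score pairs are hypothetical.
def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    return slope, intercept, residuals

manual = [3.0, 5.0, 7.0, 9.0]       # hypothetical manual QPA scores
automated = [3.1, 4.9, 7.2, 8.8]    # hypothetical automated scores
slope, intercept, residuals = linear_fit(manual, automated)
# a slope near 1 with small residuals indicates good manual/automated agreement
```

A measure whose regression departs from slope ≈ 1 or shows large residuals, as reported for the auxiliary complexity index, would be flagged as lacking agreement.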
24

Ruivo, Pedro, Vitor Santos and Tiago Oliveira. "Success Factors for Data Protection in Services and Support Roles". International Journal of Human Capital and Information Technology Professionals 6, no. 3 (July 2015): 56–70. http://dx.doi.org/10.4018/ijhcitp.2015070104.

Abstract
The transformation of today's information and communications technology (ICT) firms requires services and support organizations to think differently about customer data protection. Data protection represents one of the security and privacy areas considered to be the next "blue ocean" in leveraging the creation of business opportunities. Based on contemporary literature, the authors conducted a two-phase qualitative methodology – expert interviews and the Delphi method – to identify and rank 12 factors that service and support professionals should follow in their daily tasks to ensure customer data protection: 1) Data classification, 2) Encryption, 3) Password protection, 4) Approved tools, 5) Access controls, 6) How many access data, 7) Testing data, 8) Geographic rules, 9) Data retention, 10) Data minimization, 11) Escalating issues, and 12) Readiness and training. This paper contributes to the growing body of knowledge in the data protection field. The authors provide directions for future work for practitioners and researchers.
25

Rao, D. V. Sridhara, R. Sankarasubramanian, Deepak Kumar, V. Singh, K. Mahadeva Bhat, P. Mishra, S. Vinayak et al. "Microstructural and Compositional Characterisation of Electronic Materials". Defence Science Journal 66, no. 4 (June 28, 2016): 341. http://dx.doi.org/10.14429/dsj.66.10207.

Abstract
Microstructural and compositional characterisation of electronic materials in support of the development of GaAs, GaN, and GaSb based multilayer device structures is described. Electron microscopy techniques employing nanometer and sub-nanometer scale imaging capability of structure and chemistry have been widely used to characterise various aspects of electronic and optoelectronic device structures such as InGaAs quantum dots, InGaAs pseudomorphic (pHEMT), and metamorphic (mHEMT) layers and the ohmic metallisation of GaAs and GaN high electron mobility transistors, nichrome thin film resistors, GaN heteroepitaxy on sapphire and silicon substrates, as well as InAs and GaN nanowires. They also established convergent beam electron diffraction techniques for determination of lattice distortions in III-V compound semiconductors, EBSD for crystalline misorientation studies of GaN epilayers and high-angle annular dark field techniques coupled with digital image analysis for the mapping of composition and strain in the nanometric layered structures. Also, in-situ SEM experiments were performed on ohmic metallisation of pHEMT device structures. The established electron microscopy expertise for electronic materials with demonstrated examples is presented.
26

Abdullah Sharadgah, Talha and Rami Abdulatif Sa'di. "Preparedness of Institutions of Higher Education for Assessment in Virtual Learning Environments During the COVID-19 Lockdown: Evidence of Bona Fide Challenges and Pragmatic Solutions". Journal of Information Technology Education: Research 19 (2020): 755–74. http://dx.doi.org/10.28945/4615.

Abstract
Aim/Purpose: This study investigates the perceptions of faculty members at Prince Sattam bin Abdulaziz University, Saudi Arabia, towards the preparedness of institutions of higher education (IHE) for assessment in virtual learning environments (VLEs) during the COVID-19 lockdown. In addition, the study explores evidence of bona fide challenges that impede the implementation of assessment in VLEs for both formative and summative purposes, and it attempts to propose some pragmatic solutions. Background: Assessment of student performance is an essential aspect of teaching and learning. However, substantial challenges exist in assessing student learning in VLEs. Methodology: Data on faculty's perceptions were collected using an e-survey. Ninety-six faculty members took part in this study. Contribution: This paper contributes to COVID-19 research by investigating the preparedness of IHE for assessment in VLEs from faculty members' perceptions. This practical research explores deleterious challenges that impede the implementation of assessment in VLEs for both formative and summative purposes, and it proposes effective solutions to prevent future challenges. These solutions can be used by IHE to improve the quality of assessment in VLEs. Findings: The findings revealed that IHE were not fully prepared to provide proper assessment in a VLE during the lockdown, nor did they have clear mechanisms for online assessment. The findings also showed that faculty members were not convinced that e-assessment could adequately assess all intended learning outcomes. They were convinced that most students cheated in one way or another.
Additionally, faculty had other concerns about (1) the absence of advanced systems to prevent academic dishonesty; (2) insufficient qualifications of some faculty in e-assessment because most of them have never done it before, and e-assessment has never been mandated by the university before the pandemic; and (3) insufficient attention paid to formative assessment. Recommendations for Practitioners: It is recommended that decision makers help faculty members improve by continuous training on developing e-assessment tests for both formative and summative assessments. Decision makers should also ensure the inclusion of technology-based invigilation software to preclude cheating, make pedagogical and technical expertise available, and reconsider e-assessment mechanisms. Faculty members are recommended to attend training sessions if they do not master the basic skills of e-assessment and should devise a variety of innovative e-assessments for formative and summative purposes. Recommendation for Researchers: More similar work is needed to provide more solutions to the challenges identified in this paper regarding the e-assessment in response to the COVID-19 pandemic. Impact on Society: The study suggests introducing technology-based solutions to ensure e-assessment security, or holding tests in locations where they can be invigilated whilst rules of social distancing can still be applied. Future Research: Future research could suggest processes and mechanisms to help faculty develop assessment in VLEs more effectively.
27

Izonin, Ivan and Nataliya Shakhovska. "Special issue: Informatics & data-driven medicine". Mathematical Biosciences and Engineering 18, no. 5 (2021): 6430–33. http://dx.doi.org/10.3934/mbe.2021319.

Abstract
The current state of the development of medicine is changing dramatically. Previously, data on a patient's health were collected only during a visit to the clinic. These were small chunks of information obtained from observations or experimental studies by clinicians, and were recorded on paper or in small electronic files. Advances in computing power, hardware and software tools, and the consequent emergence of miniature smart devices for various purposes (flexible electronic devices, medical tattoos, stick-on sensors, biochips, etc.) make it possible to monitor various vital signs of patients in real time and collect such data comprehensively. There is steady growth of such technologies in various fields of medicine for disease prevention, diagnosis, and therapy. Because of this, clinicians have begun to face problems similar to those of data scientists. They need to perform many different tasks based on huge amounts of data, in some cases with incompleteness and uncertainty and in most others with complex, non-obvious connections between them that differ for each individual patient (observation), as well as a lack of time to solve them effectively. These factors significantly decrease the quality of decision making, which usually affects the effectiveness of diagnosis or therapy. That is why the new concept in medicine widely known as Data-Driven Medicine is arising nowadays. This approach, which is based on IoT and Artificial Intelligence, provides possibilities for efficiently processing huge amounts of data of various types, stimulates new discoveries, and provides the necessary integration and management of such information to enable precision medical care. Such an approach could create a new wave in health care. It will provide effective management of a huge amount of comprehensive information about the patient's condition; it will increase the speed of the clinician's expertise, and it will maintain high accuracy of analysis based on digital tools and machine learning. The combined use of different digital devices and artificial intelligence tools will provide an opportunity to deeply understand disease, boost the accuracy and speed of its detection at early stages, and improve modes of diagnosis. Such invaluable information stimulates new ways to choose patient-oriented preventions and interventions for each individual case.
28

Hsu, Jia-Lien and Shuh-Jiun Chang. "Generating Music Transition by Using a Transformer-Based Model". Electronics 10, no. 18 (September 16, 2021): 2276. http://dx.doi.org/10.3390/electronics10182276.

Abstract
With the prevalence of online video-sharing platforms increasing in recent years, many people have started to create their own videos and upload them onto the Internet. In filmmaking, background music is one of the major elements besides the footage. With matching background music, a video can not only convey information, but also immerse the viewers in the setting of a story. There is often not only one piece of background music, but several, which is why audio editing and music production software are required. However, music editing requires professional expertise, and it can be hard for amateur creators to compose ideal pieces for a video. At the same time, there are some online audio libraries and music archives for sharing audio/music samples. For beginners, one possible way to compose background music for a video is "arranging and integrating samples", rather than making music from scratch. As a result, this leads to a problem: there might be gaps between samples, and transitions must be generated to fill those gaps. In our research, we build a transformer-based model for generating a music transition to bridge two prepared music clips. We design and perform experiments to demonstrate that our results are promising. The results are also analysed using a questionnaire, which reveals a positive response from listeners, supporting that our generated transitions conform to the background music.
29

Spektor, Franchesca and Sarah Fox. "The ‘Working Body’: Interrogating and Reimagining the Productivist Impulses of Transhumanism through Crip-Centered Speculative Design". Somatechnics 10, no. 3 (December 2020): 327–54. http://dx.doi.org/10.3366/soma.2020.0326.

Abstract
Appeals to ‘nature’ have historically led to normative claims about who is rendered valuable. These understandings elevate a universal, working body (read able-bodied, white, producing capital) that design and disability studies scholar Aimi Hamraie argues ‘has served as a template […] for centuries’ (2017: 20), becoming reified through our architectural, political, and technological infrastructures. Using the framing of the cyborg, we explore how contemporary assistive technologies have the potential to both reproduce and trouble such normative claims. The modern transhumanism movement imagines cyborg bodies as self-contained and invincible, championing assistive technologies that seek to assimilate disabled people towards ever-increasing standards of independent productivity and connecting worth with the body's capacity for labor. In contrast, disability justice communities see all bodies as inherently worthy and situated within a network of care-relationships. Rather than being invincible, the cripborg's relationship with technology is complicated by the ever-present functional and financial constraints of their assistive devices. Despite these lived experiences, the expertise and agency of disabled activist communities is rarely engaged throughout the design process. In this article, we use speculative design techniques to reimagine assistive technologies with members of disability communities, resulting in three fictional design proposals. The first is a manual for a malfunctioning exoskeleton, meant to fill in the gaps where corporate planned obsolescence and black-boxed design delimit repair and maintenance. The second is a zine instructing readers on how to build their own intimate prosthetics, emphasizing the need to design for pleasurable, embodied, and affective experience. 
The final design proposal is a city-owned fleet of assistive robots meant to push people in manual wheelchairs up hills or carry loads for elderly people, an example of an environmental adaptation which explores the problems of automating care. With and through these design concepts, we begin to explore assistive devices that center the values of disability communities, using design proposals to co-imagine versions of a more crip-centered future.
30

Tarek, Menna, Ehab K. A. Mohamed, Mostaq M. Hussain and Mohamed A. K. Basuony. "The implication of information technology on the audit profession in developing country". International Journal of Accounting & Information Management 25, no. 2 (May 2, 2017): 237–55. http://dx.doi.org/10.1108/ijaim-03-2016-0022.

Abstract
Purpose Information technology (IT) has largely affected contemporary businesses, and accordingly, it imposes challenges on the auditing profession. Several studies have investigated the impact of IT, in terms of the extent of use of IT audit techniques, but very few studies are available on the perceived importance of the said issue in developing countries. This study aims to explore the impact of implementing IT on the auditing profession in a developing country, namely, Egypt. Design/methodology/approach This study uses both quantitative and qualitative data. A survey of 112 auditors, representing three of the Big 4 audit firms as well as ten local audit firms in Egypt, is used to gather preliminary data, and semi-structured interviews are conducted to gather details/qualitative-pertained information. A field-based questionnaire developed by Bierstaker and Lowe (2008) is used in this study. This questionnaire is used first in conducting a pre-test, and then, the questionnaire for testing the final results is developed based on the feedback received from the test sample. Findings The findings of this study reveal that auditors' perception regarding the client's IT complexity is significantly affected by the use of IT specialists and the IT expertise of the auditors. Besides, they perceive that the new audit applications' importance and the extent of their usage are significantly affected by the IT expertise of the auditors. The results also reveal that the auditors' perception regarding the client's IT is not affected by the control risk assessment. However, the auditors perceive that the client's IT is significantly affected by electronic data retention policies. The results also indicated that the auditors' perception regarding the importance of the new audit applications is not affected by the client's type of industry. The auditors find that the use of audit applications as well as their IT expertise are not significantly affected by the audit firm size.
However, they perceive that the client's IT complexity as well as the extent of using IT specialists are significantly affected by the audit firm size. Research limitations/implications This study is subject to certain limitations. First, the sample size of this research is somewhat small because it is based on the convenience sampling technique, and some of the respondents were not helpful in answering the surveys distributed for this research's purpose. This can be attributed to the fear of the competitors that their opponent may want to gather information regarding their work to be able to succeed in the competition in the market, so they become reluctant to provide any information about their firm. Even some people who were interested in participating did not have enough time, because the surveys were distributed during the high season of their audit work and there was limited time for the research to be accomplished. Hence, it is difficult to generalize the results among all the audit firms in Egypt because this limits the scope of the analysis, and it can be a significant obstacle in finding a trend. However, this can be an opportunity for future research. Second, the questionnaire is long and people do not have enough time to complete it. This also affected the response rate. In addition to this, the language of the questionnaire was English, so some respondents from the local audit firms found it difficult to understand some sophisticated IT terms. Practical implications This study makes some recommendations that can be used to solve practical problems regarding the issues concerned. This study focuses on accounting information system (AIS) training during the initial years of the auditors' careers to help staff auditors when they become seniors to be more skilled with the AIS expertise needed in today's audit environment.
Clear policy statements are important to direct employees so that IT auditors evaluate the adequacy of standards and comply with them. This study suggests increasing the use of AIS to enhance individual technical and analytical skill sets and to develop specialized teams capable of evaluating the effectiveness of computer systems during audit engagements. This study further recommends establishing Egyptian auditing standards for this electronic environment to guide auditors in conducting their audit work. Social implications Auditors should prioritize causes of risk and manage them with a clear understanding of who receives them, how they are communicated and what action should be taken in a given community/society. They therefore have to determine and evaluate all risks according to the client’s type of industry (manufacturing, non-financial services and financial). Auditors also have to continually receive feedback on the utility of continuous auditing (CA) in assessing risk. In particular, it is better for the auditor to determine how the audit results will be used in the enterprise risk management activity performed by management. In addition, privacy has several implications for auditing, so it has to be reflected in the audit program and planning as well as in the handling of assignment files and reports. Likewise, retention of electronic evidence for a limited period of time may require the auditor to select samples several times during the audit period rather than just at year end. Originality/value As mentioned, this study is conducted within a developing country’s context. The use and importance of IT is a reality of our time. However, very few studies are devoted to exploring the use and importance of IT in auditing in developing countries, and thus this study is significant in providing a better understanding of the issue.
Moreover, knowledge of how IT is used, the related risks and the ability to use IT as a resource in the performance of audit work are essential for auditor effectiveness at all levels, including in developing countries.
APA, Harvard, Vancouver, ISO, etc. styles
31

VerMilyea, M., J. M. M. Hall, S. M. Diakiw, A. Johnston, T. Nguyen, D. Perugini, A. Miller, A. Picou, A. P. Murphy y M. Perugini. "Development of an artificial intelligence-based assessment model for prediction of embryo viability using static images captured by optical light microscopy during IVF". Human Reproduction 35, n.º 4 (abril de 2020): 770–84. http://dx.doi.org/10.1093/humrep/deaa013.

Full text
Abstract
Abstract STUDY QUESTION Can an artificial intelligence (AI)-based model predict human embryo viability using images captured by optical light microscopy? SUMMARY ANSWER We have combined computer vision image processing methods and deep learning techniques to create the non-invasive Life Whisperer AI model for robust prediction of embryo viability, as measured by clinical pregnancy outcome, using single static images of Day 5 blastocysts obtained from standard optical light microscope systems. WHAT IS KNOWN ALREADY Embryo selection following IVF is a critical factor in determining the success of ensuing pregnancy. Traditional morphokinetic grading by trained embryologists can be subjective and variable, and other complementary techniques, such as time-lapse imaging, require costly equipment and have not reliably demonstrated predictive ability for the endpoint of clinical pregnancy. AI methods are being investigated as a promising means for improving embryo selection and predicting implantation and pregnancy outcomes. STUDY DESIGN, SIZE, DURATION These studies involved analysis of retrospectively collected data including standard optical light microscope images and clinical outcomes of 8886 embryos from 11 different IVF clinics, across three different countries, between 2011 and 2018. PARTICIPANTS/MATERIALS, SETTING, METHODS The AI-based model was trained using static two-dimensional optical light microscope images with known clinical pregnancy outcome as measured by fetal heartbeat to provide a confidence score for prediction of pregnancy. Predictive accuracy was determined by evaluating sensitivity, specificity and overall weighted accuracy, and was visualized using histograms of the distributions of predictions. Comparison to embryologists’ predictive accuracy was performed using a binary classification approach and a 5-band ranking comparison. 
MAIN RESULTS AND THE ROLE OF CHANCE The Life Whisperer AI model showed a sensitivity of 70.1% for viable embryos while maintaining a specificity of 60.5% for non-viable embryos across three independent blind test sets from different clinics. The weighted overall accuracy in each blind test set was >63%, with a combined accuracy of 64.3% across both viable and non-viable embryos, demonstrating model robustness and generalizability beyond the result expected from chance. Distributions of predictions showed clear separation of correctly and incorrectly classified embryos. Binary comparison of viable/non-viable embryo classification demonstrated an improvement of 24.7% over embryologists’ accuracy (P = 0.047, n = 2, Student’s t test), and 5-band ranking comparison demonstrated an improvement of 42.0% over embryologists (P = 0.028, n = 2, Student’s t test). LIMITATIONS, REASONS FOR CAUTION The AI model developed here is limited to analysis of Day 5 embryos; therefore, further evaluation or modification of the model is needed to incorporate information from different time points. The endpoint described is clinical pregnancy as measured by fetal heartbeat, and this does not indicate the probability of live birth. The current investigation was performed with retrospectively collected data, and hence it will be of importance to collect data prospectively to assess real-world use of the AI model. WIDER IMPLICATIONS OF THE FINDINGS These studies demonstrated an improved predictive ability for evaluation of embryo viability when compared with embryologists’ traditional morphokinetic grading methods. The superior accuracy of the Life Whisperer AI model could lead to improved pregnancy success rates in IVF when used in a clinical setting. It could also potentially assist in standardization of embryo selection methods across multiple clinical environments, while eliminating the need for complex time-lapse imaging equipment.
Finally, the cloud-based software application used to apply the Life Whisperer AI model in clinical practice makes it broadly applicable and globally scalable to IVF clinics worldwide. STUDY FUNDING/COMPETING INTEREST(S) Life Whisperer Diagnostics, Pty Ltd is a wholly owned subsidiary of the parent company, Presagen Pty Ltd. Funding for the study was provided by Presagen with grant funding received from the South Australian Government: Research, Commercialisation and Startup Fund (RCSF). ‘In kind’ support and embryology expertise to guide algorithm development were provided by Ovation Fertility. J.M.M.H., D.P. and M.P. are co-owners of Life Whisperer and Presagen. Presagen has filed a provisional patent for the technology described in this manuscript (52985P pending). A.P.M. owns stock in Life Whisperer, and S.M.D., A.J., T.N. and A.P.M. are employees of Life Whisperer.
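The evaluation metrics this abstract reports (sensitivity on viable embryos, specificity on non-viable embryos, and their weighted combination) can be sketched as follows. This is a minimal illustration of the standard definitions, not the study's code; the counts below are invented for demonstration and are not study data.

```python
# Standard binary-classification metrics as named in the abstract.
# tp/fn: viable embryos predicted viable / missed
# tn/fp: non-viable embryos predicted non-viable / missed

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of truly viable embryos the model flags as viable."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of truly non-viable embryos the model flags as non-viable."""
    return tn / (tn + fp)

def weighted_accuracy(tp: int, fn: int, tn: int, fp: int) -> float:
    """Mean of sensitivity and specificity (balanced accuracy)."""
    return (sensitivity(tp, fn) + specificity(tn, fp)) / 2

# Invented counts chosen to mirror the reported rates:
tp, fn = 701, 299
tn, fp = 605, 395
print(round(sensitivity(tp, fn), 3))                # 0.701
print(round(specificity(tn, fp), 3))                # 0.605
print(round(weighted_accuracy(tp, fn, tn, fp), 3))  # 0.653
```

Averaging the two rates rather than pooling raw counts keeps the accuracy figure meaningful when viable and non-viable embryos are unevenly represented in a test set.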
APA, Harvard, Vancouver, ISO, etc. styles
32

Kirkpatrick, Helen Beryl, Jennifer Brasch, Jacky Chan y Shaminderjot Singh Kang. "A Narrative Web-Based Study of Reasons To Go On Living after a Suicide Attempt: Positive Impacts of the Mental Health System". Journal of Mental Health and Addiction Nursing 1, n.º 1 (15 de febrero de 2017): e3-e9. http://dx.doi.org/10.22374/jmhan.v1i1.10.

Full text
Abstract
Background and Objective: Suicide attempts are 10-20X more common than completed suicides and an important risk factor for death by suicide, yet most people who attempt suicide do not die by suicide. The process of recovering after a suicide attempt has not been well studied. The Reasons to go on Living (RTGOL) Project, a narrative web-based study, focuses on the experiences of people who have attempted suicide and made the decision to go on living. Narrative research is ideally suited to understanding personal experiences critical to recovery following a suicide attempt, including the transition to a state of hopefulness. Voices from people with lived experience can help us plan and conceptualize this work. This paper reports on a secondary research question of the larger study: what stories do participants tell of the positive role/impact of the mental health system. Material and Methods: A website created for The RTGOL Project (www.thereasons.ca) enabled participants to anonymously submit a story about their suicide attempt and recovery, a process which enabled participation from a large and diverse group of participants. The only direction given was “if you have made a suicide attempt or seriously considered suicide and now want to go on living, we want to hear from you.” The unstructured narrative format allowed participants to describe their experiences in their own words, to include and emphasize what they considered important. Data analysis occurred in several phases over the 5 years of the study, resulting in the identification of data that were entered into an Excel file. This analysis used stories where participants described positive involvement with the mental health system (50 stories). Results: Several participants reflected on experiences many years previous, providing the privilege of learning how their life unfolded, what made a difference.
Over a five-year period, 50 of 226 stories identified positive experiences with mental health care with sufficient detail to allow analysis, and these are the focus of this paper. These 50 stories described a range of suicidal behaviours, from suicidal ideation only to medically severe suicide attempts. Most described one or more suicide attempts. Three themes were identified: 1) trust and relationship with a health care professional, 2) the role of family and friends, and 3) a wide range of services. Conclusion: Stories open a window into the experiences of the period after a suicide attempt. This study allowed for an understanding of how mental health professionals might help individuals who have attempted suicide write a different story, a life-affirming story. The stories that participants shared offer some understanding of “how” to provide support at a most-needed critical juncture for people as they interact with health care providers, including immediately after a suicide attempt. Results of this study reinforce that just one caring professional can make a tremendous difference to a person who has survived a suicide attempt.
Key Words: web-based; suicide; suicide attempt; mental health system; narrative research
Word Count: 478
Introduction
My Third (or fourth) Suicide Attempt
I laid in the back of the ambulance, the snow of too many doses of ativan dissolving on my tongue.
They hadn't even cared enough about me
to put someone in the back with me,
and so, frustrated,
I'd swallowed all the pills I had with me
— not enough to do what I wanted it to right then,
but more than enough to knock me out for a good 14 hours.
I remember very little after that;
benzodiazepines like ativan commonly cause pre- and post-amnesia, says Google helpfully
I wake up in a locked room
a woman manically drawing on the windows with crayons
the colors of light through the glass
diffused into rainbows of joy scattered about the room
as if she were coloring on us all,
all of the tattered remnants of humanity in a psych ward
made into a brittle mosaic, a quilt of many hues, a Technicolor dreamcoat
and I thought
I am so glad to be able to see this. (Story 187)
The nurse opening that door will have a lasting impact on how this story unfolds and on this person’s life. Each year, almost one million people die from suicide, approximately one death every 40 seconds. Suicide attempts are much more frequent, with up to an estimated 20 attempts for every death by suicide.1 Suicide-related behaviours range from suicidal ideation and self-injury to death by suicide. We are unable to directly study those who die by suicide, but effective intervention after a suicide attempt could reduce the risk of subsequent death by suicide. Near-fatal suicide attempts have been used to explore the boundary with completed suicides.
Findings indicated that violent suicide attempters and serious attempters (seriousness of the medical consequences to define near-fatal attempts) were more likely to make repeated, and higher lethality suicide attempts.2 In a case-control study, the medically severe suicide attempts group (78 participants), epidemiologically very similar to those who complete suicide, had significantly higher communication difficulties; the risk for death by suicide multiplied if accompanied by feelings of isolation and alienation.3 Most research in suicidology has been quantitative, focusing almost exclusively on identifying factors that may be predictive of suicidal behaviours, and on explanation rather than understanding.4 Qualitative research, focusing on the lived experiences of individuals who have attempted suicide, may provide a better understanding of how to respond in empathic and helpful ways to prevent future attempts and death by suicide.4,5 Fitzpatrick6 advocates for narrative research as a valuable qualitative method in suicide research, enabling people to construct and make sense of the experiences and their world, and imbue it with meaning. A review of qualitative studies examining the experiences of recovering from or living with suicidal ideation identified 5 interconnected themes: suffering, struggle, connection, turning points, and coping.7 Several additional qualitative studies about attempted suicide have been reported in the literature. Participants have included patients hospitalized for attempting suicide8, and/or suicidal ideation,9 out-patients following a suicide attempt and their caregivers,10 veterans with serious mental illness and at least one hospitalization for a suicide attempt or imminent suicide plan.11 Relationships were a consistent theme in these studies. 
Interpersonal relationships and an empathic environment were perceived as therapeutic and protective, enabling the expression of thoughts and self-understanding.8 Given the connection to relationship issues, the authors suggested it may be helpful to provide support for the relatives of patients who have attempted suicide. A sheltered, friendly environment and support systems, which included caring by family and friends, and treatment by mental health professionals, helped the suicidal healing process.10 Receiving empathic care led to positive changes and an increased level of insight; just one caring professional could make a tremendous difference.11 Kraft and colleagues9 concluded with the importance of hearing directly from those who are suicidal in order to help them, that only when we understand, “why suicide”, can we help with an alternative, “why life?” In a grounded theory study about help-seeking for self-injury, Long and colleagues12 identified that self-injury was not the problem for their participants, but a panacea, even if temporary, to painful life experiences. Participant narratives reflected a complex journey for those who self-injured: their wish when help-seeking was identified by the theme “to be treated like a person”. There has also been a focus on the role and potential impact of psychiatric/mental health nursing. Through interviews with experienced in-patient nurses, Carlen and Bengtsson13 identified the need to see suicidal patients as subjective human beings with unique experiences. 
This mirrors research with patients, which concluded that the interaction with personnel who are devoted, hope-mediating and committed may be crucial to a patient’s desire to continue living.14 Interviews with individuals who received mental health care for a suicidal crisis following a serious attempt led to the development of a theory for psychiatric nurses with the central variable, reconnecting the person with humanity across 3 phases: reflecting an image of humanity, guiding the individual back to humanity, and learning to live.15 Other research has identified important roles for nurses working with patients who have attempted suicide by enabling the expression of thoughts and developing self-understanding8, helping to see things differently and reconnecting with others,10 assisting the person in finding meaning from their experience to turn their lives around, and maintain/and develop positive connections with others.16 However, one literature review identified that negative attitudes toward self-harm were common among nurses, with more positive attitudes among mental health nurses than general nurses. The authors concluded that education, both reflective and interactive, could have a positive impact.17 This paper is one part of a larger web-based narrative study, the Reasons to go on Living Project (RTGOL), that seeks to understand the transition from making a suicide attempt to choosing life. When invited to tell their stories anonymously online, what information would people share about their suicide attempts? This paper reports on a secondary research question of the larger study: what stories do participants tell of the positive role/impact of the mental health system. The focus on the positive impact reflects an appreciative inquiry approach which can promote better practice.18 Methods Design and Sample A website created for The RTGOL Project (www.thereasons.ca) enabled participants to anonymously submit a story about their suicide attempt and recovery. 
Participants were required to read and agree with a consent form before being able to submit their story through a text box or by uploading a file. No demographic information was requested. Text submissions were embedded into an email and sent to an account created for the Project without collecting information about the IP address or other identifying information. The content of the website was reviewed by legal counsel before posting, and the study was approved by the local Research Ethics Board. Stories were collected for 5 years (July 2008-June 2013). The RTGOL Project enabled participation by a large, diverse audience, at their own convenience of time and location, providing they had computer access. The unstructured narrative format allowed participants to describe their experiences in their own words, to include and emphasize what they considered important. Of the 226 submissions to the website, 112 described involvement at some level with the mental health system, and 50 provided sufficient detail about positive experiences with mental health care to permit analysis. There were a range of suicidal behaviours in these 50 stories: 8 described suicidal ideation only; 9 met the criteria of medically severe suicide attempts3; 33 described one or more suicide attempts. For most participants, the last attempt had been some years in the past, even decades, prior to writing. Results Stories of positive experiences with mental health care described the idea of a door opening, a turning point, or helping the person to see their situation differently. Themes identified were: (1) relationship and trust with a Health Care Professional (HCP), (2) the role of family and friends (limited to in-hospital experiences), and (3) the opportunity to access a range of services. The many reflective submissions of experiences told many years after the suicide attempt(s) speaks to the lasting impact of the experience for that individual. 
Trust and Relationship with a Health Care Professional A trusting relationship with a health professional helped participants to see things in a different way, a more hopeful way and over time. “In that time of crisis, she never talked down to me, kept her promises, didn't panic, didn't give up, and she kept believing in me. I guess I essentially borrowed the hope that she had for me until I found hope for myself.” (Story# 35) My doctor has worked extensively with me. I now realize that this is what will keep me alive. To be able to feel in my heart that my doctor does care about me and truly wants to see me get better.” (Story 34). The writer in Story 150 was a nurse, an honours graduate. The 20 years following graduation included depression, hospitalizations and many suicide attempts. “One day after supper I took an entire bottle of prescription pills, then rode away on my bike. They found me late that night unconscious in a downtown park. My heart threatened to stop in the ICU.” Then later, “I finally found a person who was able to connect with me and help me climb out of the pit I was in. I asked her if anyone as sick as me could get better, and she said, “Yes”, she had seen it happen. Those were the words I had been waiting to hear! I quickly became very motivated to get better. I felt heard and like I had just found a big sister, a guide to help me figure out how to live in the world. This person was a nurse who worked as a trauma therapist.” At the time when the story was submitted, the writer was applying to a graduate program. Role of Family and Friends Several participants described being affected by their family’s response to their suicide attempt. Realizing the impact on their family and friends was, for some, a turning point. The writer in Story 20 told of experiences more than 30 years prior to the writing. 
She described her family of origin as “truly dysfunctional,” and she suffered from episodes of depression and hospitalization during her teen years. Following the birth of her second child, and many family difficulties, “It was at this point that I became suicidal.” She made a decision to kill herself by jumping off the balcony (6 stories). “At the very last second as I hung onto the railing of the balcony. I did not want to die but it was too late. I landed on the parking lot pavement.” She wrote that the pain was indescribable, due to many broken bones. “The physical pain can be unbearable. Then you get to see the pain and horror in the eyes of someone you love and who loves you. Many people suggested to my husband that he should leave me in the hospital, go on with life and forget about me. During the process of recovery in the hospital, my husband was with me every day…With the help of psychiatrists and a later hospitalization, I was actually diagnosed as bipolar…Since 1983, I have been taking lithium and have never had a recurrence of suicidal thoughts or for that matter any kind of depression.” The writer in Story 62 suffered childhood sexual abuse. When she came forward with it, she felt she was not heard. Self-harm on a regular basis was followed by “numerous overdoses trying to end my life.” Overdoses led to psychiatric hospitalizations that were unhelpful because she was unable to trust staff. “My way of thinking was that ending my life was the only answer. There had been numerous attempts, too many to count. My thoughts were that if I wasn’t alive I wouldn’t have to deal with my problems.” In her final attempt, she plunged over the side of a mountain, dropping 80 feet, resulting in several serious injuries. “I was so angry that I was still alive.” However, “During my hospitalization I began to realize that my family and friends were there by my side continuously, I began to realize that I wasn't only hurting myself. 
I was hurting all the important people in my life. It was then that I told myself I am going to do whatever it takes.” A turning point is not to say that the difficulties did not continue. The writer of Story 171 tells of a suicide attempt 7 years previous, and the ongoing anguish. She had been depressed for years and had thoughts of suicide on a daily basis. After a serious overdose, she woke up the next day in a hospital bed, her husband and 2 daughters at her bed. “Honestly, I was disappointed to wake up. But, then I saw how scared and hurt they were. Then I was sorry for what I had done to them. Since then I have thought of suicide but know that it is tragic for the family and is a hurt that can never be undone. Today I live with the thought that I am here for a reason and when it is God's time to take me then I will go. I do believe living is harder than dying. I do believe I was born for a purpose and when that is accomplished I will be released. …Until then I try to remind myself of how I am blessed and try to appreciate the wonders of the world and the people in it.” Range of Services The important role of mental health and recovery services was frequently mentioned, including dialectical behavioural therapy (DBT)/cognitive-behavioural therapy (CBT), recovery group, group therapy, Alcoholics Anonymous, accurate diagnosis, and medications. The writer in Story 30 was 83 years old when she submitted her story, reflecting on a life with both good and bad times. She first attempted suicide at age 10 or 12. A serious post-partum depression followed the birth of her second child, and over the years, she experienced periods of suicidal intent: “Consequently, a few years passed and I got to feeling suicidal again. I had pills in one pocket and a clipping for “The Recovery Group” in the other pocket. As I rode on the bus trying to make up my mind, I decided to go to the Recovery Group first. I could always take the pills later. 
I found the Recovery Group and yoga helpful; going to meetings sometimes twice a day until I got thinking more clearly and learned how to deal with my problems.” Several participants described the value of CBT or DBT in learning to challenge perceptions. “I have tools now to differentiate myself from the illness. I learned I'm not a bad person but bad things did happen to me and I survived.”(Story 3) “The fact is that we have thoughts that are helpful and thoughts that are destructive….. I knew it was up to me if I was to get better once and for all.” (Story 32): “In the hospital I was introduced to DBT. I saw a nurse (Tanya) every day and attended a group session twice a week, learning the techniques. I worked with the people who wanted to work with me this time. Tanya said the same thing my counselor did “there is no study that can prove whether or not suicide solves problems” and I felt as though I understood it then. If I am dead, then all the people that I kept pushing away and refusing their help would be devastated. If I killed myself with my own hand, my family would be so upset. DBT taught me how to ‘ride my emotional wave’. ……….. DBT has changed my life…….. My life is getting back in order now, thanks to DBT, and I have lots of reasons to go on living.”(Story 19) The writer of Story 67 described the importance of group therapy. “Group therapy was the most helpful for me. It gave me something besides myself to focus on. Empathy is such a powerful emotion and a pathway to love. And it was a huge relief to hear others felt the same and had developed tools of their own that I could try for myself! I think I needed to learn to communicate and recognize when I was piling everything up to build my despair. I don’t think I have found the best ways yet, but I am lifetimes away from that teenage girl.” (Story 67) The author of story 212 reflected on suicidal ideation beginning over 20 years earlier, at age 13. Her first attempt was at 28. 
“I thought everyone would be better off without me, especially my children, I felt like the worst mum ever, I felt like a burden to my family and I felt like I was a failure at life in general.” She had more suicide attempts, experienced the death of her father by suicide, and then finally found her doctor. “Now I’m on meds for a mood disorder and depression, my family watch me closely, and I see my doctor regularly. For the first time in 20 years, I love being a mum, a sister, a daughter, a friend, a cousin etc.” Discussion The 50 stories that describe positive experiences in the health care system constitute a larger group than most other similar studies, and most participants had made one or more suicide attempts. Several writers reflected back many years, telling stories of long ago, as with the 83-year-old participant (Story 30) whose story provided the privilege of learning how the author’s life unfolded. In clinical practice, we often do not know – how did the story turn out? The stories that describe receiving health care speak to the impact of the experience, and the importance of the issues identified in the mental health system. We identified 3 themes, but it was often the combination that participants described in their stories that was powerful, as demonstrated in Story 20, the young new mother who had fallen from a balcony 30 years earlier. Voices from people with lived experience can help us plan and conceptualize our clinical work. Results are consistent with, and add to, the previous work on the importance of therapeutic relationships.8,10,11,14–16 It is from the stories in this study that we come to understand the powerful experience of seeing a family member’s reaction following a participant’s suicide attempt, and how that can be a potent turning point as identified by Lakeman and Fitzgerald.7 Ghio and colleagues8 and Lakeman16 identified the important role for staff/nurses in supporting families due to the connection to relationship issues.
This research also calls for support for families to recognize the important role they have in helping the person understand how much they mean to them, and to promote the potential impact of a turning point. The importance of the range of services reflects Lakeman and Fitzgerald’s7 theme of coping, associating positive change with an increased repertoire of coping strategies. These findings have implications for practice, research and education. Working with individuals who are suicidal can help them develop and tell a different story, help them move from a death-oriented to life-oriented position,15 from “why suicide” to “why life.”9 Hospitalization provides a person with the opportunity to reflect, to take time away from “the real world” to consider oneself, the suicide attempt, connections with family and friends and life goals, and to recover physically and emotionally. Hospitalization is also an opening to involve the family in the recovery process. The intensity of the immediate period following a suicide attempt provides a unique opportunity for nurses to support and coach families, to help both patients and family begin to see things differently and begin to create that different story. In this way, family and friends can be both a support to the person who has attempted suicide, and receive help in their own struggles with this experience. It is also important to recognize that this short period of opportunity is not specific to the nurses in psychiatric units, as the nurses caring for a person after a medically severe suicide attempt will frequently be the nurses in the ICU or Emergency departments. Education, both reflective and interactive, could have a positive impact.17 Helping staff develop the attitudes, skills and approach necessary to be helpful to a person post-suicide attempt is beginning to be reported in the literature.21 Further implications relate to nursing curriculum.
Given the extent of suicidal ideation, suicide attempts and deaths by suicide, this merits an important focus. This could include specific scenarios, readings by people affected by suicide, both patients themselves and their families or survivors, and discussions with individuals who have made an attempt(s) and made a decision to go on living. All of this is, of course, not specific to nursing. All members of the interprofessional health care team can support the transition to recovery of a person after a suicide attempt using the strategies suggested in this paper, in addition to other evidence-based interventions and treatments. Findings from this study need to be considered in light of some specific limitations. First, the focus was on those who have made a decision to go on living, and we have only the information the participants included in their stories. No follow-up questions were possible. The nature of the research design meant that participants required access to a computer with Internet and the ability to communicate in English. This study does not provide a comprehensive view of in-patient care. However, it offers important inputs to enhance other aspects of care, such as assessing safety as a critical foundation to care. We consider these limitations were more than balanced by the richness of the many stories that a totally anonymous process allowed. Conclusion Stories open a window into the experiences of a person during the period after a suicide attempt. The RTGOL Project allowed for an understanding of how we might help suicidal individuals change the script, write a different story. The stories that participants shared give us some understanding of “how” to provide support at a most-needed critical juncture for people as they interact with health care providers immediately after a suicide attempt. 
While we cannot know the experiences of those who did not survive a suicide attempt, results of this study reinforce that just one caring professional can make a crucial difference to a person who has survived a suicide attempt. We end with where we began. Who will open the door?
References
1. World Health Organization. Suicide prevention and special programmes. Geneva: Author; 2013. http://www.who.int/mental_health/prevention/suicide/suicideprevent/en/index.html
2. Giner L, Jaussent I, Olie E, et al. Violent and serious suicide attempters: One step closer to suicide? J Clin Psychiatry 2014:73(3):3191–197.
3. Levi-Belz Y, Gvion Y, Horesh N, et al. Mental pain, communication difficulties, and medically serious suicide attempts: A case-control study. Arch Suicide Res 2014:18:74–87.
4. Hjelmeland H and Knizek BL. Why we need qualitative research in suicidology? Suicide Life Threat Behav 2010:40(1):74–80.
5. Gunnell D. A population health perspective on suicide research and prevention: What we know, what we need to know, and policy priorities. Crisis 2015:36(3):155–60.
6. Fitzpatrick S. Looking beyond the qualitative and quantitative divide: Narrative, ethics and representation in suicidology. Suicidol Online 2011:2:29–37.
7. Lakeman R and FitzGerald M. How people live with or get over being suicidal: A review of qualitative studies. J Adv Nurs 2008:64(2):114–26.
8. Ghio L, Zanelli E, Gotelli S, et al. Involving patients who attempt suicide in suicide prevention: A focus group study. J Psychiatr Ment Health Nurs 2011:18:510–18.
9. Kraft TL, Jobes DA, Lineberry TW, et al. Brief report: Why suicide? Perceptions of suicidal inpatients and reflections of clinical researchers. Arch Suicide Res 2010:14(4):375–82.
10. Sun F, Long A, Tsao L, et al. The healing process following a suicide attempt: Context and intervening conditions. Arch Psychiatr Nurs 2014:28:66–61.
11. Montross Thomas L, Palinkas L, et al. Yearning to be heard: What veterans teach us about suicide risk and effective interventions. Crisis 2014:35(3):161–67.
12. Long M, Manktelow R, and Tracey A. The healing journey: Help seeking for self-injury among a community population. Qual Health Res 2015:25(7):932–44.
13. Carlen P and Bengtsson A. Suicidal patients as experienced by psychiatric nurses in inpatient care. Int J Ment Health Nurs 2007:16:257–65.
14. Samuelsson M, Wiklander M, Asberg M, et al. Psychiatric care as seen by the attempted suicide patient. J Adv Nurs 2000:32(3):635–43.
15. Cutcliffe JR, Stevenson C, Jackson S, et al. A modified grounded theory study of how psychiatric nurses work with suicidal people. Int J Nurs Studies 2006:43(7):791–802.
16. Lakeman R. What can qualitative research tell us about helping a person who is suicidal? Nurs Times 2010:106(33):23–26.
17. Karman P, Kool N, Poslawsky I, et al. Nurses’ attitudes toward self-harm: a literature review. J Psychiatr Ment Health Nurs 2015:22:65–75.
18. Carter B. ‘One expertise among many’ – working appreciatively to make miracles instead of finding problems: Using appreciative inquiry as a way of reframing research. J Res Nurs 2006:11(1):48–63.
19. Lieblich A, Tuval-Mashiach R, Zilber T. Narrative research: Reading, analysis, and interpretation. Sage Publications; 1998.
20. Braun V and Clarke V. Using thematic analysis in psychology. Qual Res Psychol 2006:3(2):77–101.
21. Kishi Y, Otsuka K, Akiyama K, et al. Effects of a training workshop on suicide prevention among emergency room nurses. Crisis 2014:35(5):357–61.
APA, Harvard, Vancouver, ISO, etc. styles
33

Velinder, Matt, Dillon Lee and Gabor Marth. "ped_draw: pedigree drawing with ease". BMC Bioinformatics 21, no. 1 (December 2020). http://dx.doi.org/10.1186/s12859-020-03917-4.

Full text
Abstract
Abstract Background Pedigree (ped) files are used ubiquitously in bioinformatics and genetics studies to convey critical information about the relatedness, sex and affected status of study samples. While the text-based format of ped files is efficient for computational methods, it is not immediately intuitive to a bioinformatician or geneticist trying to understand family structures, many of which encode the affected status of individuals across multiple generations. Visualizing a pedigree as connected nodes with descriptive shapes and shading provides a far more interpretable format for recognizing visual patterns and intuiting family structures. Despite these advantages of a visual pedigree, it remains difficult to quickly and accurately produce one from a pedigree text file. Results Here we describe ped_draw, a command-line and web tool offering a simple and easy solution to pedigree visualization. Ped_draw is capable of drawing complex multi-generational pedigrees and conforms to the accepted standards for depicting pedigrees visually. The command-line tool can be used as a simple one-liner, utilizing Graphviz to generate an image file. The web tool, https://peddraw.github.io, allows the user to paste a pedigree file, type one into the text box to construct it, or upload a pedigree file. Users can save the generated image in various formats. Conclusions We believe ped_draw is a useful pedigree drawing tool that improves on current methods through its ease of use and approachability. Ped_draw allows users with various levels of expertise to quickly and easily visualize pedigrees.
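The conversion such a tool performs, from the standard six-column PED layout (family ID, individual ID, father, mother, sex, phenotype) to Graphviz DOT text, can be sketched in a few lines of Python. This is an illustrative stand-in, not ped_draw's actual implementation:

```python
# Sketch of a PED -> Graphviz DOT conversion. Column conventions follow the
# standard PED layout: fam_id, indiv_id, father_id, mother_id, sex, phenotype,
# where sex 1 = male, 2 = female and phenotype 2 = affected, 0 = missing.
def ped_to_dot(ped_lines):
    nodes, edges = [], []
    for line in ped_lines:
        if not line.strip() or line.startswith("#"):
            continue
        fam, ind, father, mother, sex, pheno = line.split()[:6]
        # Accepted pedigree symbology: squares for males, circles for females,
        # filled shapes for affected individuals.
        shape = {"1": "box", "2": "ellipse"}.get(sex, "diamond")
        style = "filled" if pheno == "2" else "solid"
        nodes.append(f'  "{ind}" [shape={shape}, style={style}];')
        for parent in (father, mother):
            if parent != "0":  # "0" means the parent is not in the pedigree
                edges.append(f'  "{parent}" -> "{ind}";')
    return "digraph pedigree {\n" + "\n".join(nodes + edges) + "\n}"
```

Piping the returned string to Graphviz (for example `dot -Tpng`) renders the image, which is essentially the one-liner workflow the abstract describes.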
34

Manik, Saut Irianto. "Proses Digital Imaging Iklan Cetak Indonesia". JSRW (Jurnal Senirupa Warna) 6, no. 1 (May 26, 2019). http://dx.doi.org/10.36806/jsrw.v6i1.31.

Abstract Image manipulation has developed rapidly in print advertising. Unlike the 1980s and 1990s, which still relied on manual production techniques including airbrush spraying, cut-and-paste collage, double-exposure photography and others, the visual-manipulation work process in today's advertising industry has become a digital workflow known as digital imaging. Today's print media combines digital-imaging processing of various elements: letters, images or illustrations, photographs, shape accents, and coloring styles that create dynamic, imaginative results with a touch of technology. This study shows that, in general, the forms and patterns of this expertise include skill in using computer-graphics tools, photographic ability, and the ability to create creative visual concepts in line with an advertisement's objectives. The study also found that, within the advertising industry's scope of work, crafting in visualization through the digitization process creates certain standards for digital-imaging results; image quality, for example, requires file-management expertise and must receive particular attention throughout the design process, from initial preparation through the digital-imaging process to the final visual output. Keywords: image manipulation, print advertising, crafting, digital imaging
35

Eagles, Nicholas J., Emily E. Burke, Jacob Leonard, Brianna K. Barry, Joshua M. Stolz, Louise Huuki, BaDoi N. Phan et al. "SPEAQeasy: a scalable pipeline for expression analysis and quantification for R/bioconductor-powered RNA-seq analyses". BMC Bioinformatics 22, no. 1 (May 1, 2021). http://dx.doi.org/10.1186/s12859-021-04142-3.

Abstract Background RNA sequencing (RNA-seq) is a common and widespread biological assay, and an increasing amount of data is generated with it. In practice, there are a large number of individual steps a researcher must perform before raw RNA-seq reads yield directly valuable information, such as differential gene expression data. Existing software tools are typically specialized, performing only one step of a larger workflow, such as alignment of reads to a reference genome. The demand for a more comprehensive and reproducible workflow has led to the production of a number of publicly available RNA-seq pipelines. However, we have found that most require computational expertise to set up or share among several users, are not actively maintained, or lack features we have found to be important in our own analyses. Results In response to these concerns, we have developed a Scalable Pipeline for Expression Analysis and Quantification (SPEAQeasy), which is easy to install and share, and provides a bridge towards R/Bioconductor downstream analysis solutions. SPEAQeasy is portable across computational frameworks (SGE, SLURM, local, Docker integration) and different configuration files are provided (http://research.libd.org/SPEAQeasy/). Conclusions SPEAQeasy is user-friendly and lowers the entry barrier to RNA-seq data processing for biologists and clinicians: the main input file is simply a table of sample names and their corresponding FASTQ files. The goal is to provide a flexible pipeline that is immediately usable by researchers, regardless of their technical background or computing environment.
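As a minimal illustration of that kind of input, here is a hypothetical tab-separated sample sheet and a small validator. The column names (`sample`, `fastq_1`, `fastq_2`) are assumptions made for this example, not SPEAQeasy's documented schema:

```python
import csv
import io

def read_sample_sheet(text):
    """Parse a tab-separated sample table and sanity-check each row."""
    rows = list(csv.DictReader(io.StringIO(text), delimiter="\t"))
    for row in rows:
        if not row["sample"]:
            raise ValueError("every row needs a sample name")
        if not row["fastq_1"].endswith(".fastq.gz"):
            raise ValueError("expected a gzipped FASTQ path")
    return rows

# A made-up two-sample paired-end sheet of the general shape described.
sheet = """sample\tfastq_1\tfastq_2
ctrl_1\tctrl_1_R1.fastq.gz\tctrl_1_R2.fastq.gz
case_1\tcase_1_R1.fastq.gz\tcase_1_R2.fastq.gz
"""
samples = read_sample_sheet(sheet)
```

The point of such a format is that a biologist only has to list sample names and file paths; everything else (alignment, quantification, hand-off to R/Bioconductor) is the pipeline's job.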
36

Zbili, Mickael and Sylvain Rama. "A Quick and Easy Way to Estimate Entropy and Mutual Information for Neuroscience". Frontiers in Neuroinformatics 15 (June 15, 2021). http://dx.doi.org/10.3389/fninf.2021.596443.

Calculations of the entropy of a signal or the mutual information between two variables are valuable analytical tools in the field of neuroscience. They can be applied to all types of data, capture non-linear interactions and are model independent. Yet the limited size and number of recordings one can collect in a series of experiments make their calculation highly prone to sampling bias. Mathematical methods to overcome this so-called "sampling disaster" exist, but require significant expertise and substantial time and computational cost. As such, there is a need for a simple, unbiased and computationally efficient tool for estimating the level of entropy and mutual information. In this article, we propose that entropy-encoding compression algorithms widely used in text and image compression fulfill these requirements. By simply saving the signal in PNG picture format and measuring the size of the file on the hard drive, we can estimate entropy changes across different conditions. Furthermore, with some simple modifications of the PNG file, we can also estimate the evolution of mutual information between a stimulus and the observed responses across different conditions. We first demonstrate the applicability of this method using white-noise-like signals. Then, while this method can be used in all kinds of experimental conditions, we provide examples of its application to patch-clamp recordings, detection of place cells and histological data. Although this method does not give an absolute value of entropy or mutual information, it is mathematically correct, and its simplicity and broad use make it a powerful tool for their estimation through experiments.
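The core claim, that compressed file size tracks entropy, can be demonstrated without an image library by using zlib, the DEFLATE compressor that PNG itself relies on internally. This sketch follows the spirit of the paper's method rather than its exact PNG-based procedure:

```python
import random
import zlib

def compressed_size(samples, level=9):
    """Entropy proxy in the spirit of the paper: quantize a signal to
    8 bits, compress it with DEFLATE (the entropy coder inside PNG),
    and report the byte count of the result."""
    data = bytes(max(0, min(255, int(s))) for s in samples)
    return len(zlib.compress(data, level))

random.seed(0)
flat = [128] * 10_000                                   # zero-entropy signal
noise = [random.randrange(256) for _ in range(10_000)]  # white-noise signal
```

A constant signal compresses to a few dozen bytes while white noise stays near its raw size, so the byte count orders the two signals by entropy, which is all the method requires for comparisons across conditions.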
37

Wang, Yanan, Litao Yang, Geoffrey I. Webb, Zongyuan Ge and Jiangning Song. "OCTID: a one-class learning-based Python package for tumor image detection". Bioinformatics, June 1, 2021. http://dx.doi.org/10.1093/bioinformatics/btab416.

Abstract Motivation Tumor tile selection is a necessary prerequisite in patch-based cancer whole-slide image analysis, and it is labor-intensive and requires expertise. Whole slides are annotated as tumor or tumor-free, but tiles within a tumor slide are not. Since all tiles within a tumor-free slide are tumor-free, these can be used to capture tumor-free patterns using a one-class learning strategy. Results We present a Python package, termed OCTID, which combines a pretrained convolutional neural network (CNN) model, Uniform Manifold Approximation and Projection (UMAP) and a one-class support vector machine to achieve accurate tumor tile classification using a training set of tumor-free tiles. Benchmarking experiments on four H&E image datasets achieved remarkable performance in terms of F1-score (0.90 ± 0.06), Matthews correlation coefficient (0.93 ± 0.05) and accuracy (0.94 ± 0.03). Availability and implementation Detailed information can be found in the Supplementary File. Supplementary information Supplementary data are available at Bioinformatics online.
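The one-class idea, fitting only on tumor-free examples and flagging anything that deviates, can be sketched with a simple distance threshold standing in for OCTID's actual CNN + UMAP + one-class SVM stack. This is a deliberate simplification for illustration, not the package's API:

```python
import math
import statistics

def fit_one_class(train_vectors, k=3.0):
    """Fit on 'normal' (e.g. tumor-free) feature vectors only: store their
    mean vector and a radius k standard deviations beyond the mean distance.
    A stdlib stand-in for the one-class SVM stage of such a pipeline."""
    dims = len(train_vectors[0])
    center = [statistics.fmean(v[d] for v in train_vectors) for d in range(dims)]
    dists = [math.dist(v, center) for v in train_vectors]
    radius = statistics.fmean(dists) + k * statistics.pstdev(dists)
    return center, radius

def is_outlier(vector, model):
    """Anything outside the learned radius is flagged as not 'normal'."""
    center, radius = model
    return math.dist(vector, center) > radius
```

The design point the abstract makes is that no positive (tumor) labels are needed at training time; the model only has to learn what "normal" looks like.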
38

Mohan, Prashant, Payam Haghighi, Prabath Vemulapalli, Nathan Kalish, Jami J. Shah and Joseph K. Davidson. "Toward Automatic Tolerancing of Mechanical Assemblies: Assembly Analyses". Journal of Computing and Information Science in Engineering 14, no. 4 (October 7, 2014). http://dx.doi.org/10.1115/1.4028592.

Generating geometric dimensioning and tolerancing (GD&T) specifications for mechanical assemblies is a complex and tedious task requiring expertise that few mechanical engineers possess; it is often done by trial and error. While there are commercial systems to facilitate tolerance analysis, there is little support for tolerance synthesis. This paper presents a systematic approach to collecting part and assembly characteristics in support of automating GD&T schema development and tolerance allocation for mechanical assemblies represented as neutral B-Rep models. First, assembly characteristics are determined; then a tentative schema is produced and tolerances are allocated. This is followed by adaptive iterations of analysis and refinement to achieve the desired goals. The paper presents the preprocessing steps for the assembly analysis needed for tolerance schema generation and allocation. Assembly analysis consists of four main tasks: assembly feature recognition (AFR), pattern detection, determination of directions of control, and loop detection. The paper starts by identifying mating features in an assembly using the computer-aided design (CAD) file. Once the features are identified, patterns are detected among them. Next, the different directions of control for each part are identified, and lastly, using all this information, all possible loops existing in the assembly are found.
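The last of the four tasks, loop detection, can be illustrated on a toy stand-in where the assembly is an undirected graph of parts joined by recognized mating features (the paper itself works on neutral B-Rep CAD data, not this simplified structure):

```python
# Toy loop detection: parts are graph nodes, recognized mates are edges.
# Each non-tree edge found during a depth-first walk closes exactly one loop.
def find_loops(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
        graph.setdefault(b, []).append(a)
    disc, parent, loops = {}, {}, []
    counter = 0

    def dfs(node):
        nonlocal counter
        disc[node] = counter
        counter += 1
        for nxt in graph[node]:
            if nxt not in disc:
                parent[nxt] = node
                dfs(nxt)
            elif nxt != parent.get(node) and disc[nxt] < disc[node]:
                # Back edge: walk up the spanning tree to recover the loop.
                loop, cur = [node], node
                while cur != nxt:
                    cur = parent[cur]
                    loop.append(cur)
                loops.append(loop)

    for start in graph:
        if start not in disc:
            dfs(start)
    return loops
```

In tolerance analysis, each such loop of mated parts defines a chain along which tolerances stack up, which is why enumerating loops is a prerequisite for schema generation.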
39

Bradley, Dale A. "Scenes of Transmission: Youth Culture, MP3 File Sharing, and Transferable Strategies of Cultural Practice". M/C Journal 9, no. 1 (March 1, 2006). http://dx.doi.org/10.5204/mcj.2585.

The significance of computer mediated communication in relation to the transmission and circulation of discourse is not restricted to the ways in which this relatively recent form of communication enables self-identifying and relatively homogeneous groups to articulate, diffuse and circulate meaning. While the Internet has certainly provided a vital medium for such activities, there is another aspect of transmission that is also significant: the transmission of codes and practices between previously unrelated cultural formations through processes of convergence that occur via their engagement in online media. Of interest here are the ways in which the codes and practices constituting various cultural formations may find their way into other such formations through online practices. Online venues which facilitate the formation of virtual communities act as scenes for the interweaving of participants’ varied interests and, in so doing, bring disparate cultural practices together in new and potentially transformative manners. Viewed from this perspective, online communication not only provides a platform for discursive acts, but constitutes a venue wherein the practical usage of the medium offers up new, and transferable, tactics of communication and cultural practice. One of the most obvious examples of this phenomenon of “convergent transmission” is the now famous case of Napster. Beyond the well-discussed implications for, and ongoing adaptive transformation of, the music industry lies a peculiar moment of convergence wherein Internet Relay Chat (IRC) groups provided a scene for the transmission of cultural codes, values, and practices between a hacking subculture built around online communication and a broader youth culture that was beginning to embrace digital media as a means to enjoy music. The lines of transmission between these two groups were therefore borne by practices related to music, gift economies, computer networking and digital media. 
The community constituted by the early Napster (as well as other music-sharing sites and networks) and the IRC-based discussions that informed their development was more than simply the sum of peer-to-peer (P2P) networks and online communication. I would argue that when taken together, Napster and IRC constituted an online scene for the sharing and dissemination of the hacking subculture’s beliefs and practices through the filter of “music-obsessed” youth culture. To understand Napster as a scene is to define it in relation to practices related to both popular and alternative modes for the production and consumption of cultural artifacts. Lee and Peterson (192-194) note that online scenes exhibit many similarities with the geographically-based scenes analyzed by Hebdige: a fair degree of demographic cohesiveness (typically defined by such things as age, ethnicity, gender, sexuality, and class), shared cultural codes and worldviews, and a spectrum of participation ranging from the frequent and enduring relationships of a core constituency to the occasional participation of more peripheral members. As a combined P2P/IRC network, Napster is a means to circulate content rather than being, itself, some form of content. Napster’s online circulation of cultural artefacts within and among various communities thus makes it a point of articulation between hacking subcultures and a broader youth culture. This articulation involves both the circulation of music files among participants, and the circulation of knowledge related to the technical modalities for engaging in file sharing. With regard to Napster, and perhaps subcultures in general, it is the formation of participatory communities rather than any particular cultural artefact that is paramount: the possibilities that the Internet offers young people for cultural participation now extend far beyond the types of symbolic transformation of products and resources … .
Rather, such products and resources can themselves become both the object and product of collective creativity (Bennett 172). Shawn Fanning’s testimony to the judiciary committee investigating Napster notes at the outset that his reason for undertaking the development of the P2P network that would eventually become Napster was not driven by any intentional form of hacking, but was prompted by a friend’s simple desire to solve reliability issues associated with transmitting digital music files via the Internet: The Napster system that I designed combined a real time system for finding MP3s with chat rooms and instant messaging (functionally similar to IRC). The Chat rooms and instant messaging are integral to creating the community experience; I imagined that they would be used similarly to how people use IRC – as a means for people to learn from each other and develop ongoing relationships (Fanning). The notion of community is not only applicable to those who chose to share music over Napster, but to the development of Napster itself. As Andrews notes, Fanning participated in a number of IRC channels devoted to programming (primarily #winprog for the development of Napster) as well as to channels like #mpeg3 which discussed social and technical issues related to MP3s as well as advice on where and how to get them. Spitz and Hunter focus on the role of community in the development of Napster and point out that: the technology emerged gradually from interactions between and within social groups with different degrees of inclusion in multiple overlapping frames, as opposed to there being a single theoretical breakthrough. ... Based on their involvement in other spaces, such as online communities, Fanning and company’s immediate goals were much more personal and utilitarian—to provide a tool to help themselves and other enthusiasts find and access music on the Internet (171-172). 
Developed with the aid of numerous long-time and occasional participants to both #winprog and #mpeg3, Napster’s technical component was the product of (at least) two scenes constituted via IRC-based online communities. The first, #winprog, consisted of a subculture of “hardcore” Windows programmers (and hackers) freely sharing ideas, advice, expertise, and computer code in an environment of mutual assistance. While the participants on #mpeg3 represented a much wider community, #mpeg3 also demonstrates the qualities of a scene inasmuch as it constituted a virtual community based not only on shared interests in a variety of musical genres, but on sharing media content in the form of MP3s and related software. One obvious commonality between these two scenes is that they both rely upon informal gift economies as a means by which to transmit cultural codes via the circulation of material objects. With Napster, the gift economy that emerges in relation to the “hacker ethic” of sharing both code and expertise (Levy; Himanen; Wark) combines with the more generalized and abstract gift economies constituted by the tendency within youth culture to engage in the sharing of media products related to particular lifestyles and subcultures. The development of Napster therefore provided a mechanism by which these two gift economies could come together to form a single overlapping scene combining computing and youth cultures. It should be noted, however, that while Napster was (and still is) typically branded as a youth-based phenomenon, its constituency actually encompassed a broader age demographic wherein membership tended to correlate more closely with “online tenure” than age (Spitz & Hunter 173). Nonetheless, the simultaneously rancorous and laudatory discourse surrounding Napster framed it as a phenomenon indicating the emergence of an IT-savvy youth culture.
What occurred with Napster was therefore a situation wherein two scenes came together—one based on hacking, the other on MP3s. Their shared propensity toward informal gift economies allowed them to converge upon notions of P2P networking and IRC-based communities, and this produced a new set of cultural practices centred upon the fusion of file transfers and popular music. The activity of music sharing and the creation of networks to carry it out have, needless to say, proved to have a transformative effect on the circulation of these cultural products. The co-mingling of cultural practices between these two online scenes seems so obvious today that it is easy to assume it was inevitable. It must be remembered, however, that hacking and music did not seem to be so closely related in 1998. The development of Napster is thus a testament of sorts to the potential for computer mediated communication to effect convergent transformations via the transmission of tactical and communal practices among seemingly unrelated arenas of culture. References Andrews, Robert. “Chat Room That Built the World”. Wired News. Nov. 6, 2005. http://www.wired.com/news/technology/1,69394-0.html. Bennett, Andy. “Virtual Subculture? Youth, Identity and the Internet”. After Subculture: Critical Studies in Contemporary Youth Culture. Eds. Andy Bennett & Keith Kahn-Harris. London: Palgrave Macmillan, 2004. Fanning, Shawn. Testimony before the Senate Judiciary Committee in Provo, Utah. Oct. 9, 2000. http://judiciary.senate.gov/oldsite/1092000_sf.htm. Hebdige, Dick. Subculture: The Meaning of Style. London: Routledge, 1991. Himanen, Pekka. The Hacker Ethic. New York: Random House Books, 2001. Lee, Steve S., & Richard A. Peterson. “Internet-Based Virtual Music Scenes: The Case of P2 in Alt.Country Music.” Music Scenes: Local, Transnational, and Virtual. Eds. Andy Bennett & Richard A. Peterson. Nashville: Vanderbilt UP, 2004. Levy, Steven. Hackers. New York: Penguin Books, 1984.
Spitz, David & Starling D. Hunter. “Contested Codes: The Social Construction of Napster”. The Information Society 21 (2005): 169-80. Wark, McKenzie. A Hacker Manifesto. Cambridge: Harvard UP, 2004.
40

Whitmeyer, S. J. and M. Dordevic. "Creating virtual geologic mapping exercises in a changing world". Geosphere, December 23, 2020. http://dx.doi.org/10.1130/ges02308.1.

Fieldwork has long been considered an essential component of geoscience research and education, with student field experiences consistently valued for their effectiveness in developing expertise in geoscience skills and cognitive abilities. However, some geoscience disciplines recently have exhibited a decreasing focus on data collection in the field. Additionally, some students have been disinclined to pursue a geoscience career if physical fieldwork is perceived as necessary for the completion of their academic degree. More recently, travel restrictions due to the COVID-19 pandemic have restricted access to field locations for many students and geoscience researchers. As a result, geoscience educators are developing virtual field trips and exercises that address many of the learning objectives of traditional in-person field experiences. These virtual field trips and exercises use a variety of online and computer platforms, including web-based and desktop versions of Google Earth (GE). In this contribution, we highlight how educators can create virtual geoscience field trips and exercises using web GE, desktop GE, and a web-based tool for generating oriented geologic symbology for GE. Examples of methods and approaches for creating virtual field experiences in GE are provided for a virtual field trip that uses a web GE presentation to replicate a typical class field trip, and for a geologic mapping exercise that uses a KML file uploaded into web or desktop GE. Important differences between web and desktop GE are discussed, with consideration for which platform might be most effective for specific educational objectives. Challenges and opportunities related to virtual field trips are discussed in comparison with traditional in-person, on-location field trips. 
It is suggested that in a post–COVID-19 world, a combination of in-person and virtual hybrid field experiences might prove the most effective approach for producing a more inclusive and equitable learning environment, and thus strengthening the geoscience workforce.
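As an illustration of the kind of KML such a mapping exercise loads into web or desktop Google Earth, the following sketch emits one placemark per field-trip stop. The stop names and coordinates are invented for the example:

```python
def make_kml(stops):
    """Build a minimal KML document with one placemark per field-trip stop.
    KML orders coordinates as longitude,latitude,altitude."""
    placemarks = "\n".join(
        "  <Placemark>\n"
        f"    <name>{name}</name>\n"
        f"    <Point><coordinates>{lon},{lat},0</coordinates></Point>\n"
        "  </Placemark>"
        for name, lat, lon in stops
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
            "<Document>\n" + placemarks + "\n</Document>\n</kml>\n")
```

Saving the returned string with a .kml extension produces a file that can be uploaded into web Google Earth or opened in the desktop version, which is the workflow the exercise described above relies on.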
41

Smith, Hazel and Roger T. Dean. "Posthuman Collaboration: Multimedia, Improvisation, and Computer Mediation". M/C Journal 9, no. 2 (May 1, 2006). http://dx.doi.org/10.5204/mcj.2619.

Artistic collaboration involves a certain loss of self, because it arises out of the merging of participants. In this sense collaboration questions the notion of the creative individual and the myth of the isolated artistic genius. As such, artistic collaborations can be subversive interventions into the concept of authorship and the ideologies that surround it (Smith 189-194). Collaborations also often simultaneously superimpose many different approaches to the collaborative process. Collaboration is therefore a multiplicitous activity in which different kinds of interactivity are interlinked; this process may also be facilitated by improvisation which allows for continuous modification of the interactions (Smith and Dean, Improvisation). Even when we are writing individually, we are always collaborating with prior texts and employing ideas from others, advice and editing suggestions. This eclectic aspect of creative work has led some to argue that collaboration is the dominant mode, while individual creativity is an illusion (Stillinger; Bennett 94-107). One of the reasons why collaboration tends to be multiplicitous is that contemporary creative endeavour sometimes involves collaboration across different media and with computers. Artworks are created by an ‘assemblage’ of different expertises, media, and machines in which the computer may be a ‘participant’. In this respect contemporary collaboration is what Katherine Hayles calls posthuman: for Hayles ‘the posthuman subject is an amalgam, a collection of heterogeneous components, a material-informational entity whose boundaries undergo continuous construction and reconstruction (Hayles 3). Particularly important here is her argument about the conceptual shifts that information systems are creating. 
She suggests that the binary of presence and absence is being progressively replaced in cultural and literary thought by the binary of pattern and randomness created by information systems and computer mediation (Hayles 25-49). In other words, we used to be primarily concerned with human interactions, even if sometimes it was the lack of them, as in Roland Barthes’s concept of ‘the death of the author’. However, this has shifted to our concern with computer systems as methods of organisation. Nevertheless, Hayles argues, computers can never totally replace embodied human subjects; rather, we need to continually negotiate between presence and pattern, absence and randomness (Hayles 25-49). This very negotiation is central to many computer-mediated collaborations. Our own collaborative practice—Roger is primarily a musician and Hazel primarily a writer but we both have interdisciplinary performance and technological expertise—spans 15 years and has resulted in approximately 18 collaborative works. They are all cross-media: initially these brought together word and sound; now they sometimes also include image. They all involve multiple forms of collaboration, improvised and unfixed elements, and computer interfaces. Here we want to outline some of the stages in the making of our recent collaboration, Time, the Magician, and its ‘posthuman’ engagement with computerised processes. Time, the Magician is a collaborative performance and sound-video piece. It combines words, sound and image, and involves composed and improvised elements as well as computer mediation. It was conceived largely by us, but the first performance, at the Sydney Conservatorium of Music in 2005, also involved collaboration with Greg White (sound processing) and Sandy Evans (saxophone). The piece begins with a poem by Hazel, initially performed solo, and then juxtaposed with live and improvised sound.
This sound involves some real-time and pre-recorded sampling and processing of the voice: this—together with other sonic materials—creates a ‘voicescape’ in which the rhythm, pitch, and timbre of the voice are manipulated and the voice is also spatialised in the performance space (Smith and Dean, “Voicescapes”). The performance of the poem is followed (slightly overlapping) by screened text created in the real-time image-processing program Jitter, and this is also juxtaposed with sound and voice samples. One of the important aspects of the piece is its variability: the video-manipulated text and images change both in order and appearance each time, and the sampling and manipulation of the voice is different too. The example here shows short extracts from the longer performance of the work at the Sydney 2005 event. (This is a Quicktime 7 compressed video of excerpts from the first performance of Time, the Magician by Hazel Smith and Roger Dean. The performance was given by austraLYSIS (Roger Dean, computer sound and image; Sandy Evans, saxophone; Hazel Smith, speaker; Greg White, computer sound and sound projection) at the Sydney Conservatorium of Music, October 2005. The piece in its entirety lasts about 11 minutes, while these excerpts last about four minutes, and are not cross-faded, but simply juxtaposed. The piece itself will later be released elsewhere as a Web video/sound piece, made directly from the sound and the Jitter-processed images which accompany it. This Quicktime 7 performance video uses AAC audio compression (44kHz stereo), H.264 video compression (320x230), and has c. 15fps and 200kbits/sec.; it is prepared for HTTP fast-start streaming. It requires the Quicktime 7 plugin, and on Macintosh works best with Safari or Firefox – Explorer is no longer supported for Macintosh. The total file size is c. 6MB. You can also access the file directly through this link.) All of our collaborations have involved different working processes. 
Sometimes we start with a particular topic or process in mind, but the process is always explorative and the eventual outcome unpredictable. Usually periods of working individually—often successively rather than simultaneously—alternate with discussion. We will now each describe our different roles in this particular collaboration, and the points of intersection between them. Hazel In creating Time, the Magician we made an initial decision that Roger—who would be responsible for the programming and sound component of the piece—would work with Jitter, which we had successfully used for a previous collaboration. I would write the words, and I decided early on that I would like our collaboration to circle around ideas—which interested both Roger and me—about evolution, movement, emergence, and time. We decided that I would first write some text that would then be used as the basis of the piece, but I had no idea at this stage what form the text would take, and whether I would produce one continuous text or a number of textual fragments. In the early stages I read and ‘collaborated with’ a number of different texts, particularly Elizabeth Grosz’s book The Nick of Time. I was interested in the way Grosz sees Darwin’s work as a treatise on difference—she argues that for Darwin there are no clear-cut distinctions between different species and no absolute origin of the species. I was also stimulated by her idea that political resistance is always potential, if latent, in the repressive regimes or social structures of the past. As I was reading and absorbing the material, I opened a file on my computer and—using a ‘bottom-up’ approach—started to write fragments, sometimes working with the Grosz text as direct trigger. A poem evolved which was a continuous whole but also discontinuous in essence: it consisted of many small fragments that, when glued together and transformed in relation to each other, reverberated through association. 
This was appropriate, because as the writing process developed I had decided that I would write a poem, but then also disassemble it for the screened version. This way Roger could turn each segment into a module in Jitter, and program the sequence so that the texts would appear in a different order each time. After I had written the poem we decided on a putative structure for the work: the poem would be performed first, the musical element would start about halfway through, and the screened version—with the fragmented texts—would follow. Roger said that he would video some background material to go behind the texts, but he also suggested that I design the texts as visual objects with coloured letters, different fonts, and free spatial arrangements, as I had in some previous multimedia pieces. So I turned the texts into visual designs: this often resulted in my pulling apart sentences, phrases and words and rearranging them. I then converted the text files into jpg files and gave them to Roger to work on. Roger When Hazel gave me her 32 text images, I turned these into a QuickTime video with 10 seconds per image/frame. I also shot a 5 minute ‘background’ video of vegetation and ground, often moving the camera quickly over blurred objects or zooming in very close to them. The video was then edited as a continually moving sequence with an oscillation between clearly defined and abstracted objects, and between artificial and natural ones. The Jitter interface is constructed largely as a sequence of three processing modules. One of these involves continuously changing the way in which two layers (in this case text and background) are mixed; the second, rotation and feedback of segments from one or both layers; and the third a kind of dripping across the image, with feedback, of segments from one or both layers. 
The interface is performable, in that the timing and sequence can be altered as the piece progresses, and within any one module most of the parameters are available for performer control—this is the essence of what we call ‘hyperimprovisation’ (Dean). Both text and image layers are ‘granulated’: after a randomly variable length of time—between 2 and 20 seconds or so—there is a jump to a randomly chosen new position in the video, and these jumps occur independently for the two layers. Having established this approach to the image generation, and the overall shape of the piece (as defined above), the remaining aspects were left to the creative choices of the performers. In the Sydney performance both Greg White and I exploited real-time processing of the spoken text by means of the live feed and pre-recorded material. In addition we used long buffers (which contained the present performance of the text) to access the spoken text after Hazel had finished her performed opening segment. I worked on the sound and speech components with some granulation and feedback techniques throughout, while Greg used a range of other techniques, as well as focusing on the spatial movement of the sound around four loudspeakers surrounding the performance and listening space. Sandy Evans (saxophone)—who was familiar with the overall timeline—improvised freely while viewing the video and listening to our soundscape. In this first performance, while I drove the sound, the computer ‘posthumanly’ (that is, without intervention) drove the image. I worked largely with MSP (Max Signal Processing), a part of the MAX/MSP/Jitter suite of platforms for midi, sound and image, to complement sonically the Jitter-mediated video. So processes of granulation, feedback, spatial rotation (of image) or redistribution (of sound)—as well as re-emergence of objects which had been retained in the memory of the computer—were common to both the sound and image manipulation. 
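The granulation just described amounts to a simple scheduling rule: each layer holds for a random 2-20 seconds, then jumps to a random position in its source, independently of the other layer. A minimal sketch of that logic follows; the layer names and durations are illustrative assumptions, and the actual work was of course implemented in Max/MSP/Jitter, not Python.

```python
import random

def granulation_schedule(duration, total_time, min_hold=2.0, max_hold=20.0, rng=None):
    """Return a list of (start_time, jump_position) events for one layer.

    After a randomly chosen hold time (between min_hold and max_hold seconds)
    the playhead jumps to a random position within the source material.
    """
    rng = rng or random.Random()
    events, t = [], 0.0
    while t < total_time:
        events.append((t, rng.uniform(0.0, duration)))
        t += rng.uniform(min_hold, max_hold)
    return events

# Two layers (text and background) are granulated independently,
# so each gets its own stream of jump events.
rng = random.Random(42)
text_layer = granulation_schedule(duration=320.0, total_time=660.0, rng=rng)
background_layer = granulation_schedule(duration=300.0, total_time=660.0, rng=rng)
```

Because the two schedules are drawn independently, the mix of the layers never repeats exactly from one performance to the next, which is the source of the piece's variability.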
There was therefore a degree of algorithmic synaesthesia—that is, shared algorithms between image and sound (Dean, Whitelaw, Smith, and Worrall). The collaborative process involved a range of stimuli: not only with regard to process, as discussed, but also in relation to the ideas in the text Hazel provided. The concepts of evolution, movement, and emergence which were important to her writing also informed and influenced the choice of biological and artificial objects in the background video, and the nature and juxtaposition of the processing modules for both sound and image. Conclusion If we return to the issues raised at the beginning of this article, we can see how our collaboration does involve the merging of participants and the destabilising of the concept of authorship. The poem was not complete after Hazel had written it—or even after she had dislocated it—but is continually reassembled by the Jitter interface that Roger has constructed. The visual images were also produced first by Hazel, then fused with Roger’s video in continuously changing formations through the Jitter interface. The performance may involve collaboration by several people who were not involved in the original conception of the work, indicating how collaboration can become an extended and accumulative process. The collaboration also simultaneously superimposes several different kinds of collaborative process, including the intertextual encounter with the Grosz text; the intermedia fusion of text, image and sound; the participation of a number of different people with differentiated roles and varying degrees of input; and collaboration with the computer. It is an assemblage in the terms mentioned earlier: a continuously modulating conjunction of different expertises, media, and machines. Finally, the collaboration is simultaneously both human and posthuman. It negotiates—in the way Hayles suggests—between pattern, presence, randomness, and absence. 
On the one hand, it involves human intervention (the writing of the poem, the live music-making, the shooting of the video, the discussion between participants) though sometimes those interventions are hidden, merged, or subsumed. On the other hand, the Jitter interface allows for both tight programming and elements of variability and unpredictability. In this way the collaboration displaces the autonomous subject with what Hayles calls a ‘distributed system’ (Hayles 290). The consequence is that the collaborative process never reaches an endpoint: the computer interface will construct the piece differently each time, we may choose to interact with it in performance, and the sound performance will always contain many improvised and unpredictable elements. The collaborative process, like the work it produces, is ongoing, emergent, and mutating. References Bennett, Andrew. The Author. London: Routledge, 2005. Dean, Roger T. Hyperimprovisation: Computer Interactive Sound Improvisation; with CD-ROM. Madison, WI: A-R Editions, 2003. Dean, Roger, Mitchell Whitelaw, Hazel Smith, and David Worrall. “The Mirage of Real-Time Algorithmic Synaesthesia: Some Compositional Mechanisms and Research Agendas in Computer Music and Sonification.” Contemporary Music Review, in press. Grosz, Elizabeth. The Nick of Time: Politics, Evolution and the Untimely. Sydney: Allen and Unwin, 2004. Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: U of Chicago P, 1999. Smith, Hazel. Hyperscapes in the Poetry of Frank O’Hara: Difference, Homosexuality, Topography. Liverpool: Liverpool UP, 2000. Smith, Hazel, and Roger T. Dean. Improvisation, Hypermedia and the Arts since 1945. London: Harwood Academic, 1997. ———. “Voicescapes and Sonic Structures in the Creation of Sound Technodrama.” Performance Research 8.1 (2003): 112-23. Stillinger, Jack. Multiple Authorship and the Myth of Solitary Genius. Oxford: Oxford UP, 1991. 
Citation reference for this article MLA Style Smith, Hazel, and Roger T. Dean. "Posthuman Collaboration: Multimedia, Improvisation, and Computer Mediation." M/C Journal 9.2 (2006). <http://journal.media-culture.org.au/0605/14-smithdean.php>. APA Style Smith, H., and R. Dean. (May 2006) "Posthuman Collaboration: Multimedia, Improvisation, and Computer Mediation," M/C Journal, 9(2). Retrieved from <http://journal.media-culture.org.au/0605/14-smithdean.php>.
APA, Harvard, Vancouver, ISO, and other citation styles
42

Getz, Wayne M., Richard Salter, Ludovica Luisa Vissat and Nir Horvitz. "A versatile web app for identifying the drivers of COVID-19 epidemics". Journal of Translational Medicine 19, no. 1 (16 March 2021). http://dx.doi.org/10.1186/s12967-021-02736-2.

Full text
Abstract
Abstract Background No versatile web app exists that allows epidemiologists and managers around the world to comprehensively analyze the impacts of COVID-19 mitigation. The http://covid-webapp.numerusinc.com/ web app presented here fills this gap. Methods Our web app uses a model that explicitly identifies susceptible, contact, latent, asymptomatic, symptomatic and recovered classes of individuals, and a parallel set of response classes, subject to lower pathogen-contact rates. The user inputs a CSV file of incidence and, if of interest, mortality rate data. A default set of parameters is available that can be overwritten through input or online entry, and a user-selected subset of these can be fitted to the model using maximum-likelihood estimation (MLE). Model fitting and forecasting intervals are specifiable and changes to parameters allow counterfactual and forecasting scenarios. Confidence or credible intervals can be generated using stochastic simulations, based on MLE values, or on an inputted CSV file containing Markov chain Monte Carlo (MCMC) estimates of one or more parameters. Results We illustrate the use of our web app in extracting social distancing, social relaxation, surveillance or virulence switching functions (i.e., time varying drivers) from the incidence and mortality rates of COVID-19 epidemics in Israel, South Africa, and England. The Israeli outbreak exhibits four distinct phases: initial outbreak, social distancing, social relaxation, and a second wave mitigation phase. An MCMC projection of this latter phase suggests the Israeli epidemic will continue to produce into late November an average of around 1500 new cases per day, unless the population practices social-relaxation measures at least 5-fold below the level in August, which itself is 4-fold below the level at the start of July. 
Our analysis of the relatively late South African outbreak that became the world’s fifth largest COVID-19 epidemic in July revealed that the decline through late July and early August was characterised by a social distancing driver operating at more than twice the per-capita applicable-disease-class (pc-adc) rate of the social relaxation driver. Our analysis of the relatively early English outbreak, identified a more than 2-fold improvement in surveillance over the course of the epidemic. It also identified a pc-adc social distancing rate in early August that, though nearly four times the pc-adc social relaxation rate, appeared to barely contain a second wave that would break out if social distancing was further relaxed. Conclusion Our web app provides policy makers and health officers who have no epidemiological modelling or computer coding expertise with an invaluable tool for assessing the impacts of different outbreak mitigation policies and measures. This includes an ability to generate an epidemic-suppression or curve-flattening index that measures the intensity with which behavioural responses suppress or flatten the epidemic curve in the region under consideration.
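The class structure and time-varying drivers described in the Methods can be illustrated with a minimal compartment-model sketch. This is a hedged illustration only: the compartments, parameter values, and the hyperbolic social-distancing driver below are assumptions chosen for demonstration, not the web app's actual model or code.

```python
# Euler-integrated SEIR-style sketch with a time-varying driver that
# scales the contact rate, in the spirit of the model described above.
def simulate(days=120, N=9_000_000, beta0=0.45, sigma=0.2, gamma=0.1,
             distancing=lambda t: 1.0 / (1.0 + t / 30.0)):
    S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
    incidence = []
    for t in range(days):
        beta = beta0 * distancing(t)       # the driver suppresses transmission
        new_exposed = beta * S * I / N
        new_infectious = sigma * E
        new_recovered = gamma * I
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        incidence.append(new_infectious)   # daily incidence curve
    return incidence

inc = simulate()
```

Fitting the driver's parameters to an observed incidence curve (by MLE, as the app does) is then an optimisation over functions like `distancing` above; the sketch only shows the forward simulation step.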
43

Tesařová, Markéta, Eglantine Heude, Glenda Comai, Tomáš Zikmund, Markéta Kaucká, Igor Adameyko, Shahragim Tajbakhsh and Jozef Kaiser. "An interactive and intuitive visualisation method for X-ray computed tomography data of biological samples in 3D Portable Document Format". Scientific Reports 9, no. 1 (17 October 2019). http://dx.doi.org/10.1038/s41598-019-51180-2.

Full text
Abstract
Abstract 3D imaging approaches based on X-ray microcomputed tomography (microCT) have become increasingly accessible with advancements in methods, instruments and expertise. The synergy of material and life sciences has impacted biomedical research by proposing new tools for investigation. However, data sharing remains challenging as microCT files are usually in the range of gigabytes and require specific and expensive software for rendering and interpretation. Here, we provide an advanced method for visualisation and interpretation of microCT data with small file formats, readable on all operating systems, using freely available Portable Document Format (PDF) software. Our method is based on the conversion of volumetric data into interactive 3D PDF, allowing rotation, movement, magnification and setting modifications of objects, thus providing an intuitive approach to analyse structures in a 3D context. We describe the complete pipeline from data acquisition, data processing and compression, to 3D PDF formatting on an example of craniofacial anatomical morphology in the mouse embryo. Our procedure is widely applicable in biological research and can be used as a framework to analyse volumetric data from any research field relying on 3D rendering and CT-biomedical imaging.
44

Akpan, Margaret, Anietie Francis Udofia and Ndifreke Enefiok Edem. "DIGITAL TECHNOLOGY AND COSTUME DESIGN IN ANIMATTION: A STRUCTURALIST READING OF PETER DEL’S FROZEN". International Review of Humanities Studies, 31 July 2020. http://dx.doi.org/10.7454/irhs.v0i0.264.

Full text
Abstract
The deployment of digital technology to create captivating spectacles in films reflects the boundlessness of man’s ingenuity in recreating his world. Such creativity shows considerably in the fluidity with which the computer can generate human concepts of the role of costume, grounded on the theatrical system, in animation. The world of animation is always synergized with marvels that defy rational proofs of objectivity in the human world. This paper evaluates the use of digital technology to generate costume design that reflects the system of human thought in animation, using Peter Del Velcho’s Frozen as a paradigm. The paper uses a qualitative research method to examine facts and bases its argument on Structuralism. Findings show that costume functions as a system in theatrical and film productions. In animation, the insight of human imagination is easily brought to bear through costume design, and the reality of human creativity is ingrained in communication through captivating pictorials without impairment. Costume design can be generated on a computer through knowledge and expertise. With knowledge of the computer, especially in the academic environment, unlimited streams of creativity may unfold to support entrepreneurial schemes in society. When a costume is designed to conform to the order of a design system within a system of thought to pass on information, any medium can function as a dependable conduit of communication.
45

Akpan, Margaret, Anietie Francis Udofia and Ndifreke Enefiok Edem. "DIGITAL TECHNOLOGY AND COSTUME DESIGN IN ANIMATION: A STRUCTURALIST READING OF PETER DEL’S FROZEN". International Review of Humanities Studies 5, no. 2 (31 July 2020). http://dx.doi.org/10.7454/irhs.v5i2.276.

Full text
Abstract
The deployment of digital technology to create captivating spectacles in films reflects the boundlessness of man’s ingenuity in recreating his world. Such creativity shows considerably in the fluidity with which the computer can generate human concepts of the role of costume, grounded on the theatrical system, in animation. The world of animation is always synergized with marvels that defy rational proofs of objectivity in the human world. This paper evaluates the use of digital technology to generate costume design that reflects the system of human thought in animation, using Peter Del Velcho’s Frozen as a paradigm. The paper uses a qualitative research method to examine facts and bases its argument on Structuralism. Findings show that costume functions as a system in theatrical and film productions. In animation, the insight of human imagination is easily brought to bear through costume design, and the reality of human creativity is ingrained in communication through captivating pictorials without impairment. Costume design can be generated on a computer through knowledge and expertise. With knowledge of the computer, especially in the academic environment, unlimited streams of creativity may unfold to support entrepreneurial schemes in society. When a costume is designed to conform to the order of a design system within a system of thought to pass on information, any medium can function as a dependable conduit of communication.
46

Chiche, Alebachew. "HYBRID DECISION SUPPORT SYSTEM FRAMEWORK FOR CROP YIELD PREDICTION AND RECOMMENDATION". International Journal of Computing, 30 June 2019, 181–90. http://dx.doi.org/10.47839/ijc.18.2.1416.

Full text
Abstract
In this paper, a hybrid decision support system is presented which uses both quantitative and qualitative data to provide effective and efficient decision making for crop yield prediction and recommendation. Our framework integrates KD-DSS and DD-DSS to solve complex problems, closing the gap left by individual decision support systems in the agriculture domain. To analyze the quantitative data collected from an agricultural research center, the framework uses an artificial neural network (ANN) as its data mining technique, uncovering hidden knowledge in the stored dataset. This knowledge is then integrated with a knowledge base built from qualitative data acquired from domain experts and represented as IF-THEN production rules. Integrating knowledge from both qualitative and quantitative sources of data offers a potential advantage for decision makers solving complex problems. Finally, the framework can be extended with features that support group knowledge sharing among decision makers, helping to reconcile the disparate decisions made by different decision makers.
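The hybrid idea, a data-driven predictor combined with expert IF-THEN production rules, can be sketched roughly as follows. The feature names, rule thresholds, and the linear stand-in for the trained ANN are all illustrative assumptions, not the paper's actual knowledge base or network.

```python
def ann_score(features):
    # Stand-in for a trained network: a fixed linear model with clamping.
    # Weights here are invented for illustration only.
    w = {"rainfall_mm": 0.004, "soil_fertility": 0.5, "seed_quality": 0.3}
    z = sum(w[k] * features[k] for k in w)
    return max(0.0, min(1.0, z))  # clamp score to [0, 1]

# Expert knowledge represented as IF-THEN production rules.
RULES = [
    (lambda f: f["rainfall_mm"] < 200, "Low rainfall: recommend drought-tolerant variety"),
    (lambda f: f["soil_fertility"] < 0.3, "Poor soil: recommend fertiliser application"),
]

def recommend(features):
    """Combine the data-driven score with the rule base into one output."""
    score = ann_score(features)
    advice = [msg for cond, msg in RULES if cond(features)]
    label = "high" if score > 0.6 else "low"
    return {"predicted_yield": label, "recommendations": advice}

result = recommend({"rainfall_mm": 150, "soil_fertility": 0.2, "seed_quality": 0.9})
```

The design point is that the two knowledge sources remain separate and auditable: the learned component produces a prediction, while the rule base contributes explainable recommendations alongside it.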
47

Song, Sunah, Brigid M. Wilson, Joseph Marek and Robin L. P. Jump. "Use of electronic pharmacy transaction data and website development to assess antibiotic use in nursing homes". BMC Medical Informatics and Decision Making 21, no. 1 (5 May 2021). http://dx.doi.org/10.1186/s12911-021-01509-7.

Full text
Abstract
Abstract Background In 2017, the Centers for Medicare and Medicaid Services required all long-term care facilities, including nursing homes, to have an antibiotic stewardship program. Many nursing homes lack the resources, expertise, or infrastructure to track and analyze antibiotic use measures. Here, we demonstrate that pharmacy invoices are a viable source of data to track and report antibiotic use in nursing homes. Methods The dispensing pharmacy working with several nursing homes in the same healthcare corporation provided pharmacy invoices from 2014 to 2016 as files formatted as comma separated values. We aggregated these files by aligning elements into a consistent set of variables and assessed the completeness of data from each nursing home over time. Data cleaning involved removing rows that did not describe systemic medications, de-duplication, consolidating prescription refills, and removing prescriptions for insulin and opioids, which are medications that were not administered at a regular dose or schedule. After merging this cleaned invoice data to nursing home census data including bed days of care and publicly available data characterizing bed allocation for each nursing home, we used the resulting database to describe several antibiotic use metrics and generated an interactive website to permit further analysis. Results The resultant database permitted assessment of the following antibiotic use metrics: days of antibiotic therapy, length of antibiotic therapy, rate of antibiotic starts, and the antibiotic spectrum index. Further, we created a template for summarizing data within a facility and comparing across facilities. https://sunahsong.shinyapps.io/USNursingHomes/. Conclusions Lack of resources and infrastructure contributes to challenges facing nursing homes as they develop antibiotic stewardship programs. Our experience with using pharmacy invoice data may serve as a useful approach for nursing homes to track and report antibiotic use.
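The metrics named in the Results (days of antibiotic therapy, length of therapy, rate of starts) can be computed from cleaned invoice-like rows in a few lines. The row layout and bed-day census figures below are illustrative assumptions, not the study's actual data schema.

```python
from collections import defaultdict

rows = [  # one row per dispensed antibiotic course (after cleaning/de-duplication)
    {"facility": "NH-A", "drug": "ciprofloxacin", "days_supply": 7},
    {"facility": "NH-A", "drug": "nitrofurantoin", "days_supply": 5},
    {"facility": "NH-B", "drug": "doxycycline", "days_supply": 10},
]
bed_days = {"NH-A": 3000, "NH-B": 1500}  # bed days of care per facility

def antibiotic_metrics(rows, bed_days):
    dot = defaultdict(int)     # days of therapy per facility
    starts = defaultdict(int)  # antibiotic starts per facility
    for r in rows:
        dot[r["facility"]] += r["days_supply"]
        starts[r["facility"]] += 1
    return {
        f: {
            "days_of_therapy_per_1000": 1000 * dot[f] / bed_days[f],
            "starts_per_1000": 1000 * starts[f] / bed_days[f],
            "mean_length_of_therapy": dot[f] / starts[f],
        }
        for f in bed_days
    }

metrics = antibiotic_metrics(rows, bed_days)
```

Normalising by bed days of care is what makes the figures comparable across facilities of different sizes, which is the basis of the cross-facility template the authors describe.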
48

Lyk, Patricia Bianca, Gunver Majgaard, Lotte Vallentin-Holbech, Julie Dalgaard Guldager, Timo Dietrich, Sharyn Rundle-Thiele and Christiane Stock. "Co‑Designing and Learning in Virtual Reality: Development of Tool for Alcohol Resistance Training". Electronic Journal of e-Learning 18, no. 3 (1 July 2020). http://dx.doi.org/10.34190/ejel.20.18.3.002.

Full text
Abstract
This paper presents the design process of a Danish educational virtual reality (VR) application for alcohol prevention. Denmark is one of the countries in Europe with the highest alcohol consumption among adolescents. Alcohol abuse is a risk factor for a variety of diseases and contributes as a significant factor to motor vehicle accidents. The application offers first‑hand experiences with alcohol in a safe environment. This is done by simulating a party situation using 125 different 360‑degree movie sequences and displaying it in a virtual reality headset. The users create their own experience through a ‘choose your own adventure’ game experience. The experience is designed to acquire skills for recognizing and handling peer pressure, which has been found to be one of the main reasons for drinking initiation. These skills are acquired through experimental learning. The application is a product of a co‑design process involving 10 students (aged 18‑28) studying film making and game design at Askov Folk High School (a special kind of Danish boarding school without exams for young adults), Denmark, their teachers, alcohol experts from social services, and researchers with expertise in health promotion, social marketing, VR, interaction design and game development. Additionally, 35 students from Askov Boarding School (aged 15‑17) participated as actors and extras. This article contributes to research on the development of 360‑degree video applications for experimental learning with a practical example. The iterative design process of the application, comprising exploration of key concepts, concept design, prototype design, pre‑usability testing, innovation design and usability testing, is described, as well as our reflections on virtual experimental learning in the application.
49

Vasiljević, Dragan, Julijana Vasiljević and Boris Ribarić. "Multi-criteria analysis of WWW domain efficiency on social behavior in cyber space". JITA - Journal of Information Technology and Applications (Banja Luka) - APEIRON 20, no. 2 (20 September 2020). http://dx.doi.org/10.7251/jit2002096v.

Full text
Abstract
The current level of technological development allows a contemporary individual to place files, photos or multimedia content on any internet-connected computer. As a result, an enormous amount of data is now available to almost any individual worldwide. People connect through Web services over the internet as a visible form of communication. The World Wide Web is the most prominent part of the internet and thus partly shapes the behaviour of internet users in the contemporary world. Defining the efficiency of World Wide Web domains within cyberspace therefore matters greatly for social behavior. This paper estimates the efficiency of World Wide Web domains in influencing social affairs in cyberspace using multi-criteria analysis. Based on the chosen criteria, an assessment of World Wide Web domain efficiency in cyberspace has been conducted, with an emphasis on influences on social affairs. Identifying such Web domains facilitates technological progress on the one hand, and the recognition, prevention and protection of human and material resources on the other. The analysis of World Wide Web domain efficiency in cyberspace was performed using the Analytic Hierarchy Process (AHP), while the expert assessment of domain efficiency on social behavior in cyberspace was carried out with the software tool “Super Decision 2.6.0 – RC1“. For comparative data analysis, an online survey was conducted on a representative sample of 148 individuals, applying a five-point Likert scale of attitudes; the obtained data were analyzed with the Statistical Package for the Social Sciences (SPSS). On completion of the analysis, based on significance of influence, the following World Wide Web domains were singled out: Facebook, YouTube, Wikipedia and Twitter.
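The AHP step described above derives criterion weights from a pairwise comparison matrix. Below is a minimal sketch using the common geometric-mean approximation of the principal eigenvector, with an illustrative 3x3 judgement matrix; the study itself used the Super Decisions software, and the matrix here is an assumption, not its data.

```python
from math import prod

def ahp_weights(M):
    """Derive AHP priority weights from pairwise comparison matrix M using
    the geometric-mean approximation of the principal eigenvector, and
    return the consistency index CI = (lambda_max - n) / (n - 1)."""
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]  # geometric mean of each row
    total = sum(gm)
    w = [g / total for g in gm]                 # normalise to sum to 1
    # Approximate the principal eigenvalue lambda_max from M @ w.
    lmax = sum(sum(M[i][j] * w[j] for j in range(n)) / w[i] for i in range(n)) / n
    ci = (lmax - n) / (n - 1)
    return w, ci

# Pairwise judgements on Saaty's 1-9 scale for three hypothetical criteria.
M = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   2.0],
     [1/5.0, 1/2.0, 1.0]]
weights, ci = ahp_weights(M)
```

In a full AHP workflow the CI is divided by a random index to give the consistency ratio, and judgements are revisited if that ratio exceeds about 0.1.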
50

Crestaz, Ezio, Michele Pellegrini and Peter Schätzl. "Tight-coupling of groundwater flow and transport modelling engines with spatial databases and GIS technology: a new approach integrating Feflow and ArcGIS". Acque Sotterranee - Italian Journal of Groundwater 1, no. 2 (30 September 2012). http://dx.doi.org/10.7343/as-005-12-0014.

Full text
Abstract
Implementing groundwater flow and transport numerical models is generally a challenging, time-consuming and financially demanding task, entrusted to specialized modelers and consulting firms. At a later stage, within clearly stated limits of applicability, these models are often expected to be made available to less specialized personnel, to support the design and running of predictive simulations within environments more familiar than dedicated simulation systems. GIS systems coupled with spatial databases appear to be ideal candidates to address this problem, due to their much wider diffusion and the broader availability of expertise. The current paper discusses the issue from a tight-coupling architecture perspective, aimed at integrating spatial databases, GIS and numerical simulation engines, and addressing management, retrieval and spatio-temporal analysis of both observed and computed data. Observed data can be migrated to a central database repository and then used to set up transient simulation conditions in the background at run time, limiting the additional complexity and integrity-failure risks, such as data duplication, that arise when data are transferred through proprietary file formats. Similarly, simulation scenarios can be set up in a familiar GIS environment and stored in the spatial database for later reference. As the numerical engine is tightly coupled with the GIS, simulations can be run within that environment and the results themselves saved to the database. Further tasks, such as spatio-temporal analysis (e.g. for post-calibration auditing), cartography production and geovisualization, can then be addressed using traditional GIS tools. Benefits of this approach include more effective data management practices, integration and availability of modeling facilities in a familiar environment, and streamlined spatial analysis and geovisualization for the non-modeler community. 
Major drawbacks include limited 3D and time-dependent support in traditional GIS, and lack of dedicated calibration, analysis and visualization tools. A system implementation based upon ESRI geodatabase, ArcGIS and state-of-the-art finite element 3D flow and transport numerical code Feflow is presented and critically assessed.