
Journal articles on the topic "Communication user-developer"


Consult the top 50 journal articles for your research on the topic "Communication user-developer".

Next to each source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, if the abstract is included in the metadata.

Browse journal articles from many fields of science and compile an accurate bibliography.

1

Gallivan, Michael J., and Mark Keil. "The user-developer communication process: a critical case study". Information Systems Journal 13, no. 1 (January 2003): 37–68. http://dx.doi.org/10.1046/j.1365-2575.2003.00138.x.

2

Helminen, Pia, Samuli Mäkinen, and Mari Holopainen. "Better user-developer communication in service development by collaborative physical modelling". International Journal of Services and Operations Management 23, no. 2 (2016): 169. http://dx.doi.org/10.1504/ijsom.2016.074054.

3

Guimaraes, Tor, Ketan Paranjape, and Thomas Timmerman. "Looking At The User Side Of Software Engineering For Project Success". SAM Advanced Management Journal 86, no. 2 (June 30, 2021): 15–31. http://dx.doi.org/10.52770/bafm5902.

Abstract:
As a general definition, software engineering is “the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.” The importance of user-related factors has long been recognized by various researchers as important to the successful implementation of any commercially available system. This study attempts to test the importance of these factors as determinants of software engineering project success as measured by adherence to specified requirements, compliance with initial budget estimations, timeliness of agreed delivery, and overall user satisfaction with the product delivered. It has brought together some user-related variables (degree of user participation, user expertise, user/developer communication, user training, user influence, and user conflict) previously studied separately by different authors into a more cohesive model. Data regarding 178 system development projects using software engineering methodologies has been used to test proposed relationships between the independent variables and project success as defined in this study. The results confirm the importance of user participation, training, expertise, user/developer communication, and lack of user conflict for improving project success.
4

Setiorini, Kusumaningdiah Retno. "PENGARUH PEMAKAI DAN KONFLIK PEMAKAI TERHADAP KUALITAS SISTEM INFORMASI AKUNTANSI DENGAN LIMA VARIABEL MODERATING DI SKPD PEKANBARU". JESI (Jurnal Ekonomi Syariah Indonesia) 5, no. 1 (May 23, 2016): 93. http://dx.doi.org/10.21927/jesi.2015.5(1).93-111.

Abstract:
The purpose of this study was to test the influence of users and user conflict on the quality of accounting information systems with superior support, user communications developer, the complexity of the task, the complexity of the system, the user experience as a variable moderating. The method used survey method with processed primary data obtained from the questionnaire. The population in this study was 98 SKPD Pekanbaru which has a function as a task of financial matters relating to the quality of accounting information systems with the number of samples that can be analyzed as much data as 78 respondents. The sampling technique used purposive sampling. Test data validation and reliability of respondent data using Pearson product moment and Cronbach’s alpha. Data were analyzed using Moderated Regression Analysis (MRA). The results showed (1) Effect of users positive effect on the quality of accounting information systems. (2) Conflict users will not negatively affect the quality of accounting information system. (3) The influence of supervisor support moderated wearer no positive effect on the quality of accounting information systems. (4) The effect of the user moderated user communications developer is not a positive influence on the quality of accounting information systems. (5) Effect of users moderated complexity of the task is not a positive influence on the quality of accounting information systems. (6) The effect of the user moderated system complexity is not a positive influence on the quality of accounting information systems. (7) Effect of the wearer moderated user experience no positive effect on the quality of accounting information systems. (8) Conflict wearer superior failed to support moderated negative effect on the quality of accounting information systems. (9) Conflict wearer user communication moderated the developer is not a negative effect on the quality of accounting information systems. (10) The conflict wearer moderated complexity of the task is not a negative effect on the quality of accounting information systems. (11) Conflicts user moderated system complexity is not a negative effect on the quality of accounting information systems. (12) Conflicts wearer moderated user experience a negative effect on the quality of accounting information system. Keywords: influence of users, user conflict, supervisor support, user communications developer, the complexity of the task, the complexity of the system, the user experience.
5

Safitri, Gita Najmi, and I. Made Pande Dwiana Putra. "Analisis Faktor-Faktor yang Mempengaruhi Kinerja Sistem Informasi Akuntansi pada Lembaga Perkreditan Desa". E-Jurnal Akuntansi 31, no. 2 (February 22, 2021): 414. http://dx.doi.org/10.24843/eja.2021.v31.i02.p12.

Abstract:
This study aims to determine the effect of user involvement variables in system development, user education and training programs, personal technical skills, user communication and information systems developers and organizational size on the performance of the Accounting Information System. Respondents in this study were 40 employees found in the LPD in South Denpasar District. Determination of the sample using purposive sampling method. The analytical tool used is multiple regression analysis. The results of this study indicate that the variables of user involvement in system development, user training and education programs, personal technical skills, user communication and information system developers and organizational size have a significant positive effect on SIA performance. Keywords: User Involvement; Training and Education; Personal Engineering; User and Developer Communication.
6

Gómez, M., and J. Cervantes. "User Interface Transition Diagrams for customer–developer communication improvement in software development projects". Journal of Systems and Software 86, no. 9 (September 2013): 2394–410. http://dx.doi.org/10.1016/j.jss.2013.04.022.

7

Schaffitzel, Wilhelm, and Uwe Kersten. "Introducing CAD systems. Problems and the role of user–developer communication in their solution". Behaviour & Information Technology 4, no. 1 (January 1985): 47–61. http://dx.doi.org/10.1080/01449298508901786.

8

Lee, Yunjin, Mingyu Lim, and Yangchan Moon. "Application-level Communication Services for Development of Social Networking Systems". International Journal of Electrical and Computer Engineering (IJECE) 5, no. 3 (June 1, 2015): 586. http://dx.doi.org/10.11591/ijece.v5i3.pp586-598.

Abstract:
In this paper, we present our communication middleware (CM), which is designed to reduce the effort of developing common communication functionalities for social networking services (SNSs) in the client-server model. SNS developers can apply the application-level communication services of CM both to an SNS server and to client applications simply by calling application programming interfaces (APIs) and configuring various options related to communication services. CM was developed to enable SNS developers to easily build fundamental services such as transmission of a user-defined event, user membership and authentication management, friend management, content upload and download with different numbers of attachments, chat management, and direct file transfer. All of the communication services also provide options that a developer can customize according to his or her SNS requirements.
9

Hale, Holly J. "Primer on the Implementation of a Pharmacy Intranet Site to Improve Department Communication". Hospital Pharmacy 48, no. 7 (July 2013): 574–79. http://dx.doi.org/10.1310/hpj4807-574.

Abstract:
Purpose The purpose of the article is to describe the experience of selecting, developing, and implementing a pharmacy department intranet site with commentary regarding application to other institutions. Clinical practitioners and supporting staff need an effective, efficient, organized, and user-friendly communication tool to utilize and relay information required to optimize patient care. Summary To create a functional and user-friendly department intranet site, department leadership and staff should be involved in the process from selection of product through implementation. A product that supports both document storage management and communication delivery and has the capability to be customized to provide varied levels of site access is desirable. The designation of an intranet site owner/developer within the department will facilitate purposeful site design and site maintenance execution. A well-designed and up-to-date site along with formal end-user training are essential for staff adoption and continued utilization. Conclusion Development of a department intranet site requires a considerable time investment by several members of the department. The implementation of an intranet site can be an important step toward achieving improved communications. Staff utilization of this resource is key to its success.
10

Khoirrani, Tri Lestari, and Risma Nur Arifah. "Default Solution on Sharia Housing in The View of Private Law and Islamic Law". AT-TURAS: Jurnal Studi Keislaman 8, no. 1 (June 30, 2021): 27–48. http://dx.doi.org/10.33650/at-turas.v8i1.1603.

Abstract:
Sharia housing with sharia developer system using cash and in house method without involving the bank is thriving, because in the advert mentioned no fines and confiscations. While every financing there is a possibility of contract breach, such as La Tansa Cluster Malang Housing, there are several users doing breach of contract. This breach of contract can cause problems between developer and user so that solution is needed. The goals of this research are determining factors that led to the breach of contract and the efforts to resolve trade breach of contract in La Tansa Cluster Malang in terms of civil code and Islamic laws. This research uses empirical legal research with a sociological juridical approach, a concept approach, and a legislative approach. Data collected by interviews with developer of La Tansa Cluster Malang and the user, and then analyzed using qualitative descriptive analysis methods. The results showed that breach of contract occurred in La Tansa Cluster Malang due to lack of user candidate analysis, postpone payments, family deaths, business failures, serious illness, and inaccurate financial predictions. According to civil code, the efforts to resolve contract breach of sharia housing trade in La Tansa Cluster Malang are doing deliberation, communication, time extension, PPJB canceling (according to the Article 1338 paragraph (2) of Civil Code), and money returning (according to Article 1267 and Article 1248 of Civil Code). While the efforts to resolve the breach of contract in La Tansa Cluster Malang according to Islamic laws are doing deliberation or reconciliation (shulh), communication, time extension (according to surah al-Baqarah (2): 280), PPJB canceling (based on fasakh iqalah), and money returning (based on dhaman al ‘aqdi).
11

Rohimah, Selma Osa, Wahyu Andhyka Kusuma, and Rafiatul Husna. "PENGGALIAN KARAKTERISTIK PENGGUNA PADA FASE ELISITASI PERANGKAT LUNAK MENGGUNAKAN USER PERSONA". SINTECH (Science and Information Technology) Journal 4, no. 1 (April 21, 2021): 22–28. http://dx.doi.org/10.31598/sintechjournal.v4i1.572.

Abstract:
Requirements elicitation is activities aimed at finding the needs of a system through communication with system users. The purpose of requirements elicitation is to find out what problems have been encountered. User persona is a fictional representation of the ideal user. The software developer will begin the design process by conducting user research, establishing communication with the target user and identifying exactly what they need with the product to be made. Personas are generally based on these users and combine the needs, goals, and behavior patterns observed from the target audience. This study aims to determine information systems related to the characteristics and needs of users of the software, so that the data generated in the form of qualitative data from prospective users. By using the hypotheses that are made, which will later be analyzed and used as a basis for developing projects. The results of this study are information in accordance with user needs that will be implemented using a use case and software prototype.
12

Nurhadryani, Yani, Halimah Tus Sa'diah, Desta Wirnas, and Firman Ardiansyah. "Evaluasi ICT (Information and Communication Technology) Literacy Petani Kedelai". Jurnal Ilmu Komputer dan Agri-Informatika 5, no. 2 (April 12, 2019): 128. http://dx.doi.org/10.29244/jika.5.2.128-133.

Abstract:
The application of ICT in agriculture can improve information services for farmers by providing relevant and timely information. However, obstacles and failures in technology adoption remain, owing to differences in preferences between developers and users. An evaluation of farmers' ICT literacy is therefore needed so that the software being developed can be used optimally by farmers. This study aims to measure farmers' ICT literacy through a questionnaire evaluation based on the ETS (Educational Testing Service) framework. The sample consisted of 30 soybean farmers. Of the 30 respondents, 73% of the farmers used mobile phones, 13% computers, and 7% the internet. The farmers' ICT proficiency in using mobile phones, computers, and the internet was 59%, 21%, and 18%, respectively. This indicates that the mobile phone is the appropriate tool for applying ICT in agriculture. The only ICT that developers can build for direct use by farmers as its main users is SMS-gateway-based. Website- and mobile-based applications cannot yet be deployed directly to farmers, because most farmers do not use smartphones and computer and internet use is still quite low. Keywords: ICT evaluation, ICT, ICT literacy, farmers' ICT, ICT proficiency
13

Yang, Lin, and Cheng Rui Zhang. "Development of Industrial Ethernet Windows Driver for Motion Control System". Advanced Materials Research 197-198 (February 2011): 1751–56. http://dx.doi.org/10.4028/www.scientific.net/amr.197-198.1751.

Abstract:
An industrial Ethernet based motion control system is presented in the paper. The first slave node in the line topology is responsible for the precise cyclic communication, and it will reduce the real-time requirement on the main controller. The communication protocol is implemented as Windows NDIS protocol driver, and provides higher priority than normal user mode application. In addition, the development environment and process are also discussed, through which the developer can debug the system software at a single computer in both simulated mode and real mode.
14

Humphreys, Sal, Brian Fitzgerald, John Banks, and Nic Suzor. "Fan-Based Production for Computer Games: User-Led Innovation, the ‘Drift of Value’ and Intellectual Property Rights". Media International Australia 114, no. 1 (February 2005): 16–29. http://dx.doi.org/10.1177/1329878x0511400104.

Abstract:
Fan-based or third party content creation has assumed an integral place in the multi-million dollar computer games industry. The emerging production ecology that involves new kinds of distributed organisations and ad hoc networks epitomises the ‘drift of value’ from producer to consumer and allows us to understand how user-led innovation influences the creative industries. But the ability to control intellectual property rights in content production is critical to the power structures and social dynamic that are being created in this space. Trainz, a train simulation game released by Brisbane developer Auran, which relies heavily on fan-created content for its success, is used as a case study. The licence agreements between Auran and the fan creators are analysed in order to understand how the balance between the commercial and non-commercial is achieved and how the tension between open networks of collaboration and closed structures of commercial competitive environments are negotiated.
15

Stewart, James, and Sampsa Hyysalo. "INTERMEDIARIES, USERS AND SOCIAL LEARNING IN TECHNOLOGICAL INNOVATION". International Journal of Innovation Management 12, no. 03 (September 2008): 295–325. http://dx.doi.org/10.1142/s1363919608002035.

Abstract:
This paper explores the role of intermediaries in the development and appropriation of new technologies. We focus on intermediaries that facilitate user innovation, and the linking of user innovation into supply side activities. We review findings on intermediaries in some of our studies and other available literature to build a framework to explore of how intermediaries work in making innovation happen. We make sense of these processes by taking a long-term view of the dynamics of technology and market development using the social learning in technological innovation (SLTI) framework. Our primary concern is with innovation intermediaries and their core roles of configuring, facilitating and brokering technologies, uses and relationships in uncertain and emerging markets. We show the range of positions and influence they have along the supply-use axis in a number of different innovation contexts, and how they are able to bridge the user-developer innovation domains. Equipped with these insights, we explore in more depth how intermediaries affect the shape of new information and communication technologies and the importance of identifying and nurturing the user-side intermediaries that are crucial to innovation success.
16

Zhang, Jun An, Ya Hong Guo, and Guo Min Mo. "A Software Hardware Co-Design Approach for FPGAs on Nios II Soft-Core Processors". Applied Mechanics and Materials 373-375 (August 2013): 1591–94. http://dx.doi.org/10.4028/www.scientific.net/amm.373-375.1591.

Abstract:
In order to prove the applicability of the design approach with complex System-on-Chip (SoC), equipments for real-time electrocardiographic (ECG) signal generator and corresponding algorithm have been implemented in this study. The study mainly focused on completing a SoC design which constructs a customizable system via user interface to an FPGA Chip in accordance with the need of a specific application. In the proposed design flow the architecture of the generated hardware is tailored to match the communication structure of the application. This allows the developer to meet the system's performance, size and power consumption requirements with short time to market. The feature-rich multimedia products can meet market expectations of high performance at low cost and lower energy consumption.
17

Mohd Zukhi, Mohd Zhafri, Azham Hussain, and Husniza Husni. "Culturicon Design Model for Social Mobile Application". International Journal of Interactive Mobile Technologies (iJIM) 14, no. 05 (April 7, 2020): 16. http://dx.doi.org/10.3991/ijim.v14i05.13313.

Abstract:
The usage of emoticon in computer-mediated communication has been growing rapidly among users, especially in social media. Emoticon has been used to express feelings, emotions, gestures, actions and places. Despite the growing number of emoticon users around the world, study on the cultural elements of the emoticon is still lacking. This research aims to propose a model for the development of Culturicon, which is Cultural-Based Emoticon. In doing so, a verification process must be done to the proposed model to ensure that the model is well verified. Expert review method was used for the verification method. Experts from the field of Human-Computer Interaction, User Experience and cultural study especially the academicians were chosen. In addition, application developer and graphic designer also were chosen as expert from the industry. The experts were approached by email and performed the verification by answering online questionnaire provided. The result obtained from these experts were analyzed and amendment were made based from the comments and suggestions. Results showed that 91% experts agreed the connections and flows of all components in the proposed model are logical and readable. Expert verification is important to ensure that the model is being develop correctly. By having this model, it can aid designer and developer in designing meaningful and effective Culturicon.
18

Topçu, Okan, and Levent Yilmaz. "Rapid prototyping of cognitive agent simulations using C-BML transformations". Journal of Defense Modeling and Simulation: Applications, Methodology, Technology 17, no. 2 (July 4, 2019): 155–73. http://dx.doi.org/10.1177/1548512919860222.

Abstract:
Simulating battle management is an essential technique used in planning and mission rehearsal as well as training. Simulation development costs tend to be high due to the complexity of cognitive system architectures in such applications. Due to this complexity, it takes significant effort for a simulation developer to comprehend the problem domain enough to capture accurately in a simulation code. Domain-specific languages (DSL) play an important role in narrowing the communication gap between the domain user and the developer and hence facilitate rapid development. In command and control (C2) applications, the coalition battle management language (C-BML) serves as a DSL for exchanging battle information among C2 systems, simulations, and autonomous elements. In this article, we use a rapid prototyping framework for cognitive agents and demonstrate deployment of agent systems by adopting the model driven engineering approach. To this end, we extend the use of C-BML and automatically transform it in a cognitive agent model, which is then used for adaptive decision making at runtime. As a result, during a simulation run, it is possible to initialize and modify an agent’s goal reasoning model. The cognitive agent models are based on the deliberative coherence theory, which provides a goal reasoning system in terms of coherence-driven agents.
19

Goerigk, Lars, and Nisha Mehta. "A Trip to the Density Functional Theory Zoo: Warnings and Recommendations for the User". Australian Journal of Chemistry 72, no. 8 (2019): 563. http://dx.doi.org/10.1071/ch19023.

Abstract:
This account is written for general users of density functional theory (DFT) methods as well as experimental researchers who are new to the field and would like to conduct such calculations. Its main emphasis lies on how to find a way through the confusing ‘zoo’ of DFT by addressing common misconceptions and highlighting those modern methods that should ideally be used in calculations of energetic properties and geometries. A particular focus is on highly popular methods and the important fact that popularity does not imply accuracy. In this context, we present a new analysis of the openly available data published in Swart and co-workers’ famous annual ‘DFT poll’ (http://www.marcelswart.eu/dft-poll/) to demonstrate the existing communication gap between the DFT user and developer communities. We show that despite considerable methodological advances in the field, the perception of some parts of the user community regarding their favourite approaches has changed little. It is hoped that this account makes a contribution towards changing this status and that users are inspired to adjust their current computational protocols to accommodate strategies that are based on proven robustness, accuracy, and efficiency rather than popularity.
20

Duguay, Stefanie, Jean Burgess, and Nicolas Suzor. "Queer women’s experiences of patchwork platform governance on Tinder, Instagram, and Vine". Convergence: The International Journal of Research into New Media Technologies 26, no. 2 (June 19, 2018): 237–52. http://dx.doi.org/10.1177/1354856518781530.

Abstract:
Leaked documents, press coverage, and user protests have increasingly drawn attention to social media platforms’ seemingly contradictory governance practices. We investigate the governance approaches of Tinder, Instagram, and Vine through detailed analyses of each platform, using the ‘walkthrough method’ (Light, Burgess, and Duguay, 2016 The walkthrough method: An approach to the study of apps. New Media & Society 20(3).), as well as interviews with their queer female users. Across these three platforms, we identify a common approach we call ‘patchwork platform governance’: one that relies on formal policies and content moderation mechanisms but pays little attention to dominant platform technocultures (including both developer cultures and cultures of use) and their sustaining architectures. Our analysis of these platforms and reported user experiences shows that formal governance measures like Terms of Service and flagging mechanisms did not protect users from harassment, discrimination, and censorship. Key components of the platforms’ architectures, including cross-platform connectivity, hashtag filtering, and algorithmic recommendation systems, reinforced these technocultures. This significantly limited queer women’s ability to participate and be visible on these platforms, as they often self-censored to avoid harassment, reduced the scope of their activities, or left the platform altogether. Based on these findings, we argue that there is a need for platforms to take more systematic approaches to governance that comprehensively consider the role of a platform’s architecture in shaping and sustaining dominant technocultures.
21

Bulatova, Anastasia E., Denis A. Evsevichev, Oksana V. Maksimova, and Mikhail K. Samokhvalov. "PROGRAM DEVELOPMENT FOR DESIGNING SIMULATORS OF ADVANCED FLIGHT ELECTRONIC SUPPORT SYSTEMS". Автоматизация Процессов Управления 62, no. 4 (2020): 131–39. http://dx.doi.org/10.35752/1991-2927-2020-4-62-131-139.

Abstract:
The main radio equipment used by aviation specialists are communication, radar and radio navigation systems. Training of students in these areas is a complex task that requires significant efforts of lecturer. The article proposes a concept of simulator designing that are supposed to be adopted in the educational process of civil aviation institutions based on hybrid approach to the virtual simulators designing. Such concept was applied in the development of methodology for designing of simulators for flight electronic support, which involves solving the problem of selective structural synthesis. The problem of selective structural synthesis of design solutions is described using a morphological tree and implies finding the graph edge that ensures meeting a set of rules with the constraints specified in the process of specification drawing up, which is replenished by the developer-user. Due to random nature, the structural optimization method implies a sequential search for possible options. A computer-aided system of simulator designing for flight electronic support systems was developed based on the methodology. The development of a computer-aided system of simulator designing for radio communications, radar and navigation equipment makes it possible to provide effective training of future pilots and air traffic control officers.
22

Hafit, Hanayanti, Chiam Wui Xiang, Munirah Mohd Yusof, Norfaradilla Wahid, and Shahreen Kassim. "Malaysian Sign Language Mobile Learning Application: A recommendation app to communicate with hearing-impaired communities". International Journal of Electrical and Computer Engineering (IJECE) 9, no. 6 (December 1, 2019): 5512. http://dx.doi.org/10.11591/ijece.v9i6.pp5512-5518.

Abstract:
Malaysian Sign Language (MSL) is an important language that is used as the primary communication method for the deaf communities with the others. Currently, the MSL is poorly known by the Malaysians and the existing platform of learning the sign language is inefficient, not to mention the incomplete functionality of the existing mobile learning application in the market. Hence, the purpose of developing this application is aimed to increase the knowledge and recognition of the public towards the MSL and allows them to learn the sign language more effectively. One of the features in this application is sign detection, which could analyze the image captured by phone camera into sign meaning. The application also comprised various categories of the sign for efficient learning and quiz to test user knowledge against their learned sign language. Besides, there is a feedback module for the user to express their opinions and suggestions towards the application to the developer. This application is aimed to help the public to learn MSL efficiently by selecting the category of sign they wish to learn and test themselves by using the quiz module. Besides, the application could also detect the unknown sign by capturing the image and analyze it. The application helped to raise the recognition of MSL among the public and expose the public to the sign language knowledge. It had also become a small help in breaking the barrier of communication between the deaf communities and the public.
23

Nagy, Jeff, and Fred Turner. "The Selling of Virtual Reality: Novelty and Continuity in the Cultural Integration of Technology". Communication, Culture and Critique 12, no. 4 (November 26, 2019): 535–52. http://dx.doi.org/10.1093/ccc/tcz038.

Testo completo
Abstract (sommario):
Abstract Since the spring of 2014, the consumer virtual reality (VR) industry has once again been racing to reach the public, providing an opportunity to track an emerging medium’s cultural integration in real time. We examined three sites on the sales chain that stretches from the laboratory to the living room: industry developer conferences, industrial prototypes, and end-user experiences. At each of these sites, marketers renegotiate VR’s novelty in order to sell it to specific constituencies. Paradoxically, these negotiations reveal how VR, typically presented as a disruptive innovation, has been called upon to stabilize and ensure the continuity of the past: that is, of particular cultural forms and of the industrial and technological infrastructures that sustain them. We argue that the enculturation of VR demonstrates that the processes that summon new technologies and construct them as novel also reinforce existing—and often unspoken—agreements about the ways that culture should be organized.
Gli stili APA, Harvard, Vancouver, ISO e altri
24

Nansen, Bjorn, e Rowan Wilken. "Techniques of the tactile body: A cultural phenomenology of toddlers and mobile touchscreens". Convergence: The International Journal of Research into New Media Technologies 25, n. 1 (4 dicembre 2018): 60–76. http://dx.doi.org/10.1177/1354856518815040.

Testo completo
Abstract (sommario):
A key dimension of young children’s mobile media engagement and play centres on their embodied relations, and how these are shaped with and through the interfaces, materiality and mobility of haptic media. This article explores these embodied dimensions of young children’s mobile media use, drawing on research from (1) ethnographic observation of young children’s play practices in family homes, (2) analysis of videos of young children’s tactile media interaction shared on YouTube and (3) analysis of user interface (UI) and mobile app developer literature, such as the ‘Event Handling Guide for iOS’, which encodes touchscreen interaction through the design constraints and possibilities of gesture input techniques. Taking as its starting point Marcel Mauss’ famous reflection on body techniques, this article draws on past and present research on mobile technologies, tactility and everyday life, to explore what might be involved in developing a ‘cultural phenomenology’ of mobile touchscreens. This research and analysis reveals the emergence of what we term a haptic habitus or cultivation of embodied dispositions for touchscreen conduct and competence.
Gli stili APA, Harvard, Vancouver, ISO e altri
25

Baliś, Bartosz, Marian Bubak, Włodzimierz Funika, Roland Wismüller, Marcin Radecki, Tomasz Szepieniec, Tomasz Arodź e Marcin Kurdziel. "Grid Environment for On-line Application Monitoring and Performance Analysis". Scientific Programming 12, n. 4 (2004): 239–51. http://dx.doi.org/10.1155/2004/896517.

Testo completo
Abstract (sommario):
This paper presents an application monitoring infrastructure developed within the CrossGrid project. The software enables performance measurements for the application developer and in this way facilitates the development of applications in the Grid environment. The application monitoring infrastructure is composed of a distributed monitoring system, the OCM-G, and a performance analysis tool called G-PM. The OCM-G is an on-line, grid-enabled monitoring system, while G-PM is an advanced graphical tool for evaluating and presenting the results of performance monitoring, in order to support optimization of application execution. G-PM supports built-in standard metrics and user-defined metrics expressed in the Performance Measurement Specification Language (PMSL). Communication between G-PM and the OCM-G is performed according to a well-defined protocol, OMIS (On-line Monitoring Interface Specification). In this paper, the architecture and features of the OCM-G and G-PM are described, together with an example of using the monitoring infrastructure to visualize the status of and communication within an application and to evaluate its performance, including discovering the cause of a performance flaw.
Gli stili APA, Harvard, Vancouver, ISO e altri
26

Wallace, Daniel F., John A. Dawson e Clent J. Blaylock. "Tactical Information GUI Engineering & Requirements Specification (TIGERS): A Top-Down HMI Engineering Process". Proceedings of the Human Factors and Ergonomics Society Annual Meeting 39, n. 18 (ottobre 1995): 1185–89. http://dx.doi.org/10.1177/154193129503901808.

Testo completo
Abstract (sommario):
The actual design of graphical user interfaces (GUIs) for supervisory control systems largely falls to software developers, as opposed to qualified human engineers (HEs). This is due in large part to the disconnect among the primary players (operational subject matter experts (SMEs), software developers, & human engineers) and the lack of a suitable communications vehicle to bring all these critical perspectives to bear in the design process. We define a process, TIGERS (Tactical Information GUI Engineering & Requirements Specification), which provides a vehicle whereby SMEs can play a more active role in defining the system “process” from a top-down perspective. Together with a human engineer, the SME articulates the critical decisions to be made and the information and information sources required to support each decision. This articulation uses “operational sequence diagrams” (OSDs) as the primary tool or medium for communication. Once the OSDs are articulated, the human engineer can better define the optimal display format of the information, define the critical system events that impact each decision, and obtain validation reviews from the SME and developer. This articulation of the tasks and information requirements is then sufficient to permit actual system design. Byproducts of this process are workload simulation parameters, explicit documentation of the HMI design process, and a traceability matrix to support design specification. We present this approach, provide two case studies, and identify how it can be applied to other systems development projects.
Gli stili APA, Harvard, Vancouver, ISO e altri
27

Al-Turany, Mohammad, Alexey Rybalchenko, Dennis Klein, Matthias Kretz, Dmytro Kresan, Radoslaw Karabowicz, Andrey Lebedev, Anar Manafov, Thorsten Kollegger e Florian Uhlig. "ALFA: A framework for building distributed applications". EPJ Web of Conferences 245 (2020): 05021. http://dx.doi.org/10.1051/epjconf/202024505021.

Testo completo
Abstract (sommario):
The ALFA framework is a joint development between the ALICE Online-Offline and FairRoot teams. ALFA has a distributed architecture, i.e. a collection of highly maintainable, testable, loosely coupled, independently deployable processes. ALFA allows the developer to focus on building single-function modules with well-defined interfaces and operations. The communication between the independent processes is handled by the FairMQ transport layer. FairMQ offers multiple implementations of its abstract data transport interface, integrating popular data transport technologies such as ZeroMQ and nanomsg. Furthermore, it provides shared memory and RDMA transport (based on libfabric) for high-throughput, low-latency applications, and it allows a single process to use multiple, different transports at the same time. FairMQ-based processes can be controlled and orchestrated by different systems through the corresponding plugin. ALFA also delivers the Dynamic Deployment System (DDS) as an independent set of utilities and interfaces, providing dynamic distribution of user processes on any Resource Management System (RMS) or on a laptop. ALFA is already being tested and used by different experiments at different stages of data processing, as it offers easy integration of heterogeneous hardware and software.
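The abstract above describes processes coded against an abstract data transport interface whose concrete implementation (ZeroMQ, nanomsg, shared memory, …) is swappable. A minimal pure-Python sketch of that pattern, with all class and method names illustrative stand-ins rather than FairMQ's actual API:

```python
# Sketch of an abstract transport interface with a swappable implementation.
# Names are illustrative only; FairMQ's real API differs.
from abc import ABC, abstractmethod
from collections import deque

class Transport(ABC):
    @abstractmethod
    def send(self, channel, msg): ...
    @abstractmethod
    def receive(self, channel): ...

class InMemoryTransport(Transport):
    # Stand-in for a concrete transport (e.g. shared memory).
    def __init__(self):
        self.channels = {}
    def send(self, channel, msg):
        self.channels.setdefault(channel, deque()).append(msg)
    def receive(self, channel):
        return self.channels[channel].popleft()

class Device:
    # A single-function process, wired to a concrete transport at run time,
    # so its logic never depends on the transport technology.
    def __init__(self, transport):
        self.transport = transport

producer = Device(InMemoryTransport())
producer.transport.send("data", b"event-001")
print(producer.transport.receive("data"))   # → b'event-001'
```

Swapping `InMemoryTransport` for another `Transport` subclass leaves `Device` untouched, which is the maintainability property the distributed architecture aims for.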
Gli stili APA, Harvard, Vancouver, ISO e altri
28

Kavats, Elena Aleksandrovna, e Artem Aleksandrovich Kostenko. "ANALYSIS OF CONNECTION METHODS OF TELEGRAM ROBOTS WITH SERVER PART". System technologies 3, n. 122 (10 ottobre 2019): 19–24. http://dx.doi.org/10.34185/1562-9945-3-122-2019-03.

Testo completo
Abstract (sommario):
The paper analyzes the methods by which bot applications interact with Telegram servers. The standard polling method (Long Polling) is compared with Webhook, both in terms of the speed of interaction between the application and the end user and in terms of setup complexity from the developer's point of view. Switching a Telegram bot to Webhook-enabled interaction with the Telegram servers significantly improves the performance of the program as a whole, saving the user's query execution time and increasing fault tolerance. The purpose of the study is to compare the methods of interaction available to a Telegram bot application written in Python, and to implement both methods in practice in order to gauge the complexity of writing each solution. Webhook is a way to deliver data to applications in real time: unlike traditional APIs, where the application must query repeatedly to obtain up-to-date information, Webhook sends data immediately. The two most common communication options between a bot and the Telegram server are considered. The most common option is to periodically poll the Telegram servers for new information. This is done through Long Polling: the connection is held open for a short time, and all pending updates arrive at the bot at once. As an alternative, communication between the bot application and the Telegram servers using Webhook was implemented. Changing the data exchange method from standard polling (Long Polling) to Webhook demonstrated its indisputable advantage in loaded applications, namely those receiving more than a thousand incoming requests.
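The contrast the abstract draws between Long Polling and Webhook delivery can be sketched as a toy, self-contained simulation. The class and method names below are illustrative stand-ins, not the real Telegram Bot API (whose corresponding calls are `getUpdates` and `setWebhook`):

```python
# Toy simulation of the two update-delivery models.
# Long Polling: the bot repeatedly asks for updates newer than an offset.
# Webhook: the server pushes each update to a registered callback at once.

class FakeTelegramServer:
    def __init__(self):
        self.updates = []      # queued updates, each with an update_id
        self.webhook = None    # callback registered in webhook mode

    def receive_message(self, text):
        update = {"update_id": len(self.updates) + 1, "text": text}
        if self.webhook:               # webhook mode: push immediately
            self.webhook(update)
        else:                          # polling mode: queue until asked
            self.updates.append(update)

    def get_updates(self, offset=0):
        """What a long-polling bot calls repeatedly (cf. getUpdates)."""
        return [u for u in self.updates if u["update_id"] > offset]

    def set_webhook(self, callback):
        """What a webhook bot does once (cf. setWebhook)."""
        self.webhook = callback


# Polling bot: must keep asking, tracking an offset to skip seen updates.
server = FakeTelegramServer()
server.receive_message("hello")
server.receive_message("world")
polled = server.get_updates(offset=0)
offset = max(u["update_id"] for u in polled)    # acknowledge what we saw
assert [u["text"] for u in polled] == ["hello", "world"]
assert server.get_updates(offset=offset) == []  # nothing new yet

# Webhook bot: updates arrive without asking.
pushed = []
server2 = FakeTelegramServer()
server2.set_webhook(pushed.append)
server2.receive_message("instant")
assert [u["text"] for u in pushed] == ["instant"]
```

The simulation makes the performance argument visible: under polling, latency and load scale with how often the bot asks; under webhook, each update costs exactly one delivery.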
Gli stili APA, Harvard, Vancouver, ISO e altri
29

Zhou, Lei, Chu Zhang, Mohamed Farag Taha, Zhengjun Qiu e Yong He. "Determination of Leaf Water Content with a Portable NIRS System Based on Deep Learning and Information Fusion Analysis". Transactions of the ASABE 64, n. 1 (2021): 127–35. http://dx.doi.org/10.13031/trans.13989.

Testo completo
Abstract (sommario):
Highlights: A portable NIRS system with local computing hardware was developed for leaf water content determination. The proposed convolutional neural network for regression showed satisfactory performance. Decision fusion of multiple regression models achieved higher precision than single models. All of the devices and machine intelligence algorithms were integrated into the system. Software was developed for system control and the user interface. Abstract. Spectroscopy has been widely used as a valid non-destructive technique for the determination of crop physiological parameters. In this study, a portable near-infrared spectroscopy (NIRS) system was developed for rapid measurement of rape (Brassica campestris) leaf water content. An integrated spectrometer (900 to 1700 nm) was used to collect the spectra. A Wi-Fi module was adopted for driving the spectrometer and realizing data communication. The NVIDIA Jetson Nano developer kit was employed to handle the received spectra and perform computing tasks. Three embedded spectral analysis models, including support vector regression (SVR), partial least squares regression (PLSR), and deep convolutional neural network for regression (CNN-R), and decision fusions of these methods were built and compared. The results demonstrated that the separate models produced satisfactory predictions. The proposed system achieved the highest precision based on the fusion of PLSR and CNN-R. The hardware devices and analytical algorithms were all integrated into the proposed portable system, and the tested samples were collected from an actual field environment, which shows great potential of the system for outdoor applications. Keywords: Decision fusion, Deep learning, Leaf water content, Local computing, Portable NIRS system.
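The decision-fusion idea in the abstract above — combining predictions from independent regression models — can be sketched with two toy regressors standing in for the system's PLSR and CNN-R models (the actual fusion scheme is not specified, so simple averaging is assumed here):

```python
# Decision-level fusion sketch: average the predictions of several
# independently fitted regressors. The two toy models below are
# stand-ins for the paper's PLSR and CNN-R models.

def fit_linear(xs, ys):
    # Ordinary least squares for y = a*x + b (single feature, closed form).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return lambda x: a * x + b

def fit_nearest(xs, ys):
    # 1-nearest-neighbour regressor: predict the y of the closest x.
    pairs = list(zip(xs, ys))
    return lambda x: min(pairs, key=lambda p: abs(p[0] - x))[1]

def fuse(models, x):
    # Decision fusion: each model predicts, the predictions are averaged.
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Toy calibration data: spectral index -> leaf water content.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [10.0, 12.0, 14.0, 16.0]
m1, m2 = fit_linear(xs, ys), fit_nearest(xs, ys)
print(fuse([m1, m2], 2.5))   # → 12.5 (mean of 13.0 and 12.0)
```

Fusion pays off when the component models make partly independent errors, which is the effect the study reports for PLSR combined with CNN-R.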
Gli stili APA, Harvard, Vancouver, ISO e altri
30

Pushkar, Oleksandr, e Yuliia Tararyntseva. "Methodical recommendations for creating video content in the digital environment on the example of developing a promotional video for universities". Marketing and Digital Technologies 5, n. 2 (29 giugno 2021): 27–35. http://dx.doi.org/10.15276/mdt.5.2.2021.3.

Testo completo
Abstract (sommario):
The aim of the article. Impression management is especially relevant in digital marketing, because more and more communication with today's audience takes place in the digital environment, on mobile phones and other gadgets. The problem is the lack of systematic information on the practical implementation of the concept of the economy of impressions in the digital environment. The purpose of the article is to form practical recommendations for creating video content in the digital environment, using the development of a university promotional video as an example. Analysis results. The article presents methodological recommendations for the practical application of the concept of the economy of impressions in the digital environment, illustrated by the development of a university promotional video. The significance of impressions in the process of information processing, as a factor that holds the user's attention, is specified. Successfully designed digital content engages the user's imagination, transfers the user into the situation presented, and makes him or her part of the action that takes place. Thanks to the proposed recommendations, the developer will be able to offer content that delivers a sensory user experience. Conclusions and directions for further research. The stages of planning and implementing a show in the digital environment in accordance with the laws of drama are substantiated. The action should take place in the user's imagination, under the influence of emotions that arise from the impressions of what is seen or heard. Elements of storytelling and digital marketing recommended for the development of emotional digital content are summarized. The components of a show in digital format are given. Motives for creating stories for a promotional video of a higher education institution are proposed.
Gli stili APA, Harvard, Vancouver, ISO e altri
31

Андрій Шуляк. "FEATURES OF IT TEACHER’S PROFESSIONAL ACTIVITY". Psychological and Pedagogical Problems of Modern School, n. 2(4) (4 settembre 2020): 140–49. http://dx.doi.org/10.31499/2706-6258.2(4).2020.223061.

Testo completo
Abstract (sommario):
The article reveals the features of the IT teacher's professional activity. We identify how the professional activity of IT teachers differs both from the training of specialists in other fields and from the work of other subject teachers. The IT teacher's use of informatics has specifics that, on the one hand, take into account the efficiency of the educational process and, on the other hand, impose a number of limitations. The difference between the activities of the future IT teacher and a future specialist in the field of information technology (a programmer) lies in using the full opportunities of ICT in the educational process and in focusing on achieving new educational outcomes, namely: the use, improvement and creation of teaching methods; expert assessment of electronic educational resources; the use, design, creation and editing of electronic educational resources; use of the potential of distributed information resources; organization of information interaction; management of the educational process through automation of information and methodological support; psychological and pedagogical diagnostics of the level of education based on computer diagnostic methods of control and assessment of students' knowledge; and development of new software and hardware, as well as methods of using web resources in the educational process. There are three levels of mastering the didactic capabilities of Web resources: the level of a Web resource user, the level of a Web resource developer, and the level of a teacher of Web technologies. The tasks of IT teachers and students in the information and communication educational environment are highlighted, the specifics of the IT teacher's work are revealed, and the professional and personal qualities of the future IT teacher are determined. We define the "portrait of a modern IT teacher".
Gli stili APA, Harvard, Vancouver, ISO e altri
32

Mozzilli, Simone Lehwess, Elisa Sassi, Ludmilla Rossi, Viviane Sonaglio, Fernanda Francisco, Adriana Duarte, Lisandra Panzoldo dos Santos, Gica Yabu e Simone Treiger Sredni. "Creating and developing games throughout interdisciplinarity and patient participation." Journal of Clinical Oncology 35, n. 15_suppl (20 maggio 2017): e18231-e18231. http://dx.doi.org/10.1200/jco.2017.35.15_suppl.e18231.

Testo completo
Abstract (sommario):
e18231 Background: In an increasingly technological world, digital information about childhood cancer that is easily accessible to pediatric patients, their families and healthcare providers is still missing. In this scenario, we decided to bring all our stakeholders – patients, caregivers, healthcare professionals, creatives and entrepreneurs – together to create an application for tablet and mobile devices that would help children demystify cancer and its treatment. Methods: Materials and platforms used included: 1- Interdisciplinarity: the combination of two or more academic disciplines in one activity; 2- Design Thinking: a human-centered methodology for rapid ideation (idea generation) with the ability to visualize and adapt the results in near real time; 3- Agile UX: a methodology which integrates the user experience design team and the developer team; 4- Patient-Centered Design; 5- Project Management Platforms; 6- Collaboration and Communication Tools. Results: 1- Development of an application with 20 minigames for the iOS and Android platforms, in a language that is easily accessible to children, with an attractive and colorful interface where the child can identify with a custom-designed character that guides them through different steps of the treatment. The initiative was highlighted in the App Store and resulted in almost 4 million views. 2- Acquisition and expansion of interdisciplinarity for all stakeholders. 3- And most importantly, development of empathy and compassion for our patients. Conclusions: The interdisciplinarity and the effective participation of all stakeholders in the project allowed integrated access to information, facilitating the understanding, identification, codification and availability of information and resulting in the generation of easily accessible knowledge for our target population.
The greatest transformation was not only in the technological advance itself, but in our new way of associating knowledge with human factors in order to educate, empower and improve the quality of life of children fighting cancer. www.beaba.org/games The app is currently available only in Portuguese, but we are raising funding to translate it into other languages.
Gli stili APA, Harvard, Vancouver, ISO e altri
33

VANHOOF, K., e J. SURMA. "COMPARING TWO HYBRID EXPERT SYSTEM SHELLS". International Journal of Software Engineering and Knowledge Engineering 04, n. 01 (marzo 1994): 159–64. http://dx.doi.org/10.1142/s0218194094000088.

Testo completo
Abstract (sommario):
This paper describes in full detail an analysis of two expert system shells: Level 5 Object and Kappa PC. The major components of these tools (knowledge representation, inference and control, developer interface, user interface and explanation facility, interface to external data sources, support and documentation) were studied and tested by means of small prototypes. Results and experiences of this work are given together with some software engineering remarks.
Gli stili APA, Harvard, Vancouver, ISO e altri
34

Vasconcelos, Leandro Guarino, Laercio Augusto Baldochi e Rafael Duarte Coelho Santos. "An approach to support the construction of adaptive Web applications". International Journal of Web Information Systems 16, n. 2 (26 febbraio 2020): 171–99. http://dx.doi.org/10.1108/ijwis-12-2018-0089.

Testo completo
Abstract (sommario):
Purpose This paper presents Real-time Usage Mining (RUM), an approach that exploits the rich information in client logs to support the construction of adaptive Web applications. The main goal of RUM is to provide useful information about the behavior of users who are currently browsing a Web application. By consuming this information, the application is able to adapt its user interface in real time to enhance the user experience. RUM provides two types of services: support for the detection of struggling users; and user profiling based on the detection of behavior patterns. Design/methodology/approach RUM leverages previous work on usability evaluation to provide a service that evaluates the usability of tasks performed by users while they browse applications. This evaluation is based on a metric that allows the detection of struggling users, making it possible to identify them as soon as a few logs from their interaction have been processed. RUM also exploits log mining techniques to detect usage patterns, which are then associated with user profiles previously defined by the application specialist. After associating usage patterns with user profiles, RUM is able to classify users as they browse applications, allowing the application developer to tailor the user interface to users' needs and preferences. Findings The proposed approach was exploited to improve user experience in real-world Web applications. Experiments showed that RUM was effective in supporting struggling users to complete tasks. Moreover, it was also effective in detecting usage patterns and associating them with user profiles.
Originality/value Although the literature reports studies that explore client logs to support both the detection of struggling users and user profiling based on usage patterns, no existing solution detects users from specific profiles, or struggling users, in real time while they are browsing Web applications. RUM also provides a toolkit that allows the approach to be easily deployed in any Web application.
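The paper's metric for detecting struggling users is not reproduced in the abstract, so the sketch below substitutes a deliberately simple stand-in: compare a user's event stream against the minimal action sequence for a task and flag the user when too much of the interaction is wasted effort. Both the metric and the threshold are illustrative assumptions, not RUM's actual definitions:

```python
# Toy struggling-user detector over client-side event logs.
# Metric and threshold are illustrative, not the paper's.

def efficiency(events, optimal_path):
    # Fraction of logged events that advance the task, i.e. that match
    # the next step of the minimal (optimal) action sequence.
    useful, i = 0, 0
    for e in events:
        if i < len(optimal_path) and e == optimal_path[i]:
            useful += 1
            i += 1
    return useful / len(events) if events else 0.0

def is_struggling(events, optimal_path, threshold=0.5):
    # Flag the user once most of the interaction is off the optimal path.
    return efficiency(events, optimal_path) < threshold

optimal = ["open_form", "fill_name", "submit"]
smooth  = ["open_form", "fill_name", "submit"]
lost    = ["open_form", "open_help", "scroll", "scroll",
           "fill_name", "clear", "fill_name", "submit"]
print(is_struggling(smooth, optimal), is_struggling(lost, optimal))
# → False True
```

Because the metric is computed incrementally over the event stream, a real-time system can re-evaluate it after every few logged events, which is what lets RUM-style approaches intervene while the user is still on the page.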
Gli stili APA, Harvard, Vancouver, ISO e altri
35

Ekler, Péter, e Dániel Pásztor. "Alkalmazott mesterséges intelligencia felhasználási területei és biztonsági kérdései – Mesterséges intelligencia a gyakorlatban". Scientia et Securitas 1, n. 1 (17 dicembre 2020): 35–42. http://dx.doi.org/10.1556/112.2020.00006.

Testo completo
Abstract (sommario):
Összefoglalás. In recent years, artificial intelligence has undergone enormous development, thanks to which it can now be found in some form in a great many fields and has become an integral part of much research. This is largely due to ever-improving learning algorithms and to the Big Data environment, which can supply enormous amounts of training data. The aim of this article is to summarize the current state of the technology. It presents the history of artificial intelligence and a large part of the application areas in which artificial intelligence is a central element. In addition, it points out various security gaps of artificial intelligence and its usability in the field of cybersecurity. The article presents a slice of current artificial intelligence applications that well illustrates the breadth of its uses. Summary. In the past years artificial intelligence has seen several improvements, which drove its usage to grow in various areas and made it the focus of much research. This can be attributed to improvements in learning algorithms and Big Data techniques, which can provide tremendous amounts of training data. The goal of this paper is to summarize the current state of artificial intelligence. We present its history, introduce the terminology used, and show technological areas using artificial intelligence as a core part of their applications. The paper also introduces the security concerns related to artificial intelligence solutions, but highlights as well how the technology can be used to enhance security in different applications. Finally, we present future opportunities and possible improvements. The paper shows some general artificial intelligence applications that demonstrate the wide-ranging usage of the technology. Many applications are built around artificial intelligence technologies, and there are many services that a developer can use to achieve intelligent behavior.
The foundation of the different approaches is a well-designed learning algorithm, while the key to every learning algorithm is the quality of the data set used during the learning phase. There are applications that focus on image processing, such as face detection or gesture detection to identify a person. Other solutions compare signatures, while others perform object or plate number detection (for example, the automatic parking system of an office building). Artificial intelligence and accurate data handling can also be used for anomaly detection in a real-time system. For example, there is ongoing research on anomaly detection at the ZalaZone autonomous car test field based on the collected sensor data. There are also more general applications, such as user profiling and automatic content recommendation using behavior analysis techniques. However, artificial intelligence technology also has security risks that need to be eliminated before an application is deployed publicly. One concern is the generation of fake content, which must be detected with other algorithms that focus on small but noticeable differences. It is also essential to protect the data used by the learning algorithm and to protect the logic flow of the solution; network security can help to protect these applications. Artificial intelligence can also help strengthen the security of a solution, as it is able to detect network anomalies and signs of a security issue. Therefore, the technology is widely used in IT security to prevent different types of attacks. As Big Data technologies, computational power, and storage capacity increase over time, there is space for improved artificial intelligence solutions that can learn from large, real-time data sets. Advancements in sensors can also help provide more precise data for different solutions. Finally, advanced natural language processing can help with communication between humans and computer-based solutions.
Gli stili APA, Harvard, Vancouver, ISO e altri
36

ORTIGOSA, ALVARO, e MARCELO CAMPO. "USING INCREMENTAL PLANNING TO FOSTER APPLICATION FRAMEWORK REUSE". International Journal of Software Engineering and Knowledge Engineering 10, n. 04 (agosto 2000): 433–48. http://dx.doi.org/10.1142/s0218194000000237.

Testo completo
Abstract (sommario):
In this work, we present an approach for documenting object-oriented application frameworks and for using that documentation to guide the framework instantiation process. Our approach is based on a shift from framework-centered to functionality-centered documentation, through which a tool can guide the instantiation process according to the functionality required for the new application. The fundamental idea of our work is the combination of user-task modeling and least-commitment planning methods to guide the instantiation process. Based on these techniques, the tool is able to present to the developer the different high-level activities that can be carried out when creating a new application from a framework, taking as a basis the documentation provided by the designer through instantiation rules.
Gli stili APA, Harvard, Vancouver, ISO e altri
37

Biørn-Hansen, Andreas, Tor-Morten Grønli, Gheorghita Ghinea e Sahel Alouneh. "An Empirical Study of Cross-Platform Mobile Development in Industry". Wireless Communications and Mobile Computing 2019 (3 gennaio 2019): 1–12. http://dx.doi.org/10.1155/2019/5743892.

Testo completo
Abstract (sommario):
The purpose of this study is to report on the industry’s perspectives and opinions on cross-platform mobile development, with an emphasis on the popularity and adoption of, and issues arising from, the use of technical development frameworks and tools. We designed and conducted an online survey questionnaire, for which 101 participants were recruited from various developer-oriented online forums and websites. A total of five questions are reported in this study, of which two employed a Likert scale instrument, while three were based on multiple choice. In terms of technical frameworks, we find that PhoneGap, the Ionic Framework, and React Native were the most popular, both in hobby projects and in professional settings. The participants report an awareness of trade-offs when embracing cross-platform technologies and consider penalties in performance and user experience to be expected. This is also in line with what is reported in academic research. We find patterns in the reported perceived issues which match both older and newer research, rendering the findings a point of departure for further endeavours.
Gli stili APA, Harvard, Vancouver, ISO e altri
38

Ibrahim, Idris Skloul, Peter J. B. King e Hans-Wolfgang Loidl. "NsGTFA: A GUI Tool to Easily Measure Network Performance through the Ns2 Trace File". Journal of Intelligent Systems 24, n. 4 (1 dicembre 2015): 467–77. http://dx.doi.org/10.1515/jisys-2014-0153.

Testo completo
Abstract (sommario):
Abstract Ns2 is an open-source communications network simulator primarily used in research and teaching. Ns2 provides substantial support for simulating TCP, routing, and multicast protocols over wired and wireless networks. Although Ns2 is a widely used, powerful simulator, it lacks a built-in way to compute the metrics used to assess network reliability and performance (e.g., the number of packets transferred from source to destination, packet delay, packet loss, etc.), and it does not analyse the trace files it produces. The data obtained from simulations are not straightforward to analyse, and Ns2 itself cannot provide the requested statistics or graphics. Moreover, analysing an Ns2 trace file with ad hoc software scripts requires further steps by a developer to process the data and produce graphical outputs, and the lack of standardised tools means that results from different users may not be strictly comparable. Alternative tools exist; however, most of them are not standalone applications, require additional libraries, and lack a user-friendly interface. This article presents the architecture and development considerations for the NsGTFA (Ns2 GUI Trace File Analyser) tool, which aims to simplify the management and enable the statistical analysis of trace files generated during network simulations. NsGTFA runs under Windows and has a friendly graphical user interface. The tool is a very fast standalone application implemented in VC++ that takes an Ns2 trace file as input. It can output two-dimensional (2D) and 3D graphs (points, lines, and bar charts) or data sets, whatever the trace file format (Tagged, Old, or New), and it can also output standard network performance metrics. NsGTFA satisfies most user needs: there is no complex installation process, and no external libraries are needed.
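The kind of metric extraction NsGTFA automates can be sketched as a small parser over Ns2's classic wired ("Old") trace format, whose space-separated fields are assumed here to be: event, time, from-node, to-node, packet type, size, flags, flow id, source, destination, sequence number, packet id. Events include `+` (enqueue), `-` (dequeue), `r` (receive), and `d` (drop):

```python
# Minimal Ns2 trace parser computing two of the standard metrics the
# article mentions: packet delivery ratio and mean end-to-end delay.
# Field layout assumed (classic wired format):
#   event time from to type size flags fid src dst seq pkt_id

def analyse(trace_lines):
    sent, recv = {}, {}
    for line in trace_lines:
        f = line.split()
        if len(f) < 12:
            continue
        event, time, pkt_id = f[0], float(f[1]), f[11]
        if event == '+' and pkt_id not in sent:
            sent[pkt_id] = time        # first enqueue = send time
        elif event == 'r':
            recv[pkt_id] = time        # last receive = arrival time
    delivered = [p for p in sent if p in recv]
    ratio = len(delivered) / len(sent) if sent else 0.0
    delay = (sum(recv[p] - sent[p] for p in delivered) / len(delivered)
             if delivered else 0.0)
    return ratio, delay

trace = [
    "+ 0.10 0 1 tcp 1000 ------- 1 0.0 1.0 0 1",
    "r 0.30 0 1 tcp 1000 ------- 1 0.0 1.0 0 1",
    "+ 0.20 0 1 tcp 1000 ------- 1 0.0 1.0 1 2",
    "d 0.25 0 1 tcp 1000 ------- 1 0.0 1.0 1 2",
]
ratio, delay = analyse(trace)
print(ratio, round(delay, 6))   # → 0.5 0.2  (packet 2 was dropped)
```

A GUI tool like NsGTFA wraps exactly this sort of pass over the trace, plus format detection and plotting, so that users do not each write their own incompatible scripts.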
Gli stili APA, Harvard, Vancouver, ISO e altri
39

Al-Saadi, Tareq Ali, Tamer Mohammed Aljarrah, Anahed Mudheher Alhashemi e Azham Hussain. "A Systematic Review of Usability Challenges and Testing in Mobile Health". International Journal of Accounting and Financial Reporting 5, n. 2 (12 luglio 2015): 1. http://dx.doi.org/10.5296/ijafr.v5i2.8004.

Full text
Abstract (summary):
Nowadays, the combination of advanced mobile communications and computing power in portable devices known as "smartphones" has found ever greater use, including among health care professionals. Few studies, however, address the usability challenges of mobile health that patients and developers alike face. This paper therefore aims to analyse the usability challenges in mobile health and usability testing. A systematic review was used to collect the relevant prior studies, concentrating on three digital libraries (Google Scholar, ACM, and IEEE) and on studies published between 2007 and 2015. Based on the inclusion criteria, 11 of 106 studies were selected. In more detail, of the usability challenges found, 27% concerned the user interface, 22% tasks and screen size, 16% media insertion, and 13% the network. Regarding usability testing, 46% of the selected studies used formal testing, 45% informal testing, and 9% a mix of formal and informal. To sum up, smartphones are used more and more in health care day by day. Medical applications make smartphones useful tools in the practice of evidence-based medicine at the point of care, in addition to their use in mobile clinical communications. This study contributes by helping researchers assess the impact of these challenges on usability testing and the types of usability testing used in mobile health.
APA, Harvard, Vancouver, ISO and other styles
40

Ghorbani, M., S. Swift, S. J. E. Taylor and A. M. Payne. "Design of a Flexible, User Friendly Feature Matrix Generation System and its Application on Biomedical Datasets". Journal of Grid Computing 18, no. 3 (27 April 2020): 507–27. http://dx.doi.org/10.1007/s10723-020-09518-y.

Full text
Abstract (summary):
The generation of a feature matrix is the first step in conducting machine learning analyses on complex data sets such as those containing DNA, RNA or protein sequences. These matrices contain, for each object, features that have to be identified using complex algorithms that interrogate the data. They are normally generated by combining the results of running such algorithms across various datasets drawn from different and distributed data sources. Thus, for non-computing experts, the generation of such matrices proves a barrier to employing machine learning techniques. Furthermore, as datasets become larger, this barrier is compounded by the limitations of the single personal computer most often used by investigators to carry out such analyses. Here we propose a user-friendly system to generate feature matrices in a way that is flexible, scalable and extendable. Additionally, by making use of the Berkeley Open Infrastructure for Network Computing (BOINC) software, the process can be sped up using the distributed volunteer computing available in most institutions. The system makes use of the Grid and Cloud User Support Environment (gUSE), combined with the Web Services Parallel Grid Runtime and Developer Environment Portal (WS-PGRADE), to create workflow-based science gateways that allow users to submit work to the distributed computing resources. This report demonstrates the use of our proposed WS-PGRADE/gUSE BOINC system to identify features to populate matrices from very large DNA sequence data repositories; we propose, however, that this system could be used to analyse a wide variety of feature sets including image, numerical and text data.
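As an illustration of the first step the paper describes (not the WS-PGRADE/gUSE BOINC pipeline itself), the sketch below builds a feature matrix from DNA sequences using k-mer counts, one row per sequence and one column per k-mer; the choice of k-mer counting as the feature is an assumption made here for illustration.

```python
from itertools import product

# Illustrative sketch: a feature matrix from DNA sequences, one row per
# sequence, one column per k-mer, each cell holding that k-mer's count.
def kmer_feature_matrix(sequences, k=2, alphabet="ACGT"):
    columns = ["".join(p) for p in product(alphabet, repeat=k)]
    index = {kmer: i for i, kmer in enumerate(columns)}
    matrix = []
    for seq in sequences:
        row = [0] * len(columns)
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if kmer in index:          # skip windows with ambiguous bases
                row[index[kmer]] += 1
        matrix.append(row)
    return columns, matrix
```

In a distributed setting, each worker would compute rows for its shard of sequences and the rows would then be concatenated into the final matrix.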
APA, Harvard, Vancouver, ISO and other styles
41

Abayomi-Alli, Adebayo Adewumi, Oluwasefunmi 'Tale Arogundade, Sanjay Misra, Mulkah Opeyemi Akala, Abiodun Motunrayo Ikotun and Bolanle Adefowoke Ojokoh. "An Ontology-Based Information Extraction System for Organic Farming". International Journal on Semantic Web and Information Systems 17, no. 2 (April 2021): 79–99. http://dx.doi.org/10.4018/ijswis.2021040105.

Full text
Abstract (summary):
In the existing farming system, information is obtained manually, and most times farmers act at their own discretion. Sometimes, farmers rely on information from experts and extension officers for decision making. In recent times, many information systems with relevant information on organic farming practices have become available; however, such information is scattered in different contexts, forms, and media all over the internet, making its retrieval difficult. The use of ontology, with the aid of a conceptual scheme, makes a comprehensive and detailed formalisation of any subject domain possible. This study is aimed at acquiring, storing, and making organic farming information available to current and intending software developers who may wish to develop applications for farmers. It employs information extraction (IE) and ontology development techniques to develop an ontology-based information extraction (OBIE) system called the ontology-based information extraction system for organic farming (OBIESOF). The knowledge base was built using the Protégé editor; Java was used to implement the ontology knowledge base with the aid of the application program interface for working with the Web Ontology Language (OWL API), while the HermiT reasoner was used to check the consistency of the ontology and to submit queries in order to verify their validity. The queries were expressed in the description logic (DL) query language. The authors tested the ontology's ability to respond to user queries by posing instances of the competency questions through the DL query interface. The answers generated by the ontology were promising and serve as positive pointers to its usefulness as a knowledge repository.
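The OBIE idea can be sketched in miniature: extract structured facts from free text into a small knowledge base, then answer queries against it. The toy below uses hand-written regex patterns; the patterns, relation names, and example sentences are invented for illustration and bear no relation to the actual OBIESOF ontology or its Protégé/OWL implementation.

```python
import re

# Toy ontology-style information extraction: pattern-based triple extraction
# into an in-memory store, then a simple relation query. Illustrative only.
PATTERNS = [
    (re.compile(r"(\w[\w ]*?) is a (\w[\w ]*)"), "is_a"),
    (re.compile(r"(\w[\w ]*?) controls (\w[\w ]*)"), "controls"),
]

def extract_triples(text):
    """Return (subject, relation, object) triples found in the text."""
    triples = []
    for sentence in text.split("."):
        for pattern, relation in PATTERNS:
            m = pattern.search(sentence.strip())
            if m:
                triples.append((m.group(1).strip(), relation, m.group(2).strip()))
    return triples

def query(triples, relation):
    """Return all (subject, object) pairs linked by the given relation."""
    return [(s, o) for s, r, o in triples if r == relation]

# Invented example sentences.
kb = extract_triples("Neem is a biopesticide. Crop rotation controls soil pests.")
```

A real OBIE system would map the extracted entities onto ontology classes and let a reasoner such as HermiT answer DL queries; the dictionary-based store here only mimics that last step.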
APA, Harvard, Vancouver, ISO and other styles
42

Ardimento, Pasquale, Maria Teresa Baldassarre, Marta Cimitile and Giuseppe Visaggio. "Empirical Validation of Knowledge Packages as Facilitators for Knowledge Transfer". Journal of Information & Knowledge Management 08, no. 03 (September 2009): 229–40. http://dx.doi.org/10.1142/s021964920900235x.

Full text
Abstract (summary):
Transferring research results into production systems requires, among other things, that knowledge be explicit and understandable by stakeholders. Such transfer is demanding, and many researchers have therefore been studying alternatives to classic approaches such as books and papers in order to ease knowledge acquisition on the part of users. In this context, we propose as an alternative the concept of a Knowledge Experience Package (KEP) with a specific structure. The KEP contains both the conceptual model(s) of the research results that make up the innovation, including all the necessary documentation ranging from papers to book chapters, and the experience collected in acquiring it in business processes, appropriately structured. The structure allows the identification of the knowledge chunk(s) that the developer who is acquiring the knowledge needs, in order to simplify the acquisition process. The experience is needed to point out the scenarios that the user will most likely face and therefore refer to. Both structure and experience are important factors for the transferability and efficacy of the innovation. Furthermore, we have carried out an experiment comparing the efficacy of this instrument with classic ones, along with the comprehensibility of the information enclosed in a KEP rather than in a set of papers. The experiment pointed out that knowledge packages are more effective than traditional instruments for knowledge transfer.
APA, Harvard, Vancouver, ISO and other styles
43

Li, Xiaochang, and Zhengjun Zhai. "UHNVM: A Universal Heterogeneous Cache Design with Non-Volatile Memory". Electronics 10, no. 15 (22 July 2021): 1760. http://dx.doi.org/10.3390/electronics10151760.

Full text
Abstract (summary):
During recent decades, non-volatile memory (NVM) has been anticipated to scale up main memory size, improve application performance, and reduce the speed gap between main memory and storage devices, while supporting persistent storage to cope with power outages. However, to fit NVM, all existing DRAM-based applications have to be rewritten by developers, and the developer must have a good understanding of the targeted application code in order to manually distinguish which data to place in NVM. To intelligently facilitate NVM deployment for existing legacy applications, we propose a universal heterogeneous cache hierarchy with non-volatile memory (UHNVM) which is able to automatically select and store the appropriate application data in NVM, without compulsory code understanding. In this article, a program context (PC) technique is proposed in user space to help UHNVM classify data. Compared with the conventional hot or cold file categories, the PC technique can categorise application data in a fine-grained manner, enabling us to store it efficiently in either NVM or SSDs for better performance. Our experimental results using a real Optane dual-inline-memory-module (DIMM) card show that our new heterogeneous architecture reduces elapsed times by about 11% compared to a conventional kernel memory configuration without NVM.
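A hedged sketch of the program-context idea described in the abstract (not the UHNVM implementation): hash the chain of call sites leading to an I/O into a compact context signature, count accesses per context, and steer frequently re-referenced contexts to NVM. The class name, threshold, and call-stack strings below are illustrative assumptions.

```python
import hashlib

# Sketch of program-context (PC) based data classification: contexts whose
# data are re-referenced often are placed in NVM, the rest on SSD.
class PCClassifier:
    def __init__(self, hot_threshold=3):
        self.hot_threshold = hot_threshold
        self.access_counts = {}

    def context_id(self, call_stack):
        # Hash the call-site chain into a compact context signature.
        return hashlib.sha1("/".join(call_stack).encode()).hexdigest()[:8]

    def record_access(self, call_stack):
        pc = self.context_id(call_stack)
        self.access_counts[pc] = self.access_counts.get(pc, 0) + 1
        return pc

    def placement(self, call_stack):
        # Contexts at or above the threshold are considered hot.
        count = self.access_counts.get(self.context_id(call_stack), 0)
        return "NVM" if count >= self.hot_threshold else "SSD"
```

The point of using the call-site chain rather than file names is granularity: two I/O paths through the same file can be classified differently if they come from different program contexts.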
APA, Harvard, Vancouver, ISO and other styles
44

König, Olaf. "Cartographic storytelling: 150 years of Swiss Federal Population Census". Abstracts of the ICA 1 (15 July 2019): 1. http://dx.doi.org/10.5194/ica-abs-1-182-2019.

Full text
Abstract (summary):
<p><strong>Abstract.</strong> The Swiss Federal Statistical Office (FSO) was founded in 1850 with the first exhaustive Population Census, which from then on was conducted on a ten-year cycle until 2000, and which was replaced after that by the annual “structural surveys” based on a population sample and the use of data from administrative registers. In 2018, the FSO started a project aiming to emphasize the full value of historical data from these previous Population Censuses records in order to publish some significant historical results and present on that basis some of the major developments that have occurred in Switzerland in the last 150 years. In this context, analog data have been digitized, and with this raw material, stories on important themes concerning the development of modern Switzerland have been written. These stories consist of a narrative approach that focuses on visual communication by mixing cartographic visualizations, charts and historical photographs, which support the written text and significantly contribute to the narrative.</p><p>Eight stories were drafted under this project, and their choice is based on both the availability of data over time, as well as the importance of these topics for the Swiss population and society – with the activity of official statistics always being a mirror that reflects society’s concerns. The various topics addressed in this project are population dynamics and demographic structure, cultural and religious aspects, the development of building area and the occupation of the territory, the structural development of the economy and finally the changes in the institutional structure of the country. These topics are addressed in their temporal and spatial dimension, and cover a period of more than 150 years.</p><p>This narrative approach – unique in the context of the FSO’s statistical dissemination – requires important work in the field of data visualization in particular with regard to thematic maps. 
Indeed, the spatial resolution of the digitized data – the smallest institutional spatial division, the Swiss municipalities – has required the production of new historicized geometries for every single Population Census since 1850. The fusions and dissociations of spatial units that have occurred during the past 150 years have profoundly marked the institutional structure of Switzerland. This dynamic is a challenge in the ongoing work of data management and cartographic production, and the FSO is proud to now have basemaps that precisely describe the state of the geometry for every census. This enables the production of numerous series of thematic maps at municipality level, and provides map readers with the opportunity to observe changes in Swiss society and its structures with unprecedented resolution over a very long time.</p><p>The dissemination of these stories is ensured through a website created ad hoc for the occasion. The production work is carried out in close collaboration with a web developer, a graphic designer and the FSO’s cartography competence center. The aim is to produce a new, original web publication intended for a broad audience, that is both relevant and attractive, and has a layout optimized to invite the user onto a visual journey in time along the history of the Federal Population Census.</p><p>With regard to cartographic visualizations, the produced maps have been the subject of a rehabilitated layout for maximum readability and efficiency and a high aesthetic quality. The addition of comments and the focus on specific observations facilitates reading and interpreting maps and supports the narrative. To provide maximum flexibility with respect to the graphics and enable quick loading of visualizations, these are integrated into HTML pages as SVG, which can subsequently be animated in the website. 
In a concurrent and complementary way, the produced maps are also made available in the Interactive Statistical Atlas of Switzerland (which is the FSO’s main means of thematic maps dissemination). This allows for the interactive exploration of maps, the visualization of animated time series, and data dissemination in the form of downloadable Excel files directly from the application.</p><p>This attempt at (carto)graphic narration is an opportunity to question narrative approaches in the field of graphic visualization in a very concrete framework of historical data valorization. Since storytelling and its cartographic variants in the form of story maps are an important trend today, this project provides an example and a contribution to this approach. The presentation will focus on presenting the structure and content of the stories, focusing on the cartographic and technical aspects of storytelling, and presenting the different choices and challenges encountered. In addition, editorial and technical strengths and weaknesses will also be discussed. As this project is a work in progress that will take place throughout 2019, this contribution also aims to be submitted to peer review, in order to improve our products in the future.</p>
APA, Harvard, Vancouver, ISO and other styles
45

Aplonia Lau, Elfreda. "Pengaruh Partisipasi Pemakai Terhadap Kepuasan Pemakai Dalam Pengembangan Sistem Informasi Dengan Faktor Kontinjensi Dan Pengaruhnya Terhadap Kinerja Pemakai Pada Perusahaan Menengah Di Kalimantan Timur". DiE: Jurnal Ilmu Ekonomi dan Manajemen 10, no. 1 (1 January 2013). http://dx.doi.org/10.30996/die.v10i1.235.

Full text
Abstract (summary):
This study examines behavioural accounting on the basis of information systems theory. The respondents were 200 users of information systems in medium enterprises in East Kalimantan. The results indicate that user participation in the development of information systems influences user satisfaction in information system development, although the magnitude of the effect is 9%. User-developer communication, task complexity, system complexity, user influence, and organisational culture moderate the effect of user participation on user satisfaction in information system development. It was also found that top management support acts as an independent predictor of user satisfaction in information system development. User-developer communication has a positive effect on user satisfaction in information system development, while user influence, task complexity, and system complexity have negative effects. Organisational culture has a positive effect on user satisfaction, and user satisfaction in turn has a positive effect on user performance in information system development.
Keywords: User Participation, Top Management Support, User-Developer Communication, Task Complexity, System Complexity, User Influence, Organisational Culture, User Satisfaction, User Performance.
APA, Harvard, Vancouver, ISO and other styles
46

"Strategies and Quality Guidelines for Effective User Interface Design". International Journal of Innovative Technology and Exploring Engineering 9, no. 5 (10 March 2020): 778–82. http://dx.doi.org/10.35940/ijitee.d1849.039520.

Full text
Abstract (summary):
The graphical user interface plays a vital role in human-computer interaction: it exchanges information and improves communication. User interface development represents an initial step towards integrating software development and human-computer interaction. The complexity of upcoming systems requires developers to make user interface designs more flexible, understandable, adaptable, and accessible to the end user. Software development recognises the importance of user interface design but does not provide concise guidelines for its construction and quality within the life cycle. The intent of this paper is to describe guidelines for user interface design quality, customisation, and construction. It highlights the various benefits, quality guidelines, construction approaches, and supporting tools from the perspective of user interface design. The information is useful to researchers, developers, and professionals of user interface design for coming generations.
APA, Harvard, Vancouver, ISO and other styles
47

"Application of Augmented Reality in Learning Bakery for Autism Spectrum Disorder". International Journal of Engineering and Advanced Technology 9, no. 1 (30 October 2019): 2616–20. http://dx.doi.org/10.35940/ijeat.a9853.109119.

Full text
Abstract (summary):
Many young children with ASD have difficulty learning spoken language. Some children can use only single words, others can produce sounds, and some cannot talk at all. Communication is a vital life skill that contributes to better interaction and quality of life. Researchers believe that, for young children with communication impairments, generating and maintaining a mental representation that matches immediate reality is difficult, so building such capabilities is critical. A child with ASD can be taught to communicate in many different ways: for example, a child can learn to use sign language, to exchange objects or pictures, and to use a voice-output electronic device. All these methods of communication are efficient and useful, but we would also like to teach the child to interact using spoken language. For user acceptance testing, a questionnaire was distributed to the target users, special education teachers from PEMATA. Each respondent ran individual tests after the developer provided a brief explanation of the application; they then needed to scan the recipe book and test the augmented reality application on it.
APA, Harvard, Vancouver, ISO and other styles
48

Cardinale, Yudith C., Eduardo A. Blanco and Jesus Oliveira. "JADIMA: Virtual Machine Architecture for building JAVA Applications on Grid Platforms". CLEI Electronic Journal 9, no. 2 (1 December 2006). http://dx.doi.org/10.19153/cleiej.9.2.3.

Full text
Abstract (summary):
This paper describes JADIMA (Java Distributed Machine), a collaborative platform for constructing high-performance distributed Java applications. JADIMA is a system that automatically manages the remote libraries used in a Java application. It takes advantage of the portability, modularity, flexibility and object-oriented model of Java, while incorporating well-known communication and security techniques. The result is a simple and efficient distributed environment in which applications and data are easily shared and highly portable amongst heterogeneous platforms and multiple users. JADIMA allows the compilation and execution of Java applications that use distributed libraries, without the need to keep those libraries on either the developer's or the user's hosts. To illustrate the functionality and characteristics of JADIMA, we show examples of constructing real applications with several levels of library package dependencies in distributed environments.
APA, Harvard, Vancouver, ISO and other styles
49

Saghafian, Mina, Karin Laumann and Martin Rasmussen Skogstad. "Organizational Challenges of Development and Implementation of Virtual Reality Solution for Industrial Operation". Frontiers in Psychology 12 (22 September 2021). http://dx.doi.org/10.3389/fpsyg.2021.704723.

Full text
Abstract (summary):
This research investigated the organizational challenges related to the development and implementation of virtual reality (VR) technology for operation in a conservative heavy machinery industry. Incorporating a VR solution into heavy machinery equipment enhanced the safety and convenience of operation under dangerous work conditions. However, the development and implementation processes faced challenges, and the adoption of the solution by users was perceived to be slower than anticipated. We aimed to explore the main challenges that the developer organization faced and how these also influenced user organizations. Given the exploratory nature of the research, qualitative analysis was chosen: interviews were conducted, thematic analysis was applied, and the resulting themes and subthemes were identified and discussed. The results showed challenges related to technology maturity, managerial challenges regarding communication and support coordination, workload, and the management of multiple stakeholders. The findings emphasize the importance of attending to existing and potential organizational challenges before and throughout technological innovation. Theoretical and managerial implications are discussed, and a future research agenda is suggested.
APA, Harvard, Vancouver, ISO and other styles
50

Burgess, Jean, and Axel Bruns. "Twitter Archives and the Challenges of "Big Social Data" for Media and Communication Research". M/C Journal 15, no. 5 (11 October 2012). http://dx.doi.org/10.5204/mcj.561.

Full text
Abstract (summary):
Lists and Social Media
Lists have long been an ordering mechanism for computer-mediated social interaction. While far from being the first such mechanism, blogrolls offered an opportunity for bloggers to provide a list of their peers; the present generation of social media environments similarly provide lists of friends and followers. Where blogrolls and other earlier lists may have been user-generated, the social media lists of today are more likely to have been produced by the platforms themselves, and are of intrinsic value to the platform providers at least as much as to the users themselves; both Facebook and Twitter have highlighted the importance of their respective “social graphs” (their databases of user connections) as fundamental elements of their fledgling business models. This represents what Mejias describes as “nodocentrism,” which “renders all human interaction in terms of network dynamics (not just any network, but a digital network with a profit-driven infrastructure).” The communicative content of social media spaces is also frequently rendered in the form of lists. Famously, blogs are defined in the first place by their reverse-chronological listing of posts (Walker Rettberg), but the same is true for current social media platforms: Twitter, Facebook, and other social media platforms are inherently centred around an infinite, constantly updated and extended list of posts made by individual users and their connections. The concept of the list implies a certain degree of order, and the orderliness of content lists as provided through the latest generation of centralised social media platforms has also led to the development of more comprehensive and powerful, commercial as well as scholarly, research approaches to the study of social media. 
Using the example of Twitter, this article discusses the challenges of such “big data” research as it draws on the content lists provided by proprietary social media platforms.
Twitter Archives for Research
Twitter is a particularly useful source of social media data: using the Twitter API (the Application Programming Interface, which provides structured access to communication data in standardised formats) it is possible, with a little effort and sufficient technical resources, for researchers to gather very large archives of public tweets concerned with a particular topic, theme or event. Essentially, the API delivers very long lists of hundreds, thousands, or millions of tweets, and metadata about those tweets; such data can then be sliced, diced and visualised in a wide range of ways, in order to understand the dynamics of social media communication. Such research is frequently oriented around pre-existing research questions, but is typically conducted at unprecedented scale. The projects of media and communication researchers such as Papacharissi and de Fatima Oliveira, Wood and Baughman, or Lotan, et al.—to name just a handful of recent examples—rely fundamentally on Twitter datasets which now routinely comprise millions of tweets and associated metadata, collected according to a wide range of criteria. What is common to all such cases, however, is the need to make new methodological choices in the processing and analysis of such large datasets on mediated social interaction. Our own work is broadly concerned with understanding the role of social media in the contemporary media ecology, with a focus on the formation and dynamics of interest- and issues-based publics. 
We have mined and analysed large archives of Twitter data to understand contemporary crisis communication (Bruns et al), the role of social media in elections (Burgess and Bruns), and the nature of contemporary audience engagement with television entertainment and news media (Harrington, Highfield, and Bruns). Using a custom installation of the open source Twitter archiving tool yourTwapperkeeper, we capture and archive all the available tweets (and their associated metadata) containing a specified keyword (like “Olympics” or “dubstep”), name (Gillard, Bieber, Obama) or hashtag (#ausvotes, #royalwedding, #qldfloods). In their simplest form, such Twitter archives are commonly stored as delimited (e.g. comma- or tab-separated) text files, with each of the following values in a separate column:
text: contents of the tweet itself, in 140 characters or less
to_user_id: numerical ID of the tweet recipient (for @replies)
from_user: screen name of the tweet sender
id: numerical ID of the tweet itself
from_user_id: numerical ID of the tweet sender
iso_language_code: code (e.g. en, de, fr, ...) of the sender’s default language
source: client software used to tweet (e.g. Web, Tweetdeck, ...)
profile_image_url: URL of the tweet sender’s profile picture
geo_type: format of the sender’s geographical coordinates
geo_coordinates_0: first element of the geographical coordinates
geo_coordinates_1: second element of the geographical coordinates
created_at: tweet timestamp in human-readable format
time: tweet timestamp as a numerical Unix timestamp
In order to process the data, we typically run a number of our own scripts (written in the programming language Gawk) which manipulate or filter the records in various ways, and apply a series of temporal, qualitative and categorical metrics to the data, enabling us to discern patterns of activity over time, as well as to identify topics and themes, key actors, and the relations among them; in some circumstances we may also undertake further processes of filtering and close textual analysis of the content of the tweets. Network analysis (of the relationships among actors in a discussion; or among key themes) is undertaken using the open source application Gephi. While a detailed methodological discussion is beyond the scope of this article, further details and examples of our methods and tools for data analysis and visualisation, including copies of our Gawk scripts, are available on our comprehensive project website, Mapping Online Publics. In this article, we reflect on the technical, epistemological and political challenges of such uses of large-scale Twitter archives within media and communication studies research, positioning this work in the context of the phenomenon that Lev Manovich has called “big social data.” In doing so, we recognise that our empirical work on Twitter is concerned with a complex research site that is itself shaped by a complex range of human and non-human actors, within a dynamic, indeed volatile media ecology (Fuller), and using data collection and analysis methods that are in themselves deeply embedded in this ecology. 
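As an illustration of the kind of temporal slicing described here (not the authors' actual Gawk scripts), the sketch below counts tweets per clock hour from a tab-separated archive whose column names follow the field list in the text; the sample rows are invented.

```python
import csv
import io
from datetime import datetime, timezone

# Sketch: per-hour activity counts from a delimited Twitter archive with a
# "time" column holding a numerical Unix timestamp, as described in the text.
def tweets_per_hour(archive_text, delimiter="\t"):
    counts = {}
    for row in csv.DictReader(io.StringIO(archive_text), delimiter=delimiter):
        stamp = datetime.fromtimestamp(int(row["time"]), tz=timezone.utc)
        hour = stamp.strftime("%Y-%m-%d %H:00")   # truncate to the hour
        counts[hour] = counts.get(hour, 0) + 1
    return counts

# Invented three-row archive: two tweets in one hour, one in the next.
sample = (
    "text\tfrom_user\ttime\n"
    "hello\talice\t1349913600\n"
    "world\tbob\t1349913660\n"
    "again\talice\t1349917200\n"
)
hourly = tweets_per_hour(sample)
```

The same pattern extends to the other metrics mentioned (per-user counts, @reply networks) by grouping on `from_user` or `to_user_id` instead of the truncated timestamp.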
“Big Social Data”
As Manovich’s term implies, the Big Data paradigm has recently arrived in media, communication and cultural studies—significantly later than it did in the hard sciences, in more traditionally computational branches of social science, and perhaps even in the first wave of digital humanities research (which largely applied computational methods to pre-existing, historical “big data” corpora)—and this shift has been provoked in large part by the dramatic quantitative growth and apparently increased cultural importance of social media—hence, “big social data.” As Manovich puts it: For the first time, we can follow [the] imaginations, opinions, ideas, and feelings of hundreds of millions of people. We can see the images and the videos they create and comment on, monitor the conversations they are engaged in, read their blog posts and tweets, navigate their maps, listen to their track lists, and follow their trajectories in physical space. (Manovich 461) This moment has arrived in media, communication and cultural studies because of the increased scale of social media participation and the textual traces that this participation leaves behind—allowing researchers, equipped with digital tools and methods, to “study social and cultural processes and dynamics in new ways” (Manovich 461). However, and crucially for our purposes in this article, many of these scholarly possibilities would remain latent if it were not for the widespread availability of Open APIs for social software (including social media) platforms. 
APIs are technical specifications of how one software application should access another, thereby allowing the embedding or cross-publishing of social content across Websites (so that your tweets can appear in your Facebook timeline, for example), or allowing third-party developers to build additional applications on social media platforms (like the Twitter user ranking service Klout), while also allowing platform owners to impose de facto regulation on such third-party uses via the same code. While platform providers do not necessarily have scholarship in mind, the data access affordances of APIs are also available for research purposes. As Manovich notes, until very recently almost all truly “big data” approaches to social media research had been undertaken by computer scientists (464). But as part of a broader “computational turn” in the digital humanities (Berry), and because of the increased availability to non-specialists of data access and analysis tools, media, communication and cultural studies scholars are beginning to catch up. Many of the new, large-scale research projects examining the societal uses and impacts of social media—including our own—which have been initiated by various media, communication, and cultural studies research leaders around the world have begun their work by taking stock of, and often substantially extending through new development, the range of available tools and methods for data analysis. The research infrastructure developed by such projects, therefore, now reflects their own disciplinary backgrounds at least as much as it does the fundamental principles of computer science. In turn, such new and often experimental tools and methods necessarily also provoke new epistemological and methodological challenges. 
The Twitter API and Twitter Archives

The Open API was a key aspect of mid-2000s ideas about the value of the open Web and “Web 2.0” business models (O’Reilly), emphasising the open, cross-platform sharing of content as well as promoting innovation at the margins via third-party application development—and it was in this ideological environment that the microblogging service Twitter launched and experienced rapid growth in popularity among users and developers alike. As José van Dijck cogently argues, however, a complex interplay of technical, economic and social dynamics has seen Twitter shift from a relatively open, ad hoc and user-centred platform toward a more formalised media business:

For Twitter, the shift from being primarily a conversational communication tool to being a global, ad-supported followers tool took place in a relatively short time span. This shift did not simply result from the owner’s choice for a distinct business model or from the company’s decision to change hardware features. Instead, the proliferation of Twitter as a tool has been a complex process in which technological adjustments are intricately intertwined with changes in user base, transformations of content and choices for revenue models. (van Dijck 343)

The specifications of Twitter’s API, as well as the written guidelines for its use by developers (Twitter, “Developer Rules”), are an excellent example of these “technological adjustments” and the ways they are deeply intertwined with Twitter’s search for a viable revenue model.
These changes show how the apparent semantic openness or “interpretive flexibility” of the term “platform” allows its meaning to be reshaped over time as the business models of platform owners change (Gillespie).

The release of the API was first announced on the Twitter blog in September 2006 (Stone), not long after the service’s launch but after some popular third-party applications (like a mashup of Twitter with Google Maps creating a dynamic display of recently posted tweets around the world) had already been developed. Since then Twitter has seen a flourishing of what the company itself referred to as the “Twitter ecosystem” (Twitter, “Developer Rules”), including third-party developed client software (like Twitterific and TweetDeck), institutional use cases (such as large-scale social media visualisations of the London Riots in The Guardian), and parasitic business models (including social media metrics services like HootSuite and Klout).

While the history of Twitter’s API rules and related regulatory instruments (such as its Developer Rules of the Road and Terms of Use) has many twists and turns, there have been two particularly important recent controversies around data access and control. First, the company locked out developers and researchers from direct “firehose” (very high volume) access to the Twitter feed; this was accompanied by a crackdown on free and public Twitter archiving services like 140Kit and the Web version of Twapperkeeper (Sample), and coincided with the establishment of what was at the time a monopoly content licensing arrangement between Twitter and Gnip, a company which charges commercial rates for high-volume API access to tweets (and content from other social media platforms). A second wave of controversy among the developer community occurred in August 2012 in response to Twitter’s release of its latest API rules (Sippey), which introduce further, significant limits to API use and usability in certain circumstances.
In essence, the result of these changes to the Twitter API rules, announced without meaningful consultation with the developer community which created the Twitter ecosystem, is a forced rebalancing of development activities: on the one hand, Twitter is explicitly seeking to “limit” (Sippey) the further development of API-based third-party tools which support “consumer engagement activities” (such as end-user clients), in order to boost the use of its own end-user interfaces; on the other hand, it aims to “encourage” the further development of “consumer analytics” and “business analytics” as well as “business engagement” tools. Implicit in these changes is a repositioning of Twitter users (increasingly as content consumers rather than active communicators), but also of commercial and academic researchers investigating the uses of Twitter (as providing a narrow range of existing Twitter “analytics” rather than engaging in a more comprehensive investigation both of how Twitter is used, and of how such uses continue to evolve). The changes represent an attempt by the company to cement a certain, commercially viable and valuable, vision of how Twitter should be used (and analysed), and to prevent or at least delay further evolution beyond this desired stage. 
Although such attempts to “freeze” development may well be in vain, given the considerable, documented role which the Twitter user base has historically played in exploring new and unforeseen uses of Twitter (Bruns), they undermine scholarly research efforts to examine actual Twitter uses at least temporarily—meaning that researchers are increasingly forced to invest time and resources in finding workarounds for the new restrictions imposed by the Twitter API.

Technical, Political, and Epistemological Issues

In their recent article “Critical Questions for Big Data,” danah boyd and Kate Crawford have drawn our attention to the limitations, politics and ethics of big data approaches in the social sciences more broadly, but also touching on social media as a particularly prevalent site of social datamining. In response, we offer the following complementary points specifically related to data-driven Twitter research relying on archives of tweets gathered using the Twitter API.

First, somewhat differently from most digital humanities (where researchers often begin with a large pre-existing textual corpus), in the case of Twitter research we have no access to an original set of texts—we can access only what Twitter’s proprietary and frequently changing API will provide. The tools Twitter researchers use rely on various combinations of parts of the Twitter API—or, more accurately, the various Twitter APIs (particularly the Search and Streaming APIs).
As discussed above, of course, in providing an API, Twitter is driven not by scholarly concerns but by an attempt to serve a range of potentially value-generating end-users—particularly those with whom Twitter can create business-to-business relationships, as in their recent exclusive partnership with NBC in covering the 2012 London Olympics.

The following section from Twitter’s own developer FAQ highlights the potential conflicts between the business-case usage scenarios under which the APIs are provided and the actual uses to which they are often put by academic researchers or other dataminers:

Twitter’s search is optimized to serve relevant tweets to end-users in response to direct, non-recurring queries such as #hashtags, URLs, domains, and keywords. The Search API (which also powers Twitter’s search widget) is an interface to this search engine. Our search service is not meant to be an exhaustive archive of public tweets and not all tweets are indexed or returned. Some results are refined to better combat spam and increase relevance. Due to capacity constraints, the index currently only covers about a week’s worth of tweets. (Twitter, “Frequently Asked Questions”)

Because external researchers do not have access to the full, “raw” data, against which we could compare the retrieved archives which we use in our later analyses, and because our data access regimes rely so heavily on Twitter’s APIs—each with its technical quirks and limitations—it is impossible for us to say with any certainty that we are capturing a complete archive or even a “representative” sample (whatever “representative” might mean in a data-driven, textualist paradigm). In other words, the “lists” of tweets delivered to us on the basis of a keyword search are not necessarily complete; and there is no way of knowing how incomplete they are.
The total yield of even the most robust capture system (using the Streaming API and not relying only on Search) depends on a number of variables: rate limiting, the filtering and spam-limiting functions of Twitter’s search algorithm, server outages and so on; further, because Twitter prohibits the sharing of data sets, it is difficult to compare notes with other research teams.

In terms of epistemology, too, the primary reliance on large datasets produces a new mode of scholarship in media, communication and cultural studies: what emerges is a form of data-driven research which tends towards abductive reasoning; in doing so, it highlights tensions between the traditional research questions in discourse- or text-based disciplines like media and communication studies, and the assumptions and modes of pattern recognition that are required when working from the “inside out” of a corpus, rather than from the outside in (for an extended discussion of these epistemological issues in the digital humanities more generally, see Dixon).

Finally, even the heuristics of our analyses of Twitter datasets are mediated by the API: the datapoints that are hardwired into the data naturally become the most salient, further shaping the type of analysis that can be done.
For example, a common process in our research is to use the syntax of tweets to categorise each one as one of the following types of activity:

original tweets: tweets which are neither @replies nor retweets
retweets: tweets which contain RT @user… (or similar)
unedited retweets: retweets which start with RT @user…
edited retweets: retweets which do not start with RT @user…
genuine @replies: tweets which contain @user, but are not retweets
URL sharing: tweets which contain URLs

(Retweets which are made using the Twitter “retweet button,” resulting in verbatim passing-along without the RT @user syntax or an opportunity to add further comment during the retweet process, form yet another category, which cannot be tracked particularly effectively using the Twitter API.)

These categories are driven by the textual and technical markers of specific kinds of interactions that are built into the syntax of Twitter itself (@replies or @mentions, RTs), and specific modes of referentiality (URLs). All of them focus on (and thereby tend to privilege) more informational modes of communication, rather than the ephemeral, affective, or ambiently intimate uses of Twitter that can be illuminated more easily using ethnographic approaches: approaches that can actually focus on the individual user, their social contexts, and the broader cultural context of the traces they leave on Twitter.

Conclusions

In this article we have described and reflected on some of the sociotechnical, political and economic aspects of the lists of tweets—the structured Twitter data upon which our research relies—which may be gathered using the Twitter API.
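The syntactic tweet categories described earlier amount to a simple rule-based classifier over the tweet text. The following is a minimal sketch of such a classifier; the regular expressions and function names are illustrative assumptions for exposition, not the actual capture tooling used in the projects discussed here.

```python
import re

def classify_tweet(text):
    """Assign a tweet to one of the syntactic categories described
    earlier. A rough sketch only: button-based retweets carry no
    'RT @user' marker in the text and cannot be detected this way."""
    if re.search(r"\bRT @\w+", text):
        # Unedited retweets begin with the RT marker; edited
        # retweets add a comment before it.
        return "unedited retweet" if text.startswith("RT @") else "edited retweet"
    if re.search(r"@\w+", text):
        return "genuine @reply"
    return "original tweet"

def shares_url(text):
    """URL sharing is an overlapping category, tracked separately
    from the reply/retweet distinction."""
    return re.search(r"https?://\S+", text) is not None
```

For instance, classify_tweet("RT @user great post") falls into the unedited-retweet category, while classify_tweet("so true! RT @user great post") is an edited retweet.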
As we have argued elsewhere (Bruns and Burgess)—and, hopefully, have begun to demonstrate in this paper—media and communication studies scholars who are actually engaged in using computational methods are well-positioned to contribute to both the methodological advances we highlight at the beginning of this paper and the political debates around computational methods in the “big social data” moment on which the discussion in the second part of the paper focusses. One pressing issue in the area of methodology is to build on current advances to bring together large-scale datamining approaches with ethnographic and other qualitative approaches, especially including close textual analysis. More broadly, in engaging with the “big social data” moment there is a pressing need for the development of code literacy in media, communication and cultural studies.

In the first place, such literacy has important instrumental uses: as Manovich argues, much big data research in the humanities requires costly and time-consuming (and sometimes alienating) partnerships with technical experts (typically, computer scientists), because the free tools available to non-programmers are still limited in utility in comparison to what can be achieved using raw data and original code (Manovich 472).

But code literacy is also a requirement of scholarly rigour in the context of what David Berry calls the “computational turn,” representing a “third wave” of Digital Humanities. Berry suggests code and software might increasingly become in themselves objects of, and not only tools for, research:

I suggest that we introduce a humanistic approach to the subject of computer code, paying attention to the wider aspects of code and software, and connecting them to the materiality of this growing digital world.
With this in mind, the question of code becomes increasingly important for understanding in the digital humanities, and serves as a condition of possibility for the many new computational forms that mediate our experience of contemporary culture and society. (Berry 17)

A first step here lies in developing a more robust working knowledge of the conceptual models and methodological priorities assumed by the workings of both the tools and the sources we use for “big social data” research. Understanding how something like the Twitter API mediates the cultures of use of the platform, as well as reflexively engaging with its mediating role in data-driven Twitter research, promotes a much more materialist critical understanding of the politics of the social media platforms (Gillespie) that are now such powerful actors in the media ecology.

References

Berry, David M. “Introduction: Understanding Digital Humanities.” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 1-20.
boyd, danah, and Kate Crawford. “Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662-79.
Bruns, Axel. “Ad Hoc Innovation by Users of Social Networks: The Case of Twitter.” ZSI Discussion Paper 16 (2012). 18 Sep. 2012 ‹https://www.zsi.at/object/publication/2186›.
Bruns, Axel, and Jean Burgess. “Notes towards the Scientific Study of Public Communication on Twitter.” Keynote presented at the Conference on Science and the Internet, Düsseldorf, 4 Aug. 2012. 18 Sep. 2012 ‹http://snurb.info/files/2012/Notes%20towards%20the%20Scientific%20Study%20of%20Public%20Communication%20on%20Twitter.pdf›.
Bruns, Axel, Jean Burgess, Kate Crawford, and Frances Shaw. “#qldfloods and @QPSMedia: Crisis Communication on Twitter in the 2011 South East Queensland Floods.” Brisbane: ARC Centre of Excellence for Creative Industries and Innovation, 2012. 18 Sep. 2012 ‹http://cci.edu.au/floodsreport.pdf›.
Burgess, Jean E., and Axel Bruns. “(Not) the Twitter Election: The Dynamics of the #ausvotes Conversation in Relation to the Australian Media Ecology.” Journalism Practice 6.3 (2012): 384-402.
Dixon, Dan. “Analysis Tool or Research Methodology: Is There an Epistemology for Patterns?” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 191-209.
Fuller, Matthew. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, Mass.: MIT P, 2005.
Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12.3 (2010): 347-64.
Harrington, Stephen, Timothy J. Highfield, and Axel Bruns. “More than a Backchannel: Twitter and Television.” Audience Interactivity and Participation. Ed. José Manuel Noguera. Brussels: COST Action ISO906 Transforming Audiences, Transforming Societies, 2012. 13-17. 18 Sep. 2012 ‹http://www.cost-transforming-audiences.eu/system/files/essays-and-interview-essays-18-06-12.pdf›.
Lotan, Gilad, Erhardt Graeff, Mike Ananny, Devin Gaffney, Ian Pearce, and danah boyd. “The Arab Spring: The Revolutions Were Tweeted: Information Flows during the 2011 Tunisian and Egyptian Revolutions.” International Journal of Communication 5 (2011): 1375-1405. 18 Sep. 2012 ‹http://ijoc.org/ojs/index.php/ijoc/article/view/1246/613›.
Manovich, Lev. “Trending: The Promises and the Challenges of Big Social Data.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 460-75.
Mejias, Ulises A. “Liberation Technology and the Arab Spring: From Utopia to Atopia and Beyond.” Fibreculture Journal 20 (2012). 18 Sep. 2012 ‹http://twenty.fibreculturejournal.org/2012/06/20/fcj-147-liberation-technology-and-the-arab-spring-from-utopia-to-atopia-and-beyond/›.
O’Reilly, Tim. “What Is Web 2.0? Design Patterns and Business Models for the Next Generation of Software.” O’Reilly Network 30 Sep. 2005. 18 Sep. 2012 ‹http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html›.
Papacharissi, Zizi, and Maria de Fatima Oliveira. “Affective News and Networked Publics: The Rhythms of News Storytelling on #Egypt.” Journal of Communication 62.2 (2012): 266-82.
Sample, Mark. “The End of Twapperkeeper (and What to Do about It).” ProfHacker. The Chronicle of Higher Education 8 Mar. 2011. 18 Sep. 2012 ‹http://chronicle.com/blogs/profhacker/the-end-of-twapperkeeper-and-what-to-do-about-it/31582›.
Sippey, Michael. “Changes Coming in Version 1.1 of the Twitter API.” Twitter Developers Blog 16 Aug. 2012. 18 Sep. 2012 ‹https://dev.twitter.com/blog/changes-coming-to-twitter-api›.
Stone, Biz. “Introducing the Twitter API.” Twitter Blog 20 Sep. 2006. 18 Sep. 2012 ‹http://blog.twitter.com/2006/09/introducing-twitter-api.html›.
Twitter. “Developer Rules of the Road.” Twitter Developers Website 17 May 2012. 18 Sep. 2012 ‹https://dev.twitter.com/terms/api-terms›.
Twitter. “Frequently Asked Questions.” 18 Sep. 2012 ‹https://dev.twitter.com/docs/faq›.
Van Dijck, José. “Tracing Twitter: The Rise of a Microblogging Platform.” International Journal of Media and Cultural Politics 7.3 (2011): 333-48.
Walker Rettberg, Jill. Blogging. Cambridge: Polity, 2008.
Wood, Megan M., and Linda Baughman. “Glee Fandom and Twitter: Something New, or More of the Same Old Thing?” Communication Studies 63.3 (2012): 328-44.