Journal articles on the topic 'Multimedia systems Programming'


Consult the top 50 journal articles for your research on the topic 'Multimedia systems Programming.'


1

POZZI, SILVANO, and LUCA GIACHINO. "AN OBJECT-ORIENTED PROGRAMMING ENVIRONMENT FOR MULTIMEDIA COOPERATIVE INFORMATION SYSTEMS." International Journal of Cooperative Information Systems 3, no. 1 (1994): 3–23. http://dx.doi.org/10.1142/s0218215794000028.

Abstract:
This paper illustrates an object-oriented programming environment, called Application Conference Interface (ACI), designed to facilitate the implementation of cooperative information systems. It interfaces developers of cooperative applications with services provided by a software platform called ImagineDesk. The platform offers a rich set of services that developers of cooperative applications can exploit to manage applications, exchange multimedia data, and control users' interactions according to their roles. Basically, the ACI provides a set of local abstractions of remote services. These abstractions take the form of local objects, hiding the details of the underlying physical network from the application developer. By exploiting the object-oriented paradigm, the ACI clearly confines host-environment and network constraints to a few easily upgradable objects, resulting in a highly system-independent architecture.
2

Wang, Da Quan, Yong Qiang Shi, Tian Wang, Xiao Kai Wu, and Qi Li Zhou. "Secured Publishing AntiBlocking Multimedia Realtime Billboard Display." Advanced Materials Research 282-283 (July 2011): 104–7. http://dx.doi.org/10.4028/www.scientific.net/amr.282-283.104.

Abstract:
Large-screen display systems have good market prospects, and gaps remain in what has been published about them. Before presenting the programming work, this thesis gives a fairly comprehensive overview and discussion of the software technologies currently involved in developing large-screen display systems. In introducing the design of a remote large-screen system and the related implementation techniques, the thesis discusses the development of the remote system's software and details the video programming. Due to time and capacity constraints, only part of the planned functionality was actually implemented for the thesis, and the system can be further improved.
3

Wijaya, Marvin Chandra. "Perancangan Pembelajaran Fisika Menggunakan Multimedia Interaktif untuk Meningkatkan Minat Mahasiswa terhadap Mata Kuliah Fisika." Science and Physics Education Journal (SPEJ) 3, no. 1 (2019): 28–36. http://dx.doi.org/10.31539/spej.v3i1.928.

Abstract:
This study aims to measure the increase in interest in Physics subjects among students of the Maranatha Christian University Computer Systems Study Program. The intervention uses an interactive multimedia-based learning system designed with SMIL (Synchronized Multimedia Integration Language). The physics learning system designed in this study covers course material on object motion (uniform straight motion / GLB, uniformly accelerated straight motion / GLBB, free-fall motion / GJB, vertical upward motion / GVA, vertical downward motion / GVB). Thirty students were sampled to gauge interest in the Physics course, with data collected before and after delivering the learning materials through interactive multimedia. The average interest in the Physics course was 7.12 on a scale of 10 before the use of the interactive multimedia learning materials, increasing to 8.57 afterwards. Hypothesis testing used a paired t-test, and the statistical results showed a significant increase in student interest in Physics.
Keywords: Physics Learning, Interactive Multimedia, SMIL, Object Motion
4

Ci, Song, Haohong Wang, and Dalei Wu. "A Theoretical Framework for Quality-Aware Cross-Layer Optimized Wireless Multimedia Communications." Advances in Multimedia 2008 (2008): 1–10. http://dx.doi.org/10.1155/2008/543674.

Abstract:
Although cross-layer design has been regarded as one of the most effective and efficient approaches to multimedia communications over wireless networks, and a plethora of research has been done in this area, there is still no rigorous mathematical model for gaining an in-depth understanding of cross-layer design tradeoffs spanning from the application layer to the physical layer. As a result, many existing cross-layer designs enhance the performance of certain layers at the price of either introducing side effects into overall system performance or violating the syntax and semantics of the layered network architecture. The lack of a rigorous theoretical foundation forces existing cross-layer designs to rely on heuristic approaches, which cannot guarantee sound results efficiently and consistently. In this paper, we attempt to fill this gap and develop a new methodological foundation for cross-layer design in wireless multimedia communications. We first introduce a delay-distortion-driven cross-layer optimization framework that can be solved as a large-scale dynamic programming problem. Then, we present a new approximate dynamic programming approach based on significance measures and sensitivity analysis for high-dimensional nonlinear cross-layer optimization in support of real-time multimedia applications. The major contribution of this paper is the first rigorous theoretical model for integrated cross-layer control and optimization in wireless multimedia communications, providing design insights into multimedia communications over current wireless networks and shedding light on design optimization of next-generation wireless multimedia systems and networks.
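The delay-distortion tradeoff described above can be illustrated by a toy dynamic program: choose one configuration per protocol layer so that total delay stays within a budget while total distortion is minimized. All layer names and numbers below are invented for illustration, not taken from the paper:

```python
import functools

# Each layer offers (delay, distortion) configuration choices -- invented toy values.
layers = [
    [(1, 5.0), (2, 3.0), (3, 1.0)],   # e.g. application-layer coding modes
    [(1, 4.0), (2, 2.0)],             # e.g. link-layer retry settings
    [(1, 3.0), (2, 1.5)],             # e.g. physical-layer modulation
]

@functools.lru_cache(maxsize=None)
def min_distortion(layer, budget):
    """Minimum total distortion from `layer` onward with `budget` delay units left."""
    if layer == len(layers):
        return 0.0
    best = float("inf")
    for delay, dist in layers[layer]:
        if delay <= budget:
            best = min(best, dist + min_distortion(layer + 1, budget - delay))
    return best

result = min_distortion(0, 5)  # best distortion within a delay budget of 5
```

The paper's framework is far larger and uses approximate dynamic programming, but the recursive structure (a per-layer decision constrained by a shared delay budget) is the same idea.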
5

Wang, Cuimin. "Research on Optimal Design of Short-Life Cycle Product Logistics Supply Chain Based on Multicriteria Decision Model." Security and Communication Networks 2021 (April 1, 2021): 1–12. http://dx.doi.org/10.1155/2021/5564831.

Abstract:
To study the particular production, distribution, transportation, and sales characteristics of short-life-cycle multimedia products, this paper examines cost and benefit optimization in a three-level supply chain network consisting of suppliers, producers, and distribution centers. To describe the retailer's actual inventory level of multimedia products effectively, the paper introduces a new nonlinear function defining real-time multimedia product inventory and builds a biobjective nonlinear mixed-integer programming model to estimate it. The results show that the model can not only provide the overall cost and the optimal decision-making plan for the short-life-cycle supply chain, but its design method also outperforms the standard constraint method. These results have important potential guiding significance for the research and design of supply chain network structures, the measurement of multimedia product inventory levels, and the improvement of customer satisfaction. Optimizing the design of the logistics supply chain for short-life-cycle multimedia products is beneficial in the context of new retail.
6

Blair, G. S., G. Coulson, M. Papathomas, et al. "A programming model and system infrastructure for real-time synchronization in distributed multimedia systems." IEEE Journal on Selected Areas in Communications 14, no. 1 (1996): 249–63. http://dx.doi.org/10.1109/49.481709.

7

Fonseca Chiu, Lotzy Beatriz. "Taller de objetos de aprendizaje multimedia para compartir." Tecnología Educativa Revista CONAIC 1, no. 1 (2021): 8–17. http://dx.doi.org/10.32671/terc.v1i1.172.

Abstract:
The paper aims to disseminate the results of implementing a workshop on multimedia learning objects, in which university students developed learning objects that included multimedia content. These learning objects were then shared through presentations at schools of various educational levels, including preschool, primary, and high school. The learning objects were created in the Multimedia Systems Programming course, which I teach at the University Center of Exact Sciences and Engineering (CUCEI) of the University of Guadalajara, during the 2013 B and 2014 A terms.
8

Iliev, Ilko Tsonev, and Svetlana Zhelyazkova Vasileva. "An Innovative Approach." International Journal of Technology and Educational Marketing 4, no. 2 (2014): 18–27. http://dx.doi.org/10.4018/ijtem.2014070102.

Abstract:
Content management systems (CMS) automate and facilitate the process of adding and modifying the content of Web sites, and the organization, control, and publication of large numbers of documents and other content, such as images and multimedia resources. CMS are attractive to specialists in various fields of human activity who want to publish on the Internet but have little knowledge of computer programming, and of web programming in particular. The article reviews some opportunities provided by the Drupal CMS for designing and producing electronic textbooks, using an electronic textbook on spreadsheets as an example.
9

Shanmugapriya, Kumaraperumal, and RajaMani Suja Mani Malar. "An Effective Technique to Track Objects with the Aid of Rough Set Theory and Evolutionary Programming." Journal of Intelligent Systems 28, no. 1 (2019): 1–13. http://dx.doi.org/10.1515/jisys-2016-0351.

Abstract:
Due to its wide range of applications, multimedia has shown stupendous growth in its real-world impact. Texts, images, audio, and video are the different forms of multimedia that humans use in applications such as education and surveillance. A wide range of research has been carried out in this area, and in this paper we propose an object-tracking approach based on rough set theory combined with the eminent soft computing technique of evolutionary programming. Initially, the input video is segregated into frames; the frames belonging to particular shots are then identified through shot segmentation, after which the object to be tracked is identified manually. Subsequently, shape and texture features are extracted and rough set theory is applied to identify the presence of the object in the frames. A genetic algorithm (GA) is then utilized in the object-monitoring process to mark the object with a variant color. As a result, the selected object is tracked effectively.
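The genetic-algorithm step in a pipeline like this can be sketched as a toy search for the best-matching object position in a frame. The fitness function and every parameter below are invented stand-ins for the real match score, not the paper's method:

```python
import random

random.seed(42)

TARGET = (30, 40)  # "true" object position in a 100x100 frame (invented)

def fitness(pos):
    """Higher is better: negative squared distance to the target position."""
    return -((pos[0] - TARGET[0]) ** 2 + (pos[1] - TARGET[1]) ** 2)

def evolve(generations=60, pop_size=30, mutation=3):
    """Evolve candidate (x, y) positions toward the best match."""
    pop = [(random.randint(0, 99), random.randint(0, 99)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):
            p, q = random.sample(parents, 2)
            child = (p[0], q[1])                # one-point crossover on (x, y)
            child = (min(99, max(0, child[0] + random.randint(-mutation, mutation))),
                     min(99, max(0, child[1] + random.randint(-mutation, mutation))))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In an actual tracker the fitness would compare shape and texture features of a candidate region against the reference object rather than a known target coordinate.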
10

SCHERP, ANSGAR, CARSTEN SAATHOFF, and STEFAN SCHEGLMANN. "A PATTERN SYSTEM FOR DESCRIBING THE SEMANTICS OF STRUCTURED MULTIMEDIA DOCUMENTS." International Journal of Semantic Computing 6, no. 3 (2012): 263–88. http://dx.doi.org/10.1142/s1793351x12400089.

Abstract:
Today's metadata models and standards often focus on a single media type, lack combinability with other metadata models, or are limited in the features they support. Thus they are not sufficient to describe the semantics of rich, structured multimedia documents. To overcome these limitations, we have developed a comprehensive model for representing multimedia metadata, the Multimedia Metadata Ontology (M3O). The M3O was developed through an extensive analysis of related work and abstracts from the features of existing metadata models and standards. It is based on the foundational ontology DOLCE+DnS Ultralight and makes use of ontology design patterns. The M3O serves as a generic modeling framework for integrating existing metadata models and standards rather than replacing them. As such, the M3O can be used internally as the semantic data model within complex multimedia applications such as authoring tools or multimedia management systems. To make use of the M3O in concrete multimedia applications, a generic application programming interface (API) has been implemented on top of a sophisticated persistence layer that provides explicit support for ontology design patterns. To demonstrate the applicability of the M3O API, we have integrated and applied it with our SemanticMM4U framework for the multi-channel generation of semantically annotated multimedia documents.
11

Risso-Montaldo, Claudio Enrique, Franco Rafael Robledo-Amoza, and Sergio Enrique Nesmachnow-Cánovas. "Solving the Quality of Service Multicast Tree Problem." Proceedings of the Institute for System Programming of the RAS 33, no. 2 (2021): 163–72. http://dx.doi.org/10.15514/ispras-2021-33(2)-10.

Abstract:
This article presents a flow-based mixed-integer programming formulation for the Quality of Service Multicast Tree problem, a relevant problem in today's telecommunication networks for distributing multimedia over cloud-based Internet systems. To the best of our knowledge, no previous mixed-integer programming formulation has been proposed for the Quality of Service Multicast Tree problem. An experimental evaluation is performed over a set of realistic problem instances from SteinLib to show that standard exact solvers can find solutions to real-world-size instances. The exact method is applied to benchmark the proposed formulation, finding optimal solutions and low feasible-to-optimal gaps in reasonable execution times.
12

Pisanelli, D. M., F. L. Ricci, F. Consorti, A. Piermattei, and F. Ferri. "Toward a General Model for the Description of Multimedia Clinical Data." Methods of Information in Medicine 37, no. 03 (1998): 278–84. http://dx.doi.org/10.1055/s-0038-1634537.

Abstract:
The patient folder integrates information originating from heterogeneous sources. For this reason, computerized tools for patient data management should exploit the advantages of multimedia and offer an integrated environment for data presentation and for image and biosignal visualization. Object-oriented modeling is the best approach for designing systems for multimedia patient folder management. We propose an object-oriented model able to define the entities constituting the patient folder and their logical organization. The model is flexible enough to adapt to the most varied clinical environments, and it allows the physician to structure the information needed for his or her patient folder without employing a programming language.
13

Chen, Yen-Kuang, David W. Lin, John V. McCanny, and Edwin H. M. Sha. "Guest Editorial: Special Issue on Design and Programming of Signal Processors for Multimedia Communication." Journal of Signal Processing Systems 51, no. 3 (2008): 207–8. http://dx.doi.org/10.1007/s11265-007-0159-1.

14

Hrabovskyi, Yevhen, Natalia Brynza, and Olga Vilkhivska. "DEVELOPMENT OF INFORMATION VISUALIZATION METHODS FOR USE IN MULTIMEDIA APPLICATIONS." EUREKA: Physics and Engineering 1 (January 31, 2020): 3–17. http://dx.doi.org/10.21303/2461-4262.2020.001103.

Abstract:
The aim of the article is to develop a technique for visualizing information for use in multimedia applications. The study proposes first compiling a list of key terms of the subject area and creating data tables. Based on the structuring of fragments of the subject area, key terms are displayed visually as pictograms and as images, and the data tables are displayed visually as well. The types of visual structures that should be used to visualize information for later use in multimedia applications are considered, and existing visual structures in desktop publishing systems and word processors are analyzed. To build a mechanism for visualizing information about a task as a presentation, a multimedia application is developed with Microsoft Visual Studio in the C# programming language, using the Windows Forms application programming interface. An algorithm is proposed for extracting the pieces of information text that contain key terms. Tabular data are visualized with the 'parametric ruler' metaphorical visualization method, based on the metaphor of a slide rule; its use is demonstrated on data for the font design of children's publications. Interactivity is ensured by letting the user enter the size value of interest and see the ratio of the values of the other parameters. The practical result of the work is a multimedia application, 'Visualization of Publishing Standards', for visualizing information on the font design of publications for children. The software implementation yields finished multimedia applications which, according to the standardization visualization technique for the prepress preparation of publications, are the final product of the third stage of the presentation of the visual form.
15

Sam'ani, Sam'ani. "Rancang Bangun Aplikasi Pengawasan Dan Pengendalian Komputer Laboratorium Multimedia STMIK Palangkaraya." Jurnal Sains Komputer dan Teknologi Informasi 1, no. 1 (2018): 33–38. http://dx.doi.org/10.33084/jsakti.v1i1.548.

Abstract:
The progress of technology and computer surveillance systems has made a significant positive contribution in all fields, supporting and improving the performance of systems. One such advance is computer monitoring and control. The monitoring and control systems used in the computer laboratories at the College of Information and Computer Management (STMIK) Palangkaraya are paid systems that must be purchased for each laboratory room, so not all rooms can be monitored, since purchasing the system for every room is costly. Therefore, to supervise all STMIK Palangkaraya laboratory rooms, a computer monitoring and control system that can be used free of charge was built. The software development method used is the System Development Life Cycle (SDLC) with a waterfall development approach, and the programming language used is Microsoft Visual Basic. Tests on the system showed that all processes and facilities in the system run in accordance with the test items that were set.
16

Segec, Pavel, and Tatiana Kovacikova. "A Survey of Open Source Products for Building a SIP Communication Platform." Advances in Multimedia 2011 (2011): 1–21. http://dx.doi.org/10.1155/2011/372591.

Abstract:
The Session Initiation Protocol (SIP) is a multimedia signalling protocol that has evolved into a widely adopted communication standard. The integration of SIP into existing IP networks has fostered IP networks becoming a convergence platform for both real-time and non-real-time multimedia communications. This converged platform integrates data, voice, video, presence, messaging, and conference services into a single network that offers new communication experiences for users. The open source community has contributed to SIP adoption through the development of open source software for both SIP clients and servers. In this paper, we provide a survey of open SIP systems that can be built using publicly available software. We identify SIP features for service development and programming, services and applications of a SIP-converged platform, and the most important technologies supporting SIP functionalities. We propose an advanced converged IP communication platform that uses SIP for service delivery. The platform supports audio and video calls, along with media services such as audio conferences, voicemail, presence, and instant messaging. Using SIP Application Programming Interfaces (APIs), the platform allows the deployment of advanced integrated services. The platform is implemented with open source software, and its architecture components run on standardized hardware with no need for special-purpose investments.
17

Putri, Hasanah, Iqbal Shadiq, and Gigin Gantini Putri. "Interactive Learning Media for Cellular Communication Systems using the Multimedia Development Life Cycle Model." Jurnal Online Informatika 6, no. 1 (2021): 1. http://dx.doi.org/10.15575/join.v6i1.544.

Abstract:
Observations of students in the Diploma of Telecommunications Engineering programme at Telkom University revealed that they have difficulty learning and understanding the call-processing and network-optimization chapters of the cellular communication systems course. This stems from the current learning media, which consist only of textbooks and PowerPoint slides and are considered unattractive; as a result, the learning process is ineffective and learning outcomes are low. In this study, an interactive learning medium was designed with the Multimedia Development Life Cycle (MDLC) method and Adobe Flash Professional CS6, using the ActionScript 2.0 programming language. The learning media were designed according to users' needs and the learning outcomes of cellular communication system courses. Functionality testing showed that 100% of the features work according to the design specifications, while user satisfaction testing yielded an average MOS of 4.73, which classifies the learning media as very good. Furthermore, quantitative testing showed an average quiz score of 81 after using the interactive learning media, indicating that the media can increase students' interest and thereby raise learning outcomes by 66% compared with previous years.
18

Korol, Alona, Olena Blashkova, Viktoriia Kravchenko, and Anna Khilya. "WEB-TECHNOLOGIES AND MULTIMEDIA SYSTEMS IN THE TRAINING OF PROFESSIONALS IN THE EDUCATION SYSTEM." ENVIRONMENT. TECHNOLOGIES. RESOURCES. Proceedings of the International Scientific and Practical Conference 2 (June 17, 2021): 244–48. http://dx.doi.org/10.17770/etr2021vol2.6570.

Abstract:
Contemporary technologies for training specialists in different professions envisage mastering the skills of a 'quality user' of computer technologies. At the same time, the training of primary school teachers and of specialists in inclusive and remedial education to use more complex multimedia systems, which requires an understanding of programming processes, has often been excluded from basic courses as an 'unnecessary' component. However, considering the current trends toward distance education, the 'rejuvenation' of 'advanced users', and the need to maintain an educator's reputation, the trend toward introducing such systemic courses into the training of specialists in primary, inclusive, and correctional education has become a kind of concept of professional competence. It was the needs of today's teachers to transfer knowledge through multimedia systems (creating interactive databases, web pages, blogs, or websites; preparing and conducting WebQuests; using computer games from producers or of their own development on platforms such as Wordwall, Etreniki, Flippity, and Scratch; etc.) that became the deciding factor for introducing specific topics on their use into training courses and expanding basic programmes of computer competency. The training process also began to include interaction with the training audience through social media. This multi-component work to develop a 'modern/advanced' teacher provides the basis not only for improving the quality of the educational process but also for individualizing it according to the needs of each participant and his or her special educational needs, making it possible to vary the complexity of tasks and their saturation with audiovisual information.
19

MacIntyre, Blair, Marco Lohse, Jay David Bolter, and Emmanuel Moreno. "Integrating 2-D Video Actors into 3-D Augmented-Reality Systems." Presence: Teleoperators and Virtual Environments 11, no. 2 (2002): 189–202. http://dx.doi.org/10.1162/1054746021470621.

Abstract:
In this paper, we discuss the integration of 2-D video actors into 3-D augmented-reality (AR) systems. In the context of our research on narrative forms for AR, we have found ourselves needing highly expressive content that is most easily created by human actors. We discuss the feasibility and utility of using video actors in an AR situation and then present our Video Actor Framework (including the VideoActor editor and the Video3D Java package) for easily integrating 2-D videos of actors into Java 3D, an object-oriented 3-D graphics programming environment. The framework is based on the idea of supporting tight spatial and temporal synchronization between the content of the video and the rest of the 3-D world. We present a number of illustrative examples that demonstrate the utility of the toolkit and editor. We close with a discussion and example of our recent work implementing these ideas in Macromedia Director, a popular multimedia production tool.
20

Zambelli, Cristian, Lorenzo Zuolo, Antonio Aldarese, Salvatrice Scommegna, Rino Micheloni, and Piero Olivo. "Assessing the Role of Program Suspend Operation in 3D NAND Flash Based Solid State Drives." Electronics 10, no. 12 (2021): 1394. http://dx.doi.org/10.3390/electronics10121394.

Abstract:
3D NAND Flash is the preferred storage medium for dense mass-storage applications, including solid-state drives and multimedia cards. Improving the latency of these systems is a mandatory task to narrow the gap between computing elements, such as CPUs and GPUs, and the storage environment. To this end, relatively time-consuming operations in the storage media, such as data programming and data erasing, need to be prioritized and potentially suspendable by shorter operations, like data reading, in order to improve the overall system quality of service. However, such benefits depend strongly on the storage characteristics and on the timing of the individual operations. In this work, we investigate, through an extensive characterization, the impact of suspending the data-programming operation in a 3D NAND Flash device. System-level simulations proved that such operations must be carefully characterized before exercising them on solid-state drives, in order to understand the performance benefits introduced and to disclose any potential shortcomings.
21

Hasan, Mohammed Zaki, and Tat-Chee Wan. "Optimized Quality of Service for Real-Time Wireless Sensor Networks Using a Partitioning Multipath Routing Approach." Journal of Computer Networks and Communications 2013 (2013): 1–18. http://dx.doi.org/10.1155/2013/497157.

Abstract:
Multimedia sensor networks for real-time applications have strict constraints on delay, packet loss, and energy consumption requirements. For example, video streaming in a disaster-management scenario requires careful handling to ensure that the end-to-end delay is within the acceptable range and the video is received properly without any distortion. The failure to transmit a video stream effectively occurs for many reasons, including sensor function limitations, excessive power consumption, and a lack of routing reliability. We propose a novel mathematical model for quality of service (QoS) route determination that enables a sensor to determine the optimal path for minimising resource use while satisfying the required QoS constraints. The proposed mathematical model uses the Lagrangian relaxation mixed integer programming technique to define critical parameters and appropriate objective functions for controlling the adaptive QoS constrained route discovery process. Performance trade-offs between QoS requirements and energy efficiency were simulated using the LINGO mathematical programming language. The proposed approach significantly improves the network lifetime, while reducing energy consumption and decreasing average end-to-end delays within the sensor network via optimised resource sharing in intermediate nodes compared with existing routing algorithms.
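The Lagrangian relaxation technique named in this abstract can be sketched on a toy route-selection problem: a delay constraint is moved into the objective with a multiplier that is then updated by subgradient steps. The routes, costs, and delays below are invented, not the paper's model:

```python
# Candidate routes with (name, energy cost, end-to-end delay) -- invented toy values.
routes = [("A", 10.0, 9.0), ("B", 6.0, 12.0), ("C", 8.0, 10.0)]
DELAY_BUDGET = 10.0

def solve_relaxed(lam):
    """Minimize cost + lam * (delay - budget): the relaxed (unconstrained) problem."""
    return min(routes, key=lambda r: r[1] + lam * (r[2] - DELAY_BUDGET))

lam, step = 0.0, 0.5
for _ in range(50):
    _, _, delay = solve_relaxed(lam)
    # Subgradient of the dual function: the constraint violation (delay - budget).
    lam = max(0.0, lam + step * (delay - DELAY_BUDGET))

best = solve_relaxed(lam)  # route chosen at the converged multiplier
```

With lam = 0 the cheapest (but too slow) route wins; as the multiplier grows, the delay penalty steers the choice to the cheapest route that meets the budget, which is the mechanism the paper applies at much larger scale.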
22

Kearns, Jodi, and Brian C. O’Connor. "Clownpants in the classroom? Hypnotizing chickens? Measurement of structural distraction in visual presentation documents." Journal of Documentation 70, no. 4 (2014): 526–43. http://dx.doi.org/10.1108/jd-01-2013-0009.

Abstract:
Purpose – The purpose of this paper is to consider the structure of entertainment media as a possible foundation for measuring aspects of visual presentations that could enhance or interfere with audience engagement. Design/methodology/approach – Factors that might account for the large number of negative comments about visual presentations are identified, and a method of calculating entropy measurements for form attributes of presentations is introduced. Findings – Entropy calculations provide a numerical measure of the structural elements that account for engagement or distraction. A set of peer evaluations of educational presentations is used to calibrate a distraction-factor algorithm. Research limitations/implications – Treating distraction as a consequence of document structure might enable engineering a balance between document structure and content in document formats not yet explored by mechanical entropy calculations. Practical implications – Mathematical calculations of structural elements (form attributes) support what multimedia presentation viewers have been observing for years (documented in numerous journals and newspapers from education to business to military fields): engineering PowerPoint presentations necessarily involves attention to engagement versus distraction in the audience. Originality/value – Exploring aspects of document structures has been demonstrated to calibrate viewer perceptions to calculated measurements in moving-image documents, and now in images and multimedia presentation documents, extending Claude Shannon's early work on communication channels and James Watt and Robert Krull's work on television programming.
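The entropy measurements described here follow Shannon's formula H = -Σ p·log2(p) over the distribution of form attributes. A minimal sketch with invented slide-layout counts:

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """H = -sum(p * log2 p) over the empirical distribution of the symbols."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Invented example: the layout class of each slide in two presentations.
uniform_deck = ["title+bullets"] * 12                          # highly predictable
varied_deck = ["title+bullets", "image", "chart", "quote"] * 3  # more varied

low = shannon_entropy(uniform_deck)   # 0.0 -- no structural surprise
high = shannon_entropy(varied_deck)   # 2.0 -- four equally likely layouts
```

A perfectly uniform deck scores zero entropy (maximally predictable structure), while varied layouts score higher; the paper's contribution is calibrating where on such a scale engagement gives way to distraction.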
APA, Harvard, Vancouver, ISO, and other styles
23

Atzenbeck, Claus. "Interview with Norman Meyrowitz." ACM SIGWEB Newsletter, Autumn (March 2021): 1–5. http://dx.doi.org/10.1145/3460304.3460306.

Full text
Abstract:
Norm Meyrowitz is currently an Adjunct Professor of the Practice of Computer Science at Brown University. He received an Sc.B. in Computer Science from Brown in 1981, and is recognized for his work on linking and multimedia technology for the Internet and for the evolution of Web development software. In the 1980s, Norm served as a Co-Director of Brown University's Institute for Research in Information and Scholarship, where he led the development of Intermedia, a hypermedia system that influenced both the creator of the Web and the creator of the Mosaic Web browser. In the mid-1980s, he helped start two ACM conferences - OOPSLA (Object-Oriented Programming, Systems, Languages, and Applications) and Hypertext '87 - which continue to this day. Following his work in academia, Norm worked for several years as the Director of System/User Software for pen/tablet pioneer GO Corporation before transitioning to his role as President of Product Development at Macromedia (later acquired by Adobe). At Macromedia, Norm oversaw a variety of Web development and multimedia products, including Shockwave, Dreamweaver, and Flash, the last of which had more than 4 billion downloads in its heyday in the 1990s and early 2000s.
APA, Harvard, Vancouver, ISO, and other styles
24

BOOTY, W. G., and I. W. S. WONG. "WATER QUALITY MODELLING WITHIN THE RAISON EXPERT SYSTEM." Journal of Biological Systems 02, no. 04 (1994): 453–66. http://dx.doi.org/10.1142/s0218339094000283.

Full text
Abstract:
The RAISON (Regional Analysis by Intelligent Systems ON a microcomputer) system is a multimedia environmental data analysis tool-kit that contains fully integrated database management, spreadsheet, GIS, graphics, statistics, modelling and expert system modules, as well as a programming language that allows the user to create specialized applications. This paper presents case studies of modelling applications that illustrate the utility of the system in making water quality models more user-friendly for their users. This is accomplished through added visualization of inputs and auxiliary information, as well as added on-line knowledge. The system also enables the user to represent model results in numerous graphical forms, including animated results presented on maps. In addition, it can interface models with expert systems to aid in the selection and use of models and in the interpretation of results.
APA, Harvard, Vancouver, ISO, and other styles
25

DATTOLO, ANTONINA, and VINCENZO LOIA. "DISTRIBUTED INFORMATION AND CONTROL IN A CONCURRENT HYPERMEDIA-ORIENTED ARCHITECTURE." International Journal of Software Engineering and Knowledge Engineering 10, no. 03 (2000): 345–69. http://dx.doi.org/10.1142/s0218194000000158.

Full text
Abstract:
The market for parallel and distributed computing systems keeps growing. Technological advances in processor power, networking, telecommunication and multimedia are stimulating the development of applications requiring parallel and distributed computing. An important research problem in this area is the need to find a robust bridge between the decentralisation of knowledge sources in information-based systems and the distribution of computational power. Consequently, the attention of the research community has been directed towards high-level, concurrent, distributed programming. This work proposes a new hypermedia framework based on the metaphor of the actor model. The storage and run-time layers are represented entirely as communities of independent actors that cooperate in order to accomplish common goals, such as version management or user adaptivity. These goals involve fundamental and complex hypermedia issues, which, thanks to the distribution of tasks, are treated in an efficient and simple way.
APA, Harvard, Vancouver, ISO, and other styles
26

Khor, Ean-Teng, and Sheng-Hung Chung. "PERFORMANCE EVALUATION OF CONCEPTUAL MODEL INSTANCE (CMI) DATA FOR E-LEARNING MULTIMEDIA PRESENTATIONS IN SCORM RUN-TIME ENVIRONMENT." Asian Association of Open Universities Journal 5, no. 2 (2010): 78–88. http://dx.doi.org/10.1108/aaouj-05-02-2010-b003.

Full text
Abstract:
The paper aims to describe a dynamic presentation generator that presents the same content in different ways through different media objects and presentation layouts. The different media presentations are then displayed to students via a Learning Management System (LMS). The paper also presents SCORM-compliant Learning Objects and the Conceptual Model Instance (CMI). The prototype is then evaluated to assess student performance, with CMI data collected for each student. The data collected include media preference, test score, and time spent studying the Computer Programming subjects. The results show that with more media objects used, students spent more time on the Web pages. However, the results also show that using more media objects may not produce better results in the assessment.
APA, Harvard, Vancouver, ISO, and other styles
27

Izatri, Dini Idzni, Nofita Idaroka Rohmah, and Renny Sari Dewi. "Identifikasi Risiko pada Perpustakaan Daerah Gresik dengan NIST SP 800-30." JURIKOM (Jurnal Riset Komputer) 7, no. 1 (2020): 50. http://dx.doi.org/10.30865/jurikom.v7i1.1756.

Full text
Abstract:
With the rapid development of technology in Indonesia, several companies and government institutions have begun to implement IT in their systems, including the Gresik Regency Regional Library. Information Technology is a field of technology management that covers various areas including, but not limited to, processes, computer software, information systems, computer hardware, programming languages, and data constructs. In short, whatever renders data, information, or knowledge perceivable in any visual format, through any multimedia distribution mechanism, is considered part of Information Technology. The Regional Library of Gresik Regency is one of the government institutions that has implemented Information Technology in its system. The library holds about thirty thousand books, consisting of novels, magazines, school textbooks, literature, and others. The Regional Library of Gresik Regency now uses the INLIS LITE application, which serves the library from book collection management to the list of library members.
APA, Harvard, Vancouver, ISO, and other styles
28

Yang, Xu, Yumin Hou, and Hu He. "A Processing-in-Memory Architecture Programming Paradigm for Wireless Internet-of-Things Applications." Sensors 19, no. 1 (2019): 140. http://dx.doi.org/10.3390/s19010140.

Full text
Abstract:
The widespread application of the wireless Internet of Things (IoT) is one of the leading factors in the emergence of Big Data. Huge amounts of data need to be transferred and processed. The bandwidth and latency of data transfers have posed a new challenge for traditional computing systems. Under Big Data application scenarios, the movement of data at large scale influences performance, power efficiency, and reliability, the three fundamental attributes of a computing system. Thus, changes in the computing paradigm are in demand. Processing-in-Memory (PIM), which aims to place computation as close as possible to memory, has become of great interest to academia as well as industry. In this work, we propose a programming paradigm for PIM architecture that is suitable for wireless IoT applications. A data-transferring mechanism and middleware architecture are presented. We present our methods and experiences in simulation-platform design, as well as FPGA demo design, for PIM architecture. Typical IoT applications, such as multimedia and MapReduce programs, are used to demonstrate our method's validity and efficiency. The programs ran successfully on the simulation platform built on Gem5 and on the FPGA demo. Results show that our method can largely reduce power consumption and execution time for those programs, which is very beneficial in IoT applications.
APA, Harvard, Vancouver, ISO, and other styles
29

Cochran, Edward L. "Control Room User Interface Technology in the Year 2000: Evolution or Revolution?" Proceedings of the Human Factors Society Annual Meeting 36, no. 4 (1992): 460–64. http://dx.doi.org/10.1177/154193129203600442.

Full text
Abstract:
Control rooms—central facilities used to manage large systems such as industrial processes and communication networks—are a relatively recent innovation. As the operators of large systems are asked to perform more efficiently, use more sophisticated control systems, and take on more duties and responsibilities, developers of control room equipment have sought to improve operators' ability to interact effectively with their systems. Control rooms have evolved as a result: Pen recorders and mechanical gauges were replaced by text displays on low-resolution monochrome cathode ray tubes, which were in turn supplanted by higher resolution color graphics displays. A new generation of technology now emerging from multiple disciplines will greatly affect control rooms. Some of these technologies, such as bigger displays, improved simulations, and better graphics, represent evolutionary advances. Others, including artificial intelligence technologies such as user intent recognition and context-sensitive aiding, user interface technologies such as virtual reality, multimedia, and true three-dimensional displays, and systems technologies such as object-oriented programming techniques and high-performance communications, may well revolutionize the control rooms of the future, replacing supervisory control with collaborative operations in which the system and the operator will share tasks associated with planning, conducting, and optimizing operations.
APA, Harvard, Vancouver, ISO, and other styles
30

Chomyim, Chiraphorn, Settachai Chaisanit, and Apichai Trangansri. "Low Cost Mobile Robot Kits Design as a Teaching Tool for Education and Research." Applied Mechanics and Materials 752-753 (April 2015): 1010–15. http://dx.doi.org/10.4028/www.scientific.net/amm.752-753.1010.

Full text
Abstract:
Robot technology is important for students to be able to work in the 21st century and to share technical know-how. Learning robot technology has been difficult because of the high cost of hardware; it was used mainly in high-tech areas such as military affairs, space, medicine, engineering, etc. With improvements and developments in technology, robot technology is now open to the general public. In basic education, it is also more and more popular to use robots when teaching science, physics, technology, etc. Unfortunately, robot equipment is still costly. This project concentrates on the design and development of a low-cost mobile robot kit as a teaching tool for education and research. In this project we developed a tool that helps students understand technology based on mobile robotics and electronic engineering. The robot kit was designed as a modular system, considering the assembly requirements of different types of objects as well as various user activities. Each module of the robot kit handles a core function required to operate automata. The modules help students build kinetic sculptures quickly and easily, eliminating the need for unsafe tasks such as soldering. Further, the kit can be expanded to integrate with modular programming and multimedia programming environments, making it possible to develop more creative automata to advance education and research.
APA, Harvard, Vancouver, ISO, and other styles
31

Riabov, Vladimir V. "Teaching Online Computer-Science Courses in LMS and Cloud Environment." International Journal of Quality Assurance in Engineering and Technology Education 5, no. 4 (2016): 12–41. http://dx.doi.org/10.4018/ijqaete.2016100102.

Full text
Abstract:
The author shares his experiences teaching various online computer-science courses (via the Canvas™ and synchronous web conferencing tools) using state-of-the-art free-license software tools for conducting online virtual labs and numerous students' projects. The labs were designed to help students explore modern, sophisticated techniques in several areas of computer science: computer-system analysis and design, programming in C/C++ and Java, software quality assurance, data communication in networking systems, computer security, system simulation and modeling, numerical analysis, image processing, multimedia applications, Web development, and database design and management. All the online courses include “warm-up” exercises and lab-based projects that provide students with knowledge, instructions, and hands-on experience, and that motivate them in selecting topics for technology overviews and research. To concentrate mostly on the students' hands-on training, the “flipped classroom” pedagogy and individual or team tutoring were used in the online classes. The preventive strategies on plagiarism and cheating among students were developed and successfully implemented in the virtual classroom using the Cloud environment.
APA, Harvard, Vancouver, ISO, and other styles
32

Boutsi, Argyro-Maria, Charalabos Ioannidis, and Sofia Soile. "An Integrated Approach to 3D Web Visualization of Cultural Heritage Heterogeneous Datasets." Remote Sensing 11, no. 21 (2019): 2508. http://dx.doi.org/10.3390/rs11212508.

Full text
Abstract:
The evolution of high-quality 3D archaeological representations from niche products to integrated online media is not yet complete. Digital archives in the field often lack multimodal data interoperability, user interaction and intelligibility. A web-based cultural heritage archive that compensates for these issues is presented in this paper. The multi-resolution 3D models constitute the core of the visualization, on top of which supporting documentation data and multimedia content are spatially and logically connected. Our holistic approach focuses on the dynamic manipulation of the 3D scene through the development of advanced navigation mechanisms and information retrieval tools. Users parse the multi-modal content in a geo-referenced way through interactive annotation systems over cultural points of interest and automatic narrative tours. Multiple 3D and 2D viewpoints are enabled in real-time to support data inspection. The implementation exploits front-end programming languages, 3D graphics libraries and visualization frameworks to handle the asynchronous operations efficiently and preserve the initial assets' accuracy. The choice of Greece's Meteora, a UNESCO World Heritage Site, as a case study demonstrates the platform's applicability to complex geometries and large-scale historical environments.
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Pei, and Wenying Zhang. "Differential Cryptanalysis on Block Cipher Skinny with MILP Program." Security and Communication Networks 2018 (October 4, 2018): 1–11. http://dx.doi.org/10.1155/2018/3780407.

Full text
Abstract:
With the widespread use of RFID technology and the rapid development of the Internet of Things, research on lightweight block ciphers has become one of the hot issues in cryptography. In recent years, lightweight block ciphers have emerged and are widely used, and their security is crucial. Skinny-64/192 can be used to protect data security in applications such as wireless multimedia and wireless sensor networks. In this paper, we use a new method to verify the security of Skinny-64/192. The method, called mixed-integer linear programming (MILP), can precisely characterise the linear and nonlinear operations in a round function. By applying an MILP program, we can automatically find an 11-round differential characteristic for Skinny-64/192 with the minimum number of active S-boxes. The probability of the differential trail is 2^-147, which is far greater than 2^-192, the probability of success for an exhaustive search. In addition, compared with the method proposed by Sun et al., ours offers a great improvement: no new variables need to be added for the ShiftRows operation. This greatly reduces the number of variables and improves running speed. The experimental results prove that Skinny-64/192 is safe against 11-round differential analysis and validate the effectiveness of the MILP method.
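The quantity such an MILP model bounds, the probability contributed by each active S-box, comes from the S-box's difference distribution table (DDT). As a hedged illustration (using the well-known 4-bit PRESENT S-box rather than Skinny's own), the DDT can be computed by brute force:

```python
# Difference distribution table (DDT) of a 4-bit S-box, by brute force.
# The S-box here is PRESENT's, used purely as an example; Skinny has its own.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    table = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            dy = sbox[x] ^ sbox[x ^ dx]  # output difference for input diff dx
            table[dx][dy] += 1
    return table

t = ddt(SBOX)
# A differential (dx -> dy) holds with probability t[dx][dy] / 16; the best
# nonzero-input entry bounds what any single active S-box can contribute.
best = max(t[dx][dy] for dx in range(1, 16) for dy in range(16))
print(best / 16)  # best single-S-box differential probability
```

An MILP model then minimises the count of active S-boxes across the rounds; multiplying the per-box probabilities along a trail yields overall bounds of the kind the abstract cites.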
APA, Harvard, Vancouver, ISO, and other styles
34

Fraietta, Angelo, Oliver Bown, Sam Ferguson, Sam Gillespie, and Liam Bray. "Rapid Composition for Networked Devices: HappyBrackets." Computer Music Journal 43, no. 2-3 (2020): 89–108. http://dx.doi.org/10.1162/comj_a_00520.

Full text
Abstract:
This article introduces an open-source Java-based programming environment for creative coding of agglomerative systems using Internet-of-Things (IoT) technologies. Our software originally focused on digital signal processing of audio—including synthesis, sampling, granular sample playback, and a suite of basic effects—but composers now use it to interface with sensors and peripherals through general-purpose input/output and external networked systems. This article examines and addresses the strategies required to integrate novel embedded musical interfaces and creative coding paradigms through an IoT infrastructure. These include: the use of advanced tooling features of a professional integrated development environment as a composition or performance interface rather than just as a compiler; techniques to create media works using features such as autodetection of sensors; seamless and serverless communication among devices on the network; and uploading, updating, and running of new compositions to the device without interruption. Furthermore, we examined the difficulties many novice programmers experience when learning to write code, and we developed strategies to address these difficulties without restricting the potential available in the coding environment. We also examined and developed methods to monitor and debug devices over the network, allowing artists and programmers to set and retrieve current variable values to or from these devices during the performance and composition stages. Finally, we describe three types of art work that demonstrate how the software, called HappyBrackets, is being used in live-coding and dance performances, in interactive sound installations, and as an advanced composition and performance tool for multimedia works.
APA, Harvard, Vancouver, ISO, and other styles
35

Wang, Kan, Ruijie Wang, Junhuai Li, and Meng Li. "Joint V2V-Assisted Clustering, Caching, and Multicast Beamforming in Vehicular Edge Networks." Wireless Communications and Mobile Computing 2020 (November 19, 2020): 1–12. http://dx.doi.org/10.1155/2020/8837751.

Full text
Abstract:
As an emerging type of Internet of Things (IoT), Internet of Vehicles (IoV) denotes the vehicle network capable of supporting diverse types of intelligent services and has attracted great attention in the 5G era. In this study, we consider the multimedia content caching with multicast beamforming in IoV-based vehicular edge networks. First, we formulate a joint vehicle-to-vehicle- (V2V-) assisted clustering, caching, and multicasting optimization problem, to minimize the weighted sum of flow cost and power cost, subject to the quality-of-service (QoS) constraints for each multicast group. Then, with the two-timescale setup, the intractable and stochastic original problem is decoupled at separate timescales. More precisely, at the large timescale, we leverage the sample average approximation (SAA) technique to solve the joint V2V-assisted clustering and caching problem and then demonstrate the equivalence of optimal solutions between the original problem and its relaxed linear programming (LP) counterpart; and at the small timescale, we leverage the successive convex approximation (SCA) method to solve the nonconvex multicast beamforming problem, whereby a series of convex subproblems can be acquired, with the convergence also assured. Finally, simulations are conducted with different system parameters to show the effectiveness of the proposed algorithm, revealing that the network performance can benefit from not only the power saving from wireless multicast beamforming in vehicular networks but also the content caching among vehicles.
APA, Harvard, Vancouver, ISO, and other styles
36

Wali, Muhammad, and Lukman Ahmad. "Computer Assisted Learning (CAL): A Learning Support System Solution." Webology 18, no. 1 (2021): 299–314. http://dx.doi.org/10.14704/web/v18i1/web18090.

Full text
Abstract:
This study tries to provide a solution by engineering existing information resources, designing them into a support system that makes it easy for lecturers or teaching staff to create, load, and present multimedia teaching materials, whether to groups or individuals, network-based or stand-alone. The use of HTML as the basis for web-based applications, together with the use of local networks, makes Computer Assisted Learning (CAL) more flexible, with virtually unlimited functions on the architecture designed by the researchers, which uses function-data methods as the framework for the data stored by CAL. Broadly speaking, this research is divided into three stages: pre-development data collection, development and implementation, and post-development data collection. Pre-development data collection is intended to obtain a preliminary study of the core problems being faced, while the development and implementation phase focuses on modelling the software design into diagrams and writing the programming code that implements the design. The post-development data collection stage covers improvement of the applications, drawing conclusions, and suggesting topics for further research. From the results of this study, several conclusions were obtained: the CAL that was built made learning easier for students, and CAL was built using HTML sheets as the medium for presenting teaching material because it is very effective for displaying teaching materials in the form of text, images, animation, audio, and video.
APA, Harvard, Vancouver, ISO, and other styles
37

Faiztyan, Irham Fa'idh, R. Rizal Isnanto, and Eko Didik Widianto. "Perancangan dan Pembuatan Aplikasi Visualisasi 3D Interaktif Masjid Agung Jawa Tengah Menggunakan Unity3D." Jurnal Teknologi dan Sistem Komputer 3, no. 2 (2015): 207. http://dx.doi.org/10.14710/jtsiskom.3.2.2015.207-212.

Full text
Abstract:
Previously, people who wanted to visit a tourist attraction had to travel to that place; those who could not visit it could only read or hear about it from a source. Therefore, a 3-dimensional visualization application was made. The object visualized in this research is the Great Mosque of Central Java, and the application aims to facilitate the introduction of the mosque. The application was created using the Unity3D and SketchUp software, with UnityScript and JavaScript as the programming languages. The design phase was carried out using the Multimedia Development Life Cycle, followed by design using flowcharts. The implementation phase covers implementing the 3D model and the program. The testing phase uses the black-box method, as well as tests of frames per second, memory and processor usage, rendering time, and user testing. The results show that the application runs well on Windows operating systems. The buttons and functions within the application run well, each with its respective functionality. The test results on the Great Mosque of Central Java visualization show that the rendering process, both real-time and non-real-time, requires high performance from the graphics card and processor. Based on the testing that has been done, the application is quite easy for users to run, the objects in the application are quite similar to the original objects, and the application provides benefits to its users.
APA, Harvard, Vancouver, ISO, and other styles
38

Ghosh, Maitrayee. "Hack the Library! a first timer’s look at the 29th Computers in Libraries conference in Washington, DC." Library Hi Tech News 31, no. 5 (2014): 1–4. http://dx.doi.org/10.1108/lhtn-05-2014-0031.

Full text
Abstract:
Purpose – The purpose of this paper is to focus on selected presentations from the 29th Computers in Libraries (CIL) conference that took place at the Washington Hilton hotel, Washington, DC. In addition to its content, the CIL (2014) conference provided opportunities to discuss best practices and emerging issues with IT professionals, vendors and "techno" librarians, especially from North America. There was a conference within a conference: the Internet@Schools track was integrated into CIL 2014 as Track E on Monday, April 7, and Tuesday, April 8. Design/methodology/approach – Reporting from the viewpoint of a first-time attendee of CIL (2014), the paper presents a summary of selected presentations with more detail on networking events and the exhibition. The CIL (2014) conference attracted librarians from 13 countries other than the USA. It is difficult to document the entire conference in a single report because of the several tracks (A-E) and the number of speakers; therefore, a selective approach is used. Findings – CIL (2014) in Washington, DC, is considered a major North American library technology conference for librarians and information managers. As a first-time attendee, the author found CIL (2014) informative; it covered technology applications in libraries and strategies to enhance communication, useful to librarians and information professionals both in the USA and internationally. The conference was full of innovative ideas and revealed the diversity of current developments in library service delivery, especially in North America. Originality/value – Today, more and more library users are using various innovative technologies, including mobile apps, data visualization, application programming interfaces, open-source software and multimedia. Phones (smartphones) and tablets are emerging as popular choices for accessing content. This report is a summary of selected educational sessions/presentations at CIL (2014) on diverse technology-related topics, especially mobile technology in libraries, that will be of particular interest to readers and useful for professionals who did not attend CIL (2014) in Washington, DC.
APA, Harvard, Vancouver, ISO, and other styles
39

Sgurev, Vassil, Vladimir Jotsov, and Mincho Hadjiski. "Intelligent Systems: Methodology, Models, and Applications in Emerging Technologies." Journal of Advanced Computational Intelligence and Intelligent Informatics 9, no. 1 (2005): 3–4. http://dx.doi.org/10.20965/jaciii.2005.p0003.

Full text
Abstract:
From year to year the number of investigations on intelligent systems grows rapidly. For example, this year 245 papers from 45 countries were submitted to the Second International IEEE Conference on Intelligent Systems (www.ieee-is.org; www.fnts-bg.org/is), an increase of more than 50% by all indicators. The presented papers on intelligent systems attracted large audiences and provoked significant interest that ultimately led to vivid discussions, the exchange of ideas and, locally, the creation of working groups for different applied projects. All this reflects the worldwide tendency toward a leading role for research on intelligent systems, both theoretically and practically. The greater part of the presented research dealt with problems traditional for intelligent systems, such as artificial intelligence, knowledge engineering, intelligent agents, neural and fuzzy networks, intelligent data processing, and intelligent control and decision-making systems, as well as new interdisciplinary problems such as ontology and semantics on the Internet and fuzzy intuitionistic logic. The majority of papers from European and American researchers are dedicated to the theory and applications of intelligent systems with machine learning, fuzzy inference or uncertainty. Another big group of papers focuses on the domain of building and integrating ontologies of applications with heterogeneous multiagent systems. A great number of papers on intelligent systems deals with fuzzy sets. The papers of many other researchers underscore the significance of contemporary perception-oriented methods and of different applications in intelligent systems. In the first place this is valid for L. A. Zadeh's paradigm of 'computing with words'.
The Guest Editors of the present specialized journal volume would like to introduce a wealth of research of an applied and theoretical character that shares a common characteristic: these are the conference's best papers, complemented and updated with the authors' new elaborations over the last half year. A short description of the papers presented in the volume follows. In 'Combining Local and Global Access to Ontologies in a Multiagent System' <B>R. Brena and H. Ceballos (Mexico)</B> propose an original way of operating with ontologies, where part of the ontology is processed by a client component and the rest is transmitted to the other agents by an ontology agent. Inter-agent communication is improved in this way. In 'Fuzzy Querying of Evolutive Situations: Application to Driving Situations' <B>S. Ould Yahia and S. Loriette-Rougegrez (France)</B> present an approach to the analysis of driving situations using multimedia images and fuzzy estimates that will improve driver security. In 'Remembering What You Forget in an Online Shopping Context' <B>M. Halvey and M. Keane (Ireland)</B> present their approach to constructing an online system that predicts items for future shopping sessions using a novel idea called Memory Zones. In 'Reinforcement Learning for Online Industrial Process Control' the authors <B>J. Govindhasamy et al. (Ireland)</B> use a synthesis of dynamic programming, reinforcement learning and backpropagation for the goal of modeling and controlling an industrial grinding process. The felicitous combination of methods contributes to a greater effectiveness of the application compared to existing controllers. In 'Dynamic Visualization of Information: From Database to Dataspace' the authors <B>C. St-Jacques and L. Paquin (Canada)</B> suggest friendly online access to large multimedia databases. <B>W. Huang (UK)</B> redefines in 'Towards Context-Aware Knowledge Management in e-Enterprises' the concept of context in intelligent systems and proposes a set of meta-information elements for context description in a business environment. His approach is applicable in e-business, in the Semantic Web and in the Semantic Grid. In 'Block-Based Change Detection in the Presence of Ambient Illumination Variations' <B>T. Alexandropoulos et al. (Greece)</B> use statistical analysis, clustering, pattern recognition algorithms, etc. for the goals of noise extraction and global illumination correction. In 'Combining Argumentation and Web Search Technology: Towards a Qualitative Approach for Ranking Results' <B>C. Chesñevar (Spain) and A. Maguitman (USA)</B> propose a recommender system for improving Web search. Defeasible argumentation and decision support methods have been used in the system. In 'Modified Axiomatic Basis of Subjective Probability' <B>K. Tenekedjiev et al. (Bulgaria)</B> make a contribution to the axiomatic approach to subjective uncertainty by introducing a modified set of six axioms for subjective probabilities. In 'Fuzzy Rationality in Quantitative Decision Analysis' <B>N. Nikolova et al. (Bulgaria)</B> present a discussion of fuzzy rationality in the elicitation of subjective probabilities and utilities. The opportunity to prepare this special issue was kindly offered to the Guest Editors by Prof. Kaoru Hirota and Prof. Toshio Fukuda, and we thank them for that. Thanks to the help of Kenta Uchino, and to the new elaborations presented by researchers from Europe and America, the appearance of this special issue became possible.
APA, Harvard, Vancouver, ISO, and other styles
40

Guseva, E. N., I. Y. Efimova, and T. N. Varfolomeeva. "The method of formation of skills of simulation modeling the IT professional." Open Education 23, no. 1 (2019): 4–13. http://dx.doi.org/10.21686/1818-4243-2019-1-4-13.

Full text
Abstract:
Purpose of the study. The aim is to create a technique targeted at developing simulation modeling skills in a higher education environment where students learn to apply information technologies in economics. The relevance of the research lies in the fact that existing methodological developments often focus on a specific software tool or methodology that cannot address all economic problems. A specialist in simulation modeling should possess integrative interdisciplinary knowledge from related scientific fields, for example, probability theory and mathematical statistics and higher mathematics; be familiar with other methods of solving economic problems (linear, nonlinear and dynamic programming, optimization); show proficiency in structural and functional analysis; and be able to explore complex processes and systems comprehensively. Materials and methods. The following pedagogical approaches and teaching methods were implemented in this research: ● a systematic approach to solving complex problems, based on modeling economic objects as systems operating in a certain environment; ● an activity approach to develop students’ professional competences in the process of creating, debugging and optimizing computer models of economic systems; ● a problem-based teaching method in the framework of research and analysis of educational problems of the subject area; ● interactive teaching methods; ● multimedia methods in the teaching materials of the discipline, including electronic manuals, educational videos and multimedia presentations. The research also utilized information technologies in which computers, communication equipment and software environments serve as: ● means of providing educational material to students for the transfer of knowledge; ● tools for designing, developing and conducting simulation experiments. 
In addition, we used the following special professional technologies, methods and tools in the process of teaching students: ● structural and functional modeling methodology; ● the discrete-event approach to simulation methodology; ● special software for the development and study of simulation models of economic processes and systems: Arena 15.0 and AnyLogic 8.3.2. According to the requirements of the new educational standards, the student must master a sufficiently large set of general cultural, professional and specialized competencies included in the curriculum. The application of the proposed approaches and methods allows for the effective development of simulation modeling skills in the bachelor's programs «Applied Informatics» and «Business Informatics». Results. The study created a method of teaching students the skills of simulation modeling. The research also established a model of the formation of IT specialists' readiness to develop simulation models of economic processes and systems in higher education. We also identified important methodological conditions for the formation of students' professional competencies in the field of modeling, such as: ● application of a systematic approach to the analysis of domain problems, as well as to the synthesis of mathematical simulation models of business processes and economic systems; ● practical orientation of the content of training (selection and study of the most characteristic, typical problems of the economy in the educational process); ● integration of interdisciplinary knowledge, methods and approaches to solve complex problems. Conclusion. The method equips students with simulation modeling skills that have various areas of practical application. First, this technique can be used by university students pursuing practical skills and basic systems knowledge in the field of simulation. 
Secondly, teachers can use it when conducting the courses “Computer modeling”, “Mathematical and simulation modeling” and “Modeling of processes and systems” in the educational process of the university to improve students' professional competence in simulation modeling. Third, the outcomes may be of interest to managers of educational programs in areas such as “Applied Informatics” and “Business Informatics” for improving the structure and sequence of disciplines in competence-oriented curricula. Finally, the application of the proposed methodology in the educational process of the university will enhance the professional expertise of young specialists and address the needs of potential employers.
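The discrete-event approach the abstract names (the methodology behind tools such as Arena and AnyLogic) can be sketched with only the standard library. The single-server queue, rates and horizon below are illustrative choices of ours, not taken from the paper:

```python
import heapq
import random

def mm1_simulation(arrival_rate=1.0, service_rate=1.5, horizon=10_000.0, seed=42):
    """Discrete-event simulation of a single-server queue.

    Future events sit in a priority queue ordered by time; processing an
    event may schedule further events. This event-list pattern is the core
    of discrete-event tools such as Arena and AnyLogic.
    """
    rng = random.Random(seed)
    events = [(rng.expovariate(arrival_rate), "arrival")]
    queue_len = 0      # customers waiting or in service
    area = 0.0         # time-integral of queue length
    last_t = 0.0
    served = 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        area += queue_len * (t - last_t)
        last_t = t
        if kind == "arrival":
            queue_len += 1
            # every arrival books the next one (Poisson arrival stream)
            heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
            if queue_len == 1:  # server was idle: start service immediately
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
        else:  # departure: a service has completed
            queue_len -= 1
            served += 1
            if queue_len > 0:   # begin serving the next waiting customer
                heapq.heappush(events, (t + rng.expovariate(service_rate), "departure"))
    return served, area / last_t  # customers served, time-averaged queue length
```

With arrival rate 1.0 and service rate 1.5 (utilisation 2/3), queuing theory predicts a time-averaged number in system of about 2, which the simulation approaches over a long horizon.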
41

Adam, Alif Syaiful, and Nadi Suprapto. "One-Stop Physics E-Book Package Development for Senior High School Learning Media." International Journal of Emerging Technologies in Learning (iJET) 14, no. 19 (2019): 150. http://dx.doi.org/10.3991/ijet.v14i19.10761.

Full text
Abstract:
Many Physics learning e-Books have been used to help students gain more information, but most do not contain a virtual laboratory environment to support students' visualization of Physics concepts. To address this, an e-Book package was developed, branded as Beboo (Bilingual e-Book): a one-stop package comprising physics visualization animations, a virtual laboratory environment, student worksheets, supporting videos and a self-administered final test. Beboo helps second-year senior high school students learn static fluid concepts, including the conditions of static fluids and the application of the Pascal and Archimedes principles in everyday situations. The e-Book was developed with a range of multimedia, including text, pictures, diagrams, sound effects, music, video and animation, combined into interactive and effective elements, and required multiskilling in content creation and programming. It also contains an engaging part of physics learning, a laboratory activity, designed as two simulators, a U-Pipe and a Hydraulic Lift, to build students' understanding of hydrostatics and Pascal's law. For self-assessment, Beboo provides a final test feature presented as multiple-choice questions on concept achievement. Beboo was developed in three main stages (needs assessment, design, and development & implementation), following the Hannafin and Peck Model, and achieved validation scores in the range of 84%–94%. In future work, the development team should be enlarged to cover more chapters and to support the Android platform. Overall, Beboo offers an alternative for schools without a real laboratory and a starting point for further e-Book development; moreover, this media was developed as an answer to technology integration in 21st-century learning.
42

Holloway, Joshua, Vignesh Kannan, Yi Zhang, Damon Chandler, and Sohum Sohoni. "GPU Acceleration of the Most Apparent Distortion Image Quality Assessment Algorithm." Journal of Imaging 4, no. 10 (2018): 111. http://dx.doi.org/10.3390/jimaging4100111.

Full text
Abstract:
The primary function of multimedia systems is to seamlessly transform and display content to users while maintaining the perception of acceptable quality. For images and videos, perceptual quality assessment algorithms play an important role in determining what is acceptable quality and what is unacceptable from a human visual perspective. As modern image quality assessment (IQA) algorithms gain widespread adoption, it is important to achieve a balance between their computational efficiency and their quality prediction accuracy. One way to improve computational performance to meet real-time constraints is to use simplistic models of visual perception, but such an approach has a serious drawback in terms of poor-quality predictions and limited robustness to changing distortions and viewing conditions. In this paper, we investigate the advantages and potential bottlenecks of implementing a best-in-class IQA algorithm, Most Apparent Distortion, on graphics processing units (GPUs). Our results suggest that an understanding of the GPU and CPU architectures, combined with detailed knowledge of the IQA algorithm, can lead to non-trivial speedups without compromising prediction accuracy. A single-GPU and a multi-GPU implementation showed a 24× and a 33× speedup, respectively, over the baseline CPU implementation. A bottleneck analysis revealed the kernels with the highest runtimes, and a microarchitectural analysis illustrated the underlying reasons for the high runtimes of these kernels. Programs written with optimizations such as blocking that map well to CPU memory hierarchies do not map well to the GPU’s memory hierarchy. While compute unified device architecture (CUDA) is convenient to use and is powerful in facilitating general purpose GPU (GPGPU) programming, knowledge of how a program interacts with the underlying hardware is essential for understanding performance bottlenecks and resolving them.
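Most Apparent Distortion itself combines a detection model and an appearance model and is too involved to reproduce here, but the "simplistic models of visual perception" the abstract contrasts it with can be shown concretely. PSNR, sketched below in plain Python (function names are ours), is the canonical fast-but-crude full-reference metric:

```python
import math

def mse(ref, dist):
    """Mean squared error between two equal-length sequences of pixel values."""
    assert len(ref) == len(dist), "images must have the same size"
    return sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)

def psnr(ref, dist, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference.
    Identical images give infinity."""
    err = mse(ref, dist)
    return float("inf") if err == 0 else 10.0 * math.log10(max_val ** 2 / err)
```

PSNR is trivially parallel and maps to GPUs with no effort; the paper's point is that perceptually accurate metrics like MAD involve far heavier kernels (log-Gabor decompositions, local statistics), which is where architecture-aware optimization pays off.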
43

Rylova, O. G. "Features of training 3d computer modeling and visualization of future teachers of informatics." «System analysis and applied information science», no. 4 (February 6, 2019): 83–88. http://dx.doi.org/10.21122/2309-4923-2018-4-83-88.

Full text
Abstract:
The cognitive, developing and illustrative potential of three-dimensional computer graphics makes training future teachers in its technologies highly relevant. The future teacher of computer science should be able to develop three-dimensional models and animations of the objects (phenomena and processes) studied in computer science, physics, mathematics and other academic disciplines, create original electronic educational resources with three-dimensional illustrations, and apply augmented reality and 3D printing in professional activities. The article describes features of teaching three-dimensional computer modeling and visualization to future informatics teachers, identified on the basis of an analysis of the training system for students enrolled in specialty 1–02 05 02 Physics and Informatics at the Maxim Tank Belarusian State Pedagogical University. The main directions for improving the process of teaching three-dimensional computer graphics to future informatics teachers are indicated. Questions of three-dimensional computer modeling and visualization form a continuous content line across six academic disciplines («Computer graphics and multimedia», «Computational methods and computer modeling», «Programming Technologies and Algorithmization Methods», «Information Technologies in Education», «Methods of teaching informatics» and «Architecture and software of computer systems»). This ensures consistency in the introduction and study of concepts, the choice of forms and methods of teaching, and the development of teaching and methodical support. To implement interdisciplinarity, it is proposed to carry out practice-oriented interdisciplinary educational projects within the study of academic disciplines in three subject areas: «Computer Science», «Physics» and «Mathematics». 
The development of a methodical system of interdisciplinary education will ensure that graduates are ready to teach three-dimensional graphics at the level of general secondary education and to realize its didactic possibilities in organizing students' educational and research activities.
44

Patil, Uday AJ. "Public Health at the Public Library: Preventive Health Programs Implemented in Large Public Libraries." SLIS Connecting 9, no. 1 (2020). http://dx.doi.org/10.18785/slis.0901.07.

Full text
Abstract:
Amid the opioid epidemic and COVID-19 pandemic, the public sector is consumed with health promotion and disease prevention. Preventive programs serve a significant purpose in ensuring population health and reducing burden on the healthcare system (Cohen et al., 2008; Neumann & Cohen, 2009). People are increasingly turning to educational resources outside of the traditional healthcare sector to ward off diseases or alleviate pre-existing conditions (Eakin et al., 1980; Eng et al., 1998). Public library systems often carry such resources, in print and multimedia form, at no cost. Some libraries are providing health programming to supplement, contextualize, or incentivize the use of such resources (Murray, 2008; National Network of Libraries of Medicine, 2014). This study examines preventive health programming offered in the largest public library systems nationwide.
45

Liliana, Lydia, Adam Surya Wijaya, Nico Fernando, Henny Hartono, and Dwi Hosanna Bangkalang. "YUK LES: INFORMATION SYSTEMS ON ONLINE PRIVATE COURSE SERVICES BASED ON MOBILE APPLICATION." JBASE - Journal of Business and Audit Information Systems 2, no. 2 (2019). http://dx.doi.org/10.30813/jbase.v2i2.1726.

Full text
Abstract:
One way to improve students' academic and non-academic abilities is to take tutoring. Tutoring is an effort to achieve maximum learning outcomes in accordance with a field of interest. In the era of technological revolution 4.0, people are expected to keep their non-academic / soft skills up to date, in areas such as video editing, programming, dance, music and multimedia. Education 4.0 should provide easy access to education and mobility in learning both formal and non-formal fields. The Android-based application "Yuk Les" brings together community members and students who need private tutoring with people who have abilities in various non-academic fields. With this application, it is expected that the public will more easily find private lessons matching their field of interest, and that new jobs will open up for community members and students with non-academic abilities.
46

Awaludin, Ludi. "STRATEGI PENGUATAN KOMPETENSI SDM TEKNOLOGI INFORMASI&KOMUNIKASI (TIK) DALAM MENGOPTIMALKAN PENERAPAN SISTEM PEMERINTAHAN BERBASIS ELEKTRONIK (SPBE)." Paradigma POLISTAAT Jurnal Ilmu Sosial dan Ilmu Politik, December 31, 2019. http://dx.doi.org/10.23969/paradigmapolistaat.v2i2.2115.

Full text
Abstract:
Governance based on the development of information technology and innovation requires human resources competent in applying the Electronic-Based Government System (SPBE), that is, government administration that utilizes information and communication technology to provide services to SPBE users. At the Department of Communication, Information and Statistics of West Bandung Regency, the strategy for improving human resource (HR) competence in Information & Communication Technology (ICT) is carried out through e-learning methods covering computer networking, programming and multimedia competences. Implementation of the e-learning method, particularly the introduction of the Electronic-Based Government System (SPBE), achieved an average score of 9.09, which shows that the introduction of SPBE has succeeded in increasing HR competence in ICT at the Office of Communication, Information Technology and Statistics of West Bandung Regency.
47

Smith, Hazel, and Roger T. Dean. "Posthuman Collaboration: Multimedia, Improvisation, and Computer Mediation." M/C Journal 9, no. 2 (2006). http://dx.doi.org/10.5204/mcj.2619.

Full text
Abstract:
Artistic collaboration involves a certain loss of self, because it arises out of the merging of participants. In this sense collaboration questions the notion of the creative individual and the myth of the isolated artistic genius. As such, artistic collaborations can be subversive interventions into the concept of authorship and the ideologies that surround it (Smith 189-194). Collaborations also often simultaneously superimpose many different approaches to the collaborative process. Collaboration is therefore a multiplicitous activity in which different kinds of interactivity are interlinked; this process may also be facilitated by improvisation which allows for continuous modification of the interactions (Smith and Dean, Improvisation). Even when we are writing individually, we are always collaborating with prior texts and employing ideas from others, advice and editing suggestions. This eclectic aspect of creative work has led some to argue that collaboration is the dominant mode, while individual creativity is an illusion (Stillinger; Bennett 94-107). One of the reasons why collaboration tends to be multiplicitous is that contemporary creative endeavour sometimes involves collaboration across different media and with computers. Artworks are created by an ‘assemblage’ of different expertises, media, and machines in which the computer may be a ‘participant’. In this respect contemporary collaboration is what Katherine Hayles calls posthuman: for Hayles ‘the posthuman subject is an amalgam, a collection of heterogeneous components, a material-informational entity whose boundaries undergo continuous construction and reconstruction (Hayles 3). Particularly important here is her argument about the conceptual shifts that information systems are creating. 
She suggests that the binary of presence and absence is being progressively replaced in cultural and literary thought by the binary of pattern and randomness created by information systems and computer mediation (Hayles 25-49). In other words, we used to be primarily concerned with human interactions, even if sometimes it was the lack of them, as in Roland Barthes’s concept of ‘the death of the author’. However, this has shifted to our concern with computer systems as methods of organisation. Nevertheless, Hayles argues, computers can never totally replace embodied human subjects; rather, we need to continually negotiate between presence and pattern, absence and randomness (Hayles 25-49). This very negotiation is central to many computer-mediated collaborations. Our own collaborative practice—Roger is primarily a musician and Hazel primarily a writer but we both have interdisciplinary performance and technological expertise—spans 15 years and has resulted in approximately 18 collaborative works. They are all cross-media: initially these brought together word and sound; now they sometimes also include image. They all involve multiple forms of collaboration, improvised and unfixed elements, and computer interfaces. Here we want to outline some of the stages in the making of our recent collaboration, Time, the Magician, and its ‘posthuman’ engagement with computerised processes. Time, the Magician is a collaborative performance and sound-video piece. It combines words, sound and image, and involves composed and improvised elements as well as computer mediation. It was conceived largely by us, but the first performance, at the Sydney Conservatorium of Music in 2005 also involved collaboration with Greg White (sound processing) and Sandy Evans (saxophone). The piece begins with a poem by Hazel, initially performed solo, and then juxtaposed with live and improvised sound. 
This sound involves some real-time and pre-recorded sampling and processing of the voice: this—together with other sonic materials—creates a ‘voicescape’ in which the rhythm, pitch, and timbre of the voice are manipulated and the voice is also spatialised in the performance space (Smith and Dean, “Voicescapes”). The performance of the poem is followed (slightly overlapping) by screened text created in the real-time image-processing program Jitter, and this is also juxtaposed with sound and voice samples. One of the important aspects of the piece is its variability: the video-manipulated text and images change both in order and appearance each time, and the sampling and manipulation of the voice is different too. The example here shows short extracts from the longer performance of the work at the Sydney 2005 event. (This is a Quicktime 7 compressed video of excerpts from the first performance of Time, the Magician by Hazel Smith and Roger Dean. The performance was given by austraLYSIS (Roger Dean, computer sound and image; Sandy Evans, saxophone; Hazel Smith, speaker; Greg White, computer sound and sound projection) at the Sydney Conservatorium of Music, October 2005. The piece in its entirety lasts about 11 minutes, while these excerpts last about four minutes, and are not cross-faded, but simply juxtaposed. The piece itself will later be released elsewhere as a Web video/sound piece, made directly from the sound and the Jitter-processed images which accompany it. This Quicktime 7 performance video uses AAC audio compression (44kHz stereo), H.264 video compression (320x230), and has c. 15fps and 200kbits/sec.; it is prepared for HTTP fast-start streaming. It requires the Quicktime 7 plugin, and on Macintosh works best with Safari or Firefox – Explorer is no longer supported for Macintosh. The total file size is c. 6MB. You can also access the file directly through this link.) All of our collaborations have involved different working processes. 
Sometimes we start with a particular topic or process in mind, but the process is always explorative and the eventual outcome unpredictable. Usually periods of working individually—often successively rather than simultaneously—alternate with discussion. We will now each describe our different roles in this particular collaboration, and the points of intersection between them. Hazel In creating Time, the Magician we made an initial decision that Roger—who would be responsible for the programming and sound component of the piece—would work with Jitter, which we had successfully used for a previous collaboration. I would write the words, and I decided early on that I would like our collaboration to circle around ideas—which interested both Roger and me—about evolution, movement, emergence, and time. We decided that I would first write some text that would then be used as the basis of the piece, but I had no idea at this stage what form the text would take, and whether I would produce one continuous text or a number of textual fragments. In the early stages I read and ‘collaborated with’ a number of different texts, particularly Elizabeth Grosz’s book The Nick of Time. I was interested in the way Grosz sees Darwin’s work as a treatise on difference—she argues that for Darwin there are no clear-cut distinctions between different species and no absolute origin of the species. I was also stimulated by her idea that political resistance is always potential, if latent, in the repressive regimes or social structures of the past. As I was reading and absorbing the material, I opened a file on my computer and—using a ‘bottom-up’ approach—started to write fragments, sometimes working with the Grosz text as direct trigger. A poem evolved which was a continuous whole but also discontinuous in essence: it consisted of many small fragments that, when glued together and transformed in relation to each other, reverberated though association. 
This was appropriate, because as the writing process developed I had decided that I would write a poem, but then also disassemble it for the screened version. This way Roger could turn each segment into a module in Jitter, and program the sequence so that the texts would appear in a different order each time. After I had written the poem we decided on a putative structure for the work: the poem would be performed first, the musical element would start about halfway through, and the screened version—with the fragmented texts—would follow. Roger said that he would video some background material to go behind the texts, but he also suggested that I design the texts as visual objects with coloured letters, different fonts, and free spatial arrangements, as I had in some previous multimedia pieces. So I turned the texts into visual designs: this often resulted in my pulling apart sentences, phrases and words and rearranging them. I then converted the texts files into jpg files and gave them to Roger to work on. Roger When Hazel gave me her 32 text images, I turned these into a QuickTime video with 10 seconds per image/frame. I also shot a 5 minute ‘background’ video of vegetation and ground, often moving the camera quickly over blurred objects or zooming in very close to them. The video was then edited as a continually moving sequence with an oscillation between clearly defined and abstracted objects, and between artificial and natural ones. The Jitter interface is constructed largely as a sequence of three processing modules. One of these involves continuously changing the way in which two layers (in this case text and background) are mixed; the second, rotation and feedback of segments from one or both layers; and the third a kind of dripping across the image, with feedback, of segments from one or both layers. 
The interface is performable, in that the timing and sequence can be altered as the piece progresses, and within any one module most of the parameters are available for performer control—this is the essence of what we call ‘hyperimprovisation’ (Dean). Both text and image layers are ‘granulated’: after a randomly variable length of time—between 2 and 20 seconds or so—there is a jump to a randomly chosen new position in the video, and these jumps occur independently for the two layers. Having established this approach to the image generation, and the overall shape of the piece (as defined above), the remaining aspects were left to the creative choices of the performers. In the Sydney performance both Greg White and I exploited real-time processing of the spoken text by means of the live feed and pre-recorded material. In addition we used long buffers (which contained the present performance of the text) to access the spoken text after Hazel had finished her performed opening segment. I worked on the sound and speech components with some granulation and feedback techniques, throughout, while Greg used a range of other techniques, as well as focusing on the spatial movement of the sound around four loudspeakers surrounding the performance and listening space. Sandy Evans (saxophone)—who was familiar with the overall timeline—improvised freely while viewing the video and listening to our soundscape. In this first performance, while I drove the sound, the computer ‘posthumanly’ (that is without intervention) drove the image. I worked largely with MSP (Max Signal Processing), a part of the MAX/MSP/Jitter suite of platforms for midi, sound and image, to complement sonically the Jitter-mediated video. So processes of granulation, feedback, spatial rotation (of image) or redistribution (of sound)—as well as re-emergence of objects which had been retained in the memory of the computer—were common to both the sound and image manipulation. 
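The independent granular jumping described above, where each layer jumps to a random position after a randomly variable 2 to 20 second interval, can be sketched in a language-neutral way. The Jitter patch itself is not published in the article; this Python sketch, with function and variable names of our own, only illustrates the scheduling idea:

```python
import random

def granular_schedule(duration, jump_min=2.0, jump_max=20.0, rng=None):
    """Yield (time, new_position) events for one layer: after a randomly
    variable interval (2-20 s in the piece), playback jumps to a randomly
    chosen position in the video or sound buffer."""
    rng = rng or random.Random()
    t = 0.0
    while True:
        t += rng.uniform(jump_min, jump_max)
        if t >= duration:
            return
        yield t, rng.uniform(0.0, duration)

# The text and background layers granulate independently: two schedules
# drawn from the same random stream, but with no coupling between them.
rng = random.Random(7)
text_jumps = list(granular_schedule(300.0, rng=rng))
background_jumps = list(granular_schedule(300.0, rng=rng))
```

Because the two schedules are uncoupled, the layers drift in and out of alignment unpredictably, which is what makes each performance of the piece different.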
There was therefore a degree of algorithmic synaesthesia—that is shared algorithms between image and sound (Dean, Whitelaw, Smith, and Worrall). The collaborative process involved a range of stimuli: not only with regard to those of process, as discussed, but also in relation to the ideas in the text Hazel provided. The concepts of evolution, movement, and emergence which were important to her writing also informed and influenced the choice of biological and artificial objects in the background video, and the nature and juxtaposition of the processing modules for both sound and image. Conclusion If we return to the issues raised at the beginning of this article, we can see how our collaboration does involve the merging of participants and the destabilising of the concept of authorship. The poem was not complete after Hazel had written it—or even after she had dislocated it—but is continually reassembled by the Jitter interface that Roger has constructed. The visual images were also produced first by Hazel, then fused with Roger’s video in continuously changing formations through the Jitter interface. The performance may involve collaboration by several people who were not involved in the original conception of the work, indicating how collaboration can become an extended and accumulative process. The collaboration also simultaneously superimposes several different kinds of collaborative process, including the intertextual encounter with the Grosz text; the intermedia fusion of text, image and sound; the participation of a number of different people with differentiated roles and varying degrees of input; and collaboration with the computer. It is an assemblage in the terms mentioned earlier: a continuously modulating conjunction of different expertises, media, and machines. Finally, the collaboration is simultaneously both human and posthuman. It negotiates—in the way Hayles suggests—between pattern, presence, randomness, and absence. 
On the one hand, it involves human intervention (the writing of the poem, the live music-making, the shooting of the video, the discussion between participants) though sometimes those interventions are hidden, merged, or subsumed. On the other hand, the Jitter interface allows for both tight programming and elements of variability and unpredictability. In this way the collaboration displaces the autonomous subject with what Hayles calls a ‘distributed system’ (Hayles 290). The consequence is that the collaborative process never reaches an endpoint: the computer interface will construct the piece differently each time, we may choose to interact with it in performance, and the sound performance will always contain many improvised and unpredictable elements. The collaborative process, like the work it produces, is ongoing, emergent, and mutating. References Bennett, Andrew. The Author. London: Routledge, 2005. Dean, Roger T. Hyperimprovisation: Computer Interactive Sound Improvisation; with CD-ROM. Madison, WI: A-R Editions, 2003. Dean, Roger, Mitchell Whitelaw, Hazel Smith, and David Worrall. “The Mirage of Real-Time Algorithmic Synaesthesia: Some Compositional Mechanisms and Research Agendas in Computer Music and Sonification.” Contemporary Music Review, in press. Grosz, Elizabeth. The Nick of Time: Politics, Evolution and the Untimely. Sydney: Allen and Unwin, 2004. Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: U of Chicago P, 1999. Smith, Hazel. Hyperscapes in the Poetry of Frank O’Hara: Difference, Homosexuality, Topography. Liverpool: Liverpool UP, 2000. Smith, Hazel, and Roger T. Dean. Improvisation, Hypermedia and the Arts since 1945. London: Harwood Academic, 1997. ———. “Voicescapes and Sonic Structures in the Creation of Sound Technodrama.” Performance Research 8.1 (2003): 112-23. Stillinger, Jack. Multiple Authorship and the Myth of Solitary Genius. Oxford: Oxford UP, 1991. 
Citation reference for this article MLA Style Smith, Hazel, and Roger T. Dean. "Posthuman Collaboration: Multimedia, Improvisation, and Computer Mediation." M/C Journal 9.2 (2006). <http://journal.media-culture.org.au/0605/14-smithdean.php>. APA Style Smith, H., and R. Dean. (May 2006) "Posthuman Collaboration: Multimedia, Improvisation, and Computer Mediation," M/C Journal, 9(2). Retrieved from <http://journal.media-culture.org.au/0605/14-smithdean.php>.
48

Glöckler, Falko, James Macklin, David Shorthouse, Christian Bölling, Satpal Bilkhu, and Christian Gendreau. "DINA—Development of open source and open services for natural history collections & research." Biodiversity Information Science and Standards 4 (October 6, 2020). http://dx.doi.org/10.3897/biss.4.59070.

Full text
Abstract:
The DINA Consortium (DINA = “DIgital information system for NAtural history data”, https://dina-project.net) is a framework for like-minded practitioners of natural history collections to collaborate on the development of distributed, open source software that empowers and sustains collections management. Target collections include zoology, botany, mycology, geology, paleontology, and living collections. The DINA software will also permit the compilation of biodiversity inventories and will robustly support both observation and molecular data. The DINA Consortium focuses on an open source software philosophy and on community-driven open development. Contributors share their development resources and expertise for the benefit of all participants. The DINA System is explicitly designed as a loosely coupled set of web-enabled modules. At its core, this modular ecosystem includes strict guidelines for the structure of Web application programming interfaces (APIs), which guarantees the interoperability of all components (https://github.com/DINA-Web). Important to the DINA philosophy is that users (e.g., collection managers, curators) be actively engaged in an agile development process. This ensures that the product is pleasing for everyday use, includes efficient yet flexible workflows, and implements best practices in specimen data capture and management. There are three options for developing a DINA module: create a new module compliant with the specifications (Fig. 1), modify an existing code-base to attain compliance (Fig. 2), or wrap a compliant API around existing code that cannot be or may not be modified (e.g., infeasible, dependencies on other systems, closed code) (Fig. 3).
All three of these scenarios have been applied in the modules recently developed: a module for molecular data (SeqDB), modules for multimedia, documents and agents data, and a service module for printing labels and reports. The SeqDB collection management and molecular tracking system (Bilkhu et al. 2017) has evolved through two of these scenarios. Originally, the required architectural changes were going to be added into the codebase, but after some time the development team recognised that, given the technical debt inherent in the project, modification and refactoring weren’t worth the effort. Instead a new codebase was created, bringing forward the best parts of the system, oriented around the molecular data model for Sanger Sequencing and Next Generation Sequencing (NGS) workflows. In the case of the Multimedia and Document Store module and the Agents module, a brand new codebase was established whose technology choices were aligned with the DINA vision. These two modules have been created from fundamental use cases for collection management and digitization workflows and will continue to evolve as more modules come online and broaden their scope. The DINA Labels & Reporting module is a generic service for transforming data into arbitrary printable layouts based on customizable templates. In order to use the module in combination with data managed in the collection management software Specify (http://specifysoftware.org) for printing labels of collection objects, we wrapped the Specify 7 API with a DINA-compliant API layer called the “DINA Specify Broker”. This allows for using the easy-to-use web-based template engine within the DINA Labels & Reports module without changing Specify’s codebase. In our presentation we will explain the DINA development philosophy and will outline benefits for different stakeholders who directly or indirectly use collections data and related research data in their daily workflows.
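The third integration option the abstract describes, wrapping a compliant API layer around code that cannot be modified, can be sketched in miniature. The sketch below is a hypothetical broker function that re-shapes a legacy record into a standard envelope without touching the legacy system; the field names and envelope shape are illustrative assumptions, not the actual DINA or Specify 7 schemas (the real API guidelines live at https://github.com/DINA-Web).

```python
import json

def to_compliant_doc(legacy_record, resource_type="collecting-event"):
    """Re-shape a hypothetical legacy record into a JSON:API-style envelope,
    the kind of translation a broker layer such as the "DINA Specify Broker"
    performs so that a non-modifiable backend can present a compliant API.
    All field names here are invented for illustration."""
    return {
        "data": {
            "id": str(legacy_record["id"]),
            "type": resource_type,
            # Everything except the identifier becomes an attribute.
            "attributes": {k: v for k, v in legacy_record.items() if k != "id"},
        }
    }

# A legacy record passes through the broker with its content intact,
# but wrapped in the compliant envelope:
legacy = {"id": 42, "locality": "Eagle Butte", "collector": "J. Smith"}
doc = to_compliant_doc(legacy)
print(json.dumps(doc, indent=2))
```

The design point is that the translation lives entirely in the wrapper: the legacy codebase never changes, which is exactly why this option suits systems that are infeasible or impermissible to modify.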
We will also highlight opportunities for joining the DINA Consortium and how to best engage with members of DINA who share their expertise in natural science, biodiversity informatics and geoinformatics.
APA, Harvard, Vancouver, ISO, and other styles
49

Mizrach, Steven. "Natives on the Electronic Frontier." M/C Journal 3, no. 6 (2000). http://dx.doi.org/10.5204/mcj.1890.

Full text
Abstract:
Introduction Many anthropologists and other academics have attempted to argue that the spread of technology is a global homogenising force, socialising the remaining indigenous groups across the planet into an indistinct Western "monoculture" focussed on consumption, where they are rapidly losing their cultural distinctiveness. In many cases, these intellectuals -– people such as Jerry Mander -- often blame the diffusion of television (particularly through new innovations that are allowing it to penetrate further into rural areas, such as satellite and cable) as a key force in the effort to "assimilate" indigenous groups and eradicate their unique identities. Such writers suggest that indigenous groups can do nothing to resist the onslaught of the technologically, economically, and aesthetically superior power of Western television. Ironically, while often protesting the plight of indigenous groups and heralding their need for cultural survival, these authors often fail to recognise these groups’ abilities to fend for themselves and preserve their cultural integrity. On the other side of the debate are visual anthropologists and others who are arguing that indigenous groups are quickly becoming savvy to Western technologies, and that they are now using them for cultural revitalisation, linguistic revival, and the creation of outlets for the indigenous voice. In this school of thought, technology is seen not so much as a threat to indigenous groups, but instead as a remarkable opportunity to reverse the misfortunes of these groups at the hands of colonisation and national programmes of attempted assimilation. From this perspective, the rush of indigenous groups to adopt new technologies comes hand-in-hand with recent efforts to assert their tribal sovereignty and their independence. Technology has become a "weapon" in their struggle for technological autonomy. 
As a result, many are starting their own television stations and networks, and thus transforming the way television operates in their societies -– away from global monocultures and toward local interests. I hypothesise that in fact there is no correlation between television viewing and acculturation, and that, in fact, the more familiar people are with the technology of television and the current way the technology is utilised, the more likely they are to be interested in using it to revive and promote their own culture. Whatever slight negative effect exists depends on the degree to which local people can understand and redirect how that technology is used within their own cultural context. However, it should be stated that for terms of this investigation, I consider the technologies of "video" and "television" to be identical. One is the recording aspect, and the other the distribution aspect, of the same technology. Once people become aware that they can control what is on the television screen through the instrumentality of video, they immediately begin attempting to assert cultural values through it. And this is precisely what is going on on the Cheyenne River Reservation. This project is significant because the phenomenon of globalisation is real and Western technologies such as video, radio, and PCs are spreading throughout the world, including the "Fourth World" of the planet’s indigenous peoples. However, in order to deal with the phenomenon of globalisation, anthropologists and others may need to deal more realistically with the phenomenon of technological diffusion, which operates far less simply than they might assume. Well-meaning anthropologists seeking to "protect" indigenous groups from the "invasion" of technologies which will change their way of life may be doing these groups a disservice. 
If they turned some of their effort away from fending off these technologies and toward teaching indigenous groups how to use them, perhaps they might have a better result in creating a better future for them. I hope this study will show a more productive model for dealing with technological diffusion and what effects it has on cultural change in indigenous societies. There have been very few authors that have dealt with this topic head-on. One of the first to do so was Pace (1993), who suggested that some Brazilian Indians were acculturating more quickly as a result of television finally coming to their remote villages in the 1960s. Molohon (1984) looked at two Cree communities, and found that the one which had more heavy television viewing was culturally closer to its neighboring white towns. Zimmerman (1996) fingered television as one of the key elements in causing Indian teenagers to lose their sense of identity, thus putting them at higher risk for suicide. Gillespie (1995) argued that television is actually a ‘weapon’ of national states everywhere in their efforts to assimilate and socialise indigenous and other ethnic minority groups. In contrast, authors like Weiner (1997), Straubhaar (1991), and Graburn (1982) have all critiqued these approaches, suggesting that they deny subjectivity and critical thinking to indigenous TV audiences. Each of these researchers suggest, based on their field work, that indigenous people are no more likely than anybody else to believe that the things they see on television are true, and no more likely to adopt the values or worldviews promoted by Western TV programmers and advertisers. In fact, Graburn has observed that the Inuit became so disgusted with what they saw on Canadian national television, that they went out and started their own TV network in an effort to provide their people with meaningful alternatives on their screens. 
Bell (1995) sounds a cautionary note against studies like Graburn’s, noting that the efforts of indigenous New Zealanders to create their own TV programming for local markets failed, largely because they were crowded out by the "media imperialism" of outside international television. Although the indigenous groups there tried to put their own faces on the screen, many local viewers preferred to see the faces of J.R. Ewing and company, and lowered the ratings share of these efforts. Salween (1991) thinks that global media "cultural imperialism" is real -– that it is an objective pursued by international television marketers -– and suggests a media effects approach might be the best way to see whether it works. Woll (1987) notes that historically many ethnic groups have formed their self-images based on the way they have been portrayed onscreen, and that so far these portrayals have been far from sympathetic. In fact, even once these groups started their own cinemas or TV programmes, they unconsciously perpetuated stereotypes first foisted on them by other people. This study tends to side with those who have observed that indigenous people do not tend to "roll over" in the wake of the onslaught of Western television. Although cautionary studies need to be examined carefully, this research will posit that although the dominant forces controlling TV are antithetical to indigenous groups and their goals, the efforts of indigenous people to take control of their TV screens and their own "media literacy" are also increasing. Thus, this study should contribute to the viewpoint that perhaps the best way to save indigenous groups from cultural eradication is to give them access to television and show them how to set up their own stations and distribute their own video programming. 
In fact, it appears to be the case that TV, the Internet, and electronic 'new media' are helping to foster a process of cultural renewal, not just among the Lakota, but also among the Inuit, the Australian aborigines, and other indigenous groups. These new technologies are helping them renew their native languages, cultural values, and ceremonial traditions, sometimes by giving them new vehicles and forms. Methods The research for this project was conducted on the Cheyenne River Sioux Reservation headquartered in Eagle Butte, South Dakota. Participants chosen for this project were Lakota Sioux who were of the age of consent (18 or older) and who were tribal members living on the reservation. They were given a survey which consisted of five components: a demographic question section identifying their age, gender, and individual data; a technology question section identifying what technologies they had in their home; a TV question section measuring the amount of television they watched; an acculturation question section determining their comparative level of acculturation; and a cultural knowledge question section determining their knowledge of Lakota history. This questionnaire was often followed up by unstructured ethnographic interviews. Thirty-three people of mixed age and gender were given this questionnaire, and for the purposes of this research paper, I focussed primarily on their responses dealing with television and acculturation. These people were chosen through strictly random sampling based on picking addresses at random from the phone book and visiting their houses. The television section asked specifically how many hours of TV they watched per day and per week, what shows they watched, what kinds of shows they preferred, and what rooms in their home had TVs. 
The acculturation section asked them questions such as how much they used the Lakota language, how close their values were to Lakota values, and how much participation they had in traditional indigenous rituals and customs. To assure open and honest responses, each participant filled out a consent form, and was promised anonymity of their answers. To avoid data contamination, I remained with each person until they completed the questionnaire. For my data analysis, I attempted to determine if there was any correlation (Pearson’s coefficient r of correlation) between such things as hours of TV viewed per week or years of TV ownership with such things as the number of traditional ceremonies they attended in the past year, the number of non-traditional Lakota values they had, their fluency in the Lakota language, their level of cultural knowledge, or the number of traditional practices and customs they had engaged in in their lives. Through simple statistical tests, I determined whether television viewing had any impact on these variables which were reasonable proxies for level of acculturation. Findings Having chosen two independent variables, hours of TV watched per week, and years of TV ownership, I tested if there was any significant correlation between them and the dependent variables of Lakota peoples’ level of cultural knowledge, participation in traditional practices, conformity of values to non-Lakota or non-traditional values, fluency in Lakota, and participation in traditional ceremonies (Table 1). These variables all seemed like reasonable proxies for acculturation since acculturated Lakota would know less of their own culture, go to fewer ceremonies, and so on. The cultural knowledge score was based on how many complete answers the respondents knew to ‘fill in the blank’ questions regarding Lakota history, historical figures, and important events. 
Participation in traditional practices was based on how many items they marked in a survey of whether or not they had ever raised a tipi, used traditional medicine, etc. The score for conformity to non-Lakota values was based on how many items they marked with a contrary answer to the emic Lakota value system ("the seven Ws"). Lakota fluency was based on how well they could speak, write, or use the Lakota language. And ceremonial attendance was based on the number of traditional ceremonies they had attended in the past year. There were no significant correlations between either of these TV-related variables and these indexes of acculturation.

Table 1. R-Scores (Pearson’s Coefficient of Correlation) between Variables Representing Television and Acculturation

R-SCORES            Cultural Knowledge   Traditional Practices   Modern Values   Lakota Fluency   Ceremonial Attendance
Years Owning TV           0.1399               -0.0445              -0.4646          -0.0660              0.1465
Hours of TV/Week         -0.3414               -0.2640              -0.2798          -0.3349              0.2048

The strongest correlation was between the number of years the Lakota person owned a television, and the number of non-Lakota (or ‘modern Western’) values they held in their value system. But even that correlation was pretty weak, and nowhere near the r-score of other linear correlations, such as between their age and the number of children they had. How much television Lakota people watched did not seem to have any influence on how much cultural knowledge they knew, how many traditional practices they had participated in, how many non-Lakota values they held, how well they spoke or used the Lakota language, or how many ceremonies they attended. Even though there does not appear to be anything unusual about their television preferences, and in general they are watching the same shows as other non-Lakota people on the reservation, they are not becoming more acculturated as a result of their exposure to television.
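The r-scores in Table 1 are Pearson's coefficient of correlation, computed from paired responses (e.g., each respondent's hours of TV per week against their ceremonial attendance). A minimal sketch of that computation follows; the sample values are invented for illustration and are not the study's actual survey responses.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson's coefficient of correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented sample data, not the study's actual responses:
hours_tv = [2, 10, 5, 20, 8]    # hours of TV watched per week
ceremonies = [4, 3, 5, 2, 4]    # traditional ceremonies attended in past year
print(round(pearson_r(hours_tv, ceremonies), 4))  # -0.8635 for this invented sample
```

A value near zero, as in most cells of Table 1, indicates no linear relationship between the television variable and the acculturation proxy, which is the basis for the article's finding of no significant correlation.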
Although the Lakota people may be losing aspects of their culture, language, and traditions, other causes seem to be at the forefront than television. I also found that people who were very interested in television production as well as consumption saw this as a tool for putting more Lakota-oriented programs on the air. The more they knew about how television worked, the more they were interested in using it as a tool in their own community. And where I was working at the Cultural Center, there was an effort to videotape many community and cultural events. The Center had a massive archive of videotaped material, but unfortunately while they had faithfully recorded all kinds of cultural events, many of them were not quite "broadcast ready". There was more focus on showing these video programmes, especially oral history interviews with elders, on VCRs in the school system, and in integrating them into various kinds of multimedia and hypermedia. While the Cultural Center had begun broadcasting (remotely through a radio modem) a weekly radio show, ‘Wakpa Waste’ (Good Morning CRST), on the radio station to the north, KLND-Standing Rock, there had never been any forays into TV broadcasting. The Cultural Center director had looked into the feasibility of putting up a television signal transmission tower, and had applied for a grant to erect one, but that grant was denied. The local cable system in Eagle Butte unfortunately lacked the technology to carry true "local access" programming; although the Channel 8 of the system carried CRST News and text announcements, there was no open channel available to carry locally produced public access programming. The way the cable system was set up, it was purely a "relay" or feed from news and channels from elsewhere. Also, people were investing heavily in satellite systems, especially the new DBS (direct broadcast satellite) receivers, and would not be able to pick up local access programmes anyway. 
The main problem hindering the Lakotas’ efforts to preserve their culture through TV and video was lack of access to broadcast distribution technology. They had the interest, the means, and the stock of programming to put on the air. They had the production and editing equipment, although not the studios to do a "live" show. Were they able to have more local access to and control over TV distribution technology, they would have a potent "arsenal" for resisting the drastic acculturation their community is undergoing. TV has the potential to be a tool for great cultural revitalisation, but because the technology and know-how for producing it was located elsewhere, the Lakotas could not benefit from it. Discussion I hypothesised that the effects of TV viewing on levels of indigenous acculturation would be negligible. The data support my hypothesis that TV does not seem to have a major correlation with other indices of acculturation. Previous studies by anthropologists such as Pace and Molohon suggested that TV was a key determinant in the acculturation of indigenous people in Brazil and the U.S. -- this being the theory of cultural imperialism. However, this research suggests that TV’s effect on the decline of indigenous culture is weak and inconclusive. In fact, the qualitative data suggest that the Lakota most familiar with TV are also the most interested in using it as a tool for cultural preservation. Although the CRST Lakota currently lack the means for mass broadcast of cultural programming, there is great interest in it, and new technologies such as the Internet and micro-broadcast may give them the means. There are other examples of this phenomenon worldwide, which suggest that the Lakota experience is not unique.
In recent years, Australian Aborigines, Canadian Inuit, and Brazilian Kayapo have each begun ambitious efforts in creating satellite-based television networks that allow them to reach their far-flung populations with programming in their own indigenous language. In Australia, Aboriginal activists have created music television programming which has helped them assert their position in land claims disputes with the Australian government (Michaels 1994), and also to educate the Europeans of Australia about the aboriginal way of life. In Canada, the Inuit have also created satellite TV networks which are indigenous-owned and operated and carry traditional cultural programming (Valaskakis 1992). Like the Aborigines and the Inuit, the Lakota through their HVJ Lakota Cultural Center are beginning to create their own radio and video programming on a smaller scale, but are beginning to examine using the reservation's cable network to carry some of this material. Since my quantitative survey included only 33 respondents, the data are not as robust as would be determined from a larger sample. However, ethnographic interviews focussing on how people approach TV, as well as other qualitative data, support the inferences of the quantitative research. It is not clear that my work with the Lakota is necessarily generalisable to other populations. Practically, it does suggest that anthropologists interested in cultural and linguistic preservation should strive to increase indigenous access to, and control of, TV production technology. ‘Protecting’ indigenous groups from new technologies may cause more harm than good. Future applied anthropologists should work with the ‘natives’ and help teach them how to adopt and adapt this technology for their own purposes. 
Although this is a matter that I deal with more intensively in my dissertation, it also appears to me to be the case that, contrary to the warnings of Mander, many indigenous cultures are not being culturally assimilated by media technology, but instead are assimilating the technology into their own particular cultural contexts. The technology is part of a process of revitalisation or renewal -- although there is a definite process of change and adaptation underway, this actually represents an 'updating' of old cultural practices for new situations in an attempt to make them viable for the modern situation. Indeed, I think that the Internet, globally, is allowing indigenous people to reassert themselves as a Fourth World "power bloc" on the world stage, as linkages are being formed between Saami, Maya, Lakota, Kayapo, Inuit, and Aborigines. Further research should focus on: why TV seems to have a greater acculturative influence on certain indigenous groups rather than others; whether indigenous people can truly compete equally in the broadcast "marketplace" with Western cultural programming; and whether attempts to quantify the success of TV/video technology in cultural preservation and revival can truly demonstrate that this technology plays a positive role. In conclusion, social scientists may need to take a sidelong look at why precisely they have been such strong critics of introducing new technologies into indigenous societies. There is a better role that they can play –- that of technology ‘broker’. They can cooperate with indigenous groups, serving to facilitate the exchange of knowledge, expertise, and technology between them and the majority society. References Bell, Avril. "'An Endangered Species’: Local Programming in the New Zealand Television Market." Media, Culture & Society 17.1 (1995): 182-202. Gillespie, Marie. Television, Ethnicity, and Cultural Change. New York: Routledge, 1995. Graburn, Nelson. "Television and the Canadian Inuit". 
Inuit Etudes 6.2 (1982): 7-24. Michaels, Eric. Bad Aboriginal Art: Tradition, Media, and Technological Horizons. Minneapolis: U of Minnesota P, 1994. Molohon, K.T. "Responses to Television in Two Swampy Cree Communities on the West James Bay." Kroeber Anthropology Society Papers 63/64 (1982): 95-103. Pace, Richard. "First-Time Televiewing in Amazonia: Television Acculturation in Gurupa, Brazil." Ethnology 32.1 (1993): 187-206. Salween, Michael. "Cultural Imperialism: A Media Effects Approach." Critical Studies in Mass Communication 8.2 (1991): 29-39. Straubhaar, J. "Beyond Media Imperialism: Asymmetrical Interdependence and Cultural Proximity". Critical Studies in Mass Communication 8.1 (1991): 39-70. Valaskakis, Gail. "Communication, Culture, and Technology: Satellites and Northern Native Broadcasting in Canada". Ethnic Minority Media: An International Perspective. Newbury Park: Sage Publications, 1992. Weiner, J. "Televisualist Anthropology: Representation, Aesthetics, Politics." Current Anthropology 38.3 (1997): 197-236. Woll, Allen. Ethnic and Racial Images in American Film and Television: Historical Essays and Bibliography. New York: Garland Press, 1987. Zimmerman, M. "The Development of a Measure of Enculturation for Native American Youth." American Journal of Community Psychology 24.1 (1996): 295-311. Citation reference for this article MLA style: Steven Mizrach. "Natives on the Electronic Frontier: Television and Cultural Change on the Cheyenne River Sioux Reservation." M/C: A Journal of Media and Culture 3.6 (2000). [your date of access] <http://www.api-network.com/mc/0012/natives.php>. Chicago style: Steven Mizrach, "Natives on the Electronic Frontier: Television and Cultural Change on the Cheyenne River Sioux Reservation," M/C: A Journal of Media and Culture 3, no. 6 (2000), <http://www.api-network.com/mc/0012/natives.php> ([your date of access]). APA style: Steven Mizrach. 
(2000) Natives on the electronic frontier: television and cultural change on the Cheyenne River Sioux Reservation. M/C: A Journal of Media and Culture 3(6). <http://www.api-network.com/mc/0012/natives.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
50

Holmes, Ashley M. "Cohesion, Adhesion and Incoherence: Magazine Production with a Flickr Special Interest Group." M/C Journal 13, no. 1 (2010). http://dx.doi.org/10.5204/mcj.210.

Full text
Abstract:
This paper provides embedded, reflective practice-based insight arising from my experience collaborating to produce online and print-on-demand editions of a magazine showcasing the photography of members of haphazart! Contemporary Abstracts group (hereafter referred to as haphazart!). The group’s online visual, textual and activity-based practices via the photo sharing social networking site Flickr are portrayed as achieving cohesive visual identity. Stylistic analysis of pictures in support of this claim is not attempted. Rather, negotiation, which Elliot has previously described in M/C Journal as innate in collaboration, is identified as the unifying factor. However, the collaborators’ adherence to Flickr’s communication platform proves problematic in the editorial context. Some technical incoherence with possible broader cultural implications is encountered during the process of repurposing images from screen to print. A Scan of Relevant Literature The photographic gaze perceives and captures objects which seem to ‘carry within them ready-made’ a work of art. But the reminiscences of the gaze are only made possible by knowing and associating with groups that define a tradition. The list of valorised subjects is not actually defined with reference to a culture, but rather by familiarity with a limited group. (Chamboredon 144) As part of the array of socio-cultural practices afforded by Web 2.0 interoperability, sites of produsage (Bruns) are foci for studies originating in many disciplines. Flickr provides a rich source of data that researchers interested in the interface between the technological and the social find useful to analyse. Access to the Flickr application programming interface enables quantitative researchers to observe a variety of means by which information is propagated, disseminated and shared. Some findings from this kind of research confirm the intuitive. For example, Negoescu et al. 
find that “a large percentage of users engage in sharing with groups and that they do so significantly” ("Analyzing Flickr Groups" 425). They suggest that Flickr’s Groups feature appears to “naturally bring together two key aspects of social media: content and relations.” They also find evidence for what they call hyper-groups, which are “communities consisting of groups of Flickr groups” ("Flickr Hypergroups" 813). Two separate findings from another research team appear to contradict each other. On one hand, describing what they call “social cascades,” Cha et al. claim that “content in the form of ideas, products, and messages spreads across social networks like a virus” ("Characterising Social Cascades"). Yet in 2009 they claim that homocity and reciprocity ensure that “popularity of pictures is localised” ("Measurement-Driven Analysis"). Mislove et al. reflect that the affordances of Flickr influence the growth patterns they observe. There is optimism shared by some empiricists that through collation and analysis of Flickr tag data, the matching of perceptual structures of images and image annotation techniques will yield ontology-based taxonomy useful in automatic image annotation and ultimately, the Semantic Web endeavour (Kennedy et al.; Su et al.; Xu et al.). Qualitative researchers using ethnographic interview techniques also find Flickr a valuable resource. In concluding that the photo sharing hobby is for many a “serious leisure” activity, Cox et al. propose that “Flickr is not just a neutral information system but also value laden and has a role within a wider cultural order.” They also suggest that “there is genuinely greater scope for individual creativity, releasing the individual to explore their own identity in a way not possible with a camera club.” Davies claims that “online spaces provide an arena where collaboration over meanings can be transformative, impacting on how individuals locate themselves within local and global contexts” (550). 
She says that through shared ways of describing and commenting on images, Flickrites develop a common criticality in their endeavour to understand images, each other and their world (554). From a psychologist’s perspective, Suler observes that “interpersonal relationships rarely form and develop by images alone” ("Image, Word, Action" 559). He says that Flickr participants communicate in three dimensions: textual (which he calls “verbal”), visual, and via the interpersonal actions that the site affords, such as Favourites. This latter observation can surely be supplemented by including the various games that groups configure within the constraints of the discussion forums. These often include submissions to a theme and voting to select a winning image. Suler describes the place in Flickr where one finds identity as one’s “cyberpsychological niche” (556). However, many participants subscribe to multiple groups—45.6% of Flickrites who share images share them with more than 20 groups (Negoescu et al., "Analyzing Flickr Groups" 420). Is this a reflection of the existence of the hyper-groups they describe (2009), or of the ranging that people do in search of a niche? It is also probable that some people explore more than a singular identity or visual style. Harrison and Bartell suggest that there are more interesting questions than why users create media products or what motivates them to do so: the more interesting questions center on understanding what users will choose to do ultimately with [Web2.0] capabilities [...] in what terms to define the success of their efforts, and what impact the opportunity for individual and collaborative expression will have on the evolution of communicative forms and character. (167) This paper addresses such questions. It arises from a participatory observational context which differs from that of the research described above. 
It is intended to offer a different perspective on online group-based participation within the Flickr social networking matrix. However, it will be seen that the themes cited in this introductory review prove pertinent.

Context

As a university teacher of a range of subjects in the digital media field, from contemporary photomedia to social media to collaborative multimedia practice, it is entirely appropriate that I embed myself in projects that engage, challenge and provide me with relevant first-hand experience. As an academic I also undertake and publish research. As a practicing new media artist I exhibit publicly on a regular basis and consider myself semi-professional with respect to this activity. While there are common elements to both approaches to research, this paper is written more from the point of view of ‘reflective practice’ (Holmes, "Reconciling Experimentum") than ‘embedded ethnography’ (Pink). It is necessarily and unapologetically reflexive.

Abstract Photography Hyper-Group

A search of all Flickr groups using the query “abstract” is currently likely to return around 14,700 results. However, only in around thirty of them do the group name, its stated rules, and the stream of images that flow through the pool arguably reflect a sense of collective concept and aesthetic that is coherently abstract. This loose complex of groups comprises a hyper-group. Members of these groups often have co-memberships, reciprocal contacts, and regularly post images to a range of groups and comment on others’ posts to be found throughout. Given that one of Flickr’s largest groups, Black and White, currently has around 131,150 members and hosts 2,093,241 items in its pool, these abstract special interest groups are relatively small. The largest, Abstract Photos, has 11,338 members and hosts 89,306 items in its pool. The group that is the focus of this paper, haphazart!, currently has 2,536 members who have submitted 53,309 items.
The group pool is more like a constantly flowing river because the most recently added images are foremost. Older images become buried in an archive of pages which cannot be reverse accessed at a rate greater than the seven pages linked from a current view. A member’s presence is most immediate through images posted to a pool. This structural feature of Flickr promotes a desire for currency; a need to post regularly to maintain presence.

Negotiating Coherence to the Abstract

The self-managing social dynamics in groups have, as Suler proposes to be the case for individuals, three dimensions: visual, textual and action. A group integrates the diverse elements, relationships and values which cumulatively constitute its identity with contributions from members in these dimensions. First impressions of that identity are usually derived from the group home page, which consists of several principal features: the group name, a selection of the twelve most recent posts to the pool, some kind of description, a selection of six of the most recent discussion topics, and a list of rules (if any). In some of these groups, what is considered to constitute an abstract photographic image is described on the group home page. In some it is left to be contested and becomes the topic of ongoing forum debates. In others the specific issue is not discussed—the images are left to speak for themselves. Administrators of some groups require that images are vetted for acceptance. In haphazart! particular administrators dutifully delete from the pool on a regular basis any images that they deem not to comply with the group ethic. Whether reasons are given or not is left to the individual prosecutor. Mostly offending images just disappear from the group pool without trace. These are some of the ways that the coherence of a group’s visual identity is established and maintained. Two groups out of the abstract photography hyper-group are noteworthy in that their discussion forums are particularly active.
A discussion is just the start of a new thread and may have any number of posts under it. At time of writing Abstract Photos has 195 discussions and haphazart!—the most talkative by this measure—has 333. Haphazart! invites submissions of images to regularly changing themes. There is always lively and idiosyncratic banter in the forum over the selection of a theme. To be submitted, an image needs to be identified by a specific theme tag as announced on the group home page. The tag can be added by the photographer themselves or by anyone else who deems the image appropriate to the theme. An exhibition process ensues. Participant curators search all Flickr items according to the theme tag and select from the outcome the images they deem to most appropriately and abstractly address the theme. Copies of the images together with comments by the curators are posted to a dedicated discussion board. Other members may also provide responses. This activity forms an ongoing record that may serve as a public indicator of the aesthetic that underlies the group’s identity. In Abstract Photos there is an ongoing discussion forum where one can submit an image and request that the moderators rule as to whether or not the image is ‘abstract’. The same group has ongoing discussions labelled “Hall of Appropriate”, where worthy images are reposted and celebrated, and “Hall of Inappropriate”, where images posted to the group pool have been removed and relegated because abstraction has been “so far stretched from its definition that it now resides in a parallel universe” (Askin). Reasons are mostly courteously provided. In haphazart! a relatively small core of around twelve group members regularly contribute to the group discussion board.
A curious aspect of this communication is that even though participants present visually with a ‘buddy icon’, and most with a screen name that is not their real name, it is usual practice to address each other in discussions by their real Christian names, even when this is not evident in a member’s profile. This seems to indicate a common desire for authenticity. The makeup of the core varies from time to time depending on other activities in a member’s life. Although one or two may be professionally or semi-professionally engaged as photographers or artists or academics, most of these people would likely consider themselves to be “serious amateurs” (Cox et al.). They are internationally dispersed with a bias to the US, UK, Europe and Australia. English is the common language though not the natural tongue of some. The age range is approximately 35 to 65 and the gender mix 50/50. The group is three years old.

Where Do We Go to from Here?

In early January 2009 the haphazart! core was sparked into a frenzy of discussion by a post from a member headed “Where do we go to from here?” A proposal was mooted to produce a ‘book’ featuring images and texts representative of the group. Within three days a new public group with invited membership dedicated to the idea had been established. A smaller working party then retreated to a private Flickr group. Four months later Issue One of haphazart! magazine was available in print-on-demand and online formats. What follows, however, is a brief, critically reflective review of some of the collaborative curatorial, editorial and production processes for Issue Two, which commenced in early June 2009. Most of the team had also been involved with Issue One. I was the only newcomer and replaced the person who had undertaken the design for Issue One.
I was not provided access to the prior private editorial ruminations, but apparently the collaborative curatorial and editorial decision-making practices the group had previously established persisted, and these took place entirely within the discussion forums of a new dedicated private Flickr group. Over a five-month period there were 1066 posts in 54 discussions concerning matters such as: change of format from the previous issue; selection of themes, artists and images; conduct and editing of interviews; authoring of texts; copyright and reproduction. The idiom of those communications can be described as: discursive, sporadic, idiosyncratic, resourceful, collegial, cooperative, emphatic, earnest and purposeful. The selection process could not be said to follow anything close to a shared manifesto or articulation of style. It was established that there would be two primary themes: the square format and contributors’ use of colour. Selection progressed by way of visual presentation and counter-presentation until some kind of consensus was reached, often involving informal votes of preference.

Stretching the Limits of the Flickr Social Tools

The magazine editorial collaborators continue to use the facilities with which they are familiar from regular Flickr group participation. However, the strictly vertical, linear format of Flickr discussions is particularly unsuited to lengthy, complex, asynchronous, multithreaded discussion. For this purpose it causes unnecessary strain, fatigue and confusion. Where images are included, the forums impose fixed maximum display sizes and are not flexibly configured into matrices. Images cannot readily be communally changed or moved about like texts in a wiki. Likewise, the Flickrmail facility is of limited use for specialist editorial processes. Attachments cannot be added.
This opinion, expressed by a collaborator in the initial, open discussion for Issue One, prevailed among Issue Two participants: do we want the members to go to another site to observe what is going on with the magazine? if that’s ok, then using google groups or something like that might make sense; if we want others to observe (and learn from) the process - we may want to do it here [in Flickr]. (Valentine) The opinion appears socially constructive; but because the final editorial and production processes took place in a separate private forum, ultimately the suggested learning between one issue and the next did not take place. During Issue Two development, the reluctance to try other online collaboration tools persisted, even for the selection processes requiring comparative visual evaluation of images and trials of sequencing. A number of ingenious methods of working within Flickr were devised and deployed and, in my opinion, proved frustratingly impractical and inefficient. The digital layout, design, collation and formatting of images and texts all took place on my personal computer using professional software tools. Difficulties arose in progressively sharing this work for the purposes of review, appraisal and proofing. Eventually I ignored protests and insisted the team review demonstrations I had converted for sharing in Google Documents. But, with only one exception, I could not tempt collaborators to try commenting or editing in that environment. For example, instead of moving the sequence of images dynamically themselves, or even typing suggestions directly into Google Documents, they would post responses in Flickr.

To Share and to Hold

From the first imaginings of Issue One the need to have as an outcome something in one’s hands was expressed, and this objective is apparently shared by all in the haphazart! core as an ongoing imperative. Various printing options have been nominated, discussed and evaluated.
In the end one print-on-demand provider was selected on the basis of recommendation. The ethos of haphazart! is clearly not profit-making and conflicts with that of the printing organisation. Presumably to maintain an incentive to purchase the print copy, online preview is restricted to the first 15 pages. To satisfy the co-requisite to make available the full 120 pages for free online viewing, a second host that specialises in online presentation of publications is also utilised. In this way haphazart! members satisfy their common desires for sharing selected visual content and ideas with an online special interest audience and for a physical object of art to relish—with all the connotations of preciousness, fetish, talisman, trophy, and bookish notions of haptic pleasure and visual treasure. The irony of publishing a frozen chunk of the ever-flowing Flickriver, whose temporally changing nature is arguably one of its most interesting qualities, is not a consideration. Most of them profess to be simply satisfying their own desire for self-expression and would eschew any critical judgement as to whether this anarchic and discursive mode of operation results in a coherent statement about contemporary photographic abstraction. However, there remains a distinct possibility that a number of core haphazart!ists aspire to transcend: popular taste; the discernment encouraged in camera clubs; and the rhetoric of those involved professionally (Bourdieu et al.); and seek to engage with the “awareness of illegitimacy and the difficulties implied by the constitution of photography as an artistic medium” (Chamboredon 130).

Incoherence: A Technical Note

My personal experience of photography ranges from the filmic to the digital (Holmes, "Bridging Adelaide"). For a number of years I specialised in facsimile graphic reproduction of artwork. In those days I became aware that films were ‘blind’ to the psychophysical effect of some few particular paint pigments.
They just could not be reproduced. Even so, as I handled the dozens of images contributed to haphazart!2, converting them from the pixellated place where Flickr exists to the resolution and gamut of the ink-based colour space of books, I was surprised at the number of hue values that exist in the former that do not translate into the latter. In some cases the effect is subtle, so that judicious tweaking of colour levels or local colour adjustment will satisfy discerning comparison between the screenic original and the ‘soft proof’ that simulates the printed outcome. In other cases a conversion simply does not compute. I am moved to contemplate, along with Harrison and Barthel, just how much of the experience of media in the shared digital space is incomparably new.

Acknowledgement

Acting on the advice of researchers experienced in cyberethnography (Bruckman; Suler, "Ethics") I have obtained the consent of co-collaborators to comment freely on proceedings that took place in a private forum. They have been given the opportunity to review and suggest changes to the account.

References

Askin, Dean (aka: dnskct). “Hall of Inappropriate.” Abstract Photos/Discuss/Hall of Inappropriate, 2010. 12 Jan. 2010 ‹http://www.flickr.com/groups/abstractphotos/discuss/72157623148695254/›.
Bourdieu, Pierre, Luc Boltanski, Robert Castel, Jean-Claude Chamboredon, and Dominique Schnapper. Photography: A Middle-Brow Art. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990.
Bruckman, Amy. Studying the Amateur Artist: A Perspective on Disguising Data Collected in Human Subjects Research on the Internet. 2002. 12 Jan. 2010 ‹http://www.nyu.edu/projects/nissenbaum/ethics_bru_full.html›.
Bruns, Axel. “Towards Produsage: Futures for User-Led Content Production.” Proceedings: Cultural Attitudes towards Communication and Technology 2006. Perth: Murdoch U, 2006. 275–84.
———, and Mark Bahnisch. Social Media: Tools for User-Generated Content. Vol. 1 – “State of the Art.” Sydney: Smart Services CRC, 2009.
Cha, Meeyoung, Alan Mislove, Ben Adams, and Krishna P. Gummadi. “Characterizing Social Cascades in Flickr.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 13–18.
———, Alan Mislove, and Krishna P. Gummadi. “A Measurement-Driven Analysis of Information Propagation in the Flickr Social Network.” WWW ’09: Proceedings of the 18th International Conference on World Wide Web. ACM, 2009. 721–30.
Chamboredon, Jean-Claude. “Mechanical Art, Natural Art: Photographic Artists.” Photography: A Middle-Brow Art. Pierre Bourdieu et al. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990. 129–49.
Cox, A.M., P.D. Clough, and J. Marlow. “Flickr: A First Look at User Behaviour in the Context of Photography as Serious Leisure.” Information Research 13.1 (March 2008). 12 Dec. 2009 ‹http://informationr.net/ir/13-1/paper336.html›.
Davies, Julia. “Display, Identity and the Everyday: Self-Presentation through Online Image Sharing.” Discourse: Studies in the Cultural Politics of Education 28.4 (Dec. 2007): 549–64.
Elliott, Mark. “Stigmergic Collaboration: The Evolution of Group Work.” M/C Journal 9.2 (2006). 12 Jan. 2010 ‹http://journal.media-culture.org.au/0605/03-elliott.php›.
Harrison, Teresa M., and Brea Barthel. “Wielding New Media in Web 2.0: Exploring the History of Engagement with the Collaborative Construction of Media Products.” New Media & Society 11.1-2 (2009): 155–78.
Holmes, Ashley. “‘Bridging Adelaide 2001’: Photography and Hyperimage, Spanning Paradigms.” VSMM 2000 Conference Proceedings. International Society for Virtual Systems and Multimedia, 2000. 79–88.
———. “Reconciling Experimentum and Experientia: Reflective Practice Research Methodology for the Creative Industries.” Speculation & Innovation: Applying Practice-Led Research in the Creative Industries. Brisbane: QUT, 2006.
Kennedy, Lyndon, Mor Naaman, Shane Ahern, Rahul Nair, and Tye Rattenbury. “How Flickr Helps Us Make Sense of the World: Context and Content in Community-Contributed Media Collections.” MM ’07. ACM, 2007.
Miller, Andrew D., and W. Keith Edwards. “Give and Take: A Study of Consumer Photo-Sharing Culture and Practice.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2007. 347–56.
Mislove, Alan, Hema Swetha Koppula, Krishna P. Gummadi, Peter Druschel, and Bobby Bhattacharjee. “Growth of the Flickr Social Network.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 25–30.
Negoescu, Radu-Andrei, and Daniel Gatica-Perez. “Analyzing Flickr Groups.” CIVR ’08: Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval. ACM, 2008. 417–26.
———, Brett Adams, Dinh Phung, Svetha Venkatesh, and Daniel Gatica-Perez. “Flickr Hypergroups.” MM ’09: Proceedings of the Seventeenth ACM International Conference on Multimedia. ACM, 2009. 813–16.
Pink, Sarah. Doing Visual Ethnography: Images, Media and Representation in Research. 2nd ed. London: Sage, 2007.
Su, Ja-Hwung, Bo-Wen Wang, Hsin-Ho Yeh, and Vincent S. Tseng. “Ontology-Based Semantic Web Image Retrieval by Utilizing Textual and Visual Annotations.” 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology – Workshops. 2009.
Suler, John. “Ethics in Cyberspace Research: Consent, Privacy and Contribution.” The Psychology of Cyberspace. 1996. 12 Jan. 2010 ‹http://www-usr.rider.edu/~suler/psycyber/psycyber.html›.
———. “Image, Word, Action: Interpersonal Dynamics in a Photo-Sharing Community.” Cyberpsychology & Behavior 11.5 (2008): 555–60.
Valentine, Mark. “HAPHAZART! Magazine/Discuss/image selections…” [discussion post]. 2009. 12 Jan. 2010 ‹http://www.flickr.com/groups/haphazartmagazin/discuss/72157613147017532/›.
Xu, Hongtao, Xiangdong Zhou, Mei Wang, Yu Xiang, and Baile Shi. “Exploring Flickr’s Related Tags for Semantic Annotation of Web Images.” CIVR ’09. ACM, 2009.
