Journal articles on the topic 'Microsoft FrontPage (computer file)'

Consult the top 46 journal articles for your research on the topic 'Microsoft FrontPage (computer file).'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Alzuwandi and Jalinus. "MEDIA PEMBELAJARAN MATEMATIKA BERBASIS KOMPUTER PADA MATERI FUNGSI." Jurnal Prinsip Pendidikan Matematika 1, no. 2 (May 30, 2019): 52–61. http://dx.doi.org/10.33578/prinsip.v1i2.29.

Full text
Abstract:
Computer technology can be used to create interactive, multimedia learning media. As technology develops, there is a need for computer-based learning media to help learners and teachers, especially in mathematics. In schools, teachers often present material only on cardboard media affixed to the blackboard, which the students themselves do not handle; so far teachers have taught mathematics using only textbooks and LKS worksheets, even though they already have a computer or laptop and the school already has facilities that support computer-based media, such as an LCD projector. One solution is to create learning media that are interactive and interesting for learners. This study aims to develop valid computer-based learning media to help students understand functions. The research steps are to analyze the syllabus, RPP (lesson plans) and textbooks, collect the required images and animations, create paper-based media designs, create the media in PowerPoint, create the FrontPage component, and have the result validated by experts and practitioners. The programs used to design the computer-based product are Microsoft FrontPage and Microsoft PowerPoint. The learning media were validated by three experts (validators) and then revised according to the validators' advice. The learning media are packaged as a compact disc (CD) with a user manual.
APA, Harvard, Vancouver, ISO, and other styles
2

Roza, Yenita, Putri Yuanita, Sehatta Saragih, Hadiyanta Alfajri, and Andespa Saputra. "Computer-Based Media for Learning Geometry at Mathematics Class of Secondary Schools." JOURNAL OF EDUCATIONAL SCIENCES 1, no. 1 (September 4, 2017): 79. http://dx.doi.org/10.31258/jes.1.1.p.79-91.

Full text
Abstract:
This research is aimed at developing computer-based media for mathematics learning. The media use an interactive model to help students understand the topic of lines, angles and quadrilaterals. This development research applied the development model of Borg and Gall as modified by Sugiyono. The study began by identifying potential materials and their problems, followed by a study of the literature. The subject matter is lines, angles and quadrilaterals, including the rectangle, square and trapezium. Product design was done in two stages: paper-based design and computer-based design. The applications used for product design are Microsoft FrontPage, Microsoft PowerPoint and Photoshop. The learning media were validated by three experts (validators) and revised based on their input. The revised learning media were tested in two stages: a small-group test consisting of five respondents and a large-group test consisting of forty respondents. Based on the analysis of the data and the discussion, it can be concluded that the computer-based media for mathematics learning are valid, with an average score of 3.17 on the materials aspect and 3.18 on the media aspect. The computer-based media for mathematics learning were also practical, with an average score of 97.92% in the small-group test and 99.22% in the large-group test.
APA, Harvard, Vancouver, ISO, and other styles
3

Glen Ferguson, David. "Redefining File Slack in Microsoft® NTFS." Journal of Digital Forensic Practice 2, no. 3 (December 9, 2008): 140–56. http://dx.doi.org/10.1080/15567280802587965.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Schuster, Andreas. "Introducing the Microsoft Vista event log file format." Digital Investigation 4 (September 2007): 65–72. http://dx.doi.org/10.1016/j.diin.2007.06.015.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Taryanto, Ardi, and Lilis Nur Handayani. "PENGEMBANGAN SISTEM INFORMASI RETENSI REKAM MEDIS DI RUMAH SAKIT DUSTIRA CIMAHI." Jurnal E-Komtek (Elektro-Komputer-Teknik) 3, no. 2 (November 6, 2019): 62–70. http://dx.doi.org/10.37339/e-komtek.v3i2.131.

Full text
Abstract:
This study aimed to develop a medical record retention information system with a case study at Dustira Cimahi Hospital. The data collection techniques used were observation, supplemented by a literature study related to the problem. The software was developed using the waterfall method. From the research conducted, several problems were found in the retention information system currently in use: retention is carried out by sorting medical record files manually without a computer, there is no retention schedule for medical record files, and there is no storage space for inactive and retired medical record files. It is therefore necessary to develop a medical record retention information system in the form of an application implemented with the Microsoft Visual Basic .NET 2010 programming language and the Microsoft Access database management system (DBMS). This application is expected to help hospitals overcome file retention problems so that retention data processing can be better in the future.
APA, Harvard, Vancouver, ISO, and other styles
6

Tukino, Tukino. "Pelatihan Microsoft Office 2010 pada Kelompok Kerja Kepala TK (K3TK) dan Gugus Paud Naga di Kota Batam sebagai Aplikasi dari Pembelajaran Komputer pada Anak Usia Dini." J-ABDIPAMAS (Jurnal Pengabdian Kepada Masyarakat) 2, no. 2 (October 31, 2018): 65. http://dx.doi.org/10.30734/j-abdipamas.v2i2.245.

Full text
Abstract:
The material provided in this training is divided into three groups: Microsoft Word, Microsoft PowerPoint and Microsoft Paint. Activities are carried out according to the ability/skill level of each teacher. Teachers who are already proficient are grouped separately from teachers who are still new to computers so that the coaching can be more intensive. Novice teachers are guided and accompanied by the instructor, from the procedure for turning on the computer, opening files, saving files, using the mouse, basic typing and an introduction to computer parts, through to the procedure for turning off the computer. The methods applied in this activity are presentation, demonstration and practice; the presentation method is used to introduce the software using Microsoft PowerPoint 2010. The results of this study are: first, the training provided material related to improving the quality of learning by teaching participants how to use PowerPoint to create interactive games as attractive, interactive ICT-based multimedia learning media; second, the material presented could be received, digested and understood well by the participants; third, the activity proceeded smoothly, on time and as expected. Keywords: Microsoft Word, Microsoft PowerPoint, Microsoft Paint.
APA, Harvard, Vancouver, ISO, and other styles
7

Sukarata, Putu Gde, I. Gede Suputra Widharma, I. Made Purbhawa, and I. Gede Wahyu Antara Kurniawan. "OPTIMALISASI COMPUTER MEMORY USAGE MENGGUNAKAN METODE APLIKASI MICROSOFT BINDER." JURNAL INTEGRASI 11, no. 2 (October 17, 2019): 74–80. http://dx.doi.org/10.30871/ji.v11i2.1650.

Full text
Abstract:
Computer technology today has developed far beyond previous years. This development has occurred in all areas of life, such as medicine, industry, agriculture and others. Technology also helps make many human tasks easier, faster and more efficient. Computer technology consists of hardware and software, and the software may be systems or applications. Running these systems and applications requires hardware in the form of storage space, called memory, which can be temporary or permanent. The amount of memory available affects how the computer system works and how the applications run: the more applications are used, the more the available memory fills up. Microsoft Binder is a little-known feature that serves as a binder for documents produced by various applications. Like a binder clip, Microsoft Binder keeps related documents together; it allows various files to be combined into one for easier management. In this way the amount of memory in use is reduced, so that the computer works more optimally.
APA, Harvard, Vancouver, ISO, and other styles
8

Aqilah Mohd Nahar, Nur Farah, Nurul Hidayah Ab Rahman, and Kamarudin Malik Mohammad. "E-Raser: File Shredder Application With Content Replacement by Using Random Words Function." JOIV : International Journal on Informatics Visualization 2, no. 4-2 (September 10, 2018): 313. http://dx.doi.org/10.30630/joiv.2.4-2.175.

Full text
Abstract:
Data shredding refers to a process of irreversible file destruction, while a file shredder is a program designed to render computer files unreadable by overwriting the data in the content of a file. The problem addressed here is that the existence of file recovery tools may lead to data leakage, exploitation or dissemination by an unauthorized person. This study therefore proposes a file shredding application named E-Raser, which replaces the content of a file using a random-words function algorithm. E-Raser was developed to shred Microsoft Word documents in (.doc) or (.docx) format. The implemented algorithm replaces the original content of the files with uninformative words provided by the application. After the rewriting phase is complete, the shredding process takes place to make the file unrecoverable. Object-Oriented Software Development was used as the methodology to develop this application. As a result, E-Raser achieved its objectives to add, remove, rewrite, display and shred files. E-Raser also significantly helps users dispose of their files securely and protect the confidentiality and privacy of the files' content.
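The overwrite-then-delete idea described above can be illustrated with a short, hypothetical Python sketch (not the authors' C#/application code): the file's bytes are overwritten with random lowercase "words" before the file is removed.

```python
import os
import random
import string

def shred_file(path, passes=1):
    """Overwrite a file with random ASCII 'words' before deleting it.

    Illustrative only: a real shredder must also consider filesystem
    journaling, temporary copies, and SSD wear levelling.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            written = 0
            while written < size:
                # Build a short random "word" followed by a space.
                word = "".join(random.choices(string.ascii_lowercase,
                                              k=random.randint(3, 10))) + " "
                chunk = word.encode("ascii")[: size - written]
                f.write(chunk)
                written += len(chunk)
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)  # finally unlink the overwritten file

# Example: shred_file("draft.docx")
```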
APA, Harvard, Vancouver, ISO, and other styles
9

Hanner, K., and R. Hörmanseder. "Managing Windows NT® file system permissions—A security tool to master the complexity of Microsoft Windows NT® file system permissions." Journal of Network and Computer Applications 22, no. 2 (April 1999): 119–31. http://dx.doi.org/10.1006/jnca.1999.0086.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Eladawi, A. E., E. S. Gadelmawla, I. M. Elewa, and A. A. Abdel-Shafy. "An application of computer vision for programming computer numerical control machines." Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture 217, no. 9 (September 1, 2003): 1315–24. http://dx.doi.org/10.1243/095440503322420241.

Full text
Abstract:
Generation of the part programs, or tool paths, for products to be manufactured by computer numerical control (CNC) machines is very important. Many methods have been used to produce part programs, ranging from manual calculations to computer aided design/ manufacturing (CAD/CAM) systems. This work introduces a new technique for generating the part programs of existing products using the latest technology of computer vision. The proposed vision system is applicable for two-dimensional vertical milling CNC machines and is calibrated to produce both metric and imperial dimensions. Two steps are used to generate the part program. In the first step, the vision system is used to capture an image for the product to be manufactured. In the second step, the image is processed and analysed by software specially written for this purpose. The software CNCVision is fully written (in lab) using Microsoft Visual C++ 6.0. It is ready to run on any Windows environment. The CNCVision software processes the captured images and applies computer vision techniques to extract the product dimensions, then generates a suitable part program. All required information for the part program is calculated automatically, such as G-codes, X and Y coordinates of start-points and end-points, radii of arcs and circles and direction of arcs (clockwise or counterclockwise). The generated part program can be displayed on screen, saved to a file or sent to MS Word or MS Excel. In addition, the engineering drawing of the product can be displayed on screen or sent to AutoCAD as a drawing file.
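As a rough illustration of the image-to-part-program idea (not the CNCVision C++ implementation, which also generates arcs, handles calibration and exports to Word, Excel and AutoCAD), the Python/OpenCV sketch below traces external contours and emits simple G00/G01 moves; the pixel-to-millimetre scale and feed rate are assumed values.

```python
import cv2  # OpenCV for image processing

def contours_to_gcode(image_path, mm_per_pixel=0.1, feed=200):
    """Trace external part contours and emit straight-line (G01) moves."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    lines = ["G21 ; millimetres", "G90 ; absolute coordinates"]
    for contour in contours:
        pts = [(float(p[0][0]) * mm_per_pixel, float(p[0][1]) * mm_per_pixel)
               for p in contour]
        if not pts:
            continue
        x0, y0 = pts[0]
        lines.append(f"G00 X{x0:.3f} Y{y0:.3f} ; rapid move to contour start")
        for x, y in pts[1:] + [pts[0]]:  # close the loop back to the start point
            lines.append(f"G01 X{x:.3f} Y{y:.3f} F{feed}")
    return "\n".join(lines)

# Example: print(contours_to_gcode("part_photo.png"))
```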
APA, Harvard, Vancouver, ISO, and other styles
11

Yan, Jinpei, Yong Qi, and Qifan Rao. "Detecting Malware with an Ensemble Method Based on Deep Neural Network." Security and Communication Networks 2018 (2018): 1–16. http://dx.doi.org/10.1155/2018/7247095.

Full text
Abstract:
Malware detection plays a crucial role in computer security. Recent research mainly uses machine learning based methods that rely heavily on domain knowledge for manually extracting malicious features. In this paper, we propose MalNet, a novel malware detection method that learns features automatically from the raw data. Concretely, we first generate a grayscale image from the malware file and extract its opcode sequences with the decompilation tool IDA. MalNet then uses CNN and LSTM networks to learn from the grayscale image and the opcode sequence, respectively, and uses a stacking ensemble for malware classification. We perform experiments on more than 40,000 samples, including 20,650 benign files collected from online software providers and 21,736 malware samples provided by Microsoft. The evaluation shows that MalNet achieves 99.88% validation accuracy for malware detection. In addition, we carry out a malware family classification experiment on 9 malware families to compare MalNet with related works; MalNet outperforms most of them with 99.36% detection accuracy and achieves a considerable speed-up in detection efficiency compared with two state-of-the-art results on the Microsoft malware dataset.
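The grayscale-image step mentioned in the abstract is a common technique: each byte of the binary is treated as one 8-bit pixel. A minimal Python sketch, assuming a fixed image width of 256 pixels (the paper's exact width is not specified here):

```python
import numpy as np
from PIL import Image

def binary_to_grayscale(path, width=256):
    """Map a binary file's bytes to a 2-D grayscale image (one byte = one pixel)."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = int(np.ceil(data.size / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: data.size] = data          # zero-pad the last row
    return Image.fromarray(padded.reshape(height, width), mode="L")

# Example: binary_to_grayscale("sample.exe").save("sample.png")
```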
APA, Harvard, Vancouver, ISO, and other styles
12

Adhima, Fauzan, and Fathiah Fathiah. "TRAFFIC COUNTING STUDI KASUS DI JALAN TEUKU NYAK ARIEF." JOURNAL OF INFORMATICS AND COMPUTER SCIENCE 4, no. 2 (December 6, 2019): 75. http://dx.doi.org/10.33143/jics.vol5.iss1.510.

Full text
Abstract:
Banda Aceh is a tourist city whose population largely relies on modes of transportation for daily needs, from bicycles and motorcycles to cars and public transport. The population of Banda Aceh grows from year to year, causing an imbalance between the number of vehicles on the road and the available road capacity. This causes various traffic problems, so traffic-flow monitoring on particular roads is needed to assess how smoothly traffic is moving. This led to the idea of designing a traffic counting system that can count the flow of vehicles and serve as a source of traffic information. The system was built using Microsoft Visual Studio and the Open Source Computer Vision library. The system was tested using video files recorded on Teuku Nyak Arief street in Banda Aceh. Overall, the program achieved a success rate of more than 80%. The system is expected to make it easier to count the number of vehicles on that road. Keywords: vehicle count, traffic flow, Microsoft Visual Studio, Open Source Computer Vision.
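The study's counter was built with Visual Studio and OpenCV; purely as a hedged illustration of the general approach (not the authors' code), the Python/OpenCV sketch below applies background subtraction to a recorded video and counts moving blobs per frame. A production counter would add tracking and a counting line so each vehicle is counted exactly once.

```python
import cv2

def count_vehicles(video_path, min_area=1500):
    """Rough vehicle counter: background subtraction + blob detection per frame."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=40)
    per_frame_counts = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                              # moving pixels
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > min_area]
        per_frame_counts.append(len(blobs))
    cap.release()
    return per_frame_counts
```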
APA, Harvard, Vancouver, ISO, and other styles
13

Larbi, Peter Ako. "Advancing Microsoft Excel’s Potential for Teaching Digital Image Processing and Analysis." Applied Engineering in Agriculture 34, no. 2 (2018): 263–76. http://dx.doi.org/10.13031/aea.12221.

Full text
Abstract:
Abstract. Microsoft Excel is not considered a typical software for digital image processing and analysis. However, based on its large data handling and graphing capabilities, as well as its widespread usage, it presents a good opportunity for use as a tool for teaching image data processing or use in demonstrations requiring little training. It also lends itself well as a potentially useful research tool that can benefit a wide range of users including those with little or no computer programming knowledge. This article demonstrates a new method which can be adopted for teaching concepts of image processing and analysis, consisting of systematic procedures for implementing typical operations in Excel. Categories of operations demonstrated using this method include image preprocessing, image enhancement, image classification, analysis of change over time, and image data fusion. Examples of outputs resulting from using this new method are discussed in the article. The success of this proposed method is hinged on the availability of the required image data, based on which a simple graphical user interface (GUI) application was developed in MATLAB. That application, RGBExcel or the later RGB2X, extracts RGB image data from image files of any format and file size, and exports to Excel for processing. Deployed as standalone applications, both versions can be installed on a 64-bit windows computer and run without MATLAB. Keywords: Color images, Multispectral imagery, Remote sensing, RGB image data, RGB2X, RGBExcel.
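RGBExcel and RGB2X are the authors' MATLAB tools; purely as an illustration of the extract-and-export idea, a Python sketch using PIL and pandas (with openpyxl assumed installed) could write the three colour planes to separate Excel sheets. Note that Excel's 16,384-column limit caps the image width that can be exported this way.

```python
import numpy as np
import pandas as pd
from PIL import Image

def export_rgb_to_excel(image_path, xlsx_path):
    """Write the R, G and B planes of an image to three Excel sheets,
    one pixel per cell, so the values can be manipulated with formulas."""
    rgb = np.array(Image.open(image_path).convert("RGB"))  # shape: (rows, cols, 3)
    with pd.ExcelWriter(xlsx_path) as writer:              # requires openpyxl
        for i, band in enumerate(["R", "G", "B"]):
            pd.DataFrame(rgb[:, :, i]).to_excel(writer, sheet_name=band,
                                                header=False, index=False)

# Example: export_rgb_to_excel("field_photo.jpg", "field_photo_rgb.xlsx")
```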
APA, Harvard, Vancouver, ISO, and other styles
14

Melyanti, Rika, and Muhammad Giatman. "Online Determination of Credit Score (PAK) Application Functional Teachers." International Journal of Management and Humanities 5, no. 7 (March 30, 2021): 89–93. http://dx.doi.org/10.35940/ijmh.f1238.035721.

Full text
Abstract:
Determination of Teacher Credit Numbers (PAK) is proposed by the teacher and then checked and evaluated by the Assessment Team. The calculation of credit numbers in PAK still uses manual methods, with Microsoft Excel for inputting data and its results used as reports, so input errors such as typos and the risk of accidental deletion of data still occur frequently. The DUPAK report that will be entered into the PAK system also has to wait for the Assessment Team to send the file to the Pelalawan Regency Education Office, so completing a functional teacher promotion report takes more time. To overcome this problem, a new web-based system was developed that makes the credit score calculation process fast and accurate and covers all the elements assessed for credit numbers. The procedures of the old and new systems are not very different; the fundamental difference is that the new system uses web-based computer technology for data management, which shortens the data entry process and overcomes the obstacles of the old system.
APA, Harvard, Vancouver, ISO, and other styles
15

Haghish, E. F. "Markdoc: Literate Programming in Stata." Stata Journal: Promoting communications on statistics and Stata 16, no. 4 (December 2016): 964–88. http://dx.doi.org/10.1177/1536867x1601600409.

Full text
Abstract:
Rigorous documentation of the analysis plan, procedure, and computer codes enhances the comprehensibility and transparency of data analysis. Documentation is particularly critical when the codes and data are meant to be publicly shared and examined by the scientific community to evaluate the analysis or adapt the results. The popular approach for documenting computer codes is known as literate programming, which requires preparing a trilingual script file that includes a programming language for running the data analysis, a human language for documentation, and a markup language for typesetting the document. In this article, I introduce markdoc, a software package for interactive literate programming and generating dynamic-analysis documents in Stata. markdoc recognizes Mark-down, LATEX, and HTML markup languages and can export documents in several formats, such as PDF, Microsoft Office .docx, OpenOffice and LibreOffice .odt, LATEX, HTML, ePub, and Markdown.
APA, Harvard, Vancouver, ISO, and other styles
16

Han, Yongman, Jongcheon Choi, Seong-Je Cho, Haeyoung Yoo, Jinwoon Woo, Yunmook Nah, and Minkyu Park. "A new detection scheme of software copyright infringement using software birthmark on windows systems." Computer Science and Information Systems 11, no. 3 (2014): 1055–69. http://dx.doi.org/10.2298/csis130918064h.

Full text
Abstract:
As software becomes more valuable, unauthorized users and malicious programmers illegally copy and distribute copyrighted software over online service provider (OSP) and P2P networks. To detect, block, and remove pirated software (illegal programs) on OSP and P2P networks, this paper proposes a new filtering approach using a software birthmark, i.e., unique characteristics of a program that can be used to identify it. A software birthmark typically includes constant values, library information, the sequence of function calls, call graphs, etc. We target Microsoft Windows applications and utilize the numbers and names of the DLLs and APIs stored in a Windows executable file. Using that information, together with a cryptographic hash value of each program's API sequence, we construct a software birthmark database. Whenever a program is uploaded to or downloaded from OSP and P2P networks, we can identify it by comparing its software birthmark with the birthmarks in the database, which makes it possible to determine, to some extent, whether the software is an illegal copy. The experiments show that the proposed software birthmark can effectively identify Windows applications; that is, the proposed technique can be employed to efficiently detect and block pirated programs on OSP and P2P networks.
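As a sketch of the kind of birthmark described (DLL and API names plus a hash of the API sequence), the Python snippet below uses the third-party pefile parser; it illustrates the idea and is not the authors' implementation.

```python
import hashlib
import pefile  # third-party PE parser: pip install pefile

def windows_birthmark(exe_path):
    """Collect imported DLL/API names from a PE file and hash the API sequence."""
    pe = pefile.PE(exe_path)
    dlls, apis = [], []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dlls.append(entry.dll.decode(errors="ignore").lower())
        for imp in entry.imports:
            if imp.name:  # imports by ordinal have no name
                apis.append(imp.name.decode(errors="ignore"))
    return {
        "dll_count": len(dlls),
        "api_count": len(apis),
        "dlls": sorted(set(dlls)),
        # Hash of the API name sequence, usable as a compact birthmark key.
        "api_sequence_sha256": hashlib.sha256("|".join(apis).encode()).hexdigest(),
    }
```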
APA, Harvard, Vancouver, ISO, and other styles
17

Lambert, Sherwood Lane. "Auto Accessories, Inc.: An Educational Case on Online Transaction Processing (OLTP) and Controls as Compared to Batch Processing and Controls." Journal of Emerging Technologies in Accounting 14, no. 2 (June 1, 2017): 59–81. http://dx.doi.org/10.2308/jeta-51844.

Full text
Abstract:
ABSTRACT The intent of this educational case is to increase students' understanding of online transaction processing (OLTP) and controls as compared to batch processing and controls. Learning concepts about batch processing is important because many entities continue to use batch processing for critical applications such as payroll, credit card processing, and Big Data. Students learn the advantages and disadvantages of batch processing and OLTP. The case provides a Microsoft Access database that includes a working batch program (module) and an online screen (form). Students use the form to update an employee table in a relational database with OLTP and use the module to update an employee master file with batch processing. Students compare the output from batch processing to the output from OLTP after processing the same input transactions and demonstrate that the outputs match when no input or processing errors exist. Students implement similar data validation edits in both the module and the form. Also, students implement run-to-run control total checks in the module and report input data errors in a batch-processed error report. Students learn processing and controls that are unique to batch processing, unique to OLTP, and common to both processing modes.
APA, Harvard, Vancouver, ISO, and other styles
18

Yan, Hai Zhong. "Development Technology of Excel Data Server Application with DELPHI ADO + RemObjectcs Combined (Part 1: The Server Side)." Applied Mechanics and Materials 727-728 (January 2015): 959–64. http://dx.doi.org/10.4028/www.scientific.net/amm.727-728.959.

Full text
Abstract:
Microsoft Office Excel is an important part of the Microsoft Office suite. It can process and statistically analyse various kinds of data, and its convenient style and rich functionality have made it widely popular. However, as databases become networked and shared, the bottlenecks of using Excel in stand-alone mode have begun to appear, and networking is the inevitable direction of computer information technology today. The idea of this work is to combine ADO technology with Delphi to develop an Excel data server in order to break this limitation and change the non-networked mode in which Excel is normally used. The Excel Data Server is a pair of server and client programs: the server is deployed on a server machine, the client retrieves data with the SQL query language, and various operations can be performed on Excel data files directly through the client interface, forming a networked application system. The client can be used not only on a LAN but also over the Internet. In this paper, with the support of Delphi 7.0, ADO and the RemObjects SDK tools are used to develop an Excel data server. RemObjects is abbreviated as RO; there are many RO versions, but RemObjects Data Abstract 6.0.43.801 is recommended because, in the author's experience with Delphi development, it is more stable and gives better efficiency in server-side and client-side data connection, programming and data manipulation.
APA, Harvard, Vancouver, ISO, and other styles
19

Xiong, Qi, Xinman Zhang, Wen-Feng Wang, and Yuhong Gu. "A Parallel Algorithm Framework for Feature Extraction of EEG Signals on MPI." Computational and Mathematical Methods in Medicine 2020 (May 27, 2020): 1–10. http://dx.doi.org/10.1155/2020/9812019.

Full text
Abstract:
In this paper, we present a parallel framework based on MPI for extracting power spectrum features of EEG signals from large datasets, so as to improve the speed of brain signal processing. At present, the Welch method is widely used to estimate the power spectrum. However, the traditional Welch method takes a lot of time, especially for large datasets. In view of this, we added MPI to the traditional Welch method and developed it into a reusable master-slave parallel framework. As long as the EEG data of any format are converted into a text file of a specified format, the power spectrum features can be extracted quickly by this parallel framework. In the proposed parallel framework, the EEG signals recorded by a channel are divided into N overlapping data segments. Then, the PSDs of the N segments are computed by several nodes in parallel. The results are collected and summarized by the master node. The final PSD results of each channel are saved in a text file, which can be read and analyzed with Microsoft Excel. This framework can be implemented not only on clusters but also on a desktop computer. In the experiment, we deploy this framework on a desktop computer with a 4-core Intel CPU. It took only a few minutes to extract the power spectrum features from the 2.85 GB EEG dataset, seven times faster than using Python. This framework makes it easy for users who do not have any parallel programming experience to construct parallel algorithms to extract the EEG power spectrum.
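The paper's framework is a master-slave MPI program; as a minimal illustration in Python (not the authors' implementation), the mpi4py/SciPy sketch below distributes EEG channels across ranks, computes Welch PSDs over overlapping segments, and gathers the results at rank 0. The sampling rate, channel count and per-channel text-file names are assumptions.

```python
# Run with e.g.:  mpiexec -n 4 python welch_mpi.py
import numpy as np
from mpi4py import MPI
from scipy.signal import welch

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

fs = 256            # sampling rate in Hz (assumed)
n_channels = 64     # number of EEG channels (assumed)

# Each rank loads only the channels assigned to it (round-robin split).
my_psd = {}
for ch in range(rank, n_channels, size):
    x = np.loadtxt(f"eeg_channel_{ch}.txt")                 # hypothetical per-channel text file
    f, pxx = welch(x, fs=fs, nperseg=fs * 2, noverlap=fs)   # overlapping segments
    my_psd[ch] = pxx

# The master (rank 0) gathers and saves all per-channel spectra.
gathered = comm.gather(my_psd, root=0)
if rank == 0:
    all_psd = {ch: pxx for part in gathered for ch, pxx in part.items()}
    np.savetxt("psd_all_channels.txt",
               np.vstack([all_psd[ch] for ch in sorted(all_psd)]))
```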
APA, Harvard, Vancouver, ISO, and other styles
20

Masni, Harbeng, and Zuhri Saputra Hutabarat. "Pengembangan Multimedia Pembelajaran Berbasis Lash Animation With Swish Max Siswa Kelas XI SMA Negeri 8 Kota Jambi." Jurnal Ilmiah Dikdaya 9, no. 2 (September 28, 2019): 257. http://dx.doi.org/10.33087/dikdaya.v9i2.147.

Full text
Abstract:
There are several factors that influence learning, including the teacher, the students, facilities, tools and media, and the environment, and educators should pay attention to them. The teacher's task in the learning process, in addition to conveying information, is also to diagnose students' learning difficulties, select teaching materials, supervise learning activities, stimulate students' learning activities, and provide learning guidance using media, strategies and methods. Teachers and students are required to keep mastering science and information and communication technology. Teachers need to keep abreast of developments in communication, information and science so that they can deliver up-to-date learning material that is useful for students' lives now and in the future. What stands out in SwishMax is that the work can be exported to the SWF file format, the format used by Macromedia Flash; therefore, SwishMax animations can be played on any personal computer that has a Flash player installed. SwishMax animations can be inserted into web pages, or imported into Macromedia Flash documents or even Microsoft PowerPoint documents. Compared with Adobe Flash, SwishMax is easier for beginners to learn and use, because the available tools are more user friendly. From the description above it can be concluded that SwishMax is not just multimedia software capable of creating dynamic multimedia; beyond that, it can produce multimedia that is highly interactive.
APA, Harvard, Vancouver, ISO, and other styles
21

Muadzani, Alim, Oky Dwi Nurhayati, and Ike Pertiwi Windasari. "Penyisipan Media Teks dan Citra Menggunakan Teknik Steganografi pada Media Pembawa Citra Digital." Jurnal Teknologi dan Sistem Komputer 4, no. 3 (August 14, 2016): 470. http://dx.doi.org/10.14710/jtsiskom.4.3.2016.470-478.

Full text
Abstract:
Information has now become an important commodity in human life, and the rapid development of communications technology has enabled people to communicate and exchange information more easily. The Internet is very popular and is used by billions of users worldwide; the amount of information passing through it is very large, and certain people try to obtain this information for profit. Information therefore needs to be secured so that others cannot intercept what is sent over the Internet. Steganography can be used to hide the information before it is sent, and the receiver can then recover the hidden data. Digital steganography on a computer can use a variety of digital files, one of which is the digital image file. With the information hidden inside a carrier image, others will not be aware of the hidden information. The application was created using the C# programming language, and the steganography method used is Least Significant Bit Insertion (LSBI). Microsoft Visual Studio was used as the Integrated Development Environment (IDE) for coding and designing the user interface. The software development model was Extreme Programming, and testing used the black-box method. The application is designed to hide text or an image inside a carrier image, and the hidden text or image can be recovered. The result of this research is an application that can hide text or an image in a carrier image and recover it again. Based on the testing, the application runs as expected and fulfils all the requirements.
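The application itself is written in C#; as a language-neutral illustration of Least Significant Bit Insertion (a sketch, not the authors' code), the Python functions below embed a text message in the LSBs of an RGB carrier image and recover it, using a NUL byte as the end-of-message marker.

```python
import numpy as np
from PIL import Image

def embed_lsb(carrier_path, message, out_path):
    """Hide a UTF-8 message in the least significant bits of an RGB image."""
    pixels = np.array(Image.open(carrier_path).convert("RGB"))
    payload = message.encode("utf-8") + b"\x00"          # NUL marks end of message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this carrier image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite each LSB
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path)  # use a lossless format (PNG)

def extract_lsb(stego_path):
    """Recover the hidden message by reading LSBs until the NUL terminator."""
    flat = np.array(Image.open(stego_path).convert("RGB")).flatten()
    data = np.packbits(flat & 1).tobytes()
    return data.split(b"\x00", 1)[0].decode("utf-8")
```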
APA, Harvard, Vancouver, ISO, and other styles
22

Sedgewick, J. "From Digital Imaging to the Publisher: Navigating the Digital Journey." Microscopy and Microanalysis 7, S2 (August 2001): 844–45. http://dx.doi.org/10.1017/s1431927600030294.

Full text
Abstract:
With the increased frequency of digital presentation of research on laptops at meetings, on the web and to journals, the need arises for a unified set of computer programs. The best of all worlds would be to create text, graphs, images and HTML in one program only, but the reality is that several need to be used without much duplication of effort. The ideal situation is one in which the computer applications are open-ended, so that one application saves in a format that is readable by another application across Macintosh® and PC platforms. At the same time, the applications should be straightforward to use.Taking only the above conditions into account, even the most cursory examination would reveal that Microsoft® products are, in fact, closed systems at best and dead end at worst. PowerPoint® is a good example of a program that can only do one thing—make slides or on--screen presentations--but dead ends because high resolution image files cannot be saved from the program, nor can files export to HTML or to a file usable by a publisher.
APA, Harvard, Vancouver, ISO, and other styles
23

Kamdar, Biren B., Pooja A. Shah, Sruthi Sakamuri, Bharat S. Kamdar, and Jiwon Oh. "A NOVEL SEARCH BUILDER TO EXPEDITE SEARCH STRATEGIES FOR SYSTEMATIC REVIEWS." International Journal of Technology Assessment in Health Care 31, no. 1-2 (2015): 51–53. http://dx.doi.org/10.1017/s0266462315000136.

Full text
Abstract:
Objectives: Developing a search strategy for use in a systematic review is a time-consuming process requiring construction of detailed search strings using complicated syntax, followed by iterative fine-tuning and trial-and-error testing of these strings in online biomedical search engines. Methods: Building upon limitations of existing online-only search builders, a user-friendly computer-based tool was created to expedite search strategy development as part of production of a systematic review. Results: Search Builder 1.0 is a Microsoft Excel®-based tool that automatically assembles search strategy text strings for PubMed (www.pubmed.com) and Embase (www.embase.com), based on a list of user-defined search terms and preferences. With the click of a button, Search Builder 1.0 automatically populates the syntax needed for functional search strings, and copies the string to the clipboard for pasting into PubMed or Embase. The offline file-based interface of Search Builder 1.0 also allows for searches to be easily shared and saved for future reference. Conclusions: This novel, user-friendly tool can save considerable time and streamline a cumbersome step in the systematic review process.
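Search Builder 1.0 itself is an Excel workbook; the string-assembly step it automates can be sketched in a few lines of Python (the example terms are hypothetical, using PubMed's [Title/Abstract] field tag):

```python
def build_pubmed_string(concept_groups):
    """Assemble a Boolean PubMed query from lists of synonyms.

    Terms within a concept group are OR-ed, and the groups are AND-ed,
    with the [Title/Abstract] field tag applied to each term.
    """
    groups = []
    for terms in concept_groups:
        tagged = [f'"{t}"[Title/Abstract]' for t in terms]
        groups.append("(" + " OR ".join(tagged) + ")")
    return " AND ".join(groups)

# Example: two concepts, each with synonyms
query = build_pubmed_string([
    ["delirium", "acute confusion"],
    ["intensive care", "ICU", "critical care"],
])
print(query)
```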
APA, Harvard, Vancouver, ISO, and other styles
24

Barbour, Leonard J. "EwaldSphere: an interactive approach to teaching the Ewald sphere construction." Journal of Applied Crystallography 51, no. 6 (October 11, 2018): 1734–38. http://dx.doi.org/10.1107/s1600576718012876.

Full text
Abstract:
EwaldSphere is a Microsoft Windows computer program that superimposes the Ewald sphere construction onto a small-molecule single-crystal X-ray diffractometer. The main objective of the software is to facilitate teaching of the Ewald sphere construction by depicting our classical description of the X-ray diffraction process as a three-dimensional model that can be explored interactively. Several features of the program are also useful for introducing students to the operation of a diffractometer. EwaldSphere creates a virtual reciprocal lattice based on user-defined unit-cell parameters. The Ewald sphere construction is then rendered visible, and the user can explore the effects of changing various diffractometer parameters (e.g. X-ray wavelength and intensity, goniometer angles, and detector distance) on the resulting diffraction pattern as captured by a virtual area detector. Additional digital resources are provided, including a simple but comprehensive program manual, a PowerPoint presentation that introduces the essential concepts, and an Excel file to facilitate calculation of lattice d_hkl spacings (required for the presentation). The program and accompanying resources are provided free of charge, and there are no restrictions on their use.
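As a small worked example of the d_hkl calculation that the accompanying Excel file supports, here is a sketch for the simple case of an orthorhombic (or cubic) cell, together with the corresponding Bragg angle; the cell dimensions, reflection and wavelength below are illustrative values only.

```python
from math import sqrt, asin, degrees

def d_spacing_orthorhombic(h, k, l, a, b, c):
    """d_hkl for an orthorhombic (or cubic, a=b=c) cell, in the same units
    as a, b, c:  1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    return 1.0 / sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

def bragg_angle_deg(d, wavelength):
    """Diffraction angle theta from Bragg's law: lambda = 2 d sin(theta)."""
    return degrees(asin(wavelength / (2.0 * d)))

# Example: Cu K-alpha radiation (1.5418 Angstrom) on a 10 x 12 x 15 Angstrom cell, reflection (1 2 0)
d = d_spacing_orthorhombic(1, 2, 0, 10.0, 12.0, 15.0)
print(round(d, 3), round(bragg_angle_deg(d, 1.5418), 2))
```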
APA, Harvard, Vancouver, ISO, and other styles
25

Zhang, Xiaoliang, Kehe Wu, Zuge Chen, and Chenyi Zhang. "MalCaps: A Capsule Network Based Model for the Malware Classification." Processes 9, no. 6 (May 25, 2021): 929. http://dx.doi.org/10.3390/pr9060929.

Full text
Abstract:
The research on malware detection enabled by deep learning has become a hot issue in the field of network security. The existing malware detection methods based on deep learning suffer from some issues, such as weak ability of deep feature extraction, relatively complex model, and insufficient ability of model generalization. Traditional deep learning architectures, such as convolutional neural networks (CNNs) variants, do not consider the spatial hierarchies between features, and lose some information on the precise position of a feature within the feature region, which is crucial for a malware file which has specific sections. In this paper, we draw on the idea of image classification in the field of computer vision and propose a novel malware detection method based on capsule network architecture with hyper-parameter optimized convolutional layers (MalCaps), which overcomes CNNs limitations by removing the need for a pooling layer and introduces capsule layers. Firstly, the malware is transformed into a grayscale image. Then, the dynamic routing-based capsule network is used to detect and classify the image. Without advanced feature extraction and with only a small number of labeled samples, the presented method is tested on an unbalanced Microsoft Malware Classification Challenge (MMCC) dataset and experimental results produce testing accuracy of 99.34%, improving on a number of traditional deep learning models posited in recent malware classification literature.
APA, Harvard, Vancouver, ISO, and other styles
26

Lisetskaya, I. S., and A. Yu Kovalishin. "IMPLEMENTATION OF DISTANCE TRAINING IN QUARANTINE AT THE DEPARTMENT OF PAEDIATRIC DENTISTRY OF IVANO FRANKIVSK NATIONAL MEDICAL UNIVERSITY." Актуальні проблеми сучасної медицини: Вісник Української медичної стоматологічної академії 20, no. 3 (November 12, 2020): 241–44. http://dx.doi.org/10.31718/2077-1096.20.3.241.

Full text
Abstract:
Mankind has encountered a previously unknown disease, COVID-19, which has changed and caused a major disruption in nearly all spheres of human life. The COVID pandemic has had a tremendous impact on education, and medical education in particular. In quarantine conditions, distance learning was the only possible option to continue the professional medical training. The educators and education authorities faced a number of issues to be solved immediately including the organization of the course content and delivery for online synchronous and asynchronous format, the improvement of computer operating skills, the stimulation of students’ independent cognitive activity, creativity, nurtures self-awareness, independence and responsibility, etc. Ever-increasing amount of information, rapid progression of communication technologies and online tool applications in recent decades has created a strong ground for overcoming challenges caused by the pandemic. Practical classes on paediatric therapeutic dentistry were conducted in the format of an online conference via the Microsoft Teams program (groups, course schedule were created in advance). Microsoft Teams is a team centre of Office 365 that is a simplified version of learning management systems, but allows the learning team to communicate and share files. The program is convenient as it enables to create conventional work environment, including chat for discussion, file sharing and corporate programs. Students are supplied with instructional materials, guidelines; there has been elaborated the system of the tests for the themes of the discipline to check up students’ knowledge; educators give prompt feedback and grade student’s work. During the online lesson, educators discuss the main issues of a theme, explain the unclear or disputable points, using pre-loaded materials as presentations, videos, photos, radiographs and orthopantomograms that helps to facilitate the material comprehension. Practical training, which includes dealing with patients, improving manual and communication skills, is among the top priorities for dental students. To solve this problem associated with the remote learning, the department staff elaborated situational tasks and algorithms for performing practical skills. Distance learning can be and must be organized as a purposeful process of interaction between educators and students based on applying the latest information and technologies, adequate control and guidance that allows medical educational settings not to stop fostering future healthcare professionals in the pandemic period.
APA, Harvard, Vancouver, ISO, and other styles
27

Yastori. "Completeness of informed consent in supporting national standard accreditation of patient and family rights 5 hospitals at Ropanasuri surgical special hospital in Padang." International Journal Of Community Medicine And Public Health 6, no. 11 (October 24, 2019): 4639. http://dx.doi.org/10.18203/2394-6040.ijcmph20195034.

Full text
Abstract:
Background: Completeness of informed consent is one indicator supporting accreditation under the national hospital standards through the patient and family rights (PFR) standard 5 assessment. In the health service process, informed consent can also be used as evidence and has strong legal value in the form of a sheet of paper containing the doctor's explanation of the diagnosis of the disease and the actions that will be performed on the patient. Methods: This research uses a descriptive method with a qualitative approach. The population was the entire set of patient medical record files in 2018, namely 3,093 medical record files. Sampling was done by random sampling using the Notoatmodjo formula to calculate the number of samples, yielding 355 medical record files. Data were processed using Microsoft Excel. Completeness was assessed against the national standards for hospital accreditation on patient and family rights. Results: Based on the analysis of 355 medical record files at Ropanasuri specialty surgical hospital, 296 informed consent forms were filled in completely (83.38%) and 59 were incomplete (16.62%). The greatest incompleteness was found in filling in the witness signature item (2.81%), the identity of the doctor providing the information (2.54%) and the name of the witness (1.70%) in the authentication section. Conclusions: 296 informed consent forms were complete (83.38%) and 59 were incomplete (16.62%).
APA, Harvard, Vancouver, ISO, and other styles
28

Haliq, Abdul, Akmal Hamsa, and Sakaria Sakaria. "ANALISIS PEMANFAATAN, FAKTOR PENDUKUNG DAN PENGHAMBAT, SERTA UPAYA OPTIMALISASI APLIKASI ZOTERO DALAM PENULISAN KARYA ILMIAH." Edukasi: Jurnal Pendidikan 19, no. 1 (April 30, 2021): 16. http://dx.doi.org/10.31571/edukasi.v19i1.2325.

Full text
Abstract:
This research aimed to analyze the utilization, inhibiting and supporting factors, and the efforts to optimize the Zotero application in managing references for scientific papers. The research used a qualitative approach and was conducted on 40 students of the Language and Literature Education Study Program, Universitas Negeri Makassar. Data were collected through observation, questionnaire, and interview techniques. Data analysis adopted an interactive analysis model consisting of four activity streams: data collection, reduction, presentation, and drawing conclusions. The results showed that using the Zotero application made it easier to manage references, provided a digital library, eased citation of writing, and was efficient. The inhibiting factors included students being unaccustomed to using the application, the installation process, synchronizing files, and integrating and adding citations in Microsoft Word. The supporting factors included computer skills, mastery of paper-writing techniques, the fact that Zotero is non-commercial, and the ease of storing, converting, and editing metadata. Optimization efforts were carried out by providing supporting facilities such as guidebooks, video tutorials, citation manuals, and WhatsApp groups.
APA, Harvard, Vancouver, ISO, and other styles
29

Echelard, Jean-François. "Use of Telemedicine in Depression Care by Physicians: Scoping Review." JMIR Formative Research 5, no. 7 (July 26, 2021): e29159. http://dx.doi.org/10.2196/29159.

Full text
Abstract:
Background Depression is a common disorder, and it creates burdens on people’s mental and physical health as well as societal costs. Although traditional in-person consultations are the usual mode of caring for patients with depression, telemedicine may be well suited to psychiatric assessment and management. Telepsychiatry can be defined as the use of information and communication technologies such as videoconferencing and telephone calls for the care of psychopathologies. Objective This review aims to evaluate the extent and nature of the existing literature on the use of telemedicine for the care of depression by physicians. This review also aims to examine the effects and perceptions regarding this virtual care and determine how it compares to traditional in-person care. Methods The Arksey and O’Malley framework and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines were followed. Relevant articles were identified through a search of three databases (MEDLINE, Cochrane Database of Systematic Reviews, and PsycArticles) on October 11, 2020. The search terms were “(virtual OR telemedicine OR teleconsultation* OR telehealth OR phone* OR webcam* OR telepsychiatry) AND (depress*)”. Eligibility criteria were applied to select studies about the use of telemedicine for the care of patients with depression specifically by physicians. An Excel file (Microsoft Corporation) was used to chart data from all included articles. Results The search resulted in the identification of 28 articles, and all 13 nonreview studies were analyzed in detail. Most nonreview studies were conducted in the United States during the last decade. Most telemedicine programs were led by psychiatrists, and the average study population size was 135. In all applicable studies, telepsychiatry tended to perform at least as well as in-person care regarding improvement in depression severity, patient satisfaction, quality of life, functioning, cost-effectiveness, and most other perceptions and variables. Cultural sensitivity and collaborative care were part of the design of some telemedicine programs. Conclusions Additional randomized, high-quality studies are recommended to evaluate various outcomes of the use of telemedicine for depression care, including depression variables, perceptions, health care outcomes and other outcomes. Studies should be conducted in various clinical contexts, including primary care. Telepsychiatry is a promising modality of care for patients suffering from depression.
APA, Harvard, Vancouver, ISO, and other styles
30

Lisetska, I. S. "Distance form of learning medical students as a challenge of today." Modern pediatrics. Ukraine, no. 7(111) (November 29, 2020): 81–86. http://dx.doi.org/10.15574/sp.2020.111.81.

Full text
Abstract:
A year has passed since humanity first encountered a previously unknown disease — COVID-19, which changed and made adjustments to the established mechanisms of human life. These changes also apply to the field of education, including medical education. Due to the quarantine imposed in Ukraine to prevent the spread of COVID-19, educational institutions were closed and students were transferred to distance learning. Organize quality online learning in a short period of time, charge motivation to learn and be prepared for technological problems — this is not a complete list of problems faced by teachers. However, the present can be called the era of computer science, telecommunications and global digitalization. The rapid rise of the integration of information and communication technologies, online tools in society in recent decades has become a preparation for solving problems and problems. Distance learning has advantages and disadvantages. In addition, there are several organizational and methodological and organizational and technical models of distance learning. Practical classes on pediatric therapeutic dentistry are held in the format of an online conference in the Microsoft Teams program (Pre-created Classes for each subgroup and events in the program — in the calendar according to the schedule, students are invited). Microsoft Teams is a teamroom for Office 365, which is a simplified version of learning management systems, but allows the learning team to communicate and share files. The program is convenient because it combines everything in a common work environment, which includes chat for discussion, file sharing and corporate programs. Students take each test topic in the system according to the calendar-thematic plan, get the result, which the teacher converts into points, according to the evaluation criteria. During the online lesson, the teacher interviews the topic, corrects the answer, explains the points that were unclear, using pre-loaded materials — presentations, videos, photos, radiographs and orthopantomograms, which helps to master the material. For future dentists, practical training, work with real patients, practice of manual and communication skills are extremely important, so during the remote teaching of pediatric therapeutic dentistry, situational problems are solved and algorithms of practical skills are analyzed in order to bring students closer to practice. Distance learning is a purposeful process of interaction between teacher and student, based on the use of modern information and telecommunications technologies that allow distance learning, which is relevant in a pandemic COVID-19. No conflict of interest was declared by the author. Key words: distance learning, medical education, teacher, student, COVID-19 pandemic.
APA, Harvard, Vancouver, ISO, and other styles
31

Ding, Dan, Michael Phillips, Eduardo Iturrate, Sarah Hochman, and Anna Stachel. "Implementing an Automated Pneumonia Surveillance System." Infection Control & Hospital Epidemiology 41, S1 (October 2020): s281—s283. http://dx.doi.org/10.1017/ice.2020.853.

Full text
Abstract:
Background: Although definitions from the CDC were developed to increase the reliability of surveillance data, reduce the burden of surveillance in healthcare facilities, and enhance the utility of surveillance data for improving patient safety, the algorithm is still laborious for manual use. We implemented an automated surveillance system that combines 2 CDC pneumonia surveillance definitions to identify pneumonia infection in inpatients. Methods: The program was implemented at an academic health center with >40,000 inpatient admissions per year. We used Windows Task Scheduler with a batch file daily to run a validated pneumonia surveillance algorithm program written with SAS version 9.4 software (SAS Institute, Cary, NC) and a natural language processing tool that queries variables (Table 1) and text found in the electronic medical records (EMR) to identify pneumonia cases (Fig. 1). We uploaded all computer-identified positive cases into a Microsoft Access database daily to be reviewed by a hospital epidemiologist. Every week, we also validated 5 computer-identified negative cases from the prior 2 weeks to ensure accuracy of the computer algorithm. We defined negative cases as pneumonia present on admission or chest x-ray indicative of pneumonia but without CDC-defined surveillance symptoms. We also wrote a program to automatically send e-mails to key stakeholders and to prepare summary reports. Results: Since November 2019, we have successfully implemented the automated computer algorithm or program to notify, via e-mail, infection prevention staff and respiratory therapy providers of CDC-defined pneumonia cases on a daily basis. This automated program has reduced the number of manual hours spent reviewing each admission case for pneumonia. A summary report is created each week and month for distribution to hospital staff and the Department of Health, respectively. Conclusions: The implementation of an automated pneumonia surveillance system proves to be a timelier, more cost-effective approach compared to manual pneumonia surveillance. By allowing an automated algorithm to review pneumonia, timely reports can be sent to infection prevention and control staff, respiratory therapy providers, and unit staff about individual cases. Hospitals should leverage current technology to automate surveillance definitions because automated programs allow near real-time identification and critical review for infection prevention activities. Funding: None. Disclosures: None.
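The abstract mentions a program that automatically e-mails key stakeholders when the daily run flags cases. A minimal sketch of such a notification step in Python is shown below; the SMTP relay and addresses are hypothetical placeholders, not details from the study.

```python
import smtplib
from email.message import EmailMessage

def notify_stakeholders(case_ids, smtp_host="smtp.example.org"):
    """E-mail the day's computer-identified pneumonia cases to reviewers.

    Hypothetical addresses and SMTP relay; a real system would pull these
    from configuration and likely attach a report exported from the database.
    """
    msg = EmailMessage()
    msg["Subject"] = f"Daily pneumonia surveillance: {len(case_ids)} case(s) flagged"
    msg["From"] = "surveillance-bot@example.org"
    msg["To"] = "infection-prevention@example.org, respiratory-therapy@example.org"
    body = "Cases flagged by the automated algorithm for review:\n"
    body += "\n".join(f"  - encounter {cid}" for cid in case_ids)
    msg.set_content(body)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)

# Example: notify_stakeholders(["12345", "67890"])
```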
APA, Harvard, Vancouver, ISO, and other styles
32

Kondratyuk, M. O., T. G. Gutor, L. M. Strilchuk, I. B. Zhakun, O. O. Sorokopud, and O. M. Besh. "Individual prognosis of complications in the presence of chronic heart failure." Likarska sprava, no. 5-6 (June 27, 2018): 37–43. http://dx.doi.org/10.31640/jvd.5-6.2018(5).

Full text
Abstract:
The neurohumoral theory of chronic heart failure (CHF) development, accepted by most scientists, does not completely explain the mechanisms of its decompensation. Standard treatment is not always effective; thus, the search for pathogenetic and prognostic factors that influence the course of CHF remains a current issue. However, individual prognosis of the course of CHF is not performed in clinical practice at present, since its distinct criteria have not been specified. This became the rationale of our research, the aim of which was to assess the individual risk of complications in patients with CHF, considering the combined influence of several factors. A complete clinical examination of 110 patients (74.5% males, 25.5% females) with CHF was performed. The method of logistic regression was used to determine the combined influence of the analyzed factors on CHF prognosis; the adequacy of the obtained model and the reliability of the differences were assessed by Wald's criterion and the chi-square test. In elaborating the method for predicting the individual risk of cardiac insufficiency, among the factors that, according to the literature, influence the development of the disease, we singled out three factors with a reliable (P < 0.05) association with CHF: BMI, total cholesterol and the lymphocyte count. A computer program was elaborated that can calculate the prognosis of CHF complications: the file opens in Microsoft Excel, calculates the individual risk and demonstrates the degree of risk graphically. Correlation analysis of the individual risk of CHF complications showed that its likelihood is accompanied by the development of systolic dysfunction, hypertrophy of the left ventricle with dilatation, anemic and detoxification syndromes, and impairment of liver and kidney functions with a reduction of leptin in the blood, which has important regulatory functions. Thus, based on the logistic, correlation and prognostic analyses, the individual risk of CHF complications increases under a combination of factors such as weight loss, a decrease in total cholesterol level and a reduction in the peripheral blood lymphocyte count. The elaborated computer program allows a doctor to calculate the individual risk and visualize it.
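The risk calculation described (a logistic regression on BMI, total cholesterol and lymphocyte count) has the general form P = 1 / (1 + e^-(b0 + b1*BMI + b2*Chol + b3*Lymph)). A small Python sketch of that formula is given below; the fitted coefficients are not reproduced here and must be supplied from the model (the paper's Excel tool embeds them).

```python
from math import exp

def chf_complication_risk(bmi, total_cholesterol, lymphocytes,
                          b0, b_bmi, b_chol, b_lymph):
    """Individual risk from a logistic regression on three predictors.

    The coefficients b0..b_lymph must come from the fitted model; none of
    the published values are reproduced in this sketch.
    """
    z = b0 + b_bmi * bmi + b_chol * total_cholesterol + b_lymph * lymphocytes
    return 1.0 / (1.0 + exp(-z))  # probability between 0 and 1
```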
APA, Harvard, Vancouver, ISO, and other styles
33

Et al., Pramaha Prakasit Thitipasitthikorn. "Bowon Power: An Integrated of Community Development Mechanism in Nakhon Pathom Province." Psychology and Education Journal 58, no. 1 (January 29, 2021): 1718–26. http://dx.doi.org/10.17762/pae.v58i1.973.

Full text
Abstract:
This research aimed to study citizenship awareness in community development and the urbanization of Nakhon Pathom province. A mixed-methods design was used. Quantitative data were collected from 375 respondents and analyzed using percentages, means and standard deviations; the qualitative component comprised in-depth interviews with 24 key informants and focus-group discussions with 12 participants, with the data analyzed in context and described. From the study of the context of community development and urbanization in the study area, the research found that the mechanism driving urbanization is the "Bowon power" together with "community funds", the fundamental factors that drive communities to develop: cooperation from civil society; visionary leaders who want to see the area developed in various dimensions; and the mutual reliance between community organizations and people, namely the state, the temple and the communities, in which each side performs its duties appropriately and coordinates joint work that affects the quality of life of the people in the community, who are the direct recipients of urban development. Although some community development activities are not initiated by people in the community, the various sectors are able to coordinate, collaborate and join together to drive development that benefits the people in the community, who are affected by the development activities. This creates a shared awareness that acts as a force driving community development activities toward urbanization and is an expression of good democratic citizenship.
APA, Harvard, Vancouver, ISO, and other styles
34

"Microsoft Windows long file names exploited." Network Security 1998, no. 3 (March 1998): 2. http://dx.doi.org/10.1016/s1353-4858(98)90113-1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Vinodhini, V., C. Kumuthini, and K. Santhi. "A Study on Behavioural Analysis of Specific Ransomware and its Comparison with DBSCAN-MP." International Journal of Scientific Research in Computer Science, Engineering and Information Technology, January 1, 2021, 01–06. http://dx.doi.org/10.32628/cseit206670.

Full text
Abstract:
The WannaCry ransomware attack is also known as WCRY. This ransomware takes advantage of a recently disclosed Microsoft vulnerability ("MS17-010", "EternalBlue") coupled with the Shadow Brokers tools release. After a computer is infected, WannaCry targets and encrypts 176 file types. Some of the file types WannaCry targets are database-related files, multimedia and archive files, and Microsoft Office documents. In its ransom note, which supports 27 languages, it initially demands US$300 worth of Bitcoin from its victims, an amount that increases incrementally after a set time limit. The victim is also given seven days before the affected files are deleted. The WannaCry ransomware consists of multiple components. It arrives on the infected computer in the form of a dropper, a self-contained program that extracts the other application components embedded within it. Those components include: an application that encrypts and decrypts data; files containing encryption keys; and a copy of Tor. The program code is not obfuscated and was relatively easy for security professionals to analyze. Once launched, WannaCry tries to access a hard-coded URL (the so-called kill switch); if it cannot, it proceeds to search for and encrypt files in a slew of important formats, ranging from Microsoft Office files to MP3s and MKVs, leaving them completely inaccessible to the user. It then displays a ransom notice demanding payment in Bitcoin to decrypt the files.
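The kill-switch behaviour summarized above (attempt to reach a hard-coded URL and continue only if the request fails) can be illustrated with a short, harmless Python sketch. The domain used here is a made-up placeholder, not the actual kill-switch domain, and the sketch only prints what the malware's control flow would decide; it performs no file operations.

"""Harmless illustration of the kill-switch check described in the abstract:
the program proceeds only when a hard-coded URL is unreachable.
The domain is a hypothetical placeholder; nothing is encrypted here."""
import urllib.request
import urllib.error

KILL_SWITCH_URL = "http://killswitch-placeholder.invalid/"  # placeholder domain

def kill_switch_reachable(url: str, timeout: float = 5.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True                      # domain resolves and responds
    except (urllib.error.URLError, OSError):
        return False                     # unreachable: the real malware would continue

if __name__ == "__main__":
    if kill_switch_reachable(KILL_SWITCH_URL):
        print("Kill switch reachable: WannaCry would terminate without encrypting.")
    else:
        print("Kill switch unreachable: WannaCry would proceed against its target file types.")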
APA, Harvard, Vancouver, ISO, and other styles
36

Wahid, Abdul, and Retantyo Wardoyo. "An Implementation of Audio Security Using DES Algorithm." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 1, no. 2 (June 30, 2007). http://dx.doi.org/10.22146/ijccs.2280.

Full text
Abstract:
Data security is an important problem in computer technology. This paper discusses a security system for audio data. This technology is crucial because multimedia technology has been improving very fast. One of the common audio formats is the wave (WAV) format. The wave format is an uncompressed file format based on the RIFF specification owned by Microsoft and used for storing multimedia files. Using the DES algorithm, wave data can be encrypted to hide the information contained in the data. The DES algorithm was chosen in this research because it is one of the best-known symmetrical cryptography algorithms and has been used worldwide. This research is expected to contribute to the audio security concept, especially audio data security using the wave file format. Keywords: audio security, DES algorithm, wave
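As an illustration of the approach described (encrypting the audio payload of a WAV file with DES), the following Python sketch uses the PyCryptodome library rather than the authors' implementation; the ECB mode, key value and file names are assumptions made for brevity, and DES/ECB are shown only because the paper uses DES, not because they are recommended for real use.

"""Sketch: encrypt the audio frames of a WAV file with DES (PyCryptodome).
Key, mode (ECB) and file names are illustrative assumptions only."""
import wave
from Crypto.Cipher import DES
from Crypto.Util.Padding import pad

KEY = b"8bytekey"  # DES requires an 8-byte key; placeholder value

def encrypt_wav(in_path: str, out_path: str) -> None:
    with wave.open(in_path, "rb") as wav_in:
        params = wav_in.getparams()                      # kept so the header could be
        frames = wav_in.readframes(wav_in.getnframes())  # restored after decryption (not shown)

    cipher = DES.new(KEY, DES.MODE_ECB)
    ciphertext = cipher.encrypt(pad(frames, DES.block_size))

    # Store only the ciphertext; rebuilding a playable WAV would reuse `params`.
    with open(out_path, "wb") as out:
        out.write(ciphertext)

if __name__ == "__main__":
    encrypt_wav("speech.wav", "speech.wav.des")  # hypothetical file names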
APA, Harvard, Vancouver, ISO, and other styles
37

"Software Reviews : Ethnogeography with MICROSOFT FILE (MS FILE) Data Management Program for the Apple Macintosh Reviewed by Gary B. Palmer, Center for Computer Applications in the Humanities, University of Nevada, Las Vegas Publisher: Microsoft Corporation, 10700 Northrup Way, Box 97200, Bellevue, WA 98009 (telephone: 800-426-9400) Price: $195.00." Social Science Microcomputer Review 4, no. 1 (April 1986): 91–99. http://dx.doi.org/10.1177/089443938600400108.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Habibi, Imam, and Rinaldi Munir. "The Balinese Unicode Text Processing." IJCCS (Indonesian Journal of Computing and Cybernetics Systems) 1, no. 1 (June 14, 2009). http://dx.doi.org/10.22146/ijccs.19.

Full text
Abstract:
In principle, a computer recognizes only numbers as the representation of characters. Therefore, many encoding systems have been devised to allocate these numbers, although not all characters are covered; in Europe, a single language may even need more than one encoding system. Hence, a new encoding system known as Unicode has been established to overcome this problem. Unicode provides a unique ID for each distinct character, independent of platform, program, and language. The Unicode standard has been adopted across the industry, including by Apple, HP, IBM, JustSystem, Microsoft, Oracle, SAP, Sun, Sybase, and Unisys. In addition, modern language and information-exchange standards such as XML, Java, ECMAScript (JavaScript), LDAP, CORBA 3.0, and WML use Unicode as the official means of implementing ISO/IEC 10646. Four tasks are addressed for the Balinese script: transliteration, searching, sorting, and word-boundary analysis (spell checking). To verify the correctness of the algorithms, several applications were built. These applications run on Linux/Windows platforms using the J2SDK 1.5 and J2ME WTK2 libraries. The input and output of the algorithms/applications are character sequences obtained from keyboard input and from external files. This research produces a module, or library, able to process Balinese text based on the Unicode standard. The outputs of this research are the ability, skill, and mastery of: 1. the Unicode standard (21-bit) as a substitute for ASCII (7-bit) and ISO 8859-1 (8-bit), the former default character sets in many applications; 2. the Balinese Unicode text-processing algorithms; 3. the experience of working with and learning from an international team consisting of foremost experts in the area: Michael Everson (Ireland), Peter Constable (Microsoft US), I Made Suatjana, and Ida Bagus Adi Sudewa.
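As a small, generic illustration of Unicode-based processing of this kind (not the algorithms from the paper), the following Python sketch tests whether characters fall in the Balinese block, which Unicode assigns to U+1B00–U+1B7F, and sorts a string by code point.

"""Generic illustration of Unicode-aware handling of Balinese text.
This is not the paper's algorithm; it only shows block-membership tests
and a plain code-point sort."""
import unicodedata

BALINESE_START, BALINESE_END = 0x1B00, 0x1B7F  # the Unicode "Balinese" block

def is_balinese(ch: str) -> bool:
    return BALINESE_START <= ord(ch) <= BALINESE_END

def describe(text: str) -> None:
    for ch in text:
        name = unicodedata.name(ch, "<unnamed>")
        print(f"U+{ord(ch):04X}  {name}  balinese={is_balinese(ch)}")

if __name__ == "__main__":
    sample = "\u1B13\u1B2E\u1B05"  # arbitrary Balinese letters chosen for demonstration
    describe(sample)
    # A naive sort by code point; real collation would need script-specific rules.
    print(sorted(sample, key=ord))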
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Sunghwan, Asta Gindulyte, Jian Zhang, Paul A. Thiessen, and Evan E. Bolton. "PubChem Periodic Table and Element pages: improving access to information on chemical elements from authoritative sources." Chemistry Teacher International, July 13, 2020. http://dx.doi.org/10.1515/cti-2020-0006.

Full text
Abstract:
PubChem (https://pubchem.ncbi.nlm.nih.gov) is one of the top five most visited chemistry web sites in the world, with more than five million unique users per month (as of March 2020). Many of these users are educators, undergraduate students, and graduate students at academic institutions. Therefore, PubChem has great potential as an online resource for chemical education. This paper describes the PubChem Periodic Table and Element pages, which were recently introduced to celebrate the 150th anniversary of the periodic table. These services help users navigate the abundant chemical element data available within PubChem, while providing a convenient entry point to explore additional chemical content, such as biological activities and health and safety data available in PubChem Compound pages for specific elements and their isotopes. The PubChem Periodic Table and Element pages are also available as widgets, which enable web developers to display PubChem’s element data on web pages they design. The elemental data can be downloaded in common file formats and imported into data analysis programs (e.g., spreadsheet software, like Microsoft Excel and Google Sheets, and computer scripts, such as Python and R). Overall, the PubChem Periodic Table and Element pages improve access to chemical element data from authoritative sources.
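Since the abstract notes that the element data can be downloaded in common file formats and read into data-analysis tools, the following Python sketch shows one way to load such a download with pandas. The file name ("PubChemElements_all.csv") and the column name used in the query are assumptions about a typical export; check the actual file obtained from the Periodic Table page.

"""Sketch: load a CSV export of the PubChem Periodic Table into pandas.
The file name and column labels are assumptions about the export format,
not guaranteed by the abstract."""
import pandas as pd

# Assumed name of the CSV downloaded from the PubChem Periodic Table page.
elements = pd.read_csv("PubChemElements_all.csv")

print(elements.shape)         # how many elements and properties were exported
print(elements.columns[:10])  # inspect the first few property columns

# Example query, assuming an "AtomicNumber" column exists in the export.
if "AtomicNumber" in elements.columns:
    light = elements[elements["AtomicNumber"] <= 20]
    print(light.head())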
APA, Harvard, Vancouver, ISO, and other styles
40

Qing, Xue, Meng Wang, Gerrit Karssen, Patricia Bucki, Wim Bert, and Sigal Braun-Miyara. "PPNID: a reference database and molecular identification pipeline for plant-parasitic nematodes." Bioinformatics, September 14, 2019. http://dx.doi.org/10.1093/bioinformatics/btz707.

Full text
Abstract:
Abstract Motivation The phylum Nematoda comprises the most cosmopolitan and abundant metazoans on Earth and plant-parasitic nematodes represent one of the most significant nematode groups, causing severe losses in agriculture. Practically, the demands for accurate nematode identification are high for ecological, agricultural, taxonomic and phylogenetic researches. Despite their importance, the morphological diagnosis is often a difficult task due to phenotypic plasticity and the absence of clear diagnostic characters while molecular identification is very difficult due to the problematic database and complex genetic background. Results The present study attempts to make up for currently available databases by creating a manually-curated database including all up-to-date authentic barcoding sequences. To facilitate the laborious process associated with the interpretation and identification of a given query sequence, we developed an automatic software pipeline for rapid species identification. The incorporated alignment function facilitates the examination of mutation distribution and therefore also reveals nucleotide autapomorphies, which are important in species delimitation. The implementation of genetic distance, plot and maximum likelihood phylogeny analysis provides more powerful optimality criteria than similarity searching and facilitates species delimitation using evolutionary or phylogeny species concepts. The pipeline streamlines several functions to facilitate more precise data analyses, and the subsequent interpretation is easy and straightforward. Availability and implementation The pipeline was written in vb.net, developed on Microsoft Visual Studio 2017 and designed to work in any Windows environment. The PPNID is distributed under the GNU General Public License (GPL). The executable file along with tutorials is available at https://github.com/xueqing4083/PPNID. Supplementary information Supplementary data are available at Bioinformatics online.
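To illustrate the distance-based criterion mentioned above (the pipeline reports genetic distances alongside alignment and maximum likelihood phylogeny), the following Python sketch computes a simple uncorrected p-distance between two aligned barcode sequences. It is a generic illustration, not code from the PPNID pipeline, which is written in VB.NET; the sequences are made-up fragments.

"""Generic illustration of an uncorrected p-distance between two aligned
sequences; not part of the PPNID pipeline itself."""

def p_distance(seq1: str, seq2: str) -> float:
    """Proportion of differing sites among compared sites, ignoring gaps/ambiguities."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    compared = differences = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a in "ACGT" and b in "ACGT":   # skip gaps and ambiguous bases
            compared += 1
            if a != b:
                differences += 1
    return differences / compared if compared else float("nan")

# Toy example with made-up fragments (real barcodes are hundreds of bases long).
query = "ATGCGT-ACGTTACG"
reference = "ATGCGTAACGTTGCG"
print(f"p-distance: {p_distance(query, reference):.3f}")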
APA, Harvard, Vancouver, ISO, and other styles
41

Badrun, Badrun, Efrizon Efrizon, and Legiman Slamet. "KONTRIBUSI MOTIVASI BELAJAR DAN SUASANA BELAJAR TERHADAP HASIL BELAJAR PADA MATA PELAJARAN KETERAMPILAN KOMPUTER DAN PENGELOLAAN INFORMASI (KKPI) SISWA KELAS X TKJ SMKN 4 TAKENGON-ACEH TENGAH." Voteteknika (Vocational Teknik Elektronika dan Informatika) 4, no. 1 (November 20, 2018). http://dx.doi.org/10.24036/voteteknika.v4i1.6143.

Full text
Abstract:
This study was motivated by the problem of low learning outcomes in the subject Computer Skills and Information Management (KKPI) in class X Computer Engineering and Networks at SMK Negeri 4 Takengon, where 30% of students obtained results below the minimum completeness criterion (KKM). The KKM set by the school for the KKPI subject is 75 on a scale of 0-100. The purpose of this study is to reveal the contribution of learning motivation and learning atmosphere to the learning outcomes in Computer Skills and Information Management (KKPI) of class X Computer Engineering and Networks at SMKN 4 Takengon. This research is associative and correlational in nature. The population was all students of class X Computer Engineering and Networks at SMKN 4 Takengon, consisting of two classes of 40 students. The sampling technique was total sampling, in which the sample comprises all members of the population because the number of subjects is fewer than 100. Learning outcome data were obtained from the subject teachers of Computer Skills and Information Management (KKPI) at SMK Negeri 4 Takengon, while data on learning motivation and learning atmosphere were collected through a questionnaire using a Likert scale that had been tested for validity and reliability. The data were analyzed using statistical methods with the help of Microsoft Excel 2007. The results showed that: (1) learning motivation accounted for 24.61% of the learning outcomes of class X students of SMK Negeri 4 Takengon; (2) learning atmosphere accounted for 23.54%; and (3) learning motivation and learning atmosphere together accounted for 47.90%. It can therefore be concluded that learning motivation and learning atmosphere contribute to learning outcomes: the higher the motivation to learn and the better the learning atmosphere, the better the learning results. Keywords: motivation, learning atmosphere, learning outcomes, associative correlation, total sampling.
APA, Harvard, Vancouver, ISO, and other styles
42

Kul'kova, Anna Olegovna, and Natalya Valentinovna Yandybaeva. "PROGRAM TESTING MODULE FOR QUALITY CONTROL OF THE EDUCATIONAL PROCESS." Vestnik of Astrakhan State Technical University. Series: Management, computer science and informatics, October 25, 2017, 122–28. http://dx.doi.org/10.24143/2072-9502-2017-4-122-128.

Full text
Abstract:
To carry out computer-based testing of the knowledge of students studying object-oriented programming, a software module called "Test" was developed. It can become an element of the university's information and educational environment and support distance learning within the framework of the SCORM standard. Information and educational resources (IOR) in SCORM format are accessible, adaptable, efficient, durable and interoperable. An information and educational resource implemented as a SCORM package interacts with the LMS integrated into the educational portal and transmits the relevant information. The program was created in the Microsoft Visual Studio 2015 environment using the C# programming language. The testing module allows simultaneous testing on multiple computers without duplicating questions. The program processes and displays the test results. A test can be prepared in a simple word processor and contain any number of questions; HTML formatting tags, arbitrary symbols and pictures can be inserted into questions. The difficulty of the questions and the duration of testing are set by the lecturer in different ways, taking into account the level of the student's preparation. After testing, the user can see the test results and receives detailed recommendations for further study of the subject. Test results are saved in an Excel file, so the dynamics of students' knowledge can be traced. A diagram of the transitions of the program's control focus is given, and a scheme of the interaction of LMS Moodle components and services with the developed "Test" program module is also presented.
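As a small, language-agnostic illustration of the result-logging idea mentioned above (per-student scores appended to an Excel file so that knowledge dynamics can be traced over time), the following Python sketch uses openpyxl; it is not the C# module described in the article, and the file and column names are invented for the example.

"""Sketch: append test results to an Excel workbook so score dynamics can
be traced. Not the C# "Test" module from the article; names are invented."""
from datetime import date
from pathlib import Path
from openpyxl import Workbook, load_workbook

RESULTS_FILE = Path("test_results.xlsx")  # hypothetical results workbook

def log_result(student: str, topic: str, score: float) -> None:
    if RESULTS_FILE.exists():
        wb = load_workbook(RESULTS_FILE)
        ws = wb.active
    else:
        wb = Workbook()
        ws = wb.active
        ws.append(["date", "student", "topic", "score"])  # header row on first use
    ws.append([date.today().isoformat(), student, topic, score])
    wb.save(RESULTS_FILE)

if __name__ == "__main__":
    log_result("Ivanov I.I.", "OOP: inheritance", 87.5)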
APA, Harvard, Vancouver, ISO, and other styles
43

"Software Reviews : Econoland Software Developer: Michael C. Lovell, Economics Department, Wesleyan University, Middletown, CT 06457 Hard disk space required: About 2 MB; program size (compacted): 448 KB Ram size required: As small as 8 MB Version reviewed: 8.1 Effectiveness: Good User friendliness: Adequate, but needs improvement Documentation: Not available, a brief installation instruction is provided in a Microsoft Word file (35 KB) and in a generic text file (3 KB." Social Science Computer Review 16, no. 4 (December 1998): 433–37. http://dx.doi.org/10.1177/089443939801600409.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Hinner, Kajetan. "Statistics of Major IRC Networks." M/C Journal 3, no. 4 (August 1, 2000). http://dx.doi.org/10.5204/mcj.1867.

Full text
Abstract:
Internet Relay Chat (IRC) is a text-based computer-mediated communication (CMC) service in which people can meet and chat in real time. Most chat occurs in channels named for a specific topic, such as #usa or #linux. A user can take part in several channels when connected to an IRC network. For a long time the only major IRC network available was EFnet, founded in 1990. Over the 1990s three other major IRC networks developed, Undernet (1993), DALnet (1994) and IRCnet (which split from EFnet in June 1996). Several causes led to the separate development of IRC networks: fast growth of user numbers, poor scalability of the IRC protocol and content disagreements, like allowing or prohibiting 'bot programs. Today we are experiencing the development of regional IRC networks, such as BrasNet for Brazilian users, and increasing regionalisation of the global networks -- IRCnet users are generally European, EFnet users generally from the Americas and Australia. All persons connecting to an IRC network at one time create that IRC network's user space. People are constantly signing on and off each network. The total number of users who have ever been to a specific IRC network could be called its 'social space' and an IRC network's social space is by far larger than its user space at any one time. Although there has been research on IRC almost from its beginning (it was developed in 1988, and the first research was made available in late 1991 (Reid)), resources on quantitative development are rare. To rectify this situation, a quantitative data logging 'bot program -- Socip -- was created and set to run on various IRC networks. Socip has been running for almost two years on several IRC networks, giving Internet researchers empirical data of the quantitative development of IRC. Methodology Any approach to gathering quantitative data on IRC needs to fulfil the following tasks: Store the number of users that are on an IRC network at a given time, e.g. every five minutes; Store the number of channels; and, Store the number of servers. It is possible to get this information using the '/lusers' command on an IRC-II client, entered by hand. This approach yields results as in Table 1. Table 1: Number of IRC users on January 31st, 1995 Date Time Users Invisible Servers Channels 31.01.95 10:57 2737 2026 93 1637 During the first months of 1995, it was even possible to get all user information using the '/who **' command. However, on current major IRC networks with greater than 50000 users this method is denied by the IRC Server program, which terminates the connection because it is too slow to accept that amount of data. Added to this problem is the fact that collecting these data manually is an exhausting and repetitive task, better suited to automation. Three approaches to automation were attempted in the development process. The 'Eggdrop' approach The 'Eggdrop' 'bot is one of the best-known IRC 'bot programs. Once programmed, 'bots can act autonomously on an IRC network, and Eggdrop was considered particularly convenient because customised modules could be easily installed. However, testing showed that the Eggdrop 'bot was unsuitable for two reasons. The first was technical: for reasons undetermined, all Eggdrop modules created extensive CPU usage, making it impossible to run several Eggdrops simultaneously to research a number of IRC networks. The second reason had to do with the statistics to be obtained. 
The objective was to get a snapshot of current IRC users and IRC channel use every five minutes, written into an ASCII file. It was impossible to extend Eggdrop's possibilities in a way that it would periodically submit the '/lusers' command and write the received data into a file. For these reasons, and some security concerns, the Eggdrop approach was abandoned. IrcII was a UNIX IRC client with its own scripting language, making it possible to write command files which periodically submit the '/lusers' command to any chosen IRC server and log the command's output. Four different scripts were used to monitor IRCnet, EFnet, DALnet and Undernet from January to October 1998. These scripts were named Socius_D, Socius_E, Socius_I and Socius_U (depending on the network). Every hour each script stored the number of users and channels in a logfile (examinable using another script written in the Perl language). There were some drawbacks to the ircII script approach. While the need for a terminal to run on could be avoided using the 'screen' package -- making it possible to start ircII, run the scripts, detach, and log off again -- it was impossible to restart ircII and the scripts using an automatic task-scheduler. Thus periodic manual checks were required to find out if the scripts were still running and restart them if needed (e.g. if the server connection was lost). These checks showed that at least one script would not be running after 10 hours. Additional disadvantages were the lengthy log files and the necessity of providing a second program to extract the log file data and write it into a second file from which meaningful graphs could be created. The failure of the Eggdrop and ircII scripting approaches lead to the solution still in use today. Perl script-only approach Perl is a powerful script language for handling file-oriented data when speed is not extremely important. Its version 5 flavour allows a lot of modules to use it for expansion, including the Net::IRC package. The object-oriented Perl interface enables Perl scripts to connect to an IRC server, and use the basic IRC commands. The Socip.pl program includes all server definitions needed to create connections. Socip is currently monitoring ten major IRC networks, including DALnet, EFnet, IRCnet, the Microsoft Network, Talkcity, Undernet and Galaxynet. When run, "Social science IRC program" selects a nickname from its list corresponding to the network -- For EFnet, the first nickname used is Socip_E1. It then functions somewhat like a 'bot. Using that nickname, Socip tries to create an IRC connection to a server of the given network. If there is no failure, handlers are set up which take care of proper reactions to IRC server messages (such as Ping-pong, message output and reply). Socip then joins the channel #hose (the name has no special meaning), a maintenance channel with the additional effect of real persons meeting the 'bot and trying to interact with it every now and then. Those interactions are logged too. Sitting in that channel, the script sleeps periodically and checks if a certain time span has passed (the default is five minutes). After that, the '/lusers' command's output is stored in a data file for each IRC network and the IRC network's RRD (Round Robin database) file is updated. This database, which is organised chronologically, offers great detail for recent events and more condensed information for older events. User and channel information younger than 10 days is stored in five-minute detail. 
If older than two years, the same information is automatically averaged and stored in a per-day resolution. In case of network problems, Socip acts as necessary. For example, it recognises a connection termination and tries to reconnect after pausing by using the next nickname on the list. This prevents nickname collision problems. If the IRC server does not respond to '/luser' commands three times in a row, the next server on the list is accessed. Special (crontab-invoked) scripts take care of restarting Socip when necessary, as in termination of script because of network problems, IRC operator kill or power failure. After a reboot all scripts are automatically restarted. All monitoring is done on a Linux machine (Pentium 120, 32 MB, Debian Linux 2.1) which is up all the time. Processor load is not extensive, and this machine also acts as the Sociology Department's WWW-Server. Graphs creation Graphs can be created from the data in Socip's RRD files. This task is done using the MRTG (multi router traffic grapher) program by Tobias Oetiker. A script updates all IRC graphs four times a day. Usage of each IRC network is visualised through five graphs: Daily, Weekly and Monthly users and channels, accompanied by two graphs showing all known data users/channels and servers. All this information is continuously published on the World Wide Web at http://www.hinner.com/ircstat. Figures The following samples demonstrate what information can be produced by Socip. As already mentioned, graphs of all monitored networks are updated four times a day, with five graphs for each IRC network. Figure 1 shows the rise of EFnet users from about 40000 in November 1998 to 65000 in July 2000. Sampled data is oscillating around an average amount, which is resulting from the different time zones of users. Fig. 1: EFnet - Users and Channels since November 1998 Figure 2 illustrates the decrease of interconnected EFnet servers over the years. Each server is now handling more and more users. Reasons for taking IRC servers off the net are security concerns (attacks on the server by malicious persons), new payment schemes, maintenance and cost effort. Fig. 2: EFnet - Servers since November 1998 A nice example of a heavily changing weekly graph is Figure 3, which shows peaks shortly before 6pm CEST and almost no users shortly after midnight. Fig. 3: Galaxynet: Weekly Graph (July, 15th-22nd, 2000) The daily graph portrays usage variations with even more detail. Figure 4 is taken from Undernet user and channel data. The vertical gap in the graph indicates missing data, caused either by a net split or other network problems. Fig. 4: Undernet: Daily Graph: July, 22nd, 2000 The final example (Figure 5) shows a weekly graph of the Webchat (http://www.webchat.org) network. It can be seen that every day the user count varies from 5000 to nearly 20000, and that channel numbers fluctuate in concert accordingly from 2500 to 5000. Fig. 5: Webchat: Monthly graph, Week 24-29, 2000 Not every IRC user is connected all the time to an IRC network. This figure may have increased lately with more and more flatrates and cheap Internet access offers, but in general most users will sign off the network after some time. This is why IRC is a very dynamic society, with its membership constantly in flux. 
Maximum user counts only give the highest number of members who were simultaneously online at some point, and one could only guess at the number of total users of the network -- that is, including those who are using that IRC service but are not signed on at that time. To answer these questions, more thorough investigation is necessary. Then inflows and outflows might be more readily estimated. Table 2 shows the all time maximum user counts of seven IRC networks, compared to the average numbers of IRC users of the four major IRC networks during the third quarter 1998 (based on available data). Table 2: Maximum user counts of selected IRC networks DALnet EFnet Galaxy Net IRCnet MS Chat Undernet Webchat Max. 2000 64276 64309 15253 65340 17392 60210 19793 3rd Q. 1998 21000 37000 n/a 24500 n/a 24000 n/a Compared with the 200-300 users in 1991 and the 7000 IRC-chatters in 1994, the recent growth is certainly extraordinary: it adds up to a total of 306573 users across all monitored networks. It can be expected that the 500000 IRC user threshold will be passed some time during the year 2001. As a final remark, it should be said that obviously Web-based chat systems will be more and more common in the future. These chat services do not use standard IRC protocols, and will be very hard to monitor. Given that these systems are already quite popular, the actual number of chat users in the world could have already passed the half million landmark. References Reid, Elizabeth. "Electropolis: Communications and Community on Internet Relay Chat." Unpublished Honours Dissertation. U of Melbourne, 1991. The Socip program can be obtained at no cost from http://www.hinner.com. Most IRC networks can be accessed with the original Net::Irc Perl extension, but for some special cases (e.g. Talkcity) an extended version is needed, which can also be found there. Citation reference for this article MLA style: Kajetan Hinner. "Statistics of Major IRC Networks: Methods and Summary of User Count." M/C: A Journal of Media and Culture 3.4 (2000). [your date of access] <http://www.api-network.com/mc/0008/count.php>. Chicago style: Kajetan Hinner, "Statistics of Major IRC Networks: Methods and Summary of User Count," M/C: A Journal of Media and Culture 3, no. 4 (2000), <http://www.api-network.com/mc/0008/count.php> ([your date of access]). APA style: Kajetan Hinner. (2000) Statistics of major IRC networks: methods and summary of user count. M/C: A Journal of Media and Culture 3(4). <http://www.api-network.com/mc/0008/count.php> ([your date of access]).
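The monitoring approach described in this article (a long-running client that periodically issues the /lusers command and logs the reported user and channel counts) is sketched below in Python rather than Perl. The server name is a placeholder, registration handling is crude, and parsing is deliberately limited to the RPL_LUSERCLIENT (251) and RPL_LUSERCHANNELS (254) numeric replies; it is an illustration of the idea, not the Socip program itself.

"""Simplified sketch of a Socip-style monitor: connect to an IRC server,
send LUSERS roughly every five minutes, and log the user/channel numerics.
The server name is a placeholder; error handling is omitted for brevity."""
import socket
import time

SERVER, PORT, NICK = "irc.example.net", 6667, "statbot01"  # placeholders

def main() -> None:
    sock = socket.create_connection((SERVER, PORT))
    reader = sock.makefile("r", encoding="utf-8", errors="replace")
    sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :stats bot\r\n".encode())

    last_poll = 0.0
    for line in reader:
        parts = line.strip().split()
        if not parts:
            continue
        if parts[0] == "PING":                       # keep the connection alive
            sock.sendall(f"PONG {parts[1]}\r\n".encode())
        elif len(parts) > 1 and parts[1] in ("251", "254"):
            # 251 = RPL_LUSERCLIENT (users), 254 = RPL_LUSERCHANNELS (channels)
            with open("irc_counts.log", "a") as log:
                log.write(f"{time.time():.0f}\t{line.strip()}\n")
        if time.time() - last_poll > 300:            # at least five minutes between polls,
            sock.sendall(b"LUSERS\r\n")              # triggered by incoming traffic
            last_poll = time.time()

if __name__ == "__main__":
    main()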
APA, Harvard, Vancouver, ISO, and other styles
45

Deck, Andy. "Treadmill Culture." M/C Journal 6, no. 2 (April 1, 2003). http://dx.doi.org/10.5204/mcj.2157.

Full text
Abstract:
Since the first days of the World Wide Web, artists like myself have been exploring the new possibilities of network interactivity. Some good tools and languages have been developed and made available free for the public to use. This has empowered individuals to participate in the media in ways that are quite remarkable. Nonetheless, the future of independent media is clouded by legal, regulatory, and organisational challenges that need to be addressed. It is not clear to what extent independent content producers will be able to build upon the successes of the 90s – it is yet to be seen whether their efforts will be largely nullified by the anticyclones of a hostile media market. Not so long ago, American news magazines were covering the Browser War. Several real wars later, the terms of surrender are becoming clearer. Now both of the major Internet browsers are owned by huge media corporations, and most of the states (and Reagan-appointed judges) that were demanding the break-up of Microsoft have given up. A curious about-face occurred in U.S. Justice Department policy when John Ashcroft decided to drop the federal case. Maybe Microsoft's value as a partner in covert activity appealed to Ashcroft more than free competition. Regardless, Microsoft is now turning its wrath on new competitors, people who are doing something very, very bad: sharing the products of their own labour. This practice of sharing source code and building free software infrastructure is epitomised by the continuing development of Linux. Everything in the Linux kernel is free, publicly accessible information. As a rule, the people building this "open source" operating system software believe that maintaining transparency is important. But U.S. courts are not doing much to help. In a case brought by the Motion Picture Association of America against Eric Corley, a federal district court blocked the distribution of source code that enables these systems to play DVDs. In addition to censoring Corley's journal, the court ruled that any programmer who writes a program that plays a DVD must comply with a host of license restrictions. In short, an established and popular media format (the DVD) cannot be used under open source operating systems without sacrificing the principle that software source code should remain in the public domain. Should the contents of operating systems be tightly guarded secrets, or subject to public review? If there are capable programmers willing to create good, free operating systems, should the law stand in their way? The question concerning what type of software infrastructure will dominate personal computers in the future is being answered as much by disappointing legal decisions as it is by consumer choice. Rather than ensuring the necessary conditions for innovation and cooperation, the courts permit a monopoly to continue. Rather than endorsing transparency, secrecy prevails. Rather than aiming to preserve a balance between the commercial economy and the gift-economy, sharing is being undermined by the law. Part of the mystery of the Internet for a lot of newcomers must be that it seems to disprove the old adage that you can't get something for nothing. Free games, free music, free pornography, free art. Media corporations are doing their best to change this situation. The FBI and trade groups have blitzed the American news media with alarmist reports about how children don't understand that sharing digital information is a crime. 
Teacher Gail Chmura, the star of one such media campaign, says of her students, "It's always been interesting that they don't see a connection between the two. They just don't get it" (Hopper). Perhaps the confusion arises because the kids do understand that digital duplication lets two people have the same thing. Theft is at best a metaphor for the copying of data, because the original is not stolen in the same sense as a material object. In the effort to liken all copying to theft, legal provisions for the fair use of intellectual property are neglected. Teachers could just as easily emphasise the importance of sharing and the development of an electronic commons that is free for all to use. The values advanced by the trade groups are not beyond question and are not historical constants. According to Donald Krueckeberg, Rutgers University Professor of Urban Planning, native Americans tied the concept of property not to ownership but to use. "One used it, one moved on, and use was shared with others" (qtd. in Batt). Perhaps it is necessary for individuals to have dominion over some private data. But who owns the land, wind, sun, and sky of the Internet – the infrastructure? Given that publicly-funded research and free software have been as important to the development of the Internet as have business and commercial software, it is not surprising that some ambiguity remains about the property status of the dataverse. For many the Internet is as much a medium for expression and the interplay of languages as it is a framework for monetary transaction. In the case involving DVD software mentioned previously, there emerged a grass-roots campaign in opposition to censorship. Dozens of philosophical programmers and computer scientists asserted the expressive and linguistic bases of software by creating variations on the algorithm needed to play DVDs. The forbidden lines of symbols were printed on T-shirts, translated into different computer languages, translated into legal rhetoric, and even embedded into DNA and pictures of MPAA president Jack Valenti (see e.g. Touretzky). These efforts were inspired by a shared conviction that important liberties were at stake. Supporting the MPAA's position would do more than protect movies from piracy. The use of the algorithm was not clearly linked to an intent to pirate movies. Many felt that outlawing the DVD algorithm, which had been experimentally developed by a Norwegian teenager, represented a suppression of gumption and ingenuity. The court's decision rejected established principles of fair use, denied the established legality of reverse engineering software to achieve compatibility, and asserted that journalists and scientists had no right to publish a bit of code if it might be misused. In a similar case in April 2000, a U.S. court of appeals found that First Amendment protections did apply to software (Junger). Noting that source code has both an expressive feature and a functional feature, this court held that First Amendment protection is not reserved only for purely expressive communication. Yet in the DVD case, the court opposed this view and enforced the inflexible demands of the Digital Millennium Copyright Act. Notwithstanding Ted Nelson's characterisation of computers as literary machines, the decision meant that the linguistic and expressive aspects of software would be subordinated to other concerns. A simple series of symbols were thereby cast under a veil of legal secrecy. 
Although they were easy to discover, and capable of being committed to memory or translated to other languages, fair use and other intuitive freedoms were deemed expendable. These sorts of legal obstacles are serious challenges to the continued viability of free software like Linux. The central value proposition of Linux-based operating systems – free, open source code – is threatening to commercial competitors. Some corporations are intent on stifling further development of free alternatives. Patents offer another vulnerability. The writing of free software has become a minefield of potential patent lawsuits. Corporations have repeatedly chosen to pursue patent litigation years after the alleged infringements have been incorporated into widely used free software. For example, although it was designed to avoid patent problems by an array of international experts, the image file format known as JPEG (Joint Photographic Experts Group) has recently been dogged by patent infringement charges. Despite good intentions, low-budget initiatives and ad hoc organisations are ill equipped to fight profiteering patent lawsuits. One wonders whether software innovation is directed more by lawyers or computer scientists. The present copyright and patent regimes may serve the needs of the larger corporations, but it is doubtful that they are the best means of fostering software innovation and quality. Orwell wrote in his Homage to Catalonia, There was a new rule that censored portions of the newspaper must not be left blank but filled up with other matter; as a result it was often impossible to tell when something had been cut out. The development of the Internet has a similar character: new diversions spring up to replace what might have been so that the lost potential is hardly felt. The process of retrofitting Internet software to suit ideological and commercial agendas is already well underway. For example, Microsoft has announced recently that it will discontinue support for the Java language in 2004. The problem with Java, from Microsoft's perspective, is that it provides portable programming tools that work under all operating systems, not just Windows. With Java, programmers can develop software for the large number of Windows users, while simultaneously offering software to users of other operating systems. Java is an important piece of the software infrastructure for Internet content developers. Yet, in the interest of coercing people to use only their operating systems, Microsoft is willing to undermine thousands of existing Java-language projects. Their marketing hype calls this progress. The software industry relies on sales to survive, so if it means laying waste to good products and millions of hours of work in order to sell something new, well, that's business. The consequent infrastructure instability keeps software developers, and other creative people, on a treadmill. From Progressive Load by Andy Deck, artcontext.org/progload As an Internet content producer, one does not appeal directly to the hearts and minds of the public; one appeals through the medium of software and hardware. Since most people are understandably reluctant to modify the software running on their computers, the software installed initially is a critical determinant of what is possible. Unconventional, independent, and artistic uses of the Internet are diminished when the media infrastructure is effectively established by decree. 
Unaccountable corporate control over infrastructure software tilts the playing field against smaller content producers who have neither the advance warning of industrial machinations, nor the employees and resources necessary to keep up with a regime of strategic, cyclical obsolescence. It seems that independent content producers must conform to the distribution technologies and content formats favoured by the entertainment and marketing sectors, or else resign themselves to occupying the margins of media activity. It is no secret that highly diversified media corporations can leverage their assets to favour their own media offerings and confound their competitors. Yet when media giants AOL and Time-Warner announced their plans to merge in 2000, the claim of CEOs Steve Case and Gerald Levin that the merged companies would "operate in the public interest" was hardly challenged by American journalists. Time-Warner has since fought to end all ownership limits in the cable industry; and Case, who formerly championed third-party access to cable broadband markets, changed his tune abruptly after the merger. Now that Case has been ousted, it is unclear whether he still favours oligopoly. According to Levin, global media will be and is fast becoming the predominant business of the 21st century ... more important than government. It's more important than educational institutions and non-profits. We're going to need to have these corporations redefined as instruments of public service, and that may be a more efficient way to deal with society's problems than bureaucratic governments. Corporate dominance is going to be forced anyhow because when you have a system that is instantly available everywhere in the world immediately, then the old-fashioned regulatory system has to give way (Levin). It doesn't require a lot of insight to understand that this "redefinition," this slight of hand, does not protect the public from abuses of power: the dissolution of the "old-fashioned regulatory system" does not serve the public interest. From Lexicon by Andy Deck, artcontext.org/lexicon) As an artist who has adopted telecommunications networks and software as his medium, it disappoints me that a mercenary vision of electronic media's future seems to be the prevailing blueprint. The giantism of media corporations, and the ongoing deregulation of media consolidation (Ahrens), underscore the critical need for independent media sources. If it were just a matter of which cola to drink, it would not be of much concern, but media corporations control content. In this hyper-mediated age, content – whether produced by artists or journalists – crucially affects what people think about and how they understand the world. Content is not impervious to the software, protocols, and chicanery that surround its delivery. It is about time that people interested in independent voices stop believing that laissez faire capitalism is building a better media infrastructure. The German writer Hans Magnus Enzensberger reminds us that the media tyrannies that affect us are social products. The media industry relies on thousands of people to make the compromises necessary to maintain its course. The rapid development of the mind industry, its rise to a key position in modern society, has profoundly changed the role of the intellectual. He finds himself confronted with new threats and new opportunities. 
Whether he knows it or not, whether he likes it or not, he has become the accomplice of a huge industrial complex which depends for its survival on him, as he depends on it for his own. He must try, at any cost, to use it for his own purposes, which are incompatible with the purposes of the mind machine. What it upholds he must subvert. He may play it crooked or straight, he may win or lose the game; but he would do well to remember that there is more at stake than his own fortune (Enzensberger 18). Some cultural leaders have recognised the important role that free software already plays in the infrastructure of the Internet. Among intellectuals there is undoubtedly a genuine concern about the emerging contours of corporate, global media. But more effective solidarity is needed. Interest in open source has tended to remain superficial, leading to trendy, cosmetic, and symbolic uses of terms like "open source" rather than to a deeper commitment to an open, public information infrastructure. Too much attention is focussed on what's "cool" and not enough on the road ahead. Various media specialists – designers, programmers, artists, and technical directors – make important decisions that affect the continuing development of electronic media. Many developers have failed to recognise (or care) that their decisions regarding media formats can have long reaching consequences. Web sites that use media formats which are unworkable for open source operating systems should be actively discouraged. Comparable technologies are usually available to solve compatibility problems. Going with the market flow is not really giving people what they want: it often opposes the work of thousands of activists who are trying to develop open source alternatives (see e.g. Greene). Average Internet users can contribute to a more innovative, free, open, and independent media – and being conscientious is not always difficult or unpleasant. One project worthy of support is the Internet browser Mozilla. Currently, many content developers create their Websites so that they will look good only in Microsoft's Internet Explorer. While somewhat understandable given the market dominance of Internet Explorer, this disregard for interoperability undercuts attempts to popularise standards-compliant alternatives. Mozilla, written by a loose-knit group of activists and programmers (some of whom are paid by AOL/Time-Warner), can be used as an alternative to Microsoft's browser. If more people use Mozilla, it will be harder for content providers to ignore the way their Web pages appear in standards-compliant browsers. The Mozilla browser, which is an open source initiative, can be downloaded from http://www.mozilla.org/. While there are many people working to create real and lasting alternatives to the monopolistic and technocratic dynamics that are emerging, it takes a great deal of cooperation to resist the media titans, the FCC, and the courts. Oddly enough, corporate interests sometimes overlap with those of the public. Some industrial players, such as IBM, now support open source software. For them it is mostly a business decision. Frustrated by the coercive control of Microsoft, they support efforts to develop another operating system platform. For others, including this writer, the open source movement is interesting for the potential it holds to foster a more heterogeneous and less authoritarian communications infrastructure. 
Many people can find common cause in this resistance to globalised uniformity and consolidated media ownership. The biggest challenge may be to get people to believe that their choices really matter, that by endorsing certain products and operating systems and not others, they can actually make a difference. But it's unlikely that this idea will flourish if artists and intellectuals don't view their own actions as consequential. There is a troubling tendency for people to see themselves as powerless in the face of the market. This paralysing habit of mind must be abandoned before the media will be free. Works Cited Ahrens, Frank. "Policy Watch." Washington Post (23 June 2002): H03. 30 March 2003 <http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?la... ...nguage=printer>. Batt, William. "How Our Towns Got That Way." 7 Oct. 1996. 31 March 2003 <http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm>. Chester, Jeff. "Gerald Levin's Negative Legacy." Alternet.org 6 Dec. 2001. 5 March 2003 <http://www.democraticmedia.org/resources/editorials/levin.php>. Enzensberger, Hans Magnus. "The Industrialisation of the Mind." Raids and Reconstructions. London: Pluto Press, 1975. 18. Greene, Thomas C. "MS to Eradicate GPL, Hence Linux." 25 June 2002. 5 March 2003 <http://www.theregus.com/content/4/25378.php>. Hopper, D. Ian. "FBI Pushes for Cyber Ethics Education." Associated Press 10 Oct. 2000. 29 March 2003 <http://www.billingsgazette.com/computing/20001010_cethics.php>. Junger v. Daley. U.S. Court of Appeals for 6th Circuit. 00a0117p.06. 2000. 31 March 2003 <http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0... ...117p.06>. Levin, Gerald. "Millennium 2000 Special." CNN 2 Jan. 2000. Touretzky, D. S. "Gallery of CSS Descramblers." 2000. 29 March 2003 <http://www.cs.cmu.edu/~dst/DeCSS/Gallery>. Links http://artcontext.org/lexicon/ http://artcontext.org/progload http://pacer.ca6.uscourts.gov/cgi-bin/getopn.pl?OPINION=00a0117p.06 http://www.billingsgazette.com/computing/20001010_cethics.html http://www.cs.cmu.edu/~dst/DeCSS/Gallery http://www.democraticmedia.org/resources/editorials/levin.html http://www.esb.utexas.edu/drnrm/WhatIs/LandValue.htm http://www.mozilla.org/ http://www.theregus.com/content/4/25378.html http://www.washingtonpost.com/ac2/wp-dyn/A27015-2002Jun22?language=printer Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Deck, Andy. "Treadmill Culture " M/C: A Journal of Media and Culture< http://www.media-culture.org.au/0304/04-treadmillculture.php>. APA Style Deck, A. (2003, Apr 23). Treadmill Culture . M/C: A Journal of Media and Culture, 6,< http://www.media-culture.org.au/0304/04-treadmillculture.php>
APA, Harvard, Vancouver, ISO, and other styles
46

Downes, Daniel M. "The Medium Vanishes?" M/C Journal 3, no. 1 (March 1, 2000). http://dx.doi.org/10.5204/mcj.1829.

Full text
Abstract:
Introduction The recent AOL/Time-Warner merger invites us to re-think the relationships amongst content producers, distributors, and audiences. Worth an estimated $300 billion (US), the largest Internet transaction of all time, the deal is 45 times larger than the AOL/Netscape merger of November 1998 (Ledbetter). Additionally, the Time Warner/EMI merger, which followed hard on the heels of the AOL/Time-Warner deal and is itself worth $28 billion (US), created the largest content rights organisation in the music industry. The joining of the Internet giant (AOL) with what was already the world's largest media corporation (Time-Warner-EMI) has inspired some exuberant reactions. An Infoworld column proclaimed: The AOL/Time-Warner merger signals the demise of traditional media companies and the ascendancy of 'new economy' media companies that will force any industry hesitant to adopt a complete electronic-commerce strategy to rethink and put itself on Internet time. (Saap & Schwarrtz) This comment identifies the distribution channel as the dominant component of the "new economy" media. But this might not really be much of an innovation. Indeed, the assumption of all industry observers is that Time-Warner will provide broadband distribution (through its extensive cable holdings) as well as proprietary content for AOL. It is also expected that Time-Warner will adopt AOL's strategy of seeking sponsorship for development projects as well as for content. However, both of these phenomena -- merger and sponsorship -- are at least as old as radio. It seems that the Internet is merely repeating an old industrial strategy. Nonetheless, one important difference distinguishes the Internet from earlier media: its characterisation of the audience. Internet companies such as AOL and Microsoft tend towards a simple and simplistic media- centred view of the audience as market. I will show, however, that as the Internet assumes more of the traditional mass media functions, it will be forced to adopt a more sophisticated notion of the mass audience. Indeed, the Internet is currently the site in which audience definitions borrowed from broadcasting are encountering and merging with definitions borrowed from marketing. The Internet apparently lends itself to both models. As a result, definitions of what the Internet does or is, and of how we should understand the audience, are suitably confused and opaque. And the behaviour of big Internet players, such as AOL and MSN, perfectly reflects this confusion as they seem to careen between a view of the Internet as the new television and a contrasting view of the Internet as the new shopping mall. Meanwhile, Internet users move in ways that most observers fail to capture. For example, Baran and Davis characterise mass communication as a process involving (1) an organized sender, (2) engaged in the distribution of messages, (3) directed toward a large audience. They argue that broadcasting fits this model whereas a LISTSERV does not because, even though the LISTSERV may have very many subscribers, its content is filtered through a single person or Webmaster. But why is the Webmaster suddenly more determining than a network programmer or magazine editor? The distinction seems to grow out of the Internet's technological characteristics: it is an interactive pipeline, therefore its use necessarily excludes the possibility of "broadcasting" which in turn causes us to reject "traditional" notions of the audience. 
However, if a media organisation were to establish an AOL discussion group in order to promote Warner TV shows, for example, would not the resulting communication suddenly fall under the definition as set out by Baran and Davis? It was precisely the confusion around such definitions that caused the CRTC (Canada's broadcasting and telecommunications regulator) to hold hearings in 1999 to determine what kind of medium the Internet is. Unlike traditional broadcasting, Internet communication does indeed include the possibility of interactivity and niche communities. In this sense, it is closer to narrowcasting than to broadcasting even while maintaining the possibility of broadcasting. Hence, the nature of the audience using the Internet quickly becomes muddy. While such muddiness might have led us to sharpen our definitions of the audience, it seems instead to have led many to focus on the medium itself. For example, Morris & Ogan define the Internet as a mass medium because it addresses a mass audience mediated through technology (Morris & Ogan 39). They divide producers and audiences on the Internet into four groups: One-to-one asynchronous communication (e-mail); Many-to-many asynchronous communication (Usenet and News Groups); One-to-one, one-to-few, and one-to-many synchronous communication (topic groups, construction of an object, role-playing games, IRC chats, chat rooms); Asynchronous communication (searches, many-to-one, one-to-one, one to- many, source-receiver relations (Morris & Ogan 42-3) Thus, some Internet communication qualifies as mass communication while some does not. However, the focus remains firmly anchored on either the sender or the medium because the receiver --the audience -- is apparently too slippery to define. When definitions do address the content distributed over the Net, they make a distinction between passive reception and interactive participation. As the World Wide Web makes pre-packaged content the norm, the Internet increasingly resembles a traditional mass medium. Timothy Roscoe argues that the main focus of the World Wide Web is not the production of content (and, hence, the fulfilment of the Internet's democratic potential) but rather the presentation of already produced material: "the dominant activity in relation to the Web is not producing your own content but surfing for content" (Rosco 680). He concludes that if the emphasis is on viewing material, the Internet will become a medium similar to television. Within media studies, several models of the audience compete for dominance in the "new media" economy. Denis McQuail recalls how historically, the electronic media furthered the view of the audience as a "public". The audience was an aggregate of common interests. With broadcasting, the electronic audience was delocalised and socially decomposed (McQuail, Mass 212). According to McQuail, it was not a great step to move from understanding the audience as a dispersed "public" to thinking about the audience as itself a market, both for products and as a commodity to be sold to advertisers. McQuail defines this conception of the audience as an "aggregate of potential customers with a known social- economic profile at which a medium or message is directed" (McQuail, Mass 221). Oddly though, in light of the emancipatory claims made for the Internet, this is precisely the dominant view of the audience in the "new media economy". Media Audience as Market How does the marketing model characterise the relationship between audience and producer? 
According to McQuail, the marketing model links sender and receiver in a cash transaction between producer and consumer rather than in a communicative relationship between equal interlocutors. Such a model ignores the relationships amongst consumers. Indeed, neither the effectiveness of the communication nor the quality of the communicative experience matters. This model, explicitly calculating and implicitly manipulative, is characteristically a "view from the media" (McQuail, Audience 9). Some scholars, when discussing new media, no longer even refer to audiences. They speak of users or consumers (Pavlik & Dennis).

The logic of the marketing model lies in the changing revenue base for media industries. Advertising-supported media revenues have been dropping since the early 1990s while user-supported media such as cable, satellite, online services, and pay-per-view have been steadily growing (Pavlik & Dennis 19). In the Internet-based media landscape, the audience is a revenue stream and a source of consumer information. As Bill Gates says, it is all about "eyeballs". In keeping with this view, AOL hopes to attract consumers with its "one-stop shopping and billing". And Internet providers such as MSN do not even consider their subscribers as "audiences". Instead, they work from a consumer model derived from the computer software industry: individuals make purchases without the seller providing content or thematising the likely use of the software. The analogy extends well beyond the transactional moment. The common practice of prototyping products and beta-testing software requires the participation of potential customers in the product development cycle not as a potential audience sharing meanings but as recalcitrant individuals able to uncover bugs. Hence, media companies like MTV now use the Internet as a source of sophisticated demographic research. Recently, MTV Asia established a Website as a marketing tool to collect preferences and audience profiles (Slater 50). The MTV audience is now part of the product development cycle. Another method for getting information involves the "cookie" file that automatically provides a Website with information about the user who logs on to a site (Pavlik & Dennis); a brief technical sketch of this mechanism follows this paragraph.

Simultaneously, though, both Microsoft and AOL have consciously shifted from user-subscription revenues to advertising in an effort to make online services more like television (Gomery; Darlin). For example, AOL has long tried to produce content through its own studios to generate sufficiently heavy traffic on its Internet service in order to garner profitable advertising fees (Yang). However, AOL and Microsoft have had little success in providing content (Krantz; Manes). In fact, faced with the AOL/Time-Warner merger, Microsoft declared that it was in the software rather than the content business (Trott). In short, they are caught between a broadcasting model and a consumer model and their behaviour is characteristically erratic. Similarly, media companies such as Time-Warner have failed to establish their own portals. Indeed, Time-Warner even abandoned attempts to create large Websites to compete with other Internet services when it shut down its Pathfinder site (Egan). Instead, it refocussed its Websites so as to blur the line between pitching products and covering them (Reid; Lyons).
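To make the "cookie" mechanism mentioned above concrete, the following sketch shows, in TypeScript on Node.js, how a site can ask a visitor's browser to store a small piece of data and return it on every later request, which is what lets the site recognise and profile returning users. This is an illustrative sketch only, not drawn from the article or its sources; the cookie name "visits", the port number, and the visit-counting logic are invented for the example, and real sites typically pair such identifiers with server-side profile databases.

// Minimal sketch (assumptions noted above): a Node.js server that counts a
// visitor's visits using a browser cookie.
import * as http from "http";

// Parse the Cookie request header ("name=value; name2=value2") into a map.
function parseCookies(header: string | undefined): Record<string, string> {
  const cookies: Record<string, string> = {};
  if (!header) return cookies;
  for (const pair of header.split(";")) {
    const [name, ...rest] = pair.trim().split("=");
    if (name) cookies[name] = rest.join("=");
  }
  return cookies;
}

const server = http.createServer((req, res) => {
  // Read back whatever the browser sent from a previous visit.
  const cookies = parseCookies(req.headers.cookie);
  const visits = Number(cookies["visits"] ?? "0") + 1;

  // Ask the browser to store the updated count; it will be returned
  // automatically with every subsequent request to this site.
  res.setHeader("Set-Cookie", `visits=${visits}; Path=/; Max-Age=31536000`);
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end(`You have visited this site ${visits} time(s).`);
});

server.listen(8080);

Run with ts-node (or compile with tsc) and reload http://localhost:8080 a few times: the count persists across requests because the browser sends the cookie back each time, without the user taking any action.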
One strategy for gaining large audiences is the creation of portals -- large Websites that keep surfers within the confines of a single company's site by providing content. This is the logic behind the AOL/Time-Warner merger, though both companies have clearly been unsuccessful at precisely such attempts. AOL seems to hope that Time-Warner will act as its content specialist, providing the type of compelling material that will make users want to use AOL, whereas Time-Warner seems to hope that AOL will become its privileged pipeline to the hearts and minds of untold millions. Neither has a coherent view of the audience, how it behaves, or how it should behave. Consequently, their efforts have a distinctly "unmanaged" and slightly inexplicable air to them, as though everyone were simultaneously hopeful and clueless.

While one might argue that the stage is set to capitalise on the audience as commodity, there are indications that the success of such an approach is far from guaranteed. First, the AOL/Time-Warner/EMI transaction, merely by existing, has sparked conflicts over proprietary rights. For example, the Recording Industry Association of America, representing Sony, Universal, BMG, Warner and EMI, recently launched a $6.8 billion lawsuit against MP3.com -- an AOL subsidiary -- for alleged copyright violations. Specifically, MP3.com is being sued for selling digitised music over the Internet without paying royalties to the record companies (Anderson). A similar lawsuit has recently been launched over the issue of re-broadcasting television programs over the Internet. The major US networks have joined together against Canadian Internet company iCraveTV for the unlawful distribution of content. Both the iCraveTV and the MP3.com cases show how dominant media players can marshal their forces to protect proprietary rights in both content and distribution. Since software and media industries have failed to recreate the Internet in the image of traditional broadcasting, the merger of the dominant players in each industry makes sense. However, their simultaneous failure to secure proprietary rights reflects both the competitive nature of the "new media economy" and the weakness of the marketing view of the audience.

Media Audience as Public

It is often said that communication produces social cohesion. From such cohesion communities emerge on which political or social orders can be constructed. The power of social cohesion and attachment to group symbols can even create a sense of belonging to a "people" or nation (Deutsch). Sociologist Daniel Bell described how the mass media helped create an American culture simply by addressing a large enough audience. He suggested that on the evening of 7 March 1955, when one out of every two Americans could see Mary Martin as Peter Pan on television, a kind of social revolution occurred and a new American public was born. "It was the first time in history that a single individual was seen and heard at the same time by such a broad public" (Bell, quoted in Mattelart 72). One could easily substitute the 1953 World Series or the birth of little Ricky on I Love Lucy. The desire to document such a process recurs with the Internet. Internet communities are based on the assumption that a common experience "creates" group cohesion (Rheingold; Jones). However, as a mass medium, the Internet has yet to find its originary moment, that event to which all could credibly point as the birth of something genuine and meaningful.
A recent contender was the appearance of Paul McCartney at the refurbished Cavern Club in Liverpool. On Tuesday, 14 December 1999, McCartney played to a packed club of 300 fans, while another 150,000 watched on an outdoor screen nearby. MSN arranged to broadcast the concert live over the Internet. It advertised an anticipated global audience of 500 million. Unfortunately, there was such heavy Internet traffic that the system was unable to accommodate more than 3 million people. Servers in the United Kingdom were so congested that many could only watch the choppy video stream via an American link. The concert raises a number of questions about "virtual" events.

We can draw several conclusions about measuring Internet audiences. While 3 million is a sizeable audience for a 20-minute transmission, by advertising a potential audience of 500 million, MSN showed remarkably poor judgment of the event's inherent appeal. The Internet is the first medium that allows access to unprocessed material or information about events to be delivered to an audience with neither the time constraints of broadcast media nor the space limitations of the traditional press. This is often cited as one of the characteristics that sets the Internet apart from other media. It also feeds the idea of the Internet audience as a participatory, democratic public. For example, it is often claimed that the Internet can foster democratic participation by providing voters with uninterpreted information about candidates and issues (Selnow). However, as James Curran argues, the very process of distributing uninterpreted, unfiltered information, at least in the case of traditional mass media, represents an abdication of a central democratic function -- that of watchdog to power (Curran). In the end, publics are created and maintained through active and continuous participation on the part of communicators and audiences. The Internet holds together potentially conflicting communicative relationships within the same technological medium (Morris & Ogan). Viewing the audience as co-participant in a communicative relationship makes more sense than simply focussing on the Internet audience as either an aggregate of consumers or a passively constructed symbolic public.

Audience as Relationship

Many scholars have shifted attention from the producer to the audience as an active participant in the communication process (Ang; McQuail, Audience). Virginia Nightingale goes further to describe the audience as part of a communicative relationship. Nightingale identifies four factors in the relationship between audiences and producers that emphasise their co-dependency. The audience and producer are engaged in a symbiotic relationship in which consumption and use are necessary but not sufficient explanations of audience relations. The notion of the audience invokes, at least potentially, a greater range of activities than simply use or consumption. Further, the audience actively, if not always consciously, enters relationships with content producers and the institutions that govern the creation, distribution and exhibition of content (Nightingale 149-50). Others have demonstrated how this relationship between audiences and producers is no longer the one-sided affair characterised by the marketing model or the model of the audience as public. A global culture is emerging based on critical viewing skills.
Kavoori calls this a reflexive mode born of an increasing familiarity with the narrative conventions of news and an awareness of the institutional imperatives of media industries (Kavoori). Given the sophistication of the emergent global audience, a theory that reduces new media audiences to a set of consumer preferences or behaviours will inevitably prove inadequate, just as it has for understanding audience behaviour in old media. Similarly, by ignoring those elements of audience behaviour that will be easily transported to the Web, we run the risk of idealising the Internet as a medium that will create an illusory, pre-technological public.

Conclusion

There is an understandable confusion between the two models of the audience that appear in the examples above. The "new economy" will have to come to terms with sophisticated audiences. Contrary to IBM's claim that they want to "get to know all about you", Internet users do not seem particularly interested in becoming a perpetual source of market information. The fragmented, autonomous audience resists attempts to lock it into proprietary relationships. Internet hypesters talk about creating publics and argue that the Internet recreates the intimacy of community as a corrective to the atomisation and alienation characteristic of mass society. This faith in the power of a medium to create social cohesion recalls the view of the television audience as a public constructed by the common experience of watching an important event. However, MSN's McCartney concert indicates that creating a public from spectacle is not a simple process. In fact, what the Internet media conglomerates seem to want more than anything is to create consumer bases. Audiences exist for pleasure and are drawn by the desire to be entertained. As Internet media institutions are established, the cynical view of the audience as a source of consumer behaviour and preferences will inevitably give way, to some extent, to a view of the audience as participant in communication. Audiences will be seen, as they have been by other media, as groups whose attention must be courted and rewarded. Who knows: the AOL/Time-Warner merger might, indeed, signal the new medium's coming of age.

References

Anderson, Lessley. "To Beam or Not to Beam. MP3.com Is Being Sued by the Major Record Labels. Does the Digital Download Site Stand a Chance?" Industry Standard 31 Jan. 2000. <http://www.thestandard.com>.
Ang, Ien. Watching Dallas: Soap Opera and the Melodramatic Imagination. London: Methuen, 1985.
Baran, Stanley, and Dennis Davis. Mass Communication Theory: Foundations, Ferment, and Future. 2nd ed. Belmont, Calif.: Wadsworth, 2000.
Curran, James. "Mass Media and Democracy Revisited." Mass Media and Society. Eds. James Curran and Michael Gurevitch. New York: Hodder Headline Group, 1996.
Darlin, Damon. "He Wants Your Eyeballs." Forbes 159 (16 June 1997): 114-6.
Egan, Jack. "Pathfinder, Rest in Peace: Time-Warner Pulls the Plug on Site." US News and World Report 126.18 (10 May 1999): 50.
Gomery, Douglas. "Making the Web Look like Television (America Online and Microsoft)." American Journalism Review 19 (March 1997): 46.
Jones, Steve, ed. CyberSociety: Computer-Mediated Communication and Community. Thousand Oaks: Sage, 1995.
Kavoori, Anandam P. "Discursive Texts, Reflexive Audiences: Global Trends in Television News Texts and Audience Reception." Journal of Broadcasting and Electronic Media 43.3 (Summer 1999): 386-98.
Krantz, Michael. "Is MSN on the Block?" Time 150 (20 Oct. 1997): 82.
Ledbetter, James. "AOL-Time-Warner Make It Big." Industry Standard 11 Jan. 2000. <http://www.thestandard.com>.
Lyons, Daniel. "Desperate.com (Media Companies Losing Millions on the Web Turn to Electronic Commerce)." Forbes 163.6 (22 March 1999): 50-1.
Manes, Stephen. "The New MSN as Prehistoric TV." New York Times 4 Feb. 1997: C6.
Mattelart, Armand. Mapping World Communication: War, Progress, Culture. Trans. Susan Emanuel and James A. Cohen. Minneapolis: U of Minnesota P, 1994.
McQuail, Denis. Audience Analysis. Thousand Oaks, Calif.: Sage, 1997.
---. Mass Communication Theory. 2nd ed. London: Sage, 1987.
Morris, Merrill, and Christine Ogan. "The Internet as Mass Medium." Journal of Communication 46 (Winter 1996): 39-50.
Nightingale, Virginia. Studying Audience: The Shock of the Real. London: Routledge, 1996.
Pavlik, John V., and Everette E. Dennis. New Media Technology: Cultural and Commercial Perspectives. 2nd ed. Boston: Allyn and Bacon, 1998.
Reid, Calvin. "Time-Warner Seeks Electronic Synergy, Profits on the Web (Pathfinder Site)." Publisher's Weekly 242 (4 Dec. 1995): 12.
Rheingold, Howard. Virtual Community: Homesteading on the Electronic Frontier. New York: Harper, 1993.
Roscoe, Timothy. "The Construction of the World Wide Web Audience." Media, Culture and Society 21.5 (1999): 673-84.
Sapp, Geneva, and Ephraim Schwartz. "AOL-Time-Warner Deal to Impact Commerce, Content, and Access Markets." Infoworld 11 Jan. 2000. <http://infoworld.com/articles/ic/xml/00/01/11/000111icimpact.xml>.
Slater, Joanna. "Cool Customers: Music Channels Hope New Web Sites Tap into Teen Spirit." Far Eastern Economic Review 162.9 (4 March 1999): 50.
Trott, Bob. "Microsoft Views AOL-Time-Warner as Confirmation of Its Own Strategy." Infoworld 11 Jan. 2000. <http://infoworld.com/articles/pi/xml/00/01/11/000111pimsaoltw.xml>.
Yang, Catherine. "A Major Studio Called AOL?" Business Week 1 Dec. 1997: 1773-4.

Citation reference for this article

MLA style: Daniel M. Downes. "The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy." M/C: A Journal of Media and Culture 3.1 (2000). [your date of access] <http://www.uq.edu.au/mc/0003/mass.php>.
Chicago style: Daniel M. Downes, "The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy," M/C: A Journal of Media and Culture 3, no. 1 (2000), <http://www.uq.edu.au/mc/0003/mass.php> ([your date of access]).
APA style: Daniel M. Downes. (2000) The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy. M/C: A Journal of Media and Culture 3(1). <http://www.uq.edu.au/mc/0003/mass.php> ([your date of access]).
APA, Harvard, Vancouver, ISO, and other styles
