
Journal articles on the topic 'Layout (Printing) – Software'


Consult the top 24 journal articles for your research on the topic 'Layout (Printing) – Software.'


1

Hou, Vincent D. H. "Automatic Page-Layout Scripts for Gatan Digital Micrograph®." Microscopy and Microanalysis 7, S2 (August 2001): 976–77. http://dx.doi.org/10.1017/s1431927600030956.

Abstract:
The software DigitalMicrograph (DM) by Gatan, Inc., is a popular software platform for digital imaging in microscopy. In a service-oriented microscopy laboratory, a large number of images from many different samples are generated each day, and it is critical that each image is properly labeled with sample identification and a description before printing. DM provides a script language with which various analyses can be designed or customized and repetitive tasks can be automated. This paper presents the procedures and DM scripts needed to perform these tasks. Due to the major software architecture change between version 2.5x and version 3.5x, each will be discussed separately. DM Version 2.5.8 (on Macintosh®) provides a "Data Bar" mechanism. The "Edit→Data Bar→Define and Add Data Bar..." menu command specifies data bar items (e.g., scale bar, microscope operator) to be included in the image. In addition, other annotations (text, line, rectangle, and oval) can be included as part of the data bar by first selecting the desired annotation on the image and then using the "Edit→Data Bar→Use As Default Data Bar..." menu command. After defining data bar items, executing the menu command adds them to the image.
2

Tampubolon, J., L. D. Agoestine Simangunsong, M. D. Agustina Sibuea, A. C. Sembiring, and A. Mardhatillah. "Prayer paper production facility layout redesign using systematic layout planning method and CRAFT." International Journal of Science, Technology & Management 1, no. 4 (November 30, 2020): 448–56. http://dx.doi.org/10.46729/ijstm.v1i4.84.

Abstract:
The facility layout is a strategic design that is used for a long time, and all manufacturing industries must pay attention to the right layout to increase productivity. A prayer paper manufacturing industry located in the Tanjung Morawa area, Medan, has errors in the placement of raw materials and production machines, so the distance from the temporary warehouse to the printing and cutting work stations is large, causing high material handling costs. To address these problems, research was carried out to improve and redesign the facility layout. The first method used is Systematic Layout Planning (SLP), which arranges a factory workplace so that areas with high flow frequency and strong logical relationships with each other are placed close together. The second is the Computerized Relative Allocation of Facilities Technique (CRAFT) algorithm, an improvement program that searches for an optimal design by making gradual improvements to the layout; CRAFT evaluates the layout by interchanging departmental locations. Inputs required for the CRAFT algorithm include the initial layout, flow data or frequency of movement, cost data per unit distance, and the number of departments that remain fixed. The CRAFT method is usually applied using Quantitative Systems (QS) software. Comparing the SLP and CRAFT layouts, the optimal result is obtained using the SLP method, which reduces the distance between departments by 1.407 meters, a distance efficiency of 39.91%.
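The pairwise-exchange loop at the heart of the CRAFT algorithm described in this abstract can be sketched in a few lines of Python. The flow matrix, distances, and three-department example below are illustrative assumptions, not data from the study:

```python
from itertools import combinations

def layout_cost(assign, flow, dist):
    """Total material-handling cost: flow between each pair of
    departments times the distance between their assigned locations."""
    n = len(assign)
    return sum(flow[i][j] * dist[assign[i]][assign[j]]
               for i in range(n) for j in range(n) if i != j)

def craft_improve(assign, flow, dist):
    """Greedy CRAFT-style improvement: keep applying pairwise
    department exchanges while they reduce the total cost."""
    assign = list(assign)
    best = layout_cost(assign, flow, dist)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(assign)), 2):
            assign[i], assign[j] = assign[j], assign[i]
            cost = layout_cost(assign, flow, dist)
            if cost < best:
                best, improved = cost, True
            else:
                assign[i], assign[j] = assign[j], assign[i]  # undo the swap
    return assign, best

# Illustrative 3-department example: flow[i][j] = trips per period between
# departments, dist[a][b] = metres between candidate locations a and b.
flow = [[0, 10, 1], [10, 0, 2], [1, 2, 0]]
dist = [[0, 5, 20], [5, 0, 15], [20, 15, 0]]
layout, cost = craft_improve([0, 2, 1], flow, dist)  # start from a poor layout
```

In this toy instance, the heavily trafficked departments 0 and 1 end up at the two closest locations, cutting the handling cost from 470 to 200.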
3

Torah, Russel, Yang Wei, Neil Grabham, Yi Li, Marc de Vos, Todor Todorov, Boris Popov, et al. "Enabling platform technology for smart fabric design and printing." Journal of Engineered Fibers and Fabrics 14 (January 2019): 155892501984590. http://dx.doi.org/10.1177/1558925019845903.

Abstract:
A hardware and software platform is presented that enables the design, and realisation via printing, of smart fabrics. The cultural and creative industries are an important economic area within which designers frequently utilise fabrics. Smart fabrics offer further creative opportunities to the cultural and creative industries, but designers often lack the specialist knowledge, in electronics, software and materials, required to produce smart fabrics. The software platform offers the ability to perform design, layout and visualisation of a smart fabric using a library of standard smart fabric functions (e.g. electroluminescence), so specialist expertise is not needed. Operation of the smart fabric can be simulated, and parameters can be set for the smart fabric control electronics, which consist of standard circuit board modules. The software also provides driver code for the hardware platform to print the smart fabric. The hardware platform consists of a bespoke dispenser printer; functional inks are deposited via a pneumatic syringe controlled by the driver software, allowing bespoke rapid-prototyped smart fabrics to be printed. Operation of the software and hardware system is demonstrated by the realisation of an interactive smart fabric consisting of electroluminescent lamps controlled by a proximity sensor. The modular electronics control the smart fabric operation using embedded code generated by the software platform. For example, the blink rate of the electroluminescent lamp can be adjusted by the proximity of a hand. This control is achieved through intuitive drop-down menus and input/output selections by the creative user. At present, the platform allows the design, printing and implementation of smart fabrics incorporating the functions of colour change, electroluminescence, sound emission and proximity sensing. The platform can be expanded with additional functions in the future, and the printer will be compatible with new inks developed for screen and inkjet printing.
4

Zheng, Wei Jiang, Bing Luo, Zheng Guang Hu, and Zhong Liang Lv. "Design and Implementation of a New Meteorology Geographic Information System." Applied Mechanics and Materials 411-414 (September 2013): 440–43. http://dx.doi.org/10.4028/www.scientific.net/amm.411-414.440.

Abstract:
Meteorology Geographic Information System (MeteoGIS) is a professional meteorological GIS platform with completely independent intellectual property. It applies national innovative GIS technologies to the meteorological scenario; MeteoGIS supports multiple databases, browsers, and a variety of development environments, and has good cross-platform capability. It also offers massive vector and raster data management and distribution capacity. MeteoGIS extends meteorological data models and data sets, and supports the production, layout, and printing of meteorological thematic maps. It integrates algorithms for meteorological applications and special-use analysis. The platform is comprised of development kits, a data engine, desktop software, and Web development platforms.
5

Widyokusumo, Lintang. "Desain Sampul Majalah sebagai Ujung Tombak Pemasaran." Humaniora 3, no. 2 (October 31, 2012): 637. http://dx.doi.org/10.21512/humaniora.v3i2.3408.

Abstract:
Amid the proliferation of electronic media, conventional media such as magazines remain a promising choice. Advances in printing techniques, finishing, and layout software currently enable more rapid publication of a magazine. Its presence in every corner of the city makes it ever easier for consumers to buy, and the attractive bonus in each issue makes it a complete and tempting source of information. But amid the hundreds of magazine designs on store, stall, and supermarket racks, how can a magazine survive? Surely the creativity of the magazine cover design, as a bridge of communication to customers, becomes the forefront of marketing and is therefore highly important.
6

Lineberger, R. Daniel. "Providing Extension Information Electronically—Easing the Transition From a Paper-Based System." HortScience 32, no. 4 (July 1997): 591F–592. http://dx.doi.org/10.21273/hortsci.32.4.591f.

Abstract:
Cooperative Extension has relied heavily on the distribution of printed materials to accomplish its mission of providing research-based educational materials to agricultural producers and consumers. As the costs of print media have escalated and budgets have been reduced, Extension has continually sought more efficient and effective alternatives. World Wide Web information servers are central to this task, since they are relatively inexpensive to set up and operate, and can deliver high-quality materials for on-screen viewing or printing on demand. Recent developments (specifically the WebTV network) indicate the Web to be the medium of choice for Extension delivery systems. In addition to providing electronic versions of publications, slide shows, and video clips, most Web browsers also support e-mail and interactive forms for obtaining information from the client. Analysis of Web server logs and guest registers can be used to determine client use patterns to address issues of access and accountability. The current and next generations of most word processing, page layout, and presentation software offer Web-ready layout as one saving option.
7

Andryukhina, Y. N., Ya G. Poshivaylo, and V. A. Ananev. "Methodical principles of development of tactile map symbology and layout." Geodesy and Cartography 941, no. 11 (December 20, 2018): 25–33. http://dx.doi.org/10.22389/0016-7126-2018-941-11-25-33.

Abstract:
Tactile maps (maps for blind and visually impaired people) play an essential role in the education and social adaptation of visually challenged people. Tactile cartography is developing rapidly along with science and technology, and various new technical means and materials for printing three-dimensional graphics have appeared recently. The need for cartographic materials for the visually impaired is great, so there is an urgent need for an approved methodology for creating tactile maps and 3D models, which could be used as a standard to provide educational institutions, municipalities, and other organizations working with visually challenged people with tactile cartographic materials. Recommendations on the use of map symbols and the design of tactile maps are given in the article. The recommendations are based on research carried out on the grounds of the Novosibirsk Regional Special Library for the Blind and Visually Impaired and devoted to the tactile perception of map symbols by various groups of blind and visually impaired users. The technology of making tactile maps is currently based on processing images in graphic editors independently of geodata storage and processing systems; it is labor-consuming and imposes high demands on the professional skills of cartographers. The use of geoinformation systems will make it possible to largely automate the process of creating tactile maps. The authors’ recommendations can form the basis for developing functional requirements for software that integrates GIS options with the automated preparation of tactile maps and other special cartographic materials for the blind and visually impaired.
8

Hu, Shuyu. "Research on Data Acquisition Algorithms Based on Image Processing and Artificial Intelligence." International Journal of Pattern Recognition and Artificial Intelligence 34, no. 06 (September 23, 2019): 2054016. http://dx.doi.org/10.1142/s0218001420540166.

Abstract:
At present, image recognition technology plays a decisive role in the field of pattern recognition, within which automatic recognition of bank notes is an important research topic. Owing to limitations of bill layout size and printing method, many invoice layouts are unclear, skewed, or distorted, and may even contain irregular handwritten signatures, which complicates recognition of the digital characters on the bill surface. In this regard, this paper proposes a data acquisition and recognition algorithm for ticket number identification based on an improved BP neural network, grounded in the theory of image processing and recognition and combined with improved bill information recognition technology. First, in the pre-processing stage, the bill image is denoised and converted to grayscale. After binarization, a tilt detection method based on the Bresenham integer algorithm is used to correct the tilted bill image. Second, character localization and feature extraction are carried out for the bill characters, and the target background is separated from the interference background in order to extract the desired target characters. Finally, the improved BP neural network-based bill digit data acquisition and recognition algorithm is used to classify and recognize the bill characters. The experimental results show that the improved method achieves better classification and recognition than other data acquisition and recognition algorithms.
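The pre-processing steps named in this abstract (graying and binarization of the bill image) can be illustrated with a minimal sketch. The luminance weights and the synthetic image patch are demonstration assumptions; the paper's Bresenham-based tilt correction and improved BP network are not reproduced here.

```python
import numpy as np

def to_gray(rgb):
    """Luminance grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, threshold=None):
    """Global binarization; the default threshold is the mean intensity.
    Returns a 0/1 array (1 = foreground ink on a light background)."""
    if threshold is None:
        threshold = gray.mean()
    return (gray < threshold).astype(np.uint8)

# Tiny synthetic "bill" patch: dark digits (value 20) on light paper (230).
patch = np.full((4, 4, 3), 230.0)
patch[1:3, 1:3] = 20.0
mask = binarize(to_gray(patch))  # the 2x2 dark block becomes foreground
```

A real pipeline would replace the global threshold with an adaptive method and follow it with skew correction before character segmentation.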
9

Ibezim, N. E., and E. C. Ireh. "Computer Graphics Skills Required for Effective Entrepreneurial Development." Review of European Studies 9, no. 2 (March 27, 2017): 45. http://dx.doi.org/10.5539/res.v9n2p45.

Abstract:
This study identified lucrative business ideas in the use of computer graphics skills that could boost entrepreneurial development. The descriptive survey research design was adopted for the study, and five specific purposes and five research questions were formulated to guide it. The population for the study was eight graphics design/printing press firms comprising 1,024 graphics design/printing press personnel in the Nsukka metropolis, Enugu State, Nigeria. A structured questionnaire was used as the instrument for data collection, and the data collected were analyzed using the mean. Major findings showed that the artistic, technical, communication, organizational, and problem-solving skills required in computer graphics for effective entrepreneurial development include the ability to think creatively and create a vision or imagery from memory; to utilize the hardware and software rights of graphics designing jobs; to apply the function and impact of design, and the role of the design profession, appropriately in society; to organize files in terms of formats and sizes for easy use; to use computer software to execute designs; to meet with clients and adjust designs to fit their needs or taste; and to use the various print and layout techniques. Based on these findings, it was recommended that the computer graphics curriculum of institutions of higher learning be reviewed to incorporate the present needs of society, and that the government ensure adequate funds are allocated to procure the facilities needed for teaching and learning computer graphics in schools.
10

Yulianti, Eka, Zumrotun Ni’mah, and Lili Shafdila Nursin. "Insect Atlas as a Learning Resource for Diversity of Insect." Proceeding International Conference on Science and Engineering 2 (March 1, 2019): 55–59. http://dx.doi.org/10.14421/icse.v2.55.

Abstract:
Insect diversity is a biological theme that allows direct knowledge and learning experience for students. The authors describe 24 orders and 34 species from JAZ (Jogja Adventure Zone), Candi Abang, and Liwa, Lampung Barat. Research on insects as a learning resource is rarely done. Learning resources in the form of insect diversity atlases can help students understand insects completely. The authors developed a colorful atlas of insect orders using CorelDraw X5 software through identification of the indicators to be achieved, preparation of materials, layout design, and product printing. The product was assessed by 1 material expert, 1 media expert, 3 peer reviewers, and 2 biology teachers, and tested by 15 high school/MA students using a checklist questionnaire. The overall assessment of the colored atlas of insect diversity showed very good qualifications with an assessment percentage of 88.9%, and the response of students showed very good qualifications with a percentage of 91.2%.
11

Gahlot, Manisha, Vandana Bhandari, Laimayum Jogeeta Devi, and Anita Rani. "Traditional arts and crafts: Responsible lifestyle products design through heat transfer printing." International Journal for Modern Trends in Science and Technology 06, no. 9S (October 16, 2020): 234–41. http://dx.doi.org/10.46501/ijmtst0609s34.

Abstract:
Sustainability is the key to responsible production and conservation of the environment, which is the need of the hour. Indian motifs based on traditional textile arts and crafts have always been a source of inspiration not only to Indian designers but have also intrigued global designers. These motifs can be adapted into lifestyle products through modern techniques of surface enrichment. Lifestyle products hold a lucrative market in the textile sector, and the apron is one such lifestyle product falling under the category of accessories. This study explores how traditional knowledge of Indian arts and crafts can open up avenues for responsible design of lifestyle products. In the present study, fifty motifs/designs from textile and architectural sources of Manipur were collected from secondary sources, then adapted and simplified for application on a kitchen apron using CorelDraw X3 software. Ten adapted designs were selected through visual inspection by a panel of thirty judges. The design arrangements for the kitchen apron were developed by preparing line patterns, motif/design layouts, and colourways, respectively. The outcome of every step except the line patterns was visually evaluated by the same panel of thirty judges on a five-point scale. The prototype scoring the highest weighted mean score, i.e., rank I, was selected for the subsequent steps. The finalized designs were printed on paper using disperse dyes, and the printed papers were then used to transfer the designs onto the constructed and finished apron made of polyester/cotton blended fabric. The cost of the apron was estimated at Rs. 244/-, which can be reduced if produced in bulk. Consumer assessment of the printed apron was carried out on various aesthetic parameters. Consumers’ acceptance of the printed apron was high, reflecting its marketability owing to the uniqueness of the motifs, the traditional values associated with the motifs of Manipur, the sharpness of design lines, the clarity of prints, and the reasonable price. Thus, the study outcome revealed that designs inspired by the traditional textile arts and crafts of Manipur can be successfully rejuvenated into lifestyle products through heat transfer printing, which is environmentally feasible, socially acceptable, and economically viable.
12

Hikmah, Nur, Huzein Fahmi Hawari, and Monika Gupta. "Design and simulation of interdigitated electrode for Graphene-SnO2 sensor on acetone gas." Indonesian Journal of Electrical Engineering and Computer Science 19, no. 1 (July 1, 2020): 119. http://dx.doi.org/10.11591/ijeecs.v19.i1.pp119-125.

Abstract:
This paper presents the design and simulation of an interdigitated electrode (IDE) for a graphene-SnO2 sensor for acetone gas. This study focuses on designing and simulating an IDE-based sensor platform with different configuration parameters to obtain the most ideal and efficient layout with respect to sensitivity. Even though the sensor platform can easily be fabricated using photolithography, screen printing, and other methods, simulation is preferable as it provides a low-cost, secure, and quick analysis tool with the required sensitivity analysis. The design stage is important before developing a hybrid gas sensor based on metal oxide and graphene to detect acetone for diabetes mellitus at room temperature. The IDE is a sensor platform that provides simplicity and miniaturization and offers economical mass fabrication as an alternative to large sensor systems. The sensitivity of the IDE can be improved by altering the parameters of its configuration. Herein, COMSOL Multiphysics® 5.4 software is used for simulation: the IDE-based sensor is constructed, and the electric field is simulated as a function of parameters such as the width, gap, number of fingers, and thickness of the electrode. The electric fields generated by the simulations were analyzed and discussed to find the ideal design with the highest sensitivity. From the simulation, the optimum sensitivity, with an electric field of 58808 V/m, was obtained by the IDE configuration with 14 fingers, 0.15 mm spacing between fingers, 0.15 mm finger width, and 0.7 mm thickness of fingers and electrode.
13

Chiolino, N., A. M. Francis, J. Holmes, M. Barlow, and C. Perkowski. "470 Celsius Packaging System for Silicon Carbide Electronics." Additional Conferences (Device Packaging, HiTEC, HiTEN, and CICMT) 2021, HiTEC (April 1, 2021): 000083–88. http://dx.doi.org/10.4071/2380-4491.2021.hitec.000083.

Abstract:
High temperature Silicon Carbide (SiC) integrated circuit (IC) processes have enabled devices that operate at >450°C for more than a year. These results have established the need for more advanced and practical packaging strategies. Off-the-shelf state-of-the-art packages cannot withstand the same high temperatures as the semiconductor for long periods of time. Packaging SiC die to survive temperatures >450°C, while also maintaining a packaging strategy that is agile, rapid, and modular, presents new challenges. Presented is a technique for packaging SiC die with a focus on additive manufacturing, modular design scaling, and rugged survivability. This packaging strategy utilizes state-of-the-art Additive Manufacturing (AM) methods, using an nScrypt 3Dn-Tabletop printer together with stereolithography (SLA) digital light processing (DLP) 3D printing. Ultra-violet (UV) curable ceramic resins are used to create high temperature connectors. A design environment is also described, in which first-time-correct interconnect layers are verified in software to reduce the risk of errors. A Ceramic Wiring Board Process Design Kit (CWBPDK) allows the design of single or multiple layers of metal with fabricated SiC die. This interconnect is verified with standard design rule checking (DRC) and layout vs. schematic (LVS) software. Entire systems in packages can be verified using multiple SiC die. Input and output (I/O) pins are connected to these modules using metal connectors. After design, manufacturing can be performed in just a few days. A system in package for driving a stepper motor was designed and fabricated using this packaging method. The motor actuator design utilizes four separate SiC die, which contain large JFETs designed for sourcing current in a unipolar stepper motor architecture. This module was placed in a furnace at 470°C and demonstrated functional operation for over 1000 hours. These devices were able to source an average of 30 mA at >400°C to drive the room temperature stepper motor. A high I/O count, next-generation package for discrete SiC chips was also designed using this packaging system. A single large JFET component was soaked for over 100 hours at both 500°C and 800°C. Utilizing Ozark IC's automated test design environment, several DC and transient variables were captured for both tests and will be presented.
14

Reul, Christian, Dennis Christ, Alexander Hartelt, Nico Balbach, Maximilian Wehner, Uwe Springmann, Christoph Wick, Christine Grundig, Andreas Büttner, and Frank Puppe. "OCR4all—An Open-Source Tool Providing a (Semi-)Automatic OCR Workflow for Historical Printings." Applied Sciences 9, no. 22 (November 13, 2019): 4853. http://dx.doi.org/10.3390/app9224853.

Abstract:
Optical Character Recognition (OCR) on historical printings is a challenging task, mainly due to the complexity of the layout and the highly variant typography. Nevertheless, in the last few years, great progress has been made in historical OCR, resulting in several powerful open-source tools for preprocessing, layout analysis and segmentation, character recognition, and post-processing. The drawback of these tools is often their limited applicability for non-technical users like humanist scholars, in particular the combined use of several tools in a workflow. In this paper, we present an open-source OCR software called OCR4all, which combines state-of-the-art OCR components and continuous model training into a comprehensive workflow. While a variety of materials can already be processed fully automatically, books with more complex layouts require manual intervention by the users. This is mostly because the ground truth required for training stronger mixed models (for segmentation as well as text recognition) is not yet available in the desired quantity or quality. To deal with this issue in the short run, OCR4all offers a comfortable GUI that allows error corrections not only in the final output but already in early stages, to minimize error propagation. In the long run, this constant manual correction produces large quantities of valuable, high-quality training material, which can be used to improve fully automatic approaches. Furthermore, extensive configuration capabilities are provided to set the degree of automation of the workflow and to adapt the carefully selected default parameters to specific printings, if necessary. During experiments, fully automated application to 19th-century novels showed that OCR4all can considerably outperform the commercial state-of-the-art tool ABBYY Finereader on moderate layouts if suitably pretrained mixed OCR models are available. Furthermore, on very complex early printed books, even users with minimal or no experience were able to capture the text with manageable effort and great quality, achieving excellent Character Error Rates (CERs) below 0.5%. The architecture of OCR4all allows the easy integration (or substitution) of newly developed tools for its main components through standardized interfaces like PageXML, thus aiming at continually higher automation for historical printings.
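The Character Error Rate (CER) quoted in this abstract is conventionally computed as the edit distance between the recognised text and the ground truth, normalised by the length of the ground truth. A minimal sketch (the sample strings are hypothetical, not from the evaluated novels):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance normalised by reference length."""
    return levenshtein(reference, hypothesis) / len(reference)

# One substituted character in a 20-character line gives a CER of 5%.
rate = cer("the quick brown fox.", "the quick brown fux.")
```

A CER below 0.5%, as reported above, thus means fewer than one erroneous character per 200 characters of ground truth.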
15

Krylov, Sergey A., Gleb I. Zagrebin, Anton V. Dvornikov, Dmitriy S. Loginov, and Ivan E. Fokin. "The automation of processes of atlas mapping." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-193-2019.

Abstract:
Atlas mapping is one of the priorities of the modern mapping industry. This is connected both with the unceasing popularity of traditional printed atlases and with the active introduction and use of interactive electronic and web atlases, as well as with the development of atlas information systems. Currently there are no generalizing methodological solutions for automating atlas compilation processes, despite the importance and prospects of atlas mapping. In addition, there is no software that meets all requirements for the design of an atlas. The use of geographic information technologies in atlas mapping does not solve all the problems involved in creating atlases. For example, GIS can be used for individual processes, e.g. the development of spatial and thematic databases, the creation of thematic maps, cartographic generalization, the design of a graphical index of maps for atlas pages, etc. But GIS does not provide for the creation of a finished cartographic work with an integrated structure and content. This fact undoubtedly leads to the high cost and laboriousness of creating atlases, to long design times, and to the presence of errors in their compilation.

The Department of Cartography of the Moscow State University of Geodesy and Cartography (MIIGAiK) carries out research on the development of the theory and methods of automating atlas mapping processes. The research is directed at the speed of creation and the improvement of the quality of general geographical, thematic, and complex atlases. As part of this research, methods and algorithms for the automated creation of atlases have been developed, providing solutions to the most time-consuming and rather complex processes of atlas mapping, among which are:

1. The development of an optimal atlas structure;
2. Design of the mathematical basis of the atlas (selection of a scale series, cartographic projections, format, and layout);
3. Formalization of the creation of geographic base maps for the atlas;
4. The formation and visualization of reference information of the atlas;
5. Organization, storage, and use of spatial-temporal data in electronic atlases.

Figure 1 presents the proposed solutions for automating the processes mentioned above. The list of solutions may differ depending on the type of atlas being created (general geographical, thematic, or complex). The following options arise when developing the structure of the atlas. The use of a unified system of classification and coding of thematic maps, taking into account the degree of knowledge (study) of the object and mapped phenomenon, applies only to a thematic or complex regional atlas. The definition of possible combinations of territories shown on a single atlas map is used only for a thematic or general geographical atlas. For each type of atlas, the formation and use of the reference information system of the studied atlases can be used, as well as the identification and formalization of factors affecting the inclusion of a specific section or a separate map.

The results of the research will ensure the efficiency of creating atlases and increase their quality. They will also help to meet the increasing consumer demand for atlas map products, especially in the form of electronic atlases and geo-portal solutions.
16

Tahmasebinia, Faham, Samad M.E. Sepasgozar, Sara Shirowzhan, Marjo Niemela, Arthur Tripp, Servani Nagabhyrava, ko ko, Zuheen Mansuri, and Fernando Alonso-Marroquin. "Criteria development for sustainable construction manufacturing in Construction Industry 4.0." Construction Innovation 20, no. 3 (March 23, 2020): 379–400. http://dx.doi.org/10.1108/ci-10-2019-0103.

Abstract:
Purpose This paper aims to present the sustainable performance criteria for 3D printing practices, while reporting the primarily computations and lab experimentations. The potential advantages for integrating three-dimensional (3D) printing into house construction are significant in Construction Industry 4.0; these include the capacity for mass customisation of designs and parameters for functional and aesthetic purposes, reduction in construction waste from highly precise material placement and the use of recycled waste products in layer deposition materials. With the ultimate goal of improving construction efficiency and decreasing building costs, applying Strand7 Finite Element Analysis software, a numerical model was designed specifically for 3D printing in a cement mix incorporated with recycled waste product high-density polyethylene (HDPE) and found that construction of an arched truss-like roof was structurally feasible without the need for steel reinforcements. Design/methodology/approach The research method consists of three key steps: design a prototype of possible structural layouts for the 3DSBP, create 24 laboratory samples using a brittle material to identify operation challenges and analyse the correlation between time and scale size and synthesising the numerical analysis and laboratory observations to develop the evaluation criteria for 3DSBP products. The selected house consists of layouts that resemble existing house such as living room, bed rooms and garages. Findings Some criteria for sustainable construction using 3DP were developed. The Strand7 model results suggested that under the different load combinations as stated in AS1700, the maximum tensile stress experienced is 1.70 MPa and maximum compressive stress experienced is 3.06 MPa. The cement mix of the house is incorporated with rHDPE, which result in a tensile strength of 3 MPa and compressive strength of 26 MPa. 
That means the house is structurally feasible without the help of any reinforcements. Investigations were also performed comparing flat and arched roofs; the maximum tensile stress within a flat roof would cause the concrete to fail, whereas an arched roof reduced the maximum tensile stress to a range the concrete could withstand under loading. Currently, there are a few 3D printing techniques that can be adopted for this purpose, and more advanced technology in the future could eliminate the current limitations of 3D printing and make this idea a common practice in house construction. Originality/value This study provides some novel criteria for evaluating 3D printing performance and discusses the challenges of 3D printing utilisation from design and managerial perspectives. The criteria rest on the pillars of maximum utility and minimum impact and can be used by scholars and practitioners to measure their performance. The criteria and the results of the computation and experimentation can be considered critical benchmarks for future practices.
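The feasibility claim in this abstract amounts to a demand-versus-capacity comparison. A minimal sketch of that check, using the stress values reported above (the variable names are my own, not the paper's), might look like:

```python
# Demand-vs-capacity check sketched from the reported Strand7 results.
# All values in MPa; names are illustrative, not taken from the paper.
max_tensile_demand = 1.70      # worst-case tensile stress under the load combinations
max_compressive_demand = 3.06  # worst-case compressive stress

tensile_capacity = 3.0         # rHDPE cement mix tensile strength
compressive_capacity = 26.0    # rHDPE cement mix compressive strength

feasible = (max_tensile_demand <= tensile_capacity
            and max_compressive_demand <= compressive_capacity)
print(feasible)  # True: no steel reinforcement required on this criterion
```

On this simple criterion the arched roof passes in both tension and compression, which is why the flat roof (whose tensile demand exceeds the capacity) fails where the arch does not.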
APA, Harvard, Vancouver, ISO, and other styles
17

Barvir, Radek, Alena Vondrakova, and Jan Brus. "TouchIt3D: Technology (not only) for Tactile Maps." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-24-2019.

Full text
Abstract:
<p><strong>Abstract.</strong> The majority of information has a spatial context that can be represented on a map, while maps present the real world in a simplified and generalised way, focusing on key features or a specific topic. For some users, a map representing the real spatial context is not merely a possibility but a necessity. Among these users are people with visual impairments.</p><p> The number of visually impaired people increases every year, and considerable attention is devoted to their full-fledged integration into society. People with visual impairments are, however, a target group with specific user needs, and the conventional map is insufficient for them. Along with the growing number of visually impaired people, the importance of tactile cartography is increasing.</p><p> Currently, there are many technologies used for creating tactile maps, ranging from very primitive and cheap solutions to advanced methods. The simplest is drawing on the hand, which provides only a real-time perception that must be memorised for later use. Another technique, hand embroidery, consists of thick fibre placed on cardboard or another paper type. More accurate is drawing on special paper for the blind or using dense colour gels. Machine production technologies are also used, e.g. shaping cardboard, plastic or metal. Braille printers can produce moderately complicated tactile maps using raised dots. Similar results can be obtained using serigraphy. Very popular is printing on heat-sensitive paper, as mentioned before in the case of the haptic maps by Mapy.cz.
Another possibility is to use rubberised colours and the nowadays popular technology of 3D printing (Vozenilek and Ludikova, 2010).</p><p> At the Department of Geoinformatics, Faculty of Science, Palacký University Olomouc, Czechia, the research team developed prototypes and a methodology for the creation of a modern type of 3D tactile map, linkable with mobile devices (Barvir et al., 2018). Interactive tactile maps connectable with mobile devices bring new opportunities to develop tactile map production. The prototypes have been verified in practice in cooperation with educational centres for people with visual impairment and blind people, and with special schools. It is comprehensive research addressing many scientific challenges; this contribution summarises the most significant findings of the research.</p><p> The developed TouchIt3D technology is based on linking 3D objects, such as tactile maps, 3D models, controls, etc., with a tablet or mobile phone using a combination of conductive and non-conductive filament. Each model is linked to an individual mobile application layout that initiates a predefined action when the user touches the model. Such an action may be, for example, a vibration or a speech command when a person with visual impairment touches the appropriate map symbol. An example is a listing of current public transport departures after the user touches the bus-stop map symbol on a 3D transport terminal plan. Data can be acquired in real time via the Internet, as the tablet can be connected to WiFi or a cellular network. TouchIt3D technology is primarily focused on the presentation of spatial data and navigation for the public and for people with visual or other impairments.</p><p> There are two ways to create such a tactile map: the first is to prepare all the data manually; the other is a semi-automatic workflow.
This approach is significantly different from previous workflows for producing maps for people with visual impairment. The solution, based on free and open-source software and data, together with delivering the electronic part of the map on a tablet, dramatically lowered the cost of tactile map production. The designed scripts and models also reduced the time required for map design to a minimum. User testing provided all the data required for improvement and for maximal adaptation of the cartographic visualisation methods to the target users' needs. Nevertheless, maps produced partly automatically from crowdsourced data cannot offer the same quality as individually made tactile maps.</p><p> The main aim of the research is to find a workflow for interactive tactile map creation using the TouchIt3D technology. The research also deals with setting appropriate parameters of the map, e.g. the map scale, cartographic symbol size and map content. This optimisation is done to fit the needs of people with visual impairment as much as possible on the one hand, while taking into account the limitations of the map creation process on the other.</p><p>This research is implemented within the project <i>Development of independent movement through tactile-auditory aids</i>, Nr. TL01000507, supported by the Technology Agency of the Czech Republic.</p>
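The touch-to-action linking the abstract describes can be pictured as a dispatch table mapping conductive regions of the model to predefined actions on the tablet. The following sketch is purely illustrative; the symbol names, actions and function names are assumptions of mine, not part of the actual TouchIt3D implementation:

```python
# Hypothetical sketch of the TouchIt3D idea: a touched map symbol,
# sensed through the conductive filament, triggers a predefined action.
def speak(text):
    # Placeholder standing in for the tablet's text-to-speech output.
    return f"speech: {text}"

# Each conductive region of the 3D map is linked to one action.
actions = {
    "bus_stop": lambda: speak("Next departures: line 4 in 3 minutes"),
    "entrance": lambda: speak("Main entrance"),
    "stairs":   lambda: speak("Warning: stairs ahead"),
}

def on_touch(symbol_id):
    """Called when the tablet registers a touch through the model."""
    action = actions.get(symbol_id)
    return action() if action else speak("Unknown symbol")

print(on_touch("bus_stop"))
```

The real system would replace the hard-coded departure string with a live query over the tablet's Internet connection, as the abstract notes.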
APA, Harvard, Vancouver, ISO, and other styles
18

"Study of CNC System for PCB Design using Proteus." International Journal of Advanced Trends in Computer Science and Engineering 10, no. 1 (February 15, 2021): 101–5. http://dx.doi.org/10.30534/ijatcse/2021/141012021.

Full text
Abstract:
A Computer Numerical Control (CNC) system for Printed Circuit Board (PCB) design using the Proteus Design Suite has been presented. A schematic diagram and a single-sided PCB layout of a high-voltage circuit for a Geiger–Muller (GM) tube are designed using the Proteus software. Subsequently, the PCB layout of the circuit is converted into Gerber files that are decoded into G-code through the FlatCAM software. The G-code is introduced to the CNC system consisting of a computer, a CNC controller and a CNC machine. The code is stored in the memory of the computer and is uploaded to the CNC controller by Mach3 software. The controller operates the CNC machine to perform isolation routing, drilling and milling of the PCB as per the instructed design. It is noticed that the CNC system associated with Proteus makes the PCB design process automated and easier by reducing the printing and etching steps. This study reveals that the proposed system can eliminate human error to achieve better accuracy and higher productivity as compared to the conventional methods of PCB design.
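The toolchain this abstract describes (PCB layout to Gerber, Gerber to G-code via FlatCAM, G-code to the machine via Mach3) ultimately reduces to streams of G-code motion commands. A toy generator for drilling a list of pad coordinates gives a feel for what such a stream looks like; the depths, feed rate and function name are invented for illustration and are not FlatCAM's actual output:

```python
# Toy G-code generator for PCB drilling. Values are illustrative only,
# not the output of FlatCAM nor the exact dialect expected by Mach3.
def drill_gcode(holes, safe_z=2.0, drill_z=-1.7, feed=100):
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning"]
    for x, y in holes:
        lines.append(f"G0 Z{safe_z}")           # retract to safe height
        lines.append(f"G0 X{x:.3f} Y{y:.3f}")   # rapid move over the pad
        lines.append(f"G1 Z{drill_z} F{feed}")  # plunge through the board
    lines.append(f"G0 Z{safe_z}")               # final retract
    return "\n".join(lines)

print(drill_gcode([(10.0, 5.0), (12.54, 5.0)]))
```

Isolation routing and milling follow the same pattern, with G1 moves tracing the copper outlines instead of single plunges.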
APA, Harvard, Vancouver, ISO, and other styles
19

Maras, Steven. "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes." M/C Journal 8, no. 2 (June 1, 2005). http://dx.doi.org/10.5204/mcj.2338.

Full text
Abstract:
In March 2002, I was visiting the University of Southern California. One night, as sometimes happens on a vibrant campus, two interesting but very different public lectures were scheduled against one another. The first was by the co-chairman and co-founder of Adobe Systems Inc., Dr. John E. Warnock, talking about books. The second was a lecture by acclaimed video artist Bill Viola. The first event was clearly designed as a networking forum for faculty and entrepreneurs. The general student population was conspicuously absent. Warnock spoke of the future of Adobe, shared stories of his love of books, and in an embodiment of the democratising potential of Adobe software (and no doubt to the horror of archivists in the room) he invited the audience to handle extremely rare copies of early printed works from his personal library. In the lecture theatre where Viola was to speak the atmosphere was different. Students were everywhere; even at the price of ten dollars a head. Viola spoke of time and memory in the information age, of consciousness and existence, to an enraptured audience—and showed his latest work. The juxtaposition of these two events says something about our cultural moment, caught between a paradigm modelled on reverence toward the page, and a still emergent sense of medium, intensity and experimentation. But, the juxtaposition yields more. At one point in Warnock’s speech, in a demonstration of the ultra-high resolution possible in the next generation of Adobe products, he presented a scan of a manuscript, two pages, two columns per page, overflowing with detail. Fig. 1. Dr John E. Warnock at the Annenberg Symposium. Photo courtesy of http://www.annenberg.edu/symposia/annenberg/2002/photos.php Later, in Viola’s presentation, a fragment of a video work, Silent Mountain (2001) splits the screen in two columns, matching Warnock’s text: inside each a human figure struggles with intense emotion, and the challenges of bridging the relational gap. Fig. 2. 
Images from Bill Viola, Silent Mountain (2001). From Bill Viola, THE PASSIONS. The J. Paul Getty Museum, Los Angeles in Association with The National Gallery, London. Ed. John Walsh. p. 44. Both events are, of course, lectures. And although they are different in style and content, a ‘columnular’ scheme informs and underpins both, as a way of presenting and illustrating the lecture. Here, it is worth thinking about Pierre de la Ramée or Petrus (Peter) Ramus (1515-1572), the 16th century educational reformer who in the words of Frances Yates ‘abolished memory as a part of rhetoric’ (229). Ramus was famous for transforming rhetoric through the introduction of his method or dialectic. For Walter J. Ong, whose discussion of Ramism we are indebted to here, Ramus produced the paradigm of the textbook genre. But it is his method that is more noteworthy for us here, organised through definitions and divisions, the distribution of parts, ‘presented in dichotomized outlines or charts that showed exactly how the material was organised spatially in itself and in the mind’ (Ong, Orality 134-135). Fig. 3. Ramus inspired study of Medicine. Ong, Ramus 301. Ong discusses Ramus in more detail in his book Ramus: Method, and the Decay of Dialogue. Elsewhere, Sutton, Benjamin, and I have tried to capture the sense of Ong’s argument, which goes something like the following. In Ramus, Ong traces the origins of our modern, diagrammatic understanding of argument and structure to the 16th century, and especially the work of Ramus. Ong’s interest in Ramus is not as a great philosopher, nor a great scholar—indeed Ong sees Ramus’s work as a triumph of mediocrity of sorts. Rather, his was a ‘reformation’ in method and pedagogy. The Ramist dialectic ‘represented a drive toward thinking not only of the universe but of thought itself in terms of spatial models apprehended by sight’ (Ong, Ramus 9). 
The world becomes thought of ‘as an assemblage of the sort of things which vision apprehends—objects or surfaces’. Ramus’s teachings and doctrines regarding ‘discoursing’ are distinctive for the way they draw on geometrical figures, diagrams or lecture outlines, and the organization of categories through dichotomies. This sets learning up on a visual paradigm of ‘study’ (Ong, Orality 8-9). Ramus introduces a new organization for discourse. Prior to Ramus, the rhetorical tradition maintained and privileged an auditory understanding of the production of content in speech. Central to this practice was deployment of the ‘seats’, ‘images’ and ‘common places’ (loci communes), stock arguments and structures that had accumulated through centuries of use (Ong, Orality 111). These common places were supported by a complex art of memory: techniques that nourished the practice of rhetoric. By contrast, Ramism sought to map the flow and structure of arguments in tables and diagrams. Localised memory, based on dividing and composing, became crucial (Yates 230). For Ramus, content was structured in a set of visible or sight-oriented relations on the page. Ramism transformed the conditions of visualisation. In our present age, where ‘content’ is supposedly ‘king’, an archaeology of content bears thinking about. In it, Ramism would have a prominent place. With Ramus, content could be mapped within a diagrammatic page-based understanding of meaning. A container understanding of content arises. ‘In the post-Gutenberg age where Ramism flourished, the term “content”, as applied to what is “in” literary productions, acquires a status which it had never known before’ (Ong, Ramus 313). ‘In lieu of merely telling the truth, books would now in common estimation “contain” the truth, like boxes’ (313). For Ramus, ‘analysis opened ideas like boxes’ (315). The Ramist move was, as Ong points out, about privileging the visual over the audible. 
Alongside the rise of the printing press and page-based approaches to the word, the Ramist revolution sought to re-work rhetoric according to a new scheme. Although spatial metaphors had always had a ‘place’ in the arts of memory—other systems were, however, phonetically based—the notion of place changed. Specific figures such as ‘scheme’, ‘plan’, and ‘table’, rose to prominence in the now-textualised imagination. ‘Structure’ became an abstract diagram on the page disconnected from the total performance of the rhetor. This brings us to another key aspect of the Ramist reformation: that alongside a spatialised organisation of thought Ramus re-works style as presentation and embellishment (Brummett 449). A kind of separation of conception and execution is introduced in relation to performance. In Ramus’ separation of reason and rhetoric, arrangement and memory are distinct from style and delivery (Brummett 464). While both dialectic and rhetoric are re-worked by Ramus in light of divisions and definitions (see Ong, Ramus Chs. XI-XII), and dialectic remains a ‘rhetorical instrument’ (Ramus 290), rhetoric becomes a unique site for simplification in the name of classroom practicality. Dialectic circumscribes the space of learning of rhetoric; invention and arrangement (positioning) occur in advance (289). Ong’s work on the technologisation of the word is strongly focused on identifying the impact of literacy on consciousness. What Ong’s work on Ramus shows is that alongside the so-called printing revolution the Ramist reformation enacts an equally if not more powerful transformation of pedagogic space. Any serious consideration of print must not only look at the technologisation of the word, and the shifting patterns of literacy produced alongside it, but also a particular tying together of pedagogy and method that Ong traces back to Ramus. 
If, as is canvassed in the call for papers of this issue of M/C Journal, ‘the transitions in print culture are uneven and incomplete at this point’, then could it be in part due to the way Ramism endures and is extended in electronic and hypermedia contexts? Powerpoint presentations, outlining tools (Heim 139-141), and the scourge of bullet points, are the most obvious evidence of greater institutionalization of Ramist knowledge architecture. Communication, and the teaching of communication, is now embedded in a Ramist logic of opening up content like a box. Theories of communication draw on so-called ‘models’ that draw on the representation of the communication process through boxes that divide and define. Perhaps in a less obvious way, ‘spatialized processes of thought and communication’ (Ong, Ramus 314) are essential to the logic of flowcharting and tracking new information structures, and even teaching hypertext (see the diagram in Nielsen 7): a link puts the popular notion that hypertext is close to the way we truly think into an interesting perspective. The notion that we are embedded in print culture is not in itself new, even if the forms of our continual reintegration into print culture can be surprising. In the experience of printing, of the act of pressing the ‘Print’ button, we find ourselves re-integrated into page space. A mini-preview of the page re-assures me of an actuality behind the actualizations on the screen, of ink on paper. As I write in my word processing software, the removal of writing from the ‘element of inscription’ (Heim 136) —the frictionless ‘immediacy’ of the flow of text (152) — is conditioned by a representation called the ‘Page Layout’, the dark borders around the page signalling a kind of structural abyss, a no-go zone, a place, beyond ‘Normal’, from which there is no ‘Return’.
At the same time, however, never before has the technological manipulation of the document been so complex, a part of a docuverse that exists in three dimensions. It is a world that is increasingly virtualised by photocopiers that ‘scan to file’ or ‘scan to email’ rather than good old ‘xeroxing’ style copying. Printing gives way to scanning. In a perverse extension of printing (but also residually film and photography), some video software has a function called ‘Print to Video’. That these super-functions of scanning to file or email are disabled on my department photocopier says something about budgets, but also the comfort with which academics inhabit Ramist space. As I stand here printing my lecture plan, the printer stands defiantly separate from the photocopier, resisting its colonizing convergence even though it is dwarfed in size. Meanwhile, the printer demurely dispenses pages, one at a time, face down, in a gesture of discretion or perhaps embarrassment. For in the focus on the pristine page there is a Puritanism surrounding printing: a morality of blemishes, smudges, and stains; of structure, format and order; and a failure to match that immaculate, perfect argument or totality. (Ong suggests that ‘the term “method” was appropriated from the Ramist coffers and used to form the term “methodists” to designate first enthusiastic preachers who made an issue of their adherence to “logic”’ (Ramus 304).) But perhaps this avoidance of multi-functionality is less of a Ludditism than an understanding that the technological assemblage of printing today exists peripherally to the ideality of the Ramist scheme. A change in technological means does not necessarily challenge the visile language that informs our very understanding of our respective ‘fields’, or the ideals of competency embodied in academic performance and expression, or the notions of content we adopt. This is why I would argue some consideration of Ramism and print culture is crucial. 
Any ‘true’ breaking out of print involves, as I suggest, a challenge to some fundamental principles of pedagogy and method, and the link between the two. And of course, the very prospect of breaking out of print raises the issue of its desirability at a time when these forms of academic performance are culturally valued. On the surface, academic culture has been a strange inheritor of the Ramist legacy, radically furthering its ambitions, but also it would seem strongly tempering it with an investment in orality, and other ideas of performance, that resist submission to the Ramist ideal. Ong is pessimistic here, however. Ramism was after all born as a pedagogic movement, central to the purveying ‘knowledge as a commodity’ (Ong, Ramus 306). Academic discourse remains an odd mixture of ‘dialogue in the give-and-take Socratic form’ and the scheduled lecture (151). The scholastic dispute is at best a ‘manifestation of concern with real dialogue’ (154). As Ong notes, the ideals of dialogue have been difficult to sustain, and the dominant practice leans towards ‘the visile pole with its typical ideals of “clarity”, “precision”, “distinctness”, and “explanation” itself—all best conceivable in terms of some analogy with vision and a spatial field’ (151). Assessing the importance and after-effects of the Ramist reformation today is difficult. Ong describes it an ‘elusive study’ (Ramus 296). Perhaps Viola’s video, with its figures struggling in a column-like organization of space, structured in a kind of dichotomy, can be read as a glimpse of our existence in or under a Ramist scheme (interestingly, from memory, these figures emote in silence, deprived of auditory expression). 
My own view is that while it is possible to explore learning environments in a range of ways, and thus move beyond the enclosed mode of study of Ramism, Ramism nevertheless comprises an important default architecture of pedagogy that also informs some higher level assumptions about assessment and knowledge of the field. Software training, based on a process of working through or mimicking a linked series of screenshots and commands is a direct inheritor of what Ong calls Ramism’s ‘corpuscular epistemology’, a ‘one to one correspondence between concept, word and referent’ (Ong, Orality 168). My lecture plan, providing an at a glance view of my presentation, is another. The default architecture of the Ramist scheme impacts on our organisation of knowledge, and the place of performance with in it. Perhaps this is another area where Ong’s fascinating account of secondary orality—that orality that comes into being with television and radio—becomes important (Orality 136). Not only does secondary orality enable group-mindedness and communal exchange, it also provides a way to resist the closure of print and the Ramist scheme, adapting knowledge to new environments and story frameworks. Ong’s work in Orality and Literacy could thus usefully be taken up to discuss Ramism. But this raises another issue, which has to do with the relationship between Ong’s two books. In Orality and Literacy, Ong is careful to trace distinctions between oral, chirographic, manuscript, and print culture. In Ramus this progression is not as prominent— partly because Ong is tracking Ramus’ numerous influences in detail —and we find a more clear-cut distinction between the visile and audile worlds. Yates seems to support this observation, suggesting contra Ong that it is not the connection between Ramus and print that is important, but between Ramus and manuscript culture (230). 
The interconnections but also lack of fit between the two books suggests a range of fascinating questions about the impact of Ramism across different media/technological contexts, beyond print, but also the status of visualisation in both rhetorical and print cultures. References Brummett, Barry. Reading Rhetorical Theory. Fort Worth: Harcourt, 2000. Heim, Michael. Electric Language: A Philosophical Study of Word Processing. New Haven: Yale UP, 1987. Maras, Steven, David Sutton, and Marion Benjamin. “Multimedia Communication: An Interdisciplinary Approach.” Information Technology, Education and Society 2.1 (2001): 25-49. Nielsen, Jakob. Multimedia and Hypertext: The Internet and Beyond. Boston: AP Professional, 1995. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Methuen, 1982. —. Ramus: Method, and the Decay of Dialogue. New York: Octagon, 1974. The Second Annual Walter H. Annenberg Symposium. 20 March 2002. <http://www.annenberg.edu/symposia/annenberg/2002/photos.php> USC Annenberg Center of Communication and USC Annenberg School for Communication. 22 March 2005. Viola, Bill. Bill Viola: The Passions. Ed. John Walsh. London: The J. Paul Getty Museum, Los Angeles in Association with The National Gallery, 2003. Yates, Frances A. The Art of Memory. Harmondsworth: Penguin, 1969. Citation reference for this article MLA Style Maras, Steven. "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes." M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/05-maras.php>. APA Style Maras, S. (Jun. 2005) "Reflections on Adobe Corporation, Bill Viola, and Peter Ramus while Printing Lecture Notes," M/C Journal, 8(2). Retrieved from <http://journal.media-culture.org.au/0506/05-maras.php>.
APA, Harvard, Vancouver, ISO, and other styles
20

Hill, Benjamin Mako. "Revealing Errors." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2703.

Full text
Abstract:
Introduction In The World Is Not a Desktop, Marc Weisner, the principal scientist and manager of the computer science laboratory at Xerox PARC, stated that, “a good tool is an invisible tool.” Weisner cited eyeglasses as an ideal technology because with spectacles, he argued, “you look at the world, not the eyeglasses.” Although Weisner’s work at PARC played an important role in the creation of the field of “ubiquitous computing”, his ideal is widespread in many areas of technology design. Through repetition, and by design, technologies blend into our lives. While technologies, and communications technologies in particular, have a powerful mediating impact, many of the most pervasive effects are taken for granted by most users. When technology works smoothly, its nature and effects are invisible. But technologies do not always work smoothly. A tiny fracture or a smudge on a lens renders glasses quite visible to the wearer. The Microsoft Windows “Blue Screen of Death” on a subway in Seoul (Photo credit Wikimedia Commons). Anyone who has seen a famous “Blue Screen of Death”—the iconic signal of a Microsoft Windows crash—on a public screen or terminal knows how errors can thrust the technical details of previously invisible systems into view. Nobody knows that their ATM runs Windows until the system crashes. Of course, the operating system chosen for a sign or bank machine has important implications for its users. Windows, or an alternative operating system, creates affordances and imposes limitations. Faced with a crashed ATM, a consumer might ask herself if, with its rampant viruses and security holes, she should really trust an ATM running Windows? Technologies make previously impossible actions possible and many actions easier. In the process, they frame and constrain possible actions. They mediate. Communication technologies allow users to communicate in new ways but constrain communication in the process.
In a very fundamental way, communication technologies define what their users can say, to whom they say it, and how they can say it—and what, to whom, and how they cannot. Humanities scholars understand the power, importance, and limitations of technology and technological mediation. Weisner hypothesised that, “to understand invisibility the humanities and social sciences are especially valuable, because they specialise in exposing the otherwise invisible.” However, technology activists, like those at the Free Software Foundation (FSF) and the Electronic Frontier Foundation (EFF), understand this power of technology as well. Largely constituted by technical members, both organisations, like humanists studying technology, have struggled to communicate their messages to a less-technical public. Before one can argue for the importance of individual control over who owns technology, as both FSF and EFF do, an audience must first appreciate the power and effect that their technology and its designers have. To understand the power that technology has on its users, users must first see the technology in question. Most users do not. Errors are under-appreciated and under-utilised in their ability to reveal technology around us. By painting a picture of how certain technologies facilitate certain mistakes, one can better show how technology mediates. By revealing errors, scholars and activists can reveal previously invisible technologies and their effects more generally. Errors can reveal technology—and its power and can do so in ways that users of technologies confront daily and understand intimately. The Misprinted Word Catalysed by Elizabeth Eisenstein, the last 35 years of print history scholarship provides both a richly described example of technological change and an analysis of its effects. 
Unemphasised in discussions of the revolutionary social, economic, and political impact of printing technologies is the fact that, especially in the early days of a major technological change, the artifacts of print are often quite similar to those produced by a new printing technology’s predecessors. From a reader’s purely material perspective, books are books; the press that created the book is invisible or irrelevant. Yet, while the specifics of print technologies are often hidden, they are often exposed by errors. While the shift from a scribal to print culture revolutionised culture, politics, and economics in early modern Europe, it was near-invisible to early readers (Eisenstein). Early printed books were the same books printed in the same way; the early press was conceived as a “mechanical scriptorium.” Shown below, Gutenberg’s black-letter Gothic typeface closely reproduced a scribal hand. Of course, handwriting and type were easily distinguishable; errors and irregularities were inherent in relatively unsteady human hands. Side-by-side comparisons of the hand-copied Malmesbury Bible (left) and the black letter typeface in the Gutenberg Bible (right) (Photo credits Wikimedia Commons & Wikimedia Commons). Printing, of course, introduced its own errors. As pages were produced en masse from a single block of type, so were mistakes. While a scribe would re-read and correct errors as they transcribed a second copy, no printing press would. More revealingly, print opened the door to whole new categories of errors. For example, printers setting type might confuse an inverted n with a u—and many did. Of course, no scribe made this mistake. An inverted u is only confused with an n due to the technological possibility of letter flipping in movable type. As print moved from Monotype and Linotype machines, to computerised typesetting, and eventually to desktop publishing, an accidentally flipped u retreated back into the realm of impossibility (Mergenthaler, Swank). 
Most readers do not know how their books are printed. The output of letterpresses, Monotypes, and laser printers are carefully designed to produce near-uniform output. To the degree that they succeed, the technologies themselves, and the specific nature of the mediation, becomes invisible to readers. But each technology is revealed in errors like the upside-down u, the output of a mispoured slug of Monotype, or streaks of toner from a laser printer. Changes in printing technologies after the press have also had profound effects. The creation of hot-metal Monotype and Linotype, for example, affected decisions to print and reprint and changed how and when it is done. New mass printing technologies allowed for the printing of works that, for economic reasons, would not have been published before. While personal computers, desktop publishing software, and laser printers make publishing accessible in new ways, it also places real limits on what can be printed. Print runs of a single copy—unheard of before the invention of the type-writer—are commonplace. But computers, like Linotypes, render certain formatting and presentation difficult and impossible. Errors provide a space where the particulars of printing make technologies visible in their products. An inverted u exposes a human typesetter, a letterpress, and a hasty error in judgment. Encoding errors and botched smart quotation marks—a ? in place of a “—are only possible with a computer. Streaks of toner are only produced by malfunctioning laser printers. Dust can reveal the photocopied provenance of a document. Few readers reflect on the power or importance of the particulars of the technologies that produced their books. In part, this is because the technologies are so hidden behind their products. Through errors, these technologies and the power they have on the “what” and “how” of printing are exposed. For scholars and activists attempting to expose exactly this, errors are an under-exploited opportunity. 
Typing Mistyping While errors have a profound effect on media consumption, their effect is equally important, and perhaps more strongly felt, when they occur during media creation. Like all mediating technologies, input technologies make it easier or more difficult to create certain messages. It is, for example, much easier to write a letter with a keyboard than it is to type a picture. It is much more difficult to write in languages that make frequent use of accents on an English-language keyboard than on a European one. But while input systems like keyboards have a powerful effect on the nature of the messages they produce, they are invisible to recipients of messages. Except when the messages contain errors. Typists are much more likely to confuse letters in close proximity on a keyboard than people writing by hand or setting type. As keyboard layouts switch between countries and languages, new errors appear. The following is from a personal email: hez, if there’s not a subversion server handz, can i at least have the root password for one of our machines? I read through the instructions for setting one up and i think i could do it. [emphasis added] The email was quickly typed and, in two places, confuses the character y with z. Separated by five characters on QWERTY keyboards, these two letters are not easily mistaken or mistyped. However, their positions are swapped between German and English keyboards. In fact, the author was an American typing in a Viennese Internet cafe. The source of his repeated error was his false expectations—his familiarity with one keyboard layout in the context of another. The error revealed the context, both keyboard layouts, and his dependence on a particular keyboard. With the error, the keyboard, previously invisible, was exposed as an intermediary with its own particularities and effects. The effect is no different on mobile devices, where new input methods have introduced powerful new ways of communicating. 
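The y/z confusion described here is mechanical enough to simulate. The sketch below (an illustration added for this discussion, not part of the original essay) models a typist whose muscle memory assumes a QWERTY layout while the machine in front of them has a German QWERTZ layout, on which the y and z keys trade places:

```python
# Illustrative sketch: a QWERTY-trained typist on a QWERTZ keyboard.
# On German QWERTZ layouts the y and z keys are swapped relative to
# QWERTY, so the key a QWERTY typist reaches for "y" produces "z".

QWERTY_TO_QWERTZ = {"y": "z", "z": "y", "Y": "Z", "Z": "Y"}

def type_on_qwertz(intended: str) -> str:
    """Return what appears on screen when a QWERTY-trained typist
    types `intended` without looking at a QWERTZ keyboard."""
    return "".join(QWERTY_TO_QWERTZ.get(ch, ch) for ch in intended)

print(type_on_qwertz("hey, is a subversion server handy?"))
# -> hez, is a subversion server handz?
```

Every other key passes through unchanged, which is exactly why the error is so revealing: only the two swapped positions betray the mismatch between the typist's expectations and the physical layout.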
SMS messages on mobile phones are constrained in length to 160 characters. The result has been new styles of communication using SMS that some have gone so far as to call a new language or dialect called TXTSPK (Thurlow). Yet while they are obvious to social scientists, the profound effects of text message technologies on communication are unfelt by most users, who simply see the messages themselves. More visible is the fact that input from a phone keypad has opened the door to errors which reveal the input technology and its effects. In the standard method of SMS input, users press or hold buttons to cycle through the letters associated with numbers on a numeric keypad (e.g., 2 represents A, B, and C; to produce a single C, a user presses 2 three times). This system makes it easy to confuse characters based on a shared association with a single number. Tegic’s popular T9 software allows users to type words by pressing the number associated with each letter of each word in quick succession. T9 uses a database to pick the most likely word that maps to that sequence of numbers. While the system allows for quick input of words and phrases on a phone keypad, it also allows for the creation of new types of errors. A user trying to type me might accidentally write of because both words are mapped to the combination of 6 and 3 and because of is a more common word in English. T9 might confuse snow and pony while no human, and no other input method, would. Users composing SMSes are constrained by the technology and its design. The fact that text messages must be short and the difficult nature of phone-based input methods have led to unique and highly constrained forms of communication like TXTSPK (Sutherland). Yet, while the influence of these input technologies is profound, users are rarely aware of it. 
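The me/of and snow/pony collisions are easy to reproduce. The following sketch (an illustration, not Tegic's actual T9 implementation) maps each word to the digit sequence a user would key in under predictive input and shows why the keypad cannot tell the words apart:

```python
# Illustrative sketch of T9-style ambiguity (not Tegic's actual code).
# Each letter maps to the digit that carries it on a standard keypad.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def t9_sequence(word: str) -> str:
    """Digit sequence keyed in for `word` under predictive input."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

# Distinct words sharing a key sequence are indistinguishable
# to the keypad; a frequency database must break the tie.
print(t9_sequence("me"), t9_sequence("of"))      # 63 63
print(t9_sequence("snow"), t9_sequence("pony"))  # 7669 7669
```

Because both members of each pair reduce to the same digits, the software's frequency database, not the user, decides which word appears, and that decision is precisely where the revealing errors arise.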
Errors provide a situation where the particularities of a technology become visible and an opportunity for users to connect with scholars exposing the effects of technology and activists arguing for increased user control. Google News Denuded As technologies become more complex, they often become more mysterious to their users. While such technologies are not invisible, users know little about the way they work, both because they become accustomed to them and because the technological specifics are hidden inside companies, behind web interfaces, within compiled software, and in “black boxes” (Latour). Errors can help reveal these technologies and expose their nature and effects. One such system, Google News, aggregates news stories and is designed to make it easy to read multiple stories on the same topic. The system works with “topic clusters” that attempt to group articles covering the same news event. The more items in a news cluster (especially from popular sources) and the closer together they appear in time, the higher the confidence Google’s algorithms have in the “importance” of a story and the higher the likelihood that the cluster of stories will be listed on the Google News page. While the decision to include or remove individual sources is made by humans, the act of clustering is left to Google’s software. Because computers cannot “understand” the text of the articles being aggregated, clustering happens less intelligently. We know that clustering is primarily based on comparison of shared text and keywords—especially proper nouns. This process is aided by the widespread use of wire services like the Associated Press and Reuters, which provide article text used, at least in part, by large numbers of news sources. Google has been reluctant to divulge the implementation details of its clustering engine, but users have been able to deduce the description above, and much more, by watching how Google News works and, more importantly, how it fails. 
For example, we know that Google News looks for shared text and keywords because text that deviates heavily from other articles is not “clustered” appropriately—even if it is extremely similar semantically. In this vein, blogger Philipp Lenssen gives advice to news sites that want to stand out in Google News: Of course, stories don’t have to be exactly the same to be matched—but if they are too different, they’ll also not appear in the same group. If you want to stand out in Google News search results, make your article be original, or else you’ll be collapsed into a cluster where you may or may not appear on the first results page. While a human editor has no trouble understanding that an article using different terms (and different, but equally appropriate, proper nouns) is discussing the same issue, the software behind Google News is more fragile. As a result, Google News fails to connect linked stories that no human editor would miss. A section of a screenshot of Google News clustering aggregation showcasing what appears to be an error. But just as importantly, Google News can connect stories that most human editors would not. Google News’s clustering of two stories by Al Jazeera on how “Iran offers to share nuclear technology,” and by the Guardian on how “Iran threatens to hide nuclear program,” seems at first glance to be a mistake. Hiding and sharing are diametrically opposed and mutually exclusive. But while it is true that most human editors would not cluster these stories, it is less clear that it is, in fact, an error. Investigation shows that the two articles are about the release of a single statement by the government of Iran on the same day. The spin is significant enough, and sufficiently different, that it could be argued that the aggregation of those stories was incorrect—or not. The error reveals details about the way that Google News works and about its limitations. 
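The fragility of text-overlap clustering can be demonstrated with a deliberately naive sketch (illustrative only; Google's real clustering engine is proprietary and far more sophisticated). It groups two articles only when their word overlap, measured as Jaccard similarity, crosses a threshold, so near-verbatim wire copy clusters while a paraphrase with different vocabulary does not:

```python
# Naive keyword-overlap clustering sketch (illustrative; not
# Google News's actual, undisclosed algorithm).

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def same_cluster(a: str, b: str, threshold: float = 0.5) -> bool:
    return jaccard(a, b) >= threshold

# Near-verbatim wire copy clusters easily...
print(same_cluster("iran offers to share nuclear technology",
                   "iran offers to share nuclear technology with others"))
# ...but a paraphrase with different terms and proper nouns does not,
# though a human editor would link the two stories instantly.
print(same_cluster("iran offers to share nuclear technology",
                   "tehran proposes atomic cooperation across the region"))
```

The second comparison is exactly the kind of false negative described above: shared meaning with no shared surface text is invisible to an overlap-based clusterer.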
It reminds readers of Google News of the technological nature of their news’ mediation and gives them a taste of the type of selection—and mis-selection—that goes on out of view. Users of Google News might be prompted to compare the system to other, more human methods. Ultimately it can remind them of the power that Google News (and humans in similar roles) have over our understanding of news and the world around us. These are all familiar arguments to social scientists of technology and echo the arguments of technology activists. By focusing on similar errors, both groups can connect to users less used to thinking in these terms. Conclusion Reflecting on the role of the humanities in a world of increasingly invisible technology for the blog “Humanities, Arts, Science and Technology Advanced Collaboratory,” Duke English professor Cathy Davidson writes: When technology is accepted, when it becomes invisible, [humanists] really need to be paying attention. This is one reason why the humanities are more important than ever. Analysis—qualitative, deep, interpretive analysis—of social relations, social conditions, in a historical and philosophical perspective is what we do so well. The more technology is part of our lives, the less we think about it, the more we need rigorous humanistic thinking that reminds us that our behaviours are not natural but social, cultural, economic, and with consequences for us all. Davidson concisely points out the strength and importance of the humanities in evaluating technology. She is correct; users of technologies do not frequently analyse the social relations, conditions, and effects of the technology they use. Activists at the EFF and FSF argue that this lack of critical perspective leads to exploitation of users (Stallman). But users, and the technology they use, are only susceptible to this type of analysis when they understand the applicability of these analyses to their technologies. 
Davidson leaves open the more fundamental question: How will humanists first reveal technology so that they can reveal its effects? Scholars and activists must do more than contextualise and describe technology. They must first render invisible technologies visible. As the revealing nature of errors in printing systems, input systems, and “black box” software systems like Google News shows, errors represent a point where invisible technology is already visible to users. As such, these errors, and countless others like them, can be treated as the tip of an iceberg. They represent an important opportunity for humanists and activists to further expose technologies and the beginning of a process that aims to reveal much more. References Davidson, Cathy. “When Technology Is Invisible, Humanists Better Get Busy.” HASTAC (2007). 1 September 2007 <http://www.hastac.org/node/779>. Eisenstein, Elizabeth L. The Printing Press as an Agent of Change: Communications and Cultural Transformations in Early-Modern Europe. Cambridge, UK: Cambridge University Press, 1979. Latour, Bruno. Pandora’s Hope: Essays on the Reality of Science Studies. Cambridge, MA: Harvard UP, 1999. Lenssen, Philipp. “How Google News Indexes.” Google Blogoscoped (2006). 1 September 2007 <http://blogoscoped.com/archive/2006-07-28-n49.html>. Mergenthaler, Ottmar. The Biography of Ottmar Mergenthaler, Inventor of the Linotype. New ed. New Castle, Delaware: Oak Knoll Books, 1989. Monotype: A Journal of Composing Room Efficiency. Philadelphia: Lanston Monotype Machine Co., 1913. Stallman, Richard M. Free Software, Free Society: Selected Essays of Richard M. Stallman. Boston, Massachusetts: Free Software Foundation, 2002. Sutherland, John. “Cn u txt?” Guardian Unlimited. London, UK, 2002. Swank, Alvin Garfield, and United Typothetae of America. Linotype Mechanism. Chicago, Illinois: Dept. of Education, United Typothetae of America, 1926. Thurlow, C. “Generation Txt? 
The Sociolinguistics of Young People’s Text-Messaging.” Discourse Analysis Online 1.1 (2003). Weiser, Mark. “The World Is Not a Desktop.” ACM Interactions 1.1 (1994): 7-8. Citation reference for this article MLA Style Hill, Benjamin Mako. "Revealing Errors." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/01-hill.php>. APA Style Hill, B. (Oct. 2007) "Revealing Errors," M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/01-hill.php>.
APA, Harvard, Vancouver, ISO, and other styles
21

Chekanin, Vladislav. "Solving the Problem of Packing Objects of Complex Geometric Shape into a Container of Arbitrary Dimension." Proceedings of the 30th International Conference on Computer Graphics and Machine Vision (GraphiCon 2020). Part 2, December 17, 2020, paper50–1—paper50–13. http://dx.doi.org/10.51130/graphicon-2020-2-3-50.

Full text
Abstract:
The article is devoted to algorithms developed for solving the problem of placing orthogonal polyhedrons of arbitrary dimension into a container. The developed model of potential containers is applied to describe all free areas of a container of complex geometric shape. Algorithms for constructing orthogonal polyhedrons and their subsequent placement are presented. The decomposition algorithm, intended to reduce the number of orthogonal objects forming an orthogonal polyhedron, is described in detail. The proposed placement algorithm is based on the application of intersection operations to obtain the areas of permissible placement for each considered object of complex geometric shape. Examples of packing sets of orthogonal polyhedrons and voxelized objects into containers of various geometric shapes are given. The effectiveness of all proposed algorithms is demonstrated on an example of solving practical problems of rational placement of objects produced by 3D printing technology. The achieved layouts exceed the results obtained by the Sinter module of the Materialise Magics software in both speed and density.
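The core idea of the abstract's placement algorithm, intersecting an object's footprint with the container's free areas to find admissible positions, can be sketched in a heavily simplified 2D form. The sketch below is illustrative only: the paper's model of potential containers handles arbitrary dimension and full orthogonal polyhedrons, not single rectangles.

```python
# Simplified 2D sketch of intersection-based placement (illustrative;
# not the paper's potential-container model, which is far more general).
from typing import List, Optional, Tuple

Rect = Tuple[int, int, int, int]  # (x, y, width, height), axis-aligned

def placements(free: Rect, w: int, h: int) -> Optional[Rect]:
    """Region of admissible lower-left corners for a w x h object
    inside one free rectangle, or None if it cannot fit."""
    fx, fy, fw, fh = free
    if w > fw or h > fh:
        return None
    return (fx, fy, fw - w + 1, fh - h + 1)

def place(free_regions: List[Rect], w: int, h: int) -> Optional[Tuple[int, int]]:
    """Lowest-then-leftmost admissible corner across all free regions."""
    best = None
    for region in free_regions:
        admissible = placements(region, w, h)
        if admissible:
            corner = (admissible[0], admissible[1])
            if best is None or (corner[1], corner[0]) < (best[1], best[0]):
                best = corner
    return best

# Container with two free areas; a 3x2 object fits only in the second.
print(place([(0, 0, 2, 2), (4, 0, 5, 3)], 3, 2))  # -> (4, 0)
```

After a real placement, the chosen free region would be split into new free regions, which is the bookkeeping the paper's potential-container model formalises for arbitrary dimensions.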
22

Glöckler, Falko, James Macklin, David Shorthouse, Christian Bölling, Satpal Bilkhu, and Christian Gendreau. "DINA—Development of open source and open services for natural history collections & research." Biodiversity Information Science and Standards 4 (October 6, 2020). http://dx.doi.org/10.3897/biss.4.59070.

Full text
Abstract:
The DINA Consortium (DINA = “DIgital information system for NAtural history data”, https://dina-project.net) is a framework for like-minded practitioners of natural history collections to collaborate on the development of distributed, open source software that empowers and sustains collections management. Target collections include zoology, botany, mycology, geology, paleontology, and living collections. The DINA software will also permit the compilation of biodiversity inventories and will robustly support both observation and molecular data. The DINA Consortium focuses on an open source software philosophy and on community-driven open development. Contributors share their development resources and expertise for the benefit of all participants. The DINA System is explicitly designed as a loosely coupled set of web-enabled modules. At its core, this modular ecosystem includes strict guidelines for the structure of Web application programming interfaces (APIs), which guarantees the interoperability of all components (https://github.com/DINA-Web). Important to the DINA philosophy is that users (e.g., collection managers, curators) be actively engaged in an agile development process. This ensures that the product is pleasing for everyday use, includes efficient yet flexible workflows, and implements best practices in specimen data capture and management. There are three options for developing a DINA module: create a new module compliant with the specifications (Fig. 1), modify an existing code-base to attain compliance (Fig. 2), or wrap a compliant API around existing code that cannot be or may not be modified (e.g., infeasible, dependencies on other systems, closed code) (Fig. 3). 
All three of these scenarios have been applied in the modules recently developed: a module for molecular data (SeqDB), modules for multimedia, documents and agents data, and a service module for printing labels and reports: The SeqDB collection management and molecular tracking system (Bilkhu et al. 2017) has evolved through two of these scenarios. Originally, the required architectural changes were going to be added into the codebase, but after some time, the development team recognised that the technical debt inherent in the project wasn’t worth the effort of modification and refactoring. Instead a new codebase was created bringing forward the best parts of the system oriented around the molecular data model for Sanger Sequencing and Next Generation Sequencing (NGS) workflows. In the case of the Multimedia and Document Store module and the Agents module, a brand new codebase was established whose technology choices were aligned with the DINA vision. These two modules have been created from fundamental use cases for collection management and digitization workflows and will continue to evolve as more modules come online and broaden their scope. The DINA Labels & Reporting module is a generic service for transforming data into arbitrary printable layouts based on customizable templates. In order to use the module in combination with data managed in the collection management software Specify (http://specifysoftware.org) for printing labels of collection objects, we wrapped the Specify 7 API with a DINA-compliant API layer called the “DINA Specify Broker”. This allows for using the easy-to-use web-based template engine within the DINA Labels & Reports module without changing Specify’s codebase. In our presentation we will explain the DINA development philosophy and will outline benefits for different stakeholders who directly or indirectly use collections data and related research data in their daily workflows. 
We will also highlight opportunities for joining the DINA Consortium and how to best engage with members of DINA who share their expertise in natural science, biodiversity informatics and geoinformatics.
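The third option described in the abstract, wrapping a compliant API around code that cannot be modified, is essentially the adapter pattern. A minimal sketch follows; the record and field names are hypothetical stand-ins, not the real Specify 7 schema or the actual DINA Specify Broker, and "JSON:API-shaped" here means only the generic data/type/id/attributes envelope:

```python
# Minimal adapter sketch: exposing a legacy payload through a
# JSON:API-shaped envelope, as a compliant wrapper might.
# All field names below are hypothetical illustrations.

def legacy_fetch_specimen(specimen_id: str) -> dict:
    """Stand-in for a call into the legacy system we cannot modify."""
    return {"CatalogNo": specimen_id, "FullTaxonName": "Puma concolor"}

def dina_get_specimen(specimen_id: str) -> dict:
    """Translate the legacy record into the compliant response shape."""
    record = legacy_fetch_specimen(specimen_id)
    return {
        "data": {
            "type": "specimen",
            "id": record["CatalogNo"],
            "attributes": {"scientificName": record["FullTaxonName"]},
        }
    }

print(dina_get_specimen("USNM-1234")["data"]["id"])  # -> USNM-1234
```

The point of the pattern is that consumers see only the compliant shape on the outside, while the legacy codebase on the inside stays untouched.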
23

Lukas, Scott A. "Nevermoreprint." M/C Journal 8, no. 2 (June 1, 2005). http://dx.doi.org/10.5204/mcj.2336.

Full text
Abstract:
Perhaps the supreme quality of print is one that is lost on us, since it has so casual and obvious an existence (McLuhan 160). Print Machine (Thad Donovan, 1995) In the introduction to his book on 9/11, Welcome to the Desert of the Real, Slavoj Zizek uses an analogy of letter writing to emphasize the contingency of post-9/11 reality. In the example, Zizek discusses the efforts of writers to escape the eyes of governmental censors and a system that used blue ink to indicate a message was true, red ink to indicate it was false. The story ends with an individual receiving a letter from the censored country stating that the writer could not find any red ink. The ambiguity and duplicity of writing, suggested in Zizek’s tale of colored inks, are conditions of the contemporary world, even if we are unaware of it. We exist in an age in which print—the economization of writing—has an increasingly significant and precarious role in our lives. We turn to the Internet chat room for textual interventions in our sexual, political and aesthetic lives. We burn satanic Harry Potter books and issue fatwas against writers like Salman Rushdie. We narrate our lives using pictures, fonts of varying typeface and color, and sound on our personalized homepages. We throw out our printed books and buy audio ones so we can listen to our favorite authors in the car. We entrust our life savings, personal numbers, and digital identity to unseen individuals behind computer screens. Decisively, we are a print people, but our very dependence on the technologies of print in our public and private lives leads to our inability to consider the epistemological, social and existential effects of print on us. In this article, I focus on the current manifestations of print—what I call “newprint”—including their relationships to consumerism, identity formation and the politics of the state. 
I will then consider the democratic possibilities of print, suggested by the personalization of print through the Internet and home publishing, and conclude with the implications of the end of print that include the possibility of a post-print language and the middle voice. In order to understand the significance of our current print culture, it is important to situate print in the context of the history of communication. In earlier times, writing had magical associations (Harris 10), and commonly these underpinnings led to the stratification of communities. Writing functioned as a type of black box, “the mysterious technology by which any message [could] be concealed from its illiterate bearer” (Harris 16). Plato and Socrates warned against the negative effects of writing on the mind, including the erosion of memory (Ong 81). Though it once supplemented the communicational bases of orality, the written word soon supplanted it and created a dramatic existential shift in people—a separation of “the knower from the known” (Ong 43-44). As writing moved from the inconvenience of illuminated manuscripts and hand-copied texts, it became systemized in Gutenberg print, and writing then took on the signature of the state—messages between people were codified in the technology of print. With the advent of computer technologies in the 1990s, including personal computers, word processing programs, printers, and the Internet, the age of newprint begins. Newprint includes the electronic language of the Internet and other examples of the public alphabet, including billboards, signage and the language of advertising. As much as members of consumer society are led to believe that newprint is the harbinger of positive identity construction and individualism, closer analysis of the mechanisms of newprint leads to a different conclusion. An important context of new print is found in the space of the home computer. 
The home computer is the workstation of the contemporary discursive culture—people send and receive emails, do their shopping on the Internet, meet friends and even spouses through dating services, conceal their identity on MUDs and MOOs, and produce state-of-the-art publishing projects, even books. The ubiquity of print in the space of the personal computer leads to the vital illusion that this newprint is emancipatory. Some theorists have argued that the Internet exhibits the spirit of communicative action addressed by Juergen Habermas, but such thinkers have neglected the fact that the foundations of newprint, just like those of Gutenberg print, are the state and the corporation. Recent advertising of Hewlett-Packard and other computer companies illustrates this point. One advertisement suggested that consumers could “invent themselves” through HP computer and printer technology: by using the varied media available to them, consumers can make everything from personalized greeting cards to full-fledged books. As Friedrich Kittler illustrates, we should resist the urge to separate the practices of writing from the technologies of their production, what Jay David Bolter (41) denotes as the “writing space”. For as much as we long for new means of democratic and individualistic expression, we should not succumb to the urge to accept newprint because of its immediacy, novelty or efficiency. Doing so will relegate us to a mechanistic existence, what is referenced metaphorically in Thad Donovan’s “print machine.” In multiple contexts, newprint extends the corporate state’s propaganda industry by turning the written word into artifice. Even before newprint, the individual was confronted with the hegemony of writing. Writing creates “context-free language” or “autonomous discourse,” which means an individual cannot directly confront the language or speaker as one could in oral cultures (Ong 78). 
This further division of the individual from the communicational world is emphasized in newprint’s focus on the aesthetics of the typeface. In word processing programs like Microsoft Word, and specialized ones like TwistType, the consumer can take a word or a sentence and transform it into an aesthetic formation. On the word processing program that is producing this text, I can choose from Blinking Background, Las Vegas Lights, Marching Red or Black Ants, Shimmer, and Sparkle Text. On my campus email system I am confronted with pictorial backgrounds, font selection and animation as an intimate aspect of the communicational system of my college. On my cell phone I can receive text messages, and I can choose to use emoticons (iconic characters and messages) on the Internet. As Walter Ong wrote, “print situates words in space more relentlessly than writing ever did … control of position is everything in print” (Ong 121). In the case of the new culture of print, the control over more functions of the printed page, specifically its presentation, leads some consumers to believe that choice and individuality are the outcomes. Newprint does not free the writer from the constraints imposed by the means of traditional print—the printing press—rather, it furthers them as the individual operates by the logos of a predetermined and programmed electronic print. The capacity to spell and write grammatically correct sentences is abated by the availability of spell- and grammar-checking functions in word processing software. In many ways, the aura of writing is lost in newprint in the same way in which art lost its organic nature as it moved into the age of reproducibility (Benjamin). 
Just as filters in imaging programs like Photoshop reduce the aesthetic functions of the user to the determinations of the software programmer, the use of automated print technologies—whether spell-checking or fanciful page layout software like QuarkXPress or PageMaker—will further dilute the voice of the writer. Additionally, the new forms of print can lead to a fracturing of community, the opposite of the intent of Habermas’ communicative action. An example is the recent growth of specialized languages on the Internet. Some of the newer forms of such languages use combinations of alphanumeric characters to create a language that can only be read by those with the code. As Internet print becomes more specialized, a tribal effect may be felt within our communities. Since email began a few years ago, I have noticed that the nature of the emails I receive has been dramatically altered. Today’s emails tend to be short and commonly include shorthands (“LOL” = “laugh out loud”), including the elimination of capitalization and punctuation. In surveying students on the reasons behind such alterations of language in email, I am told that these shorthands allow for more efficient forms of communication. In my mind, this is the key issue that is at stake in both print and newprint culture—for as long as we rely on print and other communicational systems as a form of efficiency, we are doomed to send and receive inaccurate and potentially dangerous messages. Benedict Anderson and Hannah Arendt addressed the connections of print to nationalistic and fascist urges (Anderson; Arendt), and such tendencies are seen in the post-9/11 discursive formations within the United States. 
Bumper stickers and Presidential addresses conveyed the same simplistic printed messages: “Either You are with Us or You are with the Terrorists.” Whether dropping leaflets from airplanes or in scrolling text messages on the bottom of the television news screen, the state is dependent on the efficiency of print to maintain control of the citizen. A feature of this efficiency is that newprint is rhetorically immediate in its results, widely available in different forms of technology, and dominated by the notion of individuality and democracy that is envisioned in HP’s “invent yourself” advertisements. As Marshall McLuhan’s epigram suggests, we have an ambiguous relationship to print. We depend on printed language in our daily lives, for education and for the economic transactions that underpin our consumer world, yet we are unable to confront the rhetoric of the state and mass media that are consequences of the immediacy and magic of both print and newprint. Print extends the domination of our consciousness by forms of discourse that privilege representation over experience and the subject over the object. As we look to new means of communicating with one another and of expressing our intimate lives, we must consider altering the discursive foundations of our communication, such as looking to the middle voice. The middle voice erases the distinctions between subjects and objects and instead emphasizes the writer being in the midst of things, as a part of the world as opposed to dominating it (Barthes; Tyler). A few months prior to writing this article, I spent the fall quarter teaching in London. One day I received an email that changed my life. My partner of nearly six years announced that she was leaving me. I was gripped by my inability to discuss the situation with her, as we were thousands of miles apart, and I struggled to understand how such a significant and personal circumstance could be communicated with the printed word of email. 
Welcome to new print! References Anderson, Benedict. Imagined Communities: Reflections on the Origin and Spread of Nationalism. London: Verso, 1991. Arendt, Hannah. The Origins of Totalitarianism. San Diego: Harcourt Brace, 1976. Barthes, Roland. “To Write: An Intransitive Verb?” The Languages of Criticism and the Sciences of Man: The Structuralist Controversy. Ed. Richard Macksey and Eugenio Donato. Baltimore: Johns Hopkins UP, 1970. 134-56. Benjamin, Walter. “The Work of Art in the Age of Its Technological Reproducibility: Second Version.” Walter Benjamin: Selected Writings, Volume 3: 1935-1938. Cambridge: Belknap/Harvard, 2002. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, NJ: Lawrence Erlbaum, 1991. Habermas, Jürgen. The Theory of Communicative Action. Vol. I. Boston: Beacon Press, 1985. Harris, Roy. The Origin of Writing. La Salle, IL: Open Court, 1986. Kittler, Friedrich A. Discourse Networks 1800/1900. Stanford: Stanford UP, 1990. McLuhan, Marshall. Understanding Media: The Extensions of Man. Cambridge: MIT P, 1994. Ong, Walter J. Orality and Literacy: The Technologizing of the Word. London: Routledge, 1991. Tyler, Stephen A. “The Middle Voice: The Influence of Post-Modernism on Empirical Research in Anthropology.” Post-modernism and Anthropology. Eds. K. Geuijen, D. Raven, and J. de Wolf. Assen, The Netherlands: Van Gorcum, 1995. Zizek, Slavoj. Welcome to the Desert of the Real. London: Verso, 2002. Citation reference for this article MLA Style Lukas, Scott A. "Nevermoreprint." M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/04-lukas.php>. APA Style Lukas, S. (Jun. 2005) "Nevermoreprint," M/C Journal, 8(2). Retrieved from <http://journal.media-culture.org.au/0506/04-lukas.php>.
24

Holmes, Ashley M. "Cohesion, Adhesion and Incoherence: Magazine Production with a Flickr Special Interest Group." M/C Journal 13, no. 1 (March 22, 2010). http://dx.doi.org/10.5204/mcj.210.

Full text
Abstract:
This paper provides embedded, reflective practice-based insight arising from my experience collaborating to produce online and print-on-demand editions of a magazine showcasing the photography of members of haphazart! Contemporary Abstracts group (hereafter referred to as haphazart!). The group’s online visual, textual and activity-based practices via the photo sharing social networking site Flickr are portrayed as achieving cohesive visual identity. Stylistic analysis of pictures in support of this claim is not attempted. Rather, negotiation, which Elliot has previously described in M/C Journal as innate in collaboration, is identified as the unifying factor. However, the collaborators’ adherence to Flickr’s communication platform proves problematic in the editorial context. Some technical incoherence with possible broader cultural implications is encountered during the process of repurposing images from screen to print. A Scan of Relevant Literature The photographic gaze perceives and captures objects which seem to ‘carry within them ready-made’ a work of art. But the reminiscences of the gaze are only made possible by knowing and associating with groups that define a tradition. The list of valorised subjects is not actually defined with reference to a culture, but rather by familiarity with a limited group. (Chamboredon 144) As part of the array of socio-cultural practices afforded by Web 2.0 interoperability, sites of produsage (Bruns) are foci for studies originating in many disciplines. Flickr provides a rich source of data that researchers interested in the interface between the technological and the social find useful to analyse. Access to the Flickr application programming interface enables quantitative researchers to observe a variety of means by which information is propagated, disseminated and shared. Some findings from this kind of research confirm the intuitive. For example, Negoescu et al. 
find that “a large percentage of users engage in sharing with groups and that they do so significantly” ("Analyzing Flickr Groups" 425). They suggest that Flickr’s Groups feature appears to “naturally bring together two key aspects of social media: content and relations.” They also find evidence for what they call hyper-groups, which are “communities consisting of groups of Flickr groups” ("Flickr Hypergroups" 813). Two separate findings from another research team appear to contradict each other. On one hand, describing what they call “social cascades,” Cha et al. claim that “content in the form of ideas, products, and messages spreads across social networks like a virus” ("Characterising Social Cascades"). Yet in 2009 they claim that homophily and reciprocity ensure that “popularity of pictures is localised” ("Measurement-Driven Analysis"). Mislove et al. reflect that the affordances of Flickr influence the growth patterns they observe. There is optimism shared by some empiricists that through collation and analysis of Flickr tag data, the matching of perceptual structures of images and image annotation techniques will yield an ontology-based taxonomy useful in automatic image annotation and, ultimately, the Semantic Web endeavour (Kennedy et al.; Su et al.; Xu et al.). Qualitative researchers using ethnographic interview techniques also find Flickr a valuable resource. In concluding that the photo sharing hobby is for many a “serious leisure” activity, Cox et al. propose that “Flickr is not just a neutral information system but also value laden and has a role within a wider cultural order.” They also suggest that “there is genuinely greater scope for individual creativity, releasing the individual to explore their own identity in a way not possible with a camera club.” Davies claims that “online spaces provide an arena where collaboration over meanings can be transformative, impacting on how individuals locate themselves within local and global contexts” (550). 
She says that through shared ways of describing and commenting on images, Flickrites develop a common criticality in their endeavour to understand images, each other and their world (554). From a psychologist’s perspective, Suler observes that “interpersonal relationships rarely form and develop by images alone” ("Image, Word, Action" 559). He says that Flickr participants communicate in three dimensions: textual (which he calls “verbal”), visual, and via the interpersonal actions that the site affords, such as Favourites. This latter observation can surely be supplemented by including the various games that groups configure within the constraints of the discussion forums. These often include submissions to a theme and voting to select a winning image. Suler describes the place in Flickr where one finds identity as one’s “cyberpsychological niche” (556). However, many participants subscribe to multiple groups—45.6% of Flickrites who share images share them with more than 20 groups (Negoescu et al., "Analyzing Flickr Groups" 420). Is this a reflection of the existence of the hyper-groups they describe (2009) or of the ranging that people do in search of a niche? It is also probable that some people explore more than a singular identity or visual style. Harrison and Barthel suggest that there are more interesting questions than why users create media products or what motivates them to do so: the more interesting questions center on understanding what users will choose to do ultimately with [Web2.0] capabilities [...] in what terms to define the success of their efforts, and what impact the opportunity for individual and collaborative expression will have on the evolution of communicative forms and character. (167) This paper addresses such questions. It arises from a participatory observational context which differs from that of the research described above. 
It is intended that a different perspective about online group-based participation within the Flickr social networking matrix will emerge. However, it will be seen that the themes cited in this introductory review prove pertinent. Context As a university teacher of a range of subjects in the digital media field, from contemporary photomedia to social media to collaborative multimedia practice, it is entirely appropriate that I embed myself in projects that engage, challenge and provide me with relevant first-hand experience. As an academic I also undertake and publish research. As a practicing new media artist I exhibit publicly on a regular basis and consider myself semi-professional with respect to this activity. While there are common elements to both approaches to research, this paper is written more from the point of view of ‘reflective practice’ (Holmes, "Reconciling Experimentum") rather than ‘embedded ethnography’ (Pink). It is necessarily and unapologetically reflexive. Abstract Photography Hyper-Group A search of all Flickr groups using the query “abstract” is currently likely to return around 14,700 results. However, only in around thirty of them do the group name, its stated rules and the stream of images that flow through the pool arguably reflect a sense of collective concept and aesthetic that is coherently abstract. This loose complex of groups comprises a hyper-group. Members of these groups often have co-memberships, reciprocal contacts, and regularly post images to a range of groups and comment on others’ posts to be found throughout. Given that one of Flickr’s largest groups, Black and White, currently has around 131,150 members and hosts 2,093,241 items in its pool, these abstract special interest groups are relatively small. The largest, Abstract Photos, has 11,338 members and hosts 89,306 items in its pool. The group that is the focus of this paper, haphazart!, currently has 2,536 members who have submitted 53,309 items. 
The group pool is more like a constantly flowing river because the most recently added images are foremost. Older images become buried in an archive of pages which cannot be reverse accessed at a rate greater than the seven pages linked from a current view. A member’s presence is most immediate through images posted to a pool. This structural feature of Flickr promotes a desire for currency; a need to post regularly to maintain presence. Negotiating Coherence to the Abstract The self-managing social dynamics in groups have, as Suler proposes to be the case for individuals, three dimensions: visual, textual and action. A group integrates the diverse elements, relationships and values which cumulatively constitute its identity with contributions from members in these dimensions. First impressions of that identity are usually derived from the group home page which consists of principal features: the group name, a selection of twelve most recent posts to the pool, some kind of description, a selection of six of the most recent discussion topics, and a list of rules (if any). In some of these groups, what is considered to constitute an abstract photographic image is described on the group home page. In some it is left to be contested and becomes the topic of ongoing forum debates. In others the specific issue is not discussed—the images are left to speak for themselves. Administrators of some groups require that images are vetted for acceptance. In haphazart! particular administrators dutifully delete from the pool on a regular basis any images that they deem not to comply with the group ethic. Whether reasons are given or not is left to the individual prosecutor. Mostly offending images just disappear from the group pool without trace. These are some of the ways that the coherence of a group’s visual identity is established and maintained. Two groups out of the abstract photography hyper-group are noteworthy in that their discussion forums are particularly active. 
A discussion is just the start of a new thread and may have any number of posts under it. At the time of writing Abstract Photos has 195 discussions and haphazart!—the most talkative by this measure—has 333. Haphazart! invites submissions of images to regularly changing themes. There is always lively and idiosyncratic banter in the forum over the selection of a theme. To be submitted an image needs to be identified by a specific theme tag as announced on the group home page. The tag can be added by the photographer themselves or by anyone else who deems the image appropriate to the theme. An exhibition process ensues. Participant curators search all Flickr items according to the theme tag and select from the outcome images they deem to most appropriately and abstractly address the theme. Copies of the images together with comments by the curators are posted to a dedicated discussion board. Other members may also provide responses. This activity forms an ongoing record that may serve as a public indicator of the aesthetic that underlies the group’s identity. In Abstract Photos there is an ongoing discussion forum where one can submit an image and request that the moderators rule as to whether or not the image is ‘abstract’. The same group has ongoing discussions labelled “Hall of Appropriate”, where worthy images are reposted and celebrated, and “Hall of Inappropriate”, where images posted to the group pool have been removed and relegated because abstraction has been “so far stretched from its definition that it now resides in a parallel universe” (Askin). Reasons are mostly courteously provided. In haphazart! a relatively small core of around twelve group members regularly contribute to the group discussion board. 
A curious aspect of this communication is that even though participants present visually with a ‘buddy icon’ and most with a screen name rather than their real name, it is usual practice to address each other in discussions by their real Christian names, even when this is not evident in a member’s profile. This seems to indicate a common desire for authenticity. The makeup of the core varies from time to time depending on other activities in a member’s life. Although one or two may be professionally or semi-professionally engaged as photographers or artists or academics, most of these people would likely consider themselves to be “serious amateurs” (Cox). They are internationally dispersed with a bias to the US, UK, Europe and Australia. English is the common language though not the natural tongue of some. The age range is approximately 35 to 65 and the gender mix 50/50. The group is three years old. Where Do We Go to from Here? In early January 2009 the haphazart! core was sparked into a frenzy of discussion by a post from a member headed “Where do we go to from here?” A proposal was mooted to produce a ‘book’ featuring images and texts representative of the group. Within three days a new public group with invited membership dedicated to the idea had been established. A smaller working party then retreated to a private Flickr group. Four months later Issue One of haphazart! magazine was available in print-on-demand and online formats. What follows, however, is a brief, critically reflective review of some of the collaborative curatorial, editorial and production processes for Issue Two, which commenced in early June 2009. Most of the team had also been involved with Issue One. I was the only newcomer and replaced the person who had undertaken the design for Issue One. 
I was not provided access to the prior private editorial ruminations, but apparently the collaborative curatorial and editorial decision-making practices the group had previously established persisted, and these took place entirely within the discussion forums of a new dedicated private Flickr group. Over a five-month period there were 1066 posts in 54 discussions concerning matters such as: change of format from the previous issue; selection of themes, artists and images; conduct and editing of interviews; authoring of texts; copyright and reproduction. The idiom of those communications can be described as: discursive, sporadic, idiosyncratic, resourceful, collegial, cooperative, emphatic, earnest and purposeful. The selection process could not be said to follow anything close to a shared manifesto, or articulation of style. It was established that there would be two primary themes: the square format and contributors’ use of colour. Selection progressed by way of visual presentation and counter-presentation until some kind of consensus was reached, often involving informal votes of preference. Stretching the Limits of the Flickr Social Tools The magazine editorial collaborators continue to use the facilities with which they are familiar from regular Flickr group participation. However, the strictly vertical, linear format of Flickr discussions is particularly unsuited to lengthy, complex, asynchronous, multithreaded discussion. For this purpose it causes unnecessary strain, fatigue and confusion. Where images are included, the forums have set maximum display sizes and are not flexibly configured into matrixes. Images cannot readily be communally changed or moved about like texts in a wiki. Likewise, the Flickrmail facility is of limited use for specialist editorial processes. Attachments cannot be added. 
This opinion expressed by a collaborator in the initial, open discussion for Issue One prevailed among Issue Two participants: do we want the members to go to another site to observe what is going on with the magazine? if that’s ok, then using google groups or something like that might make sense; if we want others to observe (and learn from) the process - we may want to do it here [in Flickr]. (Valentine) The opinion appears socially constructive; but because the final editorial and production processes took place in a separate private forum, ultimately the suggested learning between one issue and the next did not take place. During Issue Two development the reluctance to try other online collaboration tools persisted, even for the selection processes requiring comparative visual evaluation of images and trials of sequencing. A number of ingenious methods of working within Flickr were devised and deployed and, in my opinion, proved frustratingly impractical and inefficient. The digital layout, design, collation and formatting of images and texts all took place on my personal computer using professional software tools. Difficulties arose in progressively sharing this work for the purposes of review, appraisal and proofing. Eventually I ignored protests and insisted the team review demonstrations I had converted for sharing in Google Documents. But, with only one exception, I could not tempt collaborators to try commenting or editing in that environment. For example, instead of moving the sequence of images dynamically themselves, or even typing suggestions directly into Google Documents, they would post responses in Flickr. To Share and to Hold From the first imaginings of Issue One the need to have as an outcome something in one’s hands was expressed, and this objective is apparently shared by all in the haphazart! core as an ongoing imperative. Various printing options have been nominated, discussed and evaluated. 
In the end one print-on-demand provider was selected on the basis of recommendation. The ethos of haphazart! is clearly not profit-making and conflicts with that of the printing organisation. Presumably to maintain an incentive to purchase the print copy, online preview is restricted to the first 15 pages. To satisfy the co-requisite to make available the full 120 pages for free online viewing, a second host that specialises in online presentation of publications is also utilised. In this way haphazart! members satisfy their common desires for sharing selected visual content and ideas with an online special interest audience and for a physical object of art to relish—with all the connotations of preciousness, fetish, talisman, trophy, and bookish notions of haptic pleasure and visual treasure. The irony of publishing a frozen chunk of the ever-flowing Flickriver, whose temporally changing nature is arguably one of its most interesting qualities, is not a consideration. Most of them profess to be simply satisfying their own desire for self-expression and would eschew any critical judgement as to whether this anarchic and discursive mode of operation results in a coherent statement about contemporary photographic abstraction. However, there remains a distinct possibility that a number of core haphazart!ists aspire to transcend: popular taste; the discernment encouraged in camera clubs; and the rhetoric of those involved professionally (Bourdieu et al.); and seek to engage with the “awareness of illegitimacy and the difficulties implied by the constitution of photography as an artistic medium” (Chamboredon 130). Incoherence: A Technical Note My personal experience of photography ranges from the filmic to the digital (Holmes, "Bridging Adelaide"). For a number of years I specialised in facsimile graphic reproduction of artwork. In those days I became aware that films were ‘blind’ to the psychophysical affect of some few particular paint pigments. 
They just could not be reproduced. Even so, as I handled the dozens of images contributed to haphazart!2, converting them from the pixellated place where Flickr exists to the resolution and gamut of the ink-based colour space of books, I was surprised at the number of hue values that exist in the former that do not translate into the latter. In some cases the affect is subtle, so that judicious tweaking of colour levels or local colour adjustment will satisfy discerning comparison between the screenic original and the ‘soft proof’ that simulates the printed outcome. In other cases a conversion simply does not compute. I am moved to contemplate, along with Harrison and Barthel (op. cit.), just how much of the experience of media in the shared digital space is incomparably new. Acknowledgement Acting on the advice of researchers experienced in cyberethnography (Bruckman; Suler, "Ethics") I have obtained the consent of co-collaborators to comment freely on proceedings that took place in a private forum. They have been given the opportunity to review and suggest changes to the account. References Askin, Dean (aka: dnskct). “Hall of Inappropriate.” Abstract Photos/Discuss/Hall of Inappropriate, 2010. 12 Jan. 2010 ‹http://www.flickr.com/groups/abstractphotos/discuss/72157623148695254/>. Bourdieu, Pierre, Luc Boltanski, Robert Castel, Jean-Claude Chamboredon, and Dominique Schnapper. Photography: A Middle-Brow Art. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990. Bruckman, Amy. Studying the Amateur Artist: A Perspective on Disguising Data Collected in Human Subjects Research on the Internet. 2002. 12 Jan. 2010 ‹http://www.nyu.edu/projects/nissenbaum/ethics_bru_full.html>. Bruns, Axel. “Towards Produsage: Futures for User-Led Content Production.” Proceedings: Cultural Attitudes towards Communication and Technology 2006. Perth: Murdoch U, 2006. 275–84. ———, and Mark Bahnisch. Social Media: Tools for User-Generated Content. Vol. 
1 – “State of the Art.” Sydney: Smart Services CRC, 2009. Cha, Meeyoung, Alan Mislove, Ben Adams, and Krishna P. Gummadi. “Characterizing Social Cascades in Flickr.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 13–18. ———, Alan Mislove, and Krishna P. Gummadi. “A Measurement-Driven Analysis of Information Propagation in the Flickr Social Network.” WWW ’09: Proceedings of the 18th International Conference on World Wide Web. ACM, 2009. 721–730. Chamboredon, Jean-Claude. “Mechanical Art, Natural Art: Photographic Artists.” Photography: A Middle-Brow Art. Pierre Bourdieu et al. 1965. Trans. Shaun Whiteside. Stanford: Stanford UP, 1990. 129–149. Cox, A.M., P.D. Clough, and J. Marlow. “Flickr: A First Look at User Behaviour in the Context of Photography as Serious Leisure.” Information Research 13.1 (March 2008). 12 Dec. 2009 ‹http://informationr.net/ir/13-1/paper336.html>. Davies, Julia. “Display, Identity and the Everyday: Self-Presentation through Online Image Sharing.” Discourse: Studies in the Cultural Politics of Education 28.4 (Dec. 2007): 549–564. Elliott, Mark. “Stigmergic Collaboration: The Evolution of Group Work.” M/C Journal 9.2 (2006). 12 Jan. 2010 ‹http://journal.media-culture.org.au/0605/03-elliott.php>. Harrison, Teresa M., and Brea Barthel. “Wielding New Media in Web 2.0: Exploring the History of Engagement with the Collaborative Construction of Media Products.” New Media & Society 11.1-2 (2009): 155–178. Holmes, Ashley. “‘Bridging Adelaide 2001’: Photography and Hyperimage, Spanning Paradigms.” VSMM 2000 Conference Proceedings. International Society for Virtual Systems and Multimedia, 2000. 79–88. ———. “Reconciling Experimentum and Experientia: Reflective Practice Research Methodology for the Creative Industries.” Speculation & Innovation: Applying Practice-Led Research in the Creative Industries. Brisbane: QUT, 2006. Kennedy, Lyndon, Mor Naaman, Shane Ahern, Rahul Nair, and Tye Rattenbury. 
“How Flickr Helps Us Make Sense of the World: Context and Content in Community-Contributed Media Collections.” MM’07. ACM, 2007. Miller, Andrew D., and W. Keith Edwards. “Give and Take: A Study of Consumer Photo-Sharing Culture and Practice.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2007. 347–356. Mislove, Alan, Hema Swetha Koppula, Krishna P. Gummadi, Peter Druschel, and Bobby Bhattacharjee. “Growth of the Flickr Social Network.” Proceedings of the First Workshop on Online Social Networks. ACM, 2008. 25–30. Negoescu, Radu-Andrei, and Daniel Gatica-Perez. “Analyzing Flickr Groups.” CIVR ’08: Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval. ACM, 2008. 417–426. ———, Brett Adams, Dinh Phung, Svetha Venkatesh, and Daniel Gatica-Perez. “Flickr Hypergroups.” MM ’09: Proceedings of the Seventeenth ACM International Conference on Multimedia. ACM, 2009. 813–816. Pink, Sarah. Doing Visual Ethnography: Images, Media and Representation in Research. 2nd ed. London: Sage, 2007. Su, Ja-Hwung, Bo-Wen Wang, Hsin-Ho Yeh, and Vincent S. Tseng. “Ontology-Based Semantic Web Image Retrieval by Utilizing Textual and Visual Annotations.” 2009 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology – Workshops. 2009. Suler, John. “Ethics in Cyberspace Research: Consent, Privacy and Contribution.” The Psychology of Cyberspace. 1996. 12 Jan. 2010 ‹http://www-usr.rider.edu/~suler/psycyber/psycyber.html>. ———. “Image, Word, Action: Interpersonal Dynamics in a Photo-Sharing Community.” Cyberpsychology & Behavior 11.5 (2008): 555–560. Valentine, Mark. “HAPHAZART! Magazine/Discuss/image selections…” [discussion post]. 2009. 12 Jan. 2010 ‹http://www.flickr.com/groups/haphazartmagazin/discuss/72157613147017532/>. Xu, Hongtao, Xiangdong Zhou, Mei Wang, Yu Xiang, and Baile Shi. “Exploring Flickr’s Related Tags for Semantic Annotation of Web Images.” CIVR ’09. ACM, 2009.