Journal articles on the topic 'Parallel merge sort'

Consult the top 50 journal articles for your research on the topic 'Parallel merge sort.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Cole, Richard. "Parallel Merge Sort." SIAM Journal on Computing 17, no. 4 (1988): 770–85. http://dx.doi.org/10.1137/0217049.

2

Cole, Richard. "Correction: Parallel Merge Sort." SIAM Journal on Computing 22, no. 6 (1993): 1349. http://dx.doi.org/10.1137/0222081.

3

Ullah Khan Rajesh, Husain. "An Adaptive Framework towards Analyzing the Parallel Merge Sort." International Journal of Science and Research (IJSR) 1, no. 2 (2012): 31–34. http://dx.doi.org/10.21275/ijsr11120222.

4

Manwade, K. B. "Analysis of Parallel Merge Sort Algorithm." International Journal of Computer Applications 1, no. 19 (2010): 70–73. http://dx.doi.org/10.5120/401-597.

5

Shen, Hai Long. "Optimal Parallel Algorithm of Merge Sort Based on OpenMP." Applied Mechanics and Materials 556-562 (May 2014): 3400–3403. http://dx.doi.org/10.4028/www.scientific.net/amm.556-562.3400.

Abstract:
A parallel algorithm for merge sort is proposed, and the improvements to merge sort are analyzed in this paper. OpenMP is applied to implement the proposed algorithm. The complexity and execution-time results for the proposed algorithm indicate that the parallel algorithm approaches the optimal case.
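None of these database entries carry source code, so as an illustration of the technique this abstract describes, here is a minimal task-based OpenMP merge sort in C. It is a sketch of the general method, not the paper's algorithm; the 4096-element task cutoff is an arbitrary assumption.

```c
#include <omp.h>
#include <stdlib.h>
#include <string.h>

/* Merge the sorted halves a[lo..mid) and a[mid..hi) through the buffer tmp. */
static void merge(int *a, int *tmp, int lo, int mid, int hi) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

/* Sort a[lo..hi); above the cutoff, the first half becomes an OpenMP task. */
static void msort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return;
    int mid = lo + (hi - lo) / 2;
    #pragma omp task shared(a, tmp) if(hi - lo > 4096)
    msort(a, tmp, lo, mid);
    msort(a, tmp, mid, hi);   /* the current thread takes the second half */
    #pragma omp taskwait      /* both halves must be sorted before merging */
    merge(a, tmp, lo, mid, hi);
}

void parallel_merge_sort(int *a, int n) {
    int *tmp = malloc((size_t)n * sizeof(int));
    #pragma omp parallel
    #pragma omp single        /* one thread seeds the recursive task tree */
    msort(a, tmp, 0, n);
    free(tmp);
}
```

The `if` clause stops task creation for small subarrays, where scheduling overhead would outweigh any parallel gain.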
6

Romm, Ya E. "Parallel merge sort using comparison matrices. II." Cybernetics and Systems Analysis 31, no. 4 (1995): 484–505. http://dx.doi.org/10.1007/bf02366405.

7

Romm, Ya E. "Parallel merge sort using comparison matrices. I." Cybernetics and Systems Analysis 30, no. 5 (1994): 631–47. http://dx.doi.org/10.1007/bf02367744.

8

Zhang, Jun, Yong Ping Gao, Yue Shun He, and Xue Yuan Wang. "Algorithm Improvement of Two-Way Merge Sort Based on OpenMP." Applied Mechanics and Materials 701-702 (December 2014): 24–29. http://dx.doi.org/10.4028/www.scientific.net/amm.701-702.24.

Abstract:
The two-way merge sort algorithm has good time efficiency and has been used widely. The algorithm's speed and efficiency can be improved by exploiting its inherent parallelism via the parallel processing capacity of multi-core processors and the convenient programming interface of OpenMP. The time complexity is improved to O(n log₂ n / TNUM), i.e., inversely proportional to the number of parallel threads. The experimental results show that the improved two-way merge sort algorithm becomes much more efficient compared to the traditional one.
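As a back-of-the-envelope reading of that bound (our arithmetic, not the paper's): for n = 2^20 keys, n log₂ n = 2^20 × 20 ≈ 2.1 × 10^7 element operations in total, so with TNUM = 8 threads the bound corresponds to roughly 2.6 × 10^6 operations per thread, an ideal eight-fold speedup. In practice the final merge passes expose fewer independent subproblems than the leaf-level sorts, so measured speedups usually fall short of this ceiling.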
9

Marszałek, Zbigniew, Marcin Woźniak, and Dawid Połap. "Fully Flexible Parallel Merge Sort for Multicore Architectures." Complexity 2018 (December 2, 2018): 1–19. http://dx.doi.org/10.1155/2018/8679579.

Abstract:
The development of multicore architectures gives a new line of processors that can flexibly distribute tasks between their logical cores. These systems need flexible models of efficient algorithms, both fast and stable. A new line of efficient sorting algorithms can help such systems use all available resources efficiently. Processes and calculations should be flexibly distributed between cores to make performance as high as possible. In this article we present a fully flexible sorting method designed for parallel processing. The idea we describe is based on a modified merge sort, whose parallel form is designed for multicore architectures. The novelty of the idea lies in its particular way of processing. We have developed a fully flexible method that can be implemented for any number of processors. The tasks are flexibly distributed between logical cores to increase the efficiency of sorting. The method preserves separation of concerns; therefore, each of the processors works separately without any cross actions or interruptions. The proposed method is described theoretically, examined in tests, and compared to other methods. The results confirm high efficiency and show that with each newly added processor, sorting becomes faster and more efficient.
10

Yudiswara, I. Nyoman Aditya, and Abba Suganda. "Analisis Kinerja Algoritma Quick Double Merge Sort Paralel Menggunakan openMP" [Performance Analysis of the Parallel Quick Double Merge Sort Algorithm Using OpenMP]. Ultima Computing: Jurnal Sistem Komputer 11, no. 2 (2020): 95–102. http://dx.doi.org/10.31937/sk.v11i2.1294.

Abstract:
Processor technology currently tends to increase the number of cores rather than the clock speed. This development is very useful and presents an opportunity to improve the performance of sequential algorithms that run on only one core. This paper discusses a sorting algorithm that is executed in parallel by several logical CPUs or cores using the openMP library. The algorithm, named QDM Sort, is a combination of the sequential quicksort algorithm and the double merge algorithm. This study uses a data-parallelism approach to design parallel algorithms from sequential ones. The data used in this study are both unsorted and pre-sorted integers stored in advance in a file. The parameter measured to determine the performance of the QDM Sort algorithm is speedup. When the data size is above 4096 and the number of threads in QDM Sort equals the number of logical CPUs, the QDM Sort algorithm has a better speedup than the other parallel sorting algorithms discussed in this study. For small amounts of data it is still better to use a sequential sorting algorithm.
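The chunk-then-merge pattern that QDM Sort builds on is common enough to sketch generically. The following C/OpenMP code is our illustration of that data-parallel pattern, not the authors' QDM Sort: it quicksorts one chunk per thread and then merges neighbouring runs pairwise.

```c
#include <omp.h>
#include <stdlib.h>
#include <string.h>

static int cmp_int(const void *x, const void *y) {
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* Merge the sorted runs a[lo..mid) and a[mid..hi) through tmp, copy back. */
static void merge_runs(int *a, int *tmp, long lo, long mid, long hi) {
    long i = lo, j = mid, k = lo;
    while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp + lo, (size_t)(hi - lo) * sizeof(int));
}

/* One chunk per thread: quicksort the chunks in parallel, then run
   pairwise merge passes until a single sorted run remains. */
void chunked_parallel_sort(int *a, long n) {
    long p = omp_get_max_threads();
    long chunk = (n + p - 1) / p;
    int *tmp = malloc((size_t)n * sizeof(int));

    #pragma omp parallel for schedule(static)
    for (long t = 0; t < p; t++) {
        long lo = t * chunk;
        long hi = (lo + chunk < n) ? lo + chunk : n;
        if (lo < hi) qsort(a + lo, (size_t)(hi - lo), sizeof(int), cmp_int);
    }
    for (long w = chunk; w < n; w *= 2) {   /* doubling merge passes */
        #pragma omp parallel for schedule(static)
        for (long lo = 0; lo < n - w; lo += 2 * w) {
            long hi = (lo + 2 * w < n) ? lo + 2 * w : n;
            merge_runs(a, tmp, lo, lo + w, hi);
        }
    }
    free(tmp);
}
```

For small inputs the thread-management overhead dominates, which is consistent with the abstract's finding that sequential sorting wins below roughly 4096 elements.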
11

Trahan, Robin, and Susan Rodger. "Simulation and visualization tools for teaching parallel merge sort." ACM SIGCSE Bulletin 25, no. 1 (1993): 237–41. http://dx.doi.org/10.1145/169073.169461.

12

Evans, D. J., and Nadia Y. Yousif. "The parallel neighbour sort and 2-way merge algorithm." Parallel Computing 3, no. 1 (1986): 85–90. http://dx.doi.org/10.1016/0167-8191(86)90009-8.

13

Arman, Arif, and Dmitri Loguinov. "Origami." Proceedings of the VLDB Endowment 15, no. 2 (2021): 259–71. http://dx.doi.org/10.14778/3489496.3489507.

Abstract:
Mergesort is a popular algorithm for sorting real-world workloads as it is immune to data skewness, suitable for parallelization using vectorized intrinsics, and relatively simple to multi-thread. In this paper, we introduce Origami, an in-memory merge-sort framework that is optimized for scalar, as well as all current SIMD (single-instruction multiple-data) CPU architectures. For each vector-extension set (e.g., SSE, AVX2, AVX-512), we present an in-register sorter for small sequences that is up to 8× faster than prior methods and a branchless streaming merger that achieves up to a 1.5× speed-up over the naive merge. In addition, we introduce a cache-residing quad-merge tree to avoid bottlenecking on memory bandwidth and a parallel partitioning scheme to maximize thread-level concurrency. We develop an end-to-end sort with these components and produce a highly utilized mergesort pipeline by reducing the synchronization overhead between threads. Single-threaded Origami performs up to 2× faster than the closest competitor and achieves a nearly perfect speed-up in multi-core environments.
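The "branchless streaming merger" named here refers to a merge loop in which the comparison result feeds index arithmetic instead of an unpredictable conditional branch. Below is a minimal scalar sketch of that general idea in C; Origami's actual mergers are vectorized SIMD kernels, which this is not.

```c
/* Branchless two-way merge: the comparison outcome (0 or 1) advances
   exactly one of the two input cursors, with no data-dependent branch
   in the hot loop. */
static void branchless_merge(const int *a, int na,
                             const int *b, int nb, int *out) {
    int i = 0, j = 0, k = 0;
    while (i < na && j < nb) {
        int take_a = (a[i] <= b[j]);       /* 0 or 1 */
        out[k++] = take_a ? a[i] : b[j];   /* typically compiled to cmov */
        i += take_a;
        j += 1 - take_a;
    }
    while (i < na) out[k++] = a[i++];      /* drain whichever run remains */
    while (j < nb) out[k++] = b[j++];
}
```

On random input the classic branching merge mispredicts roughly half of its comparisons, which is why removing the branch pays off even before vectorization.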
14

Wolf, J. L., D. M. Dias, and P. S. Yu. "A parallel sort merge join algorithm for managing data skew." IEEE Transactions on Parallel and Distributed Systems 4, no. 1 (1993): 70–86. http://dx.doi.org/10.1109/71.205654.

15

Dehne, Frank, and Hamidreza Zaboli. "Deterministic Sample Sort for GPUs." Parallel Processing Letters 22, no. 03 (2012): 1250008. http://dx.doi.org/10.1142/s0129626412500089.

Abstract:
We demonstrate that parallel deterministic sample sort for many-core GPUs (GPU BUCKET SORT) is not only considerably faster than the best comparison-based sorting algorithm for GPUs (THRUST MERGE [Satish et al., Proc. IPDPS 2009]) but also as fast as randomized sample sort for GPUs (GPU SAMPLE SORT [Leischner et al., Proc. IPDPS 2010]). However, deterministic sample sort has the advantage that bucket sizes are guaranteed and therefore its running time does not have the input-data-dependent fluctuations that can occur for randomized sample sort.
16

Kyi, Lai Lai Win, and Nay Min Tun. "Performance Comparison of Parallel Sorting Algorithms on Homogeneous Cluster of Workstations." Advanced Materials Research 433-440 (January 2012): 3900–3904. http://dx.doi.org/10.4028/www.scientific.net/amr.433-440.3900.

Abstract:
Sorting has received the most attention among all computational tasks over the past years because sorted data is at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time. The algorithms implemented are the odd-even transposition sort, parallel merge sort and parallel shell sort. A Cluster of Workstations, or Windows Compute Cluster, has been used to compare the algorithms implemented. The C# programming language is used to develop the sorting algorithms. The MPI library has been selected to establish the communication and synchronization between processors. The time complexity for each parallel sorting algorithm is also mentioned and analyzed.
17

Srivastava, Rahul. "Research Paper on Visualization of Sorting Algorithm." International Journal of Scientific Research in Engineering and Management 08, no. 05 (2024): 1–5. http://dx.doi.org/10.55041/ijsrem34569.

Abstract:
In the realm of sorting algorithms, visualization projects serve as educational tools to comprehend and demonstrate various sorting techniques. This paper presents an analysis and review of sorting visualization projects that do not utilize parallelism. By exploring sorting algorithms without parallel processing, this review aims to provide insights into the efficiency, functionality, and visual representation of these algorithms. Understanding sorting algorithms without parallelism contributes to a foundational understanding of their sequential execution and computational complexities. Keywords: Sorting Algorithms, React Visualizer, Selection Sort, Merge Sort, Bubble Sort, Insertion Sort, Heap Sort.
18

Schneider, G. Michael. "Using Parallel Merge Sort to Teach Fundamental Concepts in Distributed Parallelism." Computer Science Education 9, no. 2 (1999): 148–61. http://dx.doi.org/10.1076/csed.9.2.148.3810.

19

AL-Azzam, Saad, and Mohammad Qatawneh. "Parallel Processing of Sorting and Searching Algorithms Comparative Study." Modern Applied Science 12, no. 4 (2018): 143. http://dx.doi.org/10.5539/mas.v12n4p143.

Abstract:
Recently, supercomputer structure and its software optimization have been popular subjects. Much software consumes a long period of time both to sort and to search datasets, and thus optimizing these algorithms becomes a priority. In order to discover the most efficient sorting and searching algorithms for parallel processing units, one can compare CPU runtime as a performance index. In this paper, the Quick, Bubble, and Merge sort algorithms have been chosen for comparison, as well as sequential and binary search algorithms. Each of the sort and search algorithms was tested in worst-, average- and best-case scenarios, and each scenario was applied using multiple techniques (sequential, multithreaded, and parallel processing) on various numbers of processors to spot differences and calculate the speed-up factor. The proposed solution aims to optimize the performance of a supercomputer focusing on time efficiency; all tests were conducted on the IMAN1 supercomputer, which is Jordan's first and fastest supercomputer.
20

Albutiu, Martina-Cezara, Alfons Kemper, and Thomas Neumann. "Massively parallel sort-merge joins in main memory multi-core database systems." Proceedings of the VLDB Endowment 5, no. 10 (2012): 1064–75. http://dx.doi.org/10.14778/2336664.2336678.

21

Saxena, Sanjeev, P. C. P. Bhatt, and V. C. Prasad. "On Parallel Prefix Computation." Parallel Processing Letters 04, no. 04 (1994): 429–36. http://dx.doi.org/10.1142/s0129626494000399.

Abstract:
We prove that prefix sums of n integers of at most b bits can be found on a COMMON CRCW PRAM in [Formula: see text] time with a linear time-processor product. The algorithm is optimally fast for any polynomial number of processors. In particular, if [Formula: see text] the time taken is [Formula: see text]. This is a generalisation of a previous result. The previous [Formula: see text] time algorithm was valid only for O(log n)-bit numbers. Application of this algorithm to the r-way parallel merge sort algorithm is also considered. We also consider a more realistic PRAM variant, in which the word size, m, may be smaller than b (m ≥ log n). On this model, prefix sums can be found in [Formula: see text] optimal time.
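The entry above is PRAM theory, but the two-pass idea behind parallel prefix sums carries over directly to shared memory. Here is a blocked OpenMP scan in C, a sketch of the general technique under the simple assumption of one block per thread; it is not the paper's PRAM algorithm.

```c
#include <omp.h>
#include <stdlib.h>

/* Blocked two-pass inclusive scan: scan each block in parallel, scan the
   per-block totals sequentially, then add each block's offset in parallel. */
void prefix_sums(long *x, long n) {
    int p = omp_get_max_threads();
    long chunk = (n + p - 1) / p;
    long *total = calloc((size_t)p + 1, sizeof(long));

    #pragma omp parallel num_threads(p)
    {
        int t = omp_get_thread_num();
        long lo = t * chunk;
        long hi = (lo + chunk < n) ? lo + chunk : n;
        for (long i = lo + 1; i < hi; i++) x[i] += x[i - 1];    /* local scan */
        if (lo < hi) total[t + 1] = x[hi - 1];
        #pragma omp barrier
        #pragma omp single
        for (int s = 1; s <= p; s++) total[s] += total[s - 1];  /* scan totals */
        for (long i = lo; i < hi; i++) x[i] += total[t];        /* add offsets */
    }
    free(total);
}
```

The implicit barrier at the end of the `single` region guarantees every thread sees the scanned totals before adding its own block offset.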
22

Nugroho, Eko Dwi, Ilham Firman Ashari, Muhammad Nashrullah, Muhammad Habib Algifari, and Miranti Verdiana. "Comparative Analysis of OpenMP and MPI Parallel Computing Implementations in Team Sort Algorithm." Journal of Applied Informatics and Computing 7, no. 2 (2023): 141–49. http://dx.doi.org/10.30871/jaic.v7i2.6409.

Abstract:
Tim Sort is a sorting algorithm that combines the Merge Sort and Binary Insertion Sort algorithms. Parallel computing is a processing technique in which a computation is divided into several parts that are carried out simultaneously. The application of parallel computing to an algorithm is called parallelization. The purpose of parallelization is to reduce processing time, but not all parallelization reduces processing time. Our research aims to analyse the effect of parallel computing on the processing time of the Tim Sort algorithm. The Tim Sort algorithm is parallelized by dividing the data into several parts, sorting each part, and recombining them. The libraries we use are OpenMP and MPI, and tests are carried out using up to 16 processor cores and data sets of up to 4194304 numbers. The goal of comparing the application of OpenMP and MPI to the Tim Sort algorithm is to find out which library is better for this case study, so that when a similar case arises it can serve as a reference for choosing a library. Test results using 16 processor cores and the data described show that parallelization of the Tim Sort algorithm using OpenMP performs better, with a speedup of up to 8.48 times, compared to MPI with a speedup of up to 8.4 times. In addition, speedup and efficiency grow as the amount of data increases, while the efficiency gained per additional processor core decreases.
23

Hosseini-Rad, Mina, Majid Abdulrozzagh-Nezzad, and Seyyed-Mohammad Javadi-Moghaddam. "Study of Scheduling in Programming Languages of Multi-Core Processor." Data Science: Journal of Computing and Applied Informatics 2, no. 2 (2019): 101–9. http://dx.doi.org/10.32734/jocai.v2.i2-282.

Abstract:
Over recent decades, the nature of multi-core processors has caused the serial programming model to shift to a parallel one. There are several programming languages for parallel multi-core processors and processors with different architectures, and these languages have confronted programmers with challenges in achieving higher performance. In addition, the different scheduling methods in the programming languages for multi-core processors have a significant impact on the efficiency of those languages. Therefore, this article investigates the conventional scheduling techniques in the programming languages of multi-core processors, which allows researchers to choose more suitable programming languages by comparing their efficiency for an application. Several languages such as Cilk++, OpenMP, TBB and PThread were studied, and their scheduling efficiency was investigated by running the Quick-Sort and Merge-Sort algorithms.
24

Hosseini-Rad, Mina, Majid Abdolrazzagh-Nezhad, and Seyyed-Mohammad Javadi-Moghaddam. "Study of Scheduling in Programming Languages of Multi-Core Processor." Data Science: Journal of Computing and Applied Informatics 2, no. 2 (2018): 101–9. http://dx.doi.org/10.32734/jocai.v2.i2-327.

Abstract:
Over recent decades, the nature of multi-core processors has caused the serial programming model to shift to a parallel one. There are several programming languages for parallel multi-core processors and processors with different architectures, and these languages have confronted programmers with challenges in achieving higher performance. In addition, the different scheduling methods in the programming languages for multi-core processors have a significant impact on the efficiency of those languages. Therefore, this article investigates the conventional scheduling techniques in the programming languages of multi-core processors, which allows the researcher to choose more suitable programming languages by comparing their efficiency for an application. Several languages such as Cilk++, OpenMP, TBB and PThread were studied, and their scheduling efficiency was investigated by running the Quick-Sort and Merge-Sort algorithms.
25

Keller, Jörg, Christoph Kessler, and Rikard Hultén. "Optimized On-Chip-Pipelining for Memory-Intensive Computations on Multi-Core Processors with Explicit Memory Hierarchy." JUCS - Journal of Universal Computer Science 18, no. 14 (2012): 1987–2023. https://doi.org/10.3217/jucs-018-14-1987.

Abstract:
Limited bandwidth to off-chip main memory tends to be a performance bottleneck in chip multiprocessors, and this will become even more problematic with an increasing number of cores. Especially for streaming computations where the ratio between computational work and memory transfer is low, transforming the program into more memory-efficient code is an important program optimization. On-chip pipelining reorganizes the computation so that partial results of subtasks are forwarded immediately between the cores over the high-bandwidth internal network, in order to reduce the volume of main memory accesses, and thereby improves the throughput for memory-intensive computations. At the same time, throughput is also constrained by the limited amount of on-chip memory available for buffering forwarded data. By optimizing the mapping of tasks to cores, balancing a trade-off between load balancing, buffer memory consumption, and communication load on the on-chip network, a larger buffer size can be applied, resulting in less DMA communication and scheduling overhead. In this article, we consider parallel mergesort as a representative memory-intensive application in detail, and focus on the global merging phase, which dominates the overall sorting time for larger data sets. We work out the technical issues of applying the on-chip pipelining technique, and present several algorithms for optimized mapping of merge trees to the multiprocessor cores. We also demonstrate how some of these algorithms can be used for mapping of other streaming task graphs. We describe an implementation of pipelined parallel mergesort for the Cell Broadband Engine, which serves as an exemplary target. We evaluate experimentally the influence of buffer sizes and mapping optimizations, and show that optimized on-chip pipelining indeed speeds up, for realistic problem sizes, merging times by up to 70% on QS20 and 143% on PS3 compared to the merge phase of CellSort, which was previously the fastest merge sort implementation on Cell.
26

Lee, JinWoo, Jung-Im Won, and JeeHee Yoon. "A Sort and Merge Method for Genome Variant Call Format (GVCF) Files using Parallel and Distributed Computing." Journal of KIISE 48, no. 3 (2021): 358–67. http://dx.doi.org/10.5626/jok.2021.48.3.358.

27

Adam, George K. "Co-Design of Multicore Hardware and Multithreaded Software for Thread Performance Assessment on an FPGA." Computers 11, no. 5 (2022): 76. http://dx.doi.org/10.3390/computers11050076.

Abstract:
Multicore and multithreaded architectures increase the performance of computing systems. The increase in cores and threads, however, raises further issues in the efficiency achieved in terms of speedup and parallelization, particularly for the real-time requirements of Internet of things (IoT)-embedded applications. This research investigates the efficiency of a 32-core field-programmable gate array (FPGA) architecture, with memory management unit (MMU) and real-time operating system (OS) support, to exploit the thread level parallelism (TLP) of tasks running in parallel as threads on multiple cores. The research outcomes confirm the feasibility of the proposed approach in the efficient execution of recursive sorting algorithms, as well as their evaluation in terms of speedup and parallelization. The results reveal that parallel implementation of the prevalent merge sort and quicksort algorithms on this platform is more efficient. The increase in the speedup is proportional to the core scaling, reaching a maximum of 53% for the configuration with the highest number of cores and threads. However, the maximum magnitude of the parallelization (66%) was found to be bounded to a low number of two cores and four threads. A further increase in the number of cores and threads did not add to the improvement of the parallelism.
28

Rui, Ran, Hao Li, and Yi-Cheng Tu. "Efficient join algorithms for large database tables in a multi-GPU environment." Proceedings of the VLDB Endowment 14, no. 4 (2020): 708–20. http://dx.doi.org/10.14778/3436905.3436927.

Abstract:
Relational join processing is one of the core functionalities in database management systems. It has been demonstrated that GPUs as a general-purpose parallel computing platform are very promising in processing relational joins. However, join algorithms often need to handle very large input data, an issue that was not sufficiently addressed in existing work. Besides, as more and more desktop and workstation platforms support multi-GPU environments, the combined computing capability of multiple GPUs can easily match that of a computing cluster. It is worth exploring how join processing would benefit from the adoption of multiple GPUs. We identify the low rate and complex patterns of data transfer among the CPU and GPUs as the main challenges in designing efficient algorithms for large table joins. To overcome such challenges, we propose three distinctive designs of multi-GPU join algorithms, namely, the nested loop, global sort-merge and hybrid joins for large table joins with different join conditions. Extensive experiments running on multiple databases and two different hardware configurations demonstrate high scalability of our algorithms over data size and a significant performance boost brought by the use of multiple GPUs. Furthermore, our algorithms achieve much better performance than existing join algorithms, with speedups of up to 25X and 2.8X over the best known code developed for multi-core CPUs and GPUs, respectively.
29

Niu, Yue, Jonathan Sterling, Harrison Grodin, and Robert Harper. "A cost-aware logical framework." Proceedings of the ACM on Programming Languages 6, POPL (2022): 1–31. http://dx.doi.org/10.1145/3498670.

Abstract:
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type-theoretic proof assistants. We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist's method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid's algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
30

King, Stephen F., and Phil Aisthorpe. "Re-Engineering in the Face of a Merger: Soft Systems and Concurrent Dynamics." Journal of Information Technology 15, no. 2 (2000): 165–79. http://dx.doi.org/10.1177/026839620001500207.

Abstract:
Business process re-engineering (BPR) was presented as the key to successful organizational transformation in the early 1990s. In this paper we examine a BPR initiative at a medium-sized UK building society in order to explore whether BPR succeeded or failed and to place BPR within the wider context of an organization facing a merger. The study describes the development of a novel BPR methodology which combines both hard and soft modelling approaches and reveals a degree of success in terms of process modelling and gaining consensus on process content and faults. However, two more far-reaching initiatives served to drain away support from the BPR effort: a parallel organizational analysis undertaken by an external consultancy and the hidden (although rumoured) merger talks with a larger partner. Therefore it is inappropriate to view BPR as an isolated, strategic initiative when, in practice, it may be one of several competing change activities vying for support within a changing organizational context. The paper concludes by presenting a model of concurrent dynamics which helps to explain why BPR lost momentum.
31

Locatell, Christian. "Translating and Exegeting Hebrew Poetry: Illustrated with Psalm 70." Journal of Translation 11, no. 1 (2015): 35–60. http://dx.doi.org/10.54395/jot-p46yv.

Abstract:
Biblical Hebrew (BH) poetry poses unique challenges to translators and exegetes because of its often complex textual development, its defamiliarized mode of communication, and its understudied relationship to its co-text. While a comprehensive analysis is welcomed for any discourse type, the unique challenges of BH poetry call for a holistic approach that marshals insights from the extra-linguistic setting, co-text, and multifaceted discourse features. The method of discourse analysis proposed by Wendland (1994) seems to provide a helpful framework for such investigation. Applying this approach to Psalm 70—a short, but incredibly multifaceted text—reveals the value of this sort of comprehensive, interdisciplinary analysis. Additionally, following the application of Lambrecht’s (1994) theory of information structure (IS) to BH by Van der Merwe et al. (forthcoming), I propose that the Psalms may use parallel word order variation patterns beyond their IS purposes to create coherence relations at the discourse level.
32

Xie, Weiying, Haonan Qin, Yunsong Li, Zhuo Wang, and Jie Lei. "A Novel Effectively Optimized One-Stage Network for Object Detection in Remote Sensing Imagery." Remote Sensing 11, no. 11 (2019): 1376. http://dx.doi.org/10.3390/rs11111376.

Abstract:
With great significance in military and civilian applications, the topic of detecting small and densely arranged objects in wide-scale remote sensing imagery is still challenging nowadays. To solve this problem, we propose a novel effectively optimized one-stage network (NEOON). As a fully convolutional network, NEOON consists of four parts: Feature extraction, feature fusion, feature enhancement, and multi-scale detection. To extract effective features, the first part has implemented bottom-up and top-down coherent processing by taking successive down-sampling and up-sampling operations in conjunction with residual modules. The second part consolidates high-level and low-level features by adopting concatenation operations with subsequent convolutional operations to explicitly yield strong feature representation and semantic information. The third part is implemented by constructing a receptive field enhancement (RFE) module and incorporating it into the fore part of the network where the information of small objects exists. The final part is achieved by four detectors with different sensitivities accessing the fused features, all four parallel, to enable the network to make full use of information of objects in different scales. Besides, the Focal Loss is set to enable the cross entropy for classification to solve the tough problem of class imbalance in one-stage methods. In addition, we introduce the Soft-NMS to preserve accurate bounding boxes in the post-processing stage especially for densely arranged objects. Note that the split and merge strategy and multi-scale training strategy are employed in training. Thorough experiments are performed on ACS datasets constructed by us and NWPU VHR-10 datasets to evaluate the performance of NEOON. Specifically, 4.77% and 5.50% improvements in mAP and recall, respectively, on the ACS dataset as compared to YOLOv3 powerfully prove that NEOON can effectually improve the detection accuracy of small objects in remote sensing imagery. In addition, extensive experiments and comprehensive evaluations on the NWPU VHR-10 dataset with 10 classes have illustrated the superiority of NEOON in the extraction of spatial information of high-resolution remote sensing images.
33

Kotov, V. G. "Engraved images of the Shulgan-Tash (Kapova) cave, Bashkortostan, South Ural." VESTNIK ARHEOLOGII, ANTROPOLOGII I ETNOGRAFII, no. 2(61) (June 15, 2023): 5–15. http://dx.doi.org/10.20874/2071-0437-2023-61-2-1.

Abstract:
The cave of Shulgan-Tash (Kapova) with wall drawings of the Upper Paleolithic is located in the mountain course of the River Belaya in the Southern Urals, nearby the village of Gadelgareevo, Burzyansky district of the Republic of Bashkortostan. In the process of more than 50 years of studying the cave sanctuary, the search for engraved images has been carried out. Two compositions with engraved images were discovered in 2008. Composition No. 1 is located in the Main Gallery, 100 m from the entrance, in a niche on the western wall at a height of about 2 m above the floor level. It consists of the elements located on two levels. At the lower level, a number of elements are confined to the natural fracture and a chain of caverns. Parallel to the horizontal crack, five lines were drawn. The lines connect to a quadrangular shape filled with vertical and horizontal lines. Behind it, the crack merges into a chain of caverns. The upper tier consists of four oval artificial recesses. The fourth groove is located under the engraved anthropomorphic figure, between the legs. This indicates that this is a vulva-shaped symbol. The grooves are connected by deeply incised lines to the quadrangular figure and caverns of the lower tier. Lines also run from the chain of the caverns downwards. Thus, these groups of artificial and natural elements were combined into a single composition. Composition No. 2 is located in the Dome Hall, 150 m from the entrance, above the Chapel of Skulls in the western wall, nearby the colorful wall images in the shape of splashes. It was made on a 16 cm × 14 cm rock surface leveled and cleaned of calcite deposits. The composition consists of three pictorial elements made in three different ways. The first element is represented by two parallel arcuate bands of comb lines 4 cm wide and about 30 cm long made with a serrated stone tool 4 cm wide in the soft mondmilch. Under them, with finger impressions in the mondmilch, a circle of about 6 cm in diameter was made of round dimples; rows of engraved straight lines and zigzags were applied to the right of the circle. At present, the composition is held together by calcite incrustation and has completely hardened. The use of stone tools to create the engravings and grooves, the calcite crust inside the engraved lines, the use of the natural forms of the wall relief in the pictorial ensemble, the similarity of the quadrangular figure with the quadrangular symbols painted with ochre in the same cave, and the presence of a vulva-shaped symbol — all this indicates the Upper Paleolithic Age of these compositions.
34

Tank, W. J., B. C. Curran, and E. E. Wadleigh. "Targeting Horizontal Wells—Efficient Oil Capture and Fracture Insights." SPE Reservoir Evaluation & Engineering 2, no. 02 (1999): 180–85. http://dx.doi.org/10.2118/55984-pa.

Abstract:
Summary: Horizontal well targeting is often a greater challenge in massive, fractured carbonates than in low-productivity, poorly connected, and relatively thin reservoirs. This paper discusses methods to target horizontal wellbores in three-dimensional space to both confirm the fracture interpretation and establish high-efficiency oil capture. Several well examples are presented to illustrate the targeting objectives and the resulting well performance. Early in the program, the horizontal drilling objectives sought to maximize the lateral length in a direction determined by offset well productivity; the same philosophy as is used in matrix-dominated reservoirs. Analysis of these results and employment of methods presented in this paper indicate profit can be maximized by drilling to a specific target to intersect a fracture trend at an optimum elevation instead of concentrating on maximizing length of lateral. Intervals of rapid penetration, lost circulation, and/or bit slides, along with cutting sample compositions, provided insight for confirmation and extension of the fracture network interpretation. The width of disturbance and degree of fracturing observed along interpreted fracture trends are valuable data for improved fracture network interpretation and computer simulation. Both the elevation and number of fracture branches encountered are significant strategic planning issues for oil recovery from unconfined oil columns in a massive carbonate system. Results from a large number of horizontals indicate significant productivity increases are achieved by proper targeting of laterals into major fracture features. Introduction: Horizontal wells provide a unique assessment tool for formations containing reservoirs dominated by discontinuous flow features such as fractures or interbedded sandstones. Massive carbonate formations are the most extreme setting for large-scale, high-contrast, discontinuous reservoir properties. In sandstones of moderate to low quality, horizontals are typically applied to improve rate by exposing additional formation for fluid entry at high drawdown. In carbonates, horizontals serve to intersect high-conductivity flow features. In sandstones, high flow quality often coincides with sand accumulation. In contrast, carbonate flow is often highly discontinuous while storage capacity remains a relatively continuous function (as limited by depositional and diagenetic porosity history). Since 1993, significant study has gone into identifying the extent and quality of fracture networks and the impact these systems have had on reservoir management, fluid reinjection, and completion efficiency.1,2 In west Texas alone, well over 100 short-radius horizontal wells have been drilled in one field since 1986. Horizontals drilled in this fractured carbonate reservoir were initially done to maximize oil production while limiting gas coning.3 With the recent fracture studies, emphasis has moved to using horizontal boreholes to connect with large flow features not penetrated in existing wellbores.4,5 These more recent wells have targeted fracture zones interpreted from flexure maps which are developed from a second derivative analysis of structural surface maps. This paper provides results of several horizontal wells drilled with the intent of cutting the interpreted fracture zones. Targeting horizontal wells requires an understanding of massive carbonate features as well as discontinuous flow features.
This paper will discuss how mapping was used to determine flow-feature locations; how horizontal drilling techniques were used to intersect these targeted flow features; and the refinement of the interpretation and the drilling operations. Massive Carbonate Flow Features: What is a massive carbonate? Carbonates that have relatively thick (100 ft or greater) intervals of mixed porous and tight/brittle rock types, free of continuous soft shale or anhydrite layers, are considered massive for this discussion. Structural deformation is subtle in many massive carbonate reservoirs, but still highly significant in generating preferential flow within the reservoir body. Minor deformation, resulting from differential compaction and formation dip growth, is accommodated in a range of extensional fracturing of the relatively brittle carbonates. Potential solution enhancement of fracture and fault zones further enhances flow. The highly conductive flow features of these carbonates often are a mix of bedding parallel (matrix) and subvertical (fracture) features.2 Data gathered from vertical wells can bias the interpretation of flow-feature population due to sampling a greater population of bedding parallel features. Vertical wells statistically encounter numerous short, mostly random-oriented fractures, but very few of the largest subvertical fracture features. Horizontal wells, in contrast, encounter few bedding parallel flow features in exchange for a full range of subvertical fracture flow features. Horizontal wells can provide data for direct assessment of fracture frequency and matrix block size in contrast to the highly interpretive approach required for assessment from vertical well data. More importantly, horizontal well data provides insight into the lateral variance in subvertical fracture features. Significant variation is expected between low fracture intensity near the center of a large formation block relative to the high frequency expected near the edges of this block where strain is concentrated. Block edges for large-scale features may follow obvious faults, hingelines (linear trends of dip change), or structural noses. Fig. 1 conceptually illustrates a fractured rock mass with a horizontal well intersecting a strain zone of likely high-flow capacity. Often, the structural indications of block-edge strain zones are subtle and easily merged with interpreted depositional or erosional changes across the field. Here, horizontal well data are critical to generation of an adequate flow-feature model.
35

Zečević, Slobodan. "Contribution to discussions about existence of the constitutional law of the European Union." Arhiv za pravne i drustvene nauke 11, no. 1 (2023): 9–27. http://dx.doi.org/10.5937/adpn2301009z.

Abstract:
In relation to the topic, the formal absence of a legal text called the constitution of the European Union is noticeable. Simple logic dictates the conclusion that in the absence of a European constitution there is no constitutional law of the European Union. However, the reality is much more complex than it seems. The United Kingdom, for example, does not have a written act called a constitution, but instead several instruments of constitutional content whose sources are laws, legal practice and so-called constitutional customs. Germany also formally does not have a constitution, but a Fundamental Law that pursues a constitutional role. Apparently it is not the term that is so important but the status of the text. The constitution is a set of norms that are supreme, stable and difficult to change. It accords competences to the state bodies and guarantees essential civil rights and freedoms. The relevant question in this case is the existence of a constitution and constitutional law of the European Union, not in a formal but in an essential sense. The European Union does not have the characteristics of a unitary state, but could it be considered a federal state? In political-legal theory, opinions appeared that such a thing is impossible for the following reasons. As an example of the emergence of a federal state, the history of the United States of America is cited. Under the constitution of 1787, the US received competences in foreign affairs, defense, monetary policy, as well as in the field of protection of fundamental rights and freedoms. The European Union rested on the process of federalization in the economic area. The treaties establishing the Community and the Union have merged the national markets of the member states into one. Originally the European Communities did not have powers in foreign affairs, defense, security and justice. Only in 1993, with the Maastricht Treaty, did the newly created European Union gain the possibility of taking decisions in the aforementioned areas, but even then federal mechanisms were not applied. The rule was unanimous decision-making by representatives of the member states' governments assembled in the Council of the EU. State sovereignty was preserved. Given the obvious lack of authority at the supranational level, the European Union cannot currently be considered a classic federal state. However, it can be observed as a sort of federal community, which was originally intended to evolve into something more than that. In a historical sense, this situation in itself is not new. It also appeared in the 19th century with the so-called emerging federal states such as the United States of America, the Swiss Confederation, Germany, Canada or Australia. However, the European Union is a permanent political-legal structure that has certain attributes of a federal state. The notion of a federal community allows one to take into account the essential role of the member states in such a system of integration. The federal community, as a permanent entity, rests on the contractual relationship that defines the common goals of its members. The aforementioned goals in practice change the internal conditions in the member states, but also their global political status. Several indications point to the federal nature of the European Union. The use of the term Union is not harmless. The founding fathers of the US Constitution of 1787 called their newly created federal state a Union in order to mark the difference from the previously existing Confederation.
The inspirers of the European Union emphasize in the constitutive treaty that its main goal is to constantly create closer ties between European nations. This sentence indirectly indicates a strong, integrative, federal dynamic. In its legal practice, the Court of Justice does not ignore the initial international nature of the constitutive treaties, but points to the following. The treaties establishing the Communities and the European Union represent the basis of an independent, hierarchically organized legal order, of the kind that states have. As the highest legal act and source of law, they have a constitutional function. The law of the European Union is directly integrated into the legal order of the member states and has primacy over national law. The legislative acts of European derivative law (regulations, directives, decisions) cannot contradict the provisions of the founding treaties. Like the Supreme Court in a federal state, the Court of Justice of the European Union controls the compliance of legislative acts with the constitutive treaties. The same principle applies in the field of international relations. An international agreement concluded by the European Union or its member states must be in accordance with the provisions of the founding treaties. Their constitutionality is checked by the Court of Justice. The Lisbon Treaty gave the European Union another federal distinction. It recognizes the European Union's possession of legal personality, which means full legal capacity to conclude international agreements with other countries and international organizations. The division of competences between the federal state and its members is for many the essence of the federalist legal order. The parallel existence of two levels of government imposes the need to clearly demarcate the fields of action of the one and the other authority. In 2009 the Treaty of Lisbon established a principled delimitation of European and national competences. This is another step in the direction of federal legal regulation. The existence of European citizenship gives the European Union one more federal characteristic. European citizens acquire rights and obligations parallel to those related to national citizenship. Opponents of such a solution were those who believed that the Union represents only an international organization. The founding treaties assign competences to the institutions of the Union, as well as guarantee basic human rights and freedoms. The legislation of the European Union determines the functioning of the member states and in many areas directly or indirectly governs the life of their citizens. The treaties establishing the European Union have in practice a constitutional role and value.
36

Zaghloul, Soha S., Laila M. AlShehri, Maram F. AlJouie, Nojood E. AlEissa, and Nourah A. AlMogheerah. "Analytical and Experimental Performance Evaluation of Parallel Merge Sort on Multicore System." International Journal Of Engineering And Computer Science, June 30, 2017. http://dx.doi.org/10.18535/ijecs/v6i6.36.

37

Dimitrov, Metodi, and Tzvetomir Vassilev. "Research on the Amount of Information Needed to Restore the Original Order of Four or Eight Elements Lists, when Using Different Sorting Algorithms." Proceedings of the Bulgarian Academy of Sciences 77, no. 9 (2024). http://dx.doi.org/10.7546/crabs.2024.09.07.

Abstract:
Data sorting is essential in most software applications, but sometimes elements need to be restored to their original order after processing. If this restoration happens long after sorting or on a different computer, additional information is needed to restore the order. This work explores the information needed to restore sequences of 4 and 8 elements. The following sorting algorithms were studied: parallel neighbour (odd even) sort, insertion sort, bubble sort, shell sort, merge-insertion sort, and merge sort. For each algorithm, the amount of information required in bytes to restore the original order of the elements was determined.
38

Zhang, Jin, Jincheng Zhou, Xiang Zhang, Di Ma, and Chunye Gong. "Fine-grained vectorized merge sorting on RISC-V: from register to cache." CCF Transactions on High Performance Computing, December 18, 2024. https://doi.org/10.1007/s42514-024-00201-2.

Abstract:
Merge sort as a divide-sort-merge paradigm has been widely applied in computer science fields. As modern reduced instruction set computing architectures like the fifth generation (RISC-V) regard multiple registers as a vector register group for wide instruction parallelism, optimizing merge sort with this vectorized property is becoming increasingly common. In this paper, we overhaul the divide-sort-merge paradigm, from its register-level sort to the cache-aware merge, to develop a fine-grained RISC-V vectorized merge sort (RVMS). From the register-level view, the inline vectorized transpose instruction is missing in RISC-V, so implementing it efficiently is non-trivial. Besides, the vectorized comparisons do not always work well in the merging networks. Both issues primarily stem from the expensive data shuffle instruction. To bypass it, RVMS takes strided register data as a proxy for the data shuffle to accelerate the transpose operation, and meanwhile replaces vectorized comparisons with their scalar cousins for lighter real-value swaps. On the other hand, as cache-aware merge performs larger data merges in the cache, most merge schemes have two drawbacks: the in-cache merge usually has low cache utilization, while the out-of-cache merging network remains an ineffectively symmetric structure. To this end, we propose the half-merge scheme, which employs the auxiliary space of in-place merge to halve the footprint of naïve merge sort and copies one sequence to this space to avoid the former data exchange. Furthermore, an asymmetric merging network is developed to adapt to two different input sizes. Experiments on the RISC-V processor SG2042 show that the four fine-grained optimization schemes, register strided transpose, hybrid merging network, half-merge strategy, and asymmetric merging network, improve performance by 4.05%, 19.88%, 12.23%, and 11.04% respectively. Importantly, the overall performance is 1.34x faster than the parallel sorting in the Boost C++ library, and 1.85x faster than std::sort.
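The "half-merge" idea this abstract names has a simple scalar core: only one of the two runs needs to be copied to auxiliary memory, halving the buffer footprint of a naive merge. A plain-C sketch of that general idea follows; RVMS itself layers RISC-V vectorization and cache-aware structure on top, none of which is shown here.

```c
#include <string.h>

/* Half merge: stash only the left run a[lo..mid) in buf (n/2 slots instead
   of n), then merge buf with the right run back into a[lo..hi). The write
   cursor k can never overtake the unread right-run cursor j. */
static void half_merge(int *a, int *buf, long lo, long mid, long hi) {
    long nl = mid - lo;
    memcpy(buf, a + lo, (size_t)nl * sizeof(int));
    long i = 0, j = mid, k = lo;
    while (i < nl && j < hi)
        a[k++] = (buf[i] <= a[j]) ? buf[i++] : a[j++];
    while (i < nl)
        a[k++] = buf[i++];   /* leftover left-run elements */
    /* any leftover right-run elements are already in place */
}
```

The safety argument is the invariant j - k = nl - i ≥ 0: every step consumes one input element and writes one output element, so the destination never overwrites unread data.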
39

Lai, Lai Win Kyi, and Min Tun Nay. "Performance Comparison of Parallel Sorting Algorithms on the Cluster of Workstations." March 24, 2011. https://doi.org/10.5281/zenodo.1059595.

Abstract:
Sorting has received the most attention among all computational tasks over the past years because sorted data is at the heart of many computations. Sorting is of additional importance to parallel computing because of its close relation to the task of routing data among processes, which is an essential part of many parallel algorithms. Many parallel sorting algorithms have been investigated for a variety of parallel computer architectures. In this paper, three parallel sorting algorithms have been implemented and compared in terms of their overall execution time. The algorithms implemented are the odd-even transposition sort, parallel merge sort and parallel rank sort. A Cluster of Workstations, or Windows Compute Cluster, has been used to compare the algorithms implemented. The C# programming language is used to develop the sorting algorithms. The MPI (Message Passing Interface) library has been selected to establish the communication and synchronization between processors. The time complexity for each parallel sorting algorithm is also mentioned and analyzed.
40

Altarawneh, Muhyidean, Umur Inan, and Basima Elshqeirat. "Empirical Analysis Measuring the Performance of Multi-threading in Parallel Merge Sort." International Journal of Advanced Computer Science and Applications 13, no. 1 (2022). http://dx.doi.org/10.14569/ijacsa.2022.0130110.

41

Ketchaya, Sirilak, and Apisit Rattanatranurak. "Parallel Multi-Deque Partition Dual-Deque Merge sorting algorithm using OpenMP." Scientific Reports 13, no. 1 (2023). http://dx.doi.org/10.1038/s41598-023-33583-4.

Abstract:
Quicksort is an important algorithm that uses the divide-and-conquer concept, and it can be run to solve any problem. The performance of the algorithm can be improved by implementing it in parallel. In this paper, a parallel sorting algorithm named the Multi-Deque Partition Dual-Deque Merge Sorting algorithm (MPDMSort) is proposed and run on a shared-memory system. This algorithm contains the Multi-Deque Partitioning phase, which is a block-based parallel partitioning algorithm, and the Dual-Deque Merging phase, which is a merging algorithm without compare-and-swap operations; small data are sorted with the sorting function of the standard template library. The OpenMP library, an application programming interface used to develop the parallel implementation of this algorithm, is employed in MPDMSort. Two computers (one with an Intel Xeon Gold 6142 CPU and the other with an Intel Core i7-11700 CPU) running Ubuntu Linux are used in this experiment. The results show that MPDMSort is faster than parallel balanced quicksort and multiway merge sort on large random-distribution data. A speedup of 13.81× and a speedup per thread of 0.86 can be obtained. Thus, developers can use these parallel partitioning and merging algorithms to improve the performance of related algorithms.
42

Gandi, Carlo, Luigi Cosenza, Marco Campetella, et al. "What can the metaverse do for urology?" Urologia Journal, June 2, 2023, 039156032311759. http://dx.doi.org/10.1177/03915603231175940.

Abstract:
Everyone talks about the metaverse, but few know what it really is. Augmented reality, virtual reality, the internet of things (IoT), 5G, blockchain: these are just some of the technologies underlying the structure of the metaverse, a sort of parallel dimension in which the physical and virtual worlds merge together, enabling users to interact through emerging technologies in order to enhance their actions and decisions. The healthcare scientific community is already looking at the metaverse as a new research frontier, a tool to improve medical knowledge and patient care. We reviewed metaverse applications and services, looking for those that could best be developed in the urological field. Urology, due to its technological nature, is a privileged laboratory for experimenting with and exploiting the applications of the metaverse both inside and outside the operating room. The revolution of the metaverse is already happening, which is why it is necessary that urologists face it as protagonists in order to lead it in the right direction.
43

Goyal, Kapil Dev, Muhammad Raihan Abbas, Vishal Goyal, and Yasir Saleem. "Forward-backward transliteration of Punjabi Gurmukhi script using n-gram language model." ACM Transactions on Asian and Low-Resource Language Information Processing, June 9, 2022. http://dx.doi.org/10.1145/3542924.

Abstract:
Transliterating the text of a language into a foreign script is called forward transliteration, and transliterating the text back into the original script is called backward transliteration. In this work, we perform both forward and backward transliteration on Punjabi. We transliterate Punjabi person names from Gurmukhi script to English Roman script, and from English Roman script back to Gurmukhi script, using an n-gram language model. We used more than one million parallel entities of person names in Gurmukhi and Roman script as the training corpus and generated English-to-Punjabi and Punjabi-to-English n-gram databases from it. To get better results, we created n-grams as long as possible, ranging from bi-grams to 30-grams. Our database contains more than 10 million n-grams, each with multiple mappings in the other script. The most challenging part is finding the mapping for a given n-gram from the parallel name entity while creating the n-gram databases. As per the orthography rules, the same combination of letters may have different pronunciations depending upon its location in the word; therefore, we categorized n-grams into starting, middle and ending n-grams and used them accordingly in the transliteration process. The transliteration process works like merge sort: we search for the longest possible n-gram in the database and split the string recursively until a match is found, and the transliterated strings are then merged back to form the final output. In English-to-Punjabi transliteration, we achieved 96% accuracy against the gold standard and 99.14% accuracy using minimum edit distance. In Punjabi-to-English transliteration, the results showed 96.85% and 99.35% accuracy for the gold standard and minimum edit distance, respectively.
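As a rough illustration only, the recursive split-and-merge lookup described above can be sketched as follows in C++. The n-gram table and the halving split are hypothetical simplifications: the actual system distinguishes starting, middle and ending n-grams and searches from the longest possible n-gram downward in far larger databases.

#include <string>
#include <unordered_map>

// Hypothetical n-gram table: maps a source-script n-gram to its
// transliteration in the target script.
using NgramTable = std::unordered_map<std::string, std::string>;

// Try the longest possible n-gram first (the whole string); if it has
// no mapping, split and recurse, then concatenate ("merge") the halves.
std::string transliterate(const std::string& s, const NgramTable& table) {
    if (s.empty()) return "";
    auto hit = table.find(s);
    if (hit != table.end()) return hit->second;  // match found: stop splitting
    if (s.size() == 1) return s;                 // no mapping: pass through
    std::size_t mid = s.size() / 2;
    return transliterate(s.substr(0, mid), table)
         + transliterate(s.substr(mid), table);
}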
APA, Harvard, Vancouver, ISO, and other styles
44

Bharti, Urmil, Anita Goel, and S. C. Gupta. "ReactiveFnJ: A choreographed model for Fork-Join Workflow in Serverless Computing." Journal of Cloud Computing 12, no. 1 (2023). http://dx.doi.org/10.1186/s13677-023-00429-3.

Full text
Abstract:
Function-as-a-Service (FaaS) is an event-based reactive programming model where functions run in ephemeral stateless containers for short durations. For building complex serverless applications, function composition is crucial to coordinate and synchronize the workflow of an application. Some serverless orchestration systems exist, but they are still primitive and do not provide inherent support for non-trivial workflows like Fork-Join. To address this gap, we propose a fully serverless and scalable design model, ReactiveFnJ, for the Fork-Join workflow. The intent of this work is to illustrate a design that is completely choreographed, reactive and asynchronous, and that represents a dynamic composition model for serverless applications based on the Fork-Join workflow. Our design uses two innovative patterns, namely Relay Composition and Master-Worker Composition, to solve execution time-out challenges. As a Proof-of-Concept (PoC), a prototypical implementation of a Split-Sort-Merge use case based on the Fork-Join workflow is discussed and evaluated. ReactiveFnJ handles embarrassingly parallel computations, and its design does not depend on any external orchestration, messaging, or queue services. ReactiveFnJ facilitates the design of fully automated pipelines for distributed data processing systems, satisfying the Serverless Trilemma in true essence. A file of any size can be processed using this effective and extensible design without facing execution time-out challenges. The proposed model is generic and can be applied to a wide range of serverless applications based on the Fork-Join workflow pattern, fostering choreographed serverless composition for complex workflows. The proposed design model is useful for software engineers and developers in industry and commercial organizations, total solution vendors, and academic researchers.
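To make the Split-Sort-Merge shape concrete, here is a hedged in-process analogue in C++ using std::async. It mirrors only the Fork-Join workflow pattern: in the actual ReactiveFnJ design each stage runs as a stateless serverless function coordinated through choreographed events, and the worker count used here is an assumption.

#include <algorithm>
#include <future>
#include <vector>

// Fork: sort fixed-size chunks concurrently. Join: merge the sorted
// chunks into the final output.
std::vector<int> split_sort_merge(const std::vector<int>& data,
                                  std::size_t workers = 4) {
    std::size_t chunk = (data.size() + workers - 1) / workers;
    std::vector<std::future<std::vector<int>>> forks;
    for (std::size_t w = 0; w < workers; ++w) {  // the "Split"/fork stage
        std::size_t lo = std::min(data.size(), w * chunk);
        std::size_t hi = std::min(data.size(), lo + chunk);
        forks.push_back(std::async(std::launch::async,
            [slice = std::vector<int>(data.begin() + lo,
                                      data.begin() + hi)]() mutable {
                std::sort(slice.begin(), slice.end());  // the "Sort" stage
                return slice;
            }));
    }
    std::vector<int> out;                        // the "Merge"/join stage
    for (auto& f : forks) {
        std::vector<int> part = f.get();
        std::vector<int> merged(out.size() + part.size());
        std::merge(out.begin(), out.end(), part.begin(), part.end(),
                   merged.begin());
        out = std::move(merged);
    }
    return out;
}

In a serverless setting, the futures would be replaced by function invocations and the join step by an event-triggered merge function, which is where the time-out and coordination challenges the paper addresses arise.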
APA, Harvard, Vancouver, ISO, and other styles
45

Shazia, Rana, and Saeed Muhammad. "PCTLHS-Matrix, Time-based Level Cuts, Operators, and unified time-layer health state Model." October 2, 2022. https://doi.org/10.5281/zenodo.7135347.

Full text
Abstract:
This article aims to introduce a unique hypersoft time-based matrix model that organizes and classifies higher-dimensional information scattered in numerous forms and vague appearances varying on specific time levels. Classical matrices, as rank-2 tensors that relate equations and variables only across rows and columns, are a limited approach to organizing higher-dimensional information. The Plithogenic Crisp Time Leveled Hypersoft Matrix (PCTLHS-Matrix) model is designed to sort higher-dimensional information flowing in parallel time layers as a combined view of events. The matrix has several parallel layers of time. Time-based level cuts, as time layers, are introduced to present an explicit view of information on certain required time levels as a separate reality. The sub-layers are formulated as sub-level cuts that represent a partial view of the event or reality. Further subdividing these sub-levels creates sub-sub-level cuts, the smallest focused partial views of the event, which serve the purpose of zooming. These level cuts are utilized to construct local aggregation operators for the PCTLHS-Matrix. The concept of timelessness is then introduced by unifying the time levels of the universe: all attributes that exist in various time levels are merged to exist in a unified time, called the unified time layer. In this way, the attributes are focused and the layers of time are merged as if there were no time; particular types of time layers are unified by these local operators to realize this timelessness. Finally, for a precise description of the model, a numerical example is constructed by assuming a classification of various health states of COVID-19 patients in a hospital. Intuitionistic fuzzy, neutrosophic, and other fuzzy-extension IndetermSoft and IndetermHyperSoft sets are presented together with their applications.
APA, Harvard, Vancouver, ISO, and other styles
46

Faroldi, Emilio. "The architecture of differences." TECHNE - Journal of Technology for Architecture and Environment, May 26, 2021, 9–15. http://dx.doi.org/10.36253/techne-11023.

Full text
Abstract:
Following in the footsteps of the protagonists of the Italian architectural debate is a mark of culture and proactivity. The synthesis deriving from the artistic-humanistic factors, combined with the technical-scientific component, comprises the very root of the process that moulds the architect as an intellectual figure capable of governing material processes in conjunction with their ability to know how to skilfully select schedules, phases and actors: these are elements that – when paired with that magical and essential compositional sensitivity – have fuelled this profession since its origins.
 The act of X-raying the role of architecture through the filter of its “autonomy” or “heteronomy”, at a time when the hybridisation of different areas of knowledge and disciplinary interpenetration is rife, facilitates an understanding of current trends, allowing us to bring the fragments of a debate carved into our culture and tradition up to date.
 As such, heteronomy – as a condition in which an acting subject receives the norm of its action from outside itself: the matrix of its meaning, coming from ancient Greek, the result of the fusion of the two terms ἕτερος éteros “different, other” and νόμος nómos “law, ordinance” – suggests the existence of a dual sentiment now pervasive in architecture: the sin of self-reference and the strength of depending on other fields of knowledge.
 Difference, interpreted as a value, and the ability to establish relationships between different points of observation become moments of a practice that values the process and method of affirming architecture as a discipline.
 The term “heteronomy”, used in opposition to “autonomy”, has – from the time of Kant onwards – taken on a positive value connected to the mutual respect between reason and creativity, exact science and empirical approach, contamination and isolation, introducing the social value of its existence every time that it returns to the forefront.
 At the 1949 conference in Lima, Ernesto Nathan Rogers spoke on combining the principle of “Architecture is an Art” with the demands of a social dimension of architecture: «Alberti, in the extreme precision of his thought, admonishes us that the idea must be translated into works and that these must have a practical and moral purpose in order to adapt harmoniously ‘to the use of men’, and I would like to point out the use of the plural of ‘men’, society. The architect is neither a passive product nor a creator completely independent of his era: society is the raw material that he transforms, giving it an appearance, an expression, and the consciousness of those ideals that, without him, would remain implicit. Our prophecy, like that of the farmer, already contains the seeds for future growth, as our work also exists between heaven and earth. Poetry, painting, sculpture, dance and music, even when expressing the contemporary, are not necessarily limited within practical terms. But we architects, who have the task of synthesising the useful with the beautiful, must feel the fundamental drama of existence at every moment of our creative process, because life continually puts practical needs and spiritual aspirations at odds with one another. We cannot reject either of these necessities, because a merely practical or moralistic position denies the full value of architecture to the same extent that a purely aesthetic position would: we must mediate one position with the other» (Rogers, 1948).
 Rogers discusses at length the relationship between instinctive forces and knowledge acquired through culture, along with his thoughts on the role played by study in an artist’s training.
 It is in certain debates that have arisen within the “International Congresses of Modern Architecture” that the topic of architecture as a discipline caught between self-sufficiency and dependence acquires a certain centrality within the architectural context: in particular, in this scenario, the theme of the “autonomy” and “heteronomy” of pre-existing features of the environment plays a role of strategic importance. Arguments regarding the meaning of form in architecture and the need for liberation from heteronomous influences did not succeed in undermining the idea of an architecture capable of influencing the governing of society as a whole, thanks to an attitude very much in line with Rogers’ own writings.
 The idea of a project as the result of the fusion of an artistic idea and pre-existing features of an environment formed the translation of the push to coagulate the antithetical forces striving for a reading of the architectural work that was at once autonomous and heteronomous, as well as linked to geographical, cultural, sociological and psychological principles.
 The CIAM meeting in Otterlo was attended by Ignazio Gardella, Ernesto Nathan Rogers, Vico Magistretti and Giancarlo De Carlo as members of the Italian contingent: the architects brought one project each to share with the conference and comment on as a manifesto. Ernesto Nathan Rogers, who presented the Velasca Tower, and Giancarlo De Carlo, who presented a house in Matera in the Spine Bianche neighbourhood, were openly criticised as none of the principles established by the CIAM were recognisable in their work any longer, and De Carlo’s project represented a marked divergence from a consolidated method of designing and building in Matera.
 In this cultural condition, Giancarlo De Carlo – in justifying the choices he had made – even went so far as to say: «my position was not at all a flight from architecture, for example in sociology. I cannot stand those who, paraphrasing what I have said, dress up as politicians or sociologists because they are incapable of creating architecture. Architecture is – and cannot be anything other than – the organisation and form of physical space. It is not autonomous, it is heteronomous» (De Carlo, 2001).
 Even more so than in the past, it is not possible today to imagine an architecture encapsulated entirely within its own enclosure, autoimmune, averse to any contamination or relationships with other disciplinary worlds: architecture is the world and the world is the sum total of our knowledge.
 Architecture triggers reactions and phenomena: it is not solely and exclusively the active and passive product of a material work created by man. «We believed in the heteronomy of architecture, in its necessary dependence on the circumstances that produce it, in its intrinsic need to exist in harmony with history, with the happenings and expectations of individuals and social groups, with the arcane rhythms of nature. We denied that the purpose of architecture was to produce objects, and we argued that its fundamental role was to trigger processes of transformation of the physical environment that are capable of contributing to the improvement of the human condition» (De Carlo, 2001).
 Productive and cultural reinterpretations place the discipline of architecture firmly at the centre of the critical reconsideration of places for living and working. Consequently, new interpretative models continue to emerge which often highlight the instability of built architecture with the lack of a robust theoretical apparatus, demanding the sort of “technical rationality” capable of restoring the centrality of the act of construction, through the contribution of actions whose origins lie precisely in other subject areas.
 Indeed, the transformation of the practice of construction has resulted in direct changes to the structure of the nature of the knowledge of it, to the role of competencies, to the definition of new professional skills based on the demands emerging not just from the production system, but also from the socio-cultural system. The architect cannot disregard the fact that the making of architecture does not burn out by means of some implosive dynamic; rather, it is called upon to engage with the multiple facets and variations that the cognitive act of design itself implies, bringing into play a theory of disciplines which – to varying degrees and according to different logics – offer their significant contribution to the formation of the design and, ultimately, the work.
 As Álvaro Siza claims, «The architect is not a specialist. The sheer breadth and variety of knowledge that practicing design encompasses today – its rapid evolution and progressive complexity – in no way allow for sufficient knowledge and mastery. Establishing connections – pro-jecting [from Latin proicere, ‘to stretch out’] – is their domain, a place of compromise that is not tantamount to conformism, of navigation of the web of contradictions, the weight of the past and the weight of the doubts and alternatives of the future, aspects that explain the lack of a contemporary treatise on architecture. The architect works with specialists. The ability to chain things together, to cross bridges between fields of knowledge, to create beyond their respective borders, beyond the precarity of inventions, requires a specific education and stimulating conditions. [...] As such, architecture is risk, and risk requires impersonal desire and anonymity, starting with the merging of subjectivity and objectivity. In short, a gradual distancing from the ego. Architecture means compromise transformed into radical expression, in other words, a capacity to absorb the opposite and overcome contradiction. Learning this requires an education in search of the other within each of us» (Siza, 2008).
 We are seeing the coexistence of contrasting, often extreme, design trends aimed at recementing the historical and traditional mould of construction by means of the constant reproposal of the characteristics of “persistence” that long-established architecture, by its very nature, promotes, and at decrypting the evolutionary traits of architecture – markedly immaterial nowadays – that society promotes as phenomena of everyday living.
 Speed, temporariness, resilience, flexibility: these are just a few fragments.
 In other words, we indicate a direction which immediately composes and anticipates innovation as a characterising element, describing its stylistic features, materials, languages and technologies, and only later on do we tend to outline the space that these produce: what emerges is a largely anomalous path that goes from “technique” to “function” – by way of “form” – denying the circularity of the three factors at play.
 The threat of a short-circuit deriving from discourse that exceeds action – in conjunction with a push for standardisation aimed at asserting the dominance of construction over architecture, once again echoing the ideas posited by Rogers – may yet be able to find a lifeline cast through the attempt to merge figurative research with technology in a balanced way, in the wake of the still-relevant example of the Bauhaus or by emulating the thinking of certain masters of modern Italian architecture who worked during that post-war period so synonymous with physical – and, at the same time, moral – reconstruction.
 These architectural giants’ aptitude for technical and formal transformation and adaptation can be held up as paradigmatic examples of methodological choice consistent with their high level of mastery over the design process and the rhythm of its phases. In all this exaltation of the outcome, the power of the process is often left behind in a haze: in the uncritical celebration of the architectural work, the method seems to dissolve entirely into the finished product.
 Technical innovation and disciplinary self-referentiality would seem to deny the concepts of continuity and transversality by means of a constant action of isolation and an insufficient relationship with itself: conversely, the act of designing, as an operation which involves selecting elements from a vast heritage of knowledge, cannot exempt itself from dealing in the variables of a functional, formal, material and linguistic nature – all of such closely intertwined intents – that have over time represented the energy of theoretical formulation and of the works created.
 For years, the debate in architecture has concentrated on the synergistic or contrasting dualism between cultural approaches linked to venustas and firmitas. Kenneth Frampton, with regard to the interpretative pair of “tectonics” and “form”, notes the existence of a dual trend that is both identifiable and contrasting: namely the predisposition to favour the formal sphere as the predominant one, rejecting all implications on the construction, on the one hand; and the tendency to celebrate the constructive matrix as the generator of the morphological signature – emphasised by the ostentation of architectural detail, including that of a technological matrix – on the other.
 The design of contemporary architecture is enriched with sprawling values that are often fundamental, yet at times even damaging to the successful completion of the work: it should identify the moment of coagulation within which the architect goes in pursuit of balance between all the interpretative categories that make it up, espousing the Vitruvian meaning, according to which practice is «the continuous reflection on utility» and theory «consists of being able to demonstrate and explain the things made with technical ability in terms of the principle of proportion» (Vitruvius Pollio, 15 BC).
 Architecture will increasingly be forced to demonstrate how it represents an applied and intellectual activity of a targeted synthesis, of a complex system within which it is not only desirable, but indeed critical, for the cultural, social, environmental, climatic, energy-related, geographical and many other components involved in it to interact proactively, together with the more spatial, functional and material components that are made explicit in the final construction itself through factors borrowed from neighbouring field that are not endogenous to the discipline of architecture alone.
 Within a unitary vision that exists parallel to the transcalarity that said vision presupposes, the technology of architecture – as a discipline often called upon to play the role of a collagen of skills, binding them together – acts as an instrument of domination within which science and technology interpret the tools for the translation of man’s intellectual needs, expressing the most up-to-date principles of contemporary culture.
 Within the concept of tradition – as inferred from its evolutionary character – form, technique and production, in their historical “continuity” and not placed in opposition to one other, make up the fields of application by which, in parallel, research proceeds with a view to ensuring a conforming overall design. The “technology of architecture” and “technological design” give the work of architecture its personal hallmark: a sort of DNA to be handed down to future generations, in part as a discipline dedicated to amalgamating the skills and expertise derived from other dimensions of knowledge.
 In the exercise of design, the categories of urban planning, composition, technology, structure and systems engineering converge, the result increasingly accentuated by multidisciplinary nuances in search of a sense of balance between the parts: a setup founded upon simultaneity and heteronomous logic in the study of variables, by means of translations, approaches and skills as expressions of multifaceted identities. «Architects can influence society with their theories and works, but they are not capable of completing any such transformation on their own, and end up being the interpreters of an overbearing historical reality under which, if the strongest and most honest do not succumb, that therefore means that they alone represent the value of a component that is algebraically added to the others, all acting in the common field» (Rogers, 1951).
 Construction, in this context, identifies the main element of the transmission of continuity in architecture, placing the “how” at the point of transition between past and future, rather than making it independent of any historical evolution. Architecture determines its path within a heteronomous practice of construction through an effective distinction between the strength of the principles and codes inherent to the discipline – long consolidated thanks to sedimented innovations – and the energy of experimentation in its own right.
 Architecture will have to seek out and affirm its own identity, its validity as a discipline that is at once scientific and poetic, its representation in the harmonies, codes and measures that history has handed down to us, along with the pressing duty of updating them in a way that is long overdue. The complexity of the architectural field occasionally expresses restricted forms of treatment bound to narrow disciplinary areas or, conversely, others that are excessively frayed, tending towards an eclecticism so vast that it prevents the tracing of any discernible cultural perimeter.
 In spite of the complex phenomenon that characterises the transformations that involve the status of the project and the figure of the architect themselves, it is a matter of urgency to attempt to renew the interpretation of the activity of design and architecture as a coherent system rather than a patchwork of components. «Contemporary architecture tends to produce objects, even though its most concrete purpose is to generate processes. This is a falsehood that is full of consequences because it confines architecture to a very limited band of its entire spectrum; in doing so, it isolates it, exposing it to the risks of subordination and delusions of grandeur, pushing it towards social and political irresponsibility. The transformation of the physical environment passes through a series of events: the decision to create a new organised space, detection, obtaining the necessary resources, defining the organisational system, defining the formal system, technological choices, use, management, technical obsolescence, reuse and – finally – physical obsolescence. This concatenation is the entire spectrum of architecture, and each link in the chain is affected by what happens in all the others.
 It is also the case that the cadence, scope and intensity of the various bands can differ according to the circumstances and in relation to the balances or imbalances within the contexts to which the spectrum corresponds. Moreover, each spectrum does not conclude at the end of the chain of events, because the signs of its existence – ruins and memory – are projected onto subsequent events. Architecture is involved with the entirety of this complex development: the design that it expresses is merely the starting point for a far-reaching process with significant consequences» (De Carlo, 1978).
 The contemporary era proposes the dialectic between specialisation, the coordination of ideas and actions, the relationship between actors, phases and disciplines: the practice of the organisational culture of design circumscribes its own code in the coexistence and reciprocal exploitation of specialised fields of knowledge and the discipline of synthesis that is architecture.
 With the revival of the global economy on the horizon, the dematerialisation of the working practice has entailed significant changes in the productive actions and social relationships that coordinate the process. Despite a growing need to implement skills and means of coordination between professional actors, disciplinary fields and sectors of activity, architectural design has become the emblem of the action of synthesis. This is a representation of society which, having developed over the last three centuries, from the division of social sciences that once defined it as a “machine”, an “organism” and a “system”, is now defined by the concept of the “network” or, more accurately, by that of the “system of networks”, in which a person’s desire to establish relationships places them within a multitude of social spheres.
 The “heteronomy” of architecture, between “hybridisation” and “contamination of knowledge”, is to be seen not only an objective fact, but also, crucially, as a concept aimed at providing the discipline with new and broader horizons, capable of putting it in a position of serenity, energy and courage allowing it to tackle the challenges that the cultural, social and economic landscape is increasingly throwing at the heart of our contemporary world.
APA, Harvard, Vancouver, ISO, and other styles
47

Wallace, Derek. "'Self' and the Problem of Consciousness." M/C Journal 5, no. 5 (2002). http://dx.doi.org/10.5204/mcj.1989.

Full text
Abstract:
Whichever way you look at it, self is bound up with consciousness, so it seems useful to review some of the more significant existing conceptions of this relationship. A claim by Mikhail Bakhtin can serve as an anchoring point for this discussion. He firmly predicates the formation of self not just on the existence of an individual consciousness, but on what might be called a double or social (or dialogic) consciousness. Summarising his argument, Pam Morris writes: 'A single consciousness could not generate a sense of its self; only the awareness of another consciousness outside the self can produce that image.' She goes on to say that, 'Behind this notion is Bakhtin's very strong sense of the physical and spatial materiality of bodily being,' and quotes directly from Bakhtin's essay as follows: This other human being whom I am contemplating, I shall always see and know something that he, from his place outside and over against me, cannot see himself: parts of his body that are inaccessible to his own gaze (his head, his face and its expression), the world behind his back . . . are accessible to me but not to him. As we gaze at each other, two different worlds are reflected in the pupils of our eyes . . . to annihilate this difference completely, it would be necessary to merge into one, to become one and the same person. This ever--present excess of my seeing, knowing and possessing in relation to any other human being, is founded in the uniqueness and irreplaceability of my place in the world. (Bakhtin in Morris 6). Recent investigations in neuroscience and the philosophy of mind lay down a challenge to this social conception of the self. Notably, it is a challenge that does not involve the restoration of any variant of Cartesian rationalism; indeed, it arguably over--privileges rationalism's subjective or phenomenological opposite. 'Self' in this emerging view is a biologically generated but illusory construction, an effect of the operation of what are called 'neural correlates of consciousness' (NCC). Very briefly, an NCC refers to the distinct pattern of neurochemical activity, a 'neural representational system' -- to some extent observable by modern brain--imaging equipment -- that corresponds to a particular configuration of sense--phenomena, or 'content of consciousness' (a visual image, a feeling, or indeed a sense of self). Because this science is still largely hypothetical, with many alternative terms and descriptions, it would be better in this limited space to focus on one particular account -- one that is particularly well developed in the area of selfhood and one that resonates with other conceptions included in this discussion. Thomas Metzinger begins by postulating the existence within each person (or 'system' in his terms) of a 'self--model', a representation produced by neural activity -- what he calls a 'neural correlate of self--consciousness' -- that the individual takes to be the actual self, or what Metzinger calls the 'phenomenal self'. 'A self--model is important,' Metzinger says, 'in enabling a system to represent itself to itself as an agent' (293). The individual is able to maintain this illusion because 'the self--model is the only representational structure that is anchored in the brain by a continuous source of internally generated input' (297).
In a manner partly reminiscent of Bakhtin, he continues: 'The body is always there, and although its relational properties in space and in movement constantly change, the body is the only coherent perceptual object that constantly generates input.' The reason why the individual is able to jump from the self--model to the phenomenal self in the first place is because: We are systems that are not able to recognise their subsymbolic self--model as a model. For this reason, we are permanently operating under the conditions of a 'naïve--realistic self--misunderstanding': We experience ourselves as being in direct and immediate epistemic contact with ourselves. What we have in the past simply called a 'self' is not a non--physical individual, but only the content of an ongoing dynamical process – the process of transparent self—modeling. (Metzinger 299) The question that nonetheless arises is why it should be concluded that this self--model emerges from subjective neural activity and not, say, from socialisation. Why should a self--model be needed in the first place? Metzinger's response is to say that there is good evidence 'for some kind of innate 'body prototype'' (298), and he refers to research that shows that even children born without limbs develop self--models which sometimes include limbs, or report phantom sensations in limbs that have never existed. To me, this still leaves open the possibility that such children are modelling their body image on strong identification with human others. But be that as it may, one of the things that remains unclear after this relatively rich account of contemporary or scientific phenomenology is the extent to which 'neural consciousness' is or can be supplemented by other kinds of consciousness, or indeed whether neural consciousness can be overridden by the 'self' acting on the basis of these other kinds of consciousness. The key stake in Metzinger's account is 'subjectivity'. The reason why the neural correlate of self--consciousness is so important to him is: 'Only if we find the neural and functional correlates of the phenomenal self will we be able to discover a more general theoretical framework into which all data can fit. Only then will we have a chance to understand what we are actually talking about when we say that phenomenal experience is a subjective phenomenon' (301). What other kinds of consciousness might there be? It is significant that, not only do NCC exponents have little to say about the interaction with other people, they rarely mention language, and they are unanimously and emphatically of the opinion that the thinking or processing that takes place in consciousness is not dependent on language, or indeed any signifying system that we know of (though conceivably, it occurs to me, the neural correlates may signify to, or 'call up', each other). And they show little 'consciousness' that a still influential body of opinion (informed latterly by post--structuralist thinking) has argued for the consciousness shaping effects of 'discourse' -- i.e. for socially and culturally generated patterns of language or other signification to order the processing of reality. We could usefully coin the term 'verbal correlates of consciousness' (VCC) to refer to these patterns of signification (words, proverbs, narratives, discourses). 
Again, however, the same sorts of questions apply, since few discourse theorists mention anything like neuroscience: To what extent is verbal consciousness supplemented by other forms of consciousness, including neural consciousness? These questions may never be fully answerable. However, it is interesting to work through the idea that NCC and VCC both exist and can be in some kind of relation even if the precise relationship is not measurable. This indeed is close to the case that Charles Shepherdson makes for psychoanalysis in attempting to retrieve it from the misunderstanding under which it suffers today: We are now familiar with debates between those who seek to demonstrate the biological foundations of consciousness and sexuality, and those who argue for the cultural construction of subjectivity, insisting that human life has no automatically natural form, but is always decisively shaped by contingent historical conditions. No theoretical alternative is more widely publicised than this, or more heavily invested today. And yet, this very debate, in which 'nature' and 'culture' are opposed to one another, amounts to a distortion of psychoanalysis, an interpretive framework that not only obscures its basic concepts, but erodes the very field of psychoanalysis as a theoretically distinct formation (2--3). There is not room here for an adequate account of Shepherdson's recuperation of psychoanalytic categories. A glimpse of the stakes involved is provided by Shepherdson's account, following Eugenie Lemoine--Luccione, of anorexia, which neither biomedical knowledge nor social constructionism can adequately explain. The further fact that anorexia is more common among women of the same family than in the general population, and among women rather than men, but in neither case exclusively so, thereby tending to rule out a genetic factor, allows Shepherdson to argue: [A]norexia can be understood in terms of the mother--daughter relation: it is thus a symbolic inheritance, a particular relation to the 'symbolic order', that is transmitted from one generation to another . . . we may add that this relation to the 'symbolic order' [which in psychoanalytic theory is not coextensive with language] is bound up with the symbolisation of sexual difference. One begins to see from this that the term 'sexual difference' is not used biologically, but also that it does not refer to general social representations of 'gender,' since it concerns a more particular formation of the 'subject' (12). An intriguing, and related, possibility, suggested by Foucault, is that NCC and VCC (or in Foucault's terms the 'visible' and the 'articulable'), operate independently of each other – that there is a 'disjunction' (Deleuze 64) or 'dislocation' (Shepherdson 166) between them that prevents any dialectical relation. Clearly, for Foucault, the lack of dialectical relation between the two modes does not mean that both are not at all times equally functional. But one can certainly speculate that, increasingly under postmodernity and media saturation, the verbal (i.e. the domain of signification in general) is influential. And if linguistic formations -- discourses, narratives, etc. -- can proliferate and feed on each other unconstrained by other aspects of reality, we get the sense of language 'running away with itself' and, at least for a time, becoming divorced from a more complete sense of reality. (This of course is basically the argument of Baudrillard.) 
The reverse may also be possible, in certain periods, although the idea that language could have no mediating effect at all on the production of reality (just inconsequential fluff on the surface of things) seems far--fetched in the wake of so much postmodern and media theory. However, the notion is consistent with the theories of hard--line materialists and genetic determinists. But we should at least consider the possibility that some sort of shaping interaction between NCC and VCC, without implicating the full conceptual apparatus of psychoanalysis, is continuously occurring. This possibility is, for me, best realised by Jacques Derrida when he writes of an irreducible interweaving of consciousness and language (the latter for Derrida being a cover term for any system of signification). This interweaving is such that the significatory superstructure 'reacts upon' the 'substratum of non--expressive acts and contents', and the name for this interweaving is 'text' (Mowitt 98). A further possibility is that provided by Pierre Bourdieu's notion of habitus -- the socially inherited schemes of perception and selection, imparted by language and example, which operate for the most part below the level of consciousness but are available to conscious reflection by any individual lucky enough to learn how to recognise that possibility. If the subjective representations of NCC exist, this habitus can be at best only partial; something denied by Bourdieu whose theory of individual agency is founded in what he has referred to as 'the relation between two states of the social' – i.e. 'between history objectified in things, in the form of institutions, and history incarnate in the body, in the form of that system of durable dispositions I call habitus' (190). At the same time, much of Bourdieu's thinking about the habitus seems as though it could be consistent with the kind of predictable representations that might be produced by NCC. For example, there are the simple oppositions that structure much perception in Bourdieu's account. These range from the obvious phenomenological ones (dark/light; bright/dull; male/female; hard/soft, etc.) through to the more abstract, often analogical or metaphorical ones, such as those produced by teachers when assessing their students (bright/dull again; elegant/clumsy, etc.). It seems possible that NCC could provide the mechanism or realisation for the representation, storage, and reactivation of impressions constituting a social model--self. However, an entirely different possibility remains to be considered – which perhaps Bourdieu is also getting at – involving a radical rejection of both NCC and VCC. Any correlational or representational theory of the relationship between a self and his/her environment -- which, according to Charles Altieri, includes the anti--logocentrism of Derrida -- assumes that the primary focus for any consciousness is the mapping and remapping of this relationship rather than the actions and purposes of the agent in question. Referring to the later philosophy of Wittgenstein, Altieri argues: 'Conciousness is essentially not a way of relating to objects but of relating to actions we learn to perform . . . We do not usually think about objects, but about the specific form of activity which involves us with these objects at this time' (233). Clearly, there is not yet any certainty in the arguments provided by neuroscience that neural activity performs a representational role. 
Is it not, then, possible that this activity, rather than being a 'correlate' of entities, is an accompaniment to, a registration of, action that the rest of the body is performing? In this view, self is an enactment, an expression (including but not restricted to language), and what self--consciousness is conscious of is this activity of the self, not the self as entity. In a way that again returns us towards Bakhtin, Altieri writes: 'From an analytical perspective, it seems likely that our normal ways of acting in the world provide all the criteria we need for a sense of identity. As Sidney Shoemaker has shown, the most important source of the sense of our identity is the way we use the spatio--temporal location of our body to make physical distinctions between here and there, in front and behind, and so on' (234). Reasonably consistent with the Wittgensteinian view -- in its focus on self--activity -- is that contemporary theorisation of the self that compares in influence with that posed by neuroscience. This is the self avowedly constructed by networked computer technology, as described by Mark Poster: [W]hat has occurred in the advanced industrial societies with increasing rapidity . . . is the dissemination of technologies of symbolisation, or language machines, a process that may be described as the electronic textualisation of daily life, and the concomitant transformations of agency, transformations of the constitution of individuals as fixed identities (autonomous, self--regulating, independent) into subjects that are multiple, diffuse, fragmentary. The old (modern) agent worked with machines on natural materials to form commodities, lived near other workers and kin in urban communities, walked to work or traveled by public transport, and read newspapers but engaged as a communicator mostly in face--to--face relations. The new (postmodern) agent works mostly on symbols using computers, lives in isolation from other workers and kin, travels to work by car, and receives news and entertainment from television. . . . Individuals who have this experience do not stand outside the world of objects, observing, exercising rational faculties and maintaining a stable character. The individuals constituted by the new modes of information are immersed and dispersed in textualised practices where grounds are less important than moves. (44--45) Interestingly, Metzinger's theorisation of the model--self lends itself to the self--mutability -- though not the diffusion -- favoured by postmodernists like Poster. [I]t is . . . well conceivable that a system generates a number of different self--models which are functionally incompatible, and therefore modularised. They nevertheless could be internally coherent, each endowed with its own characteristic phenomenal content and behavioral profile. . . this does not have to be a pathological situation. Operating under different self--models in different situational contexts may be biologically as well as socially adaptive. Don't we all to some extent use multiple personalities to cope efficiently with different parts of our lives? (295--6) Poster's proposition is consistent with that of many in the humanities and social sciences today, influenced variously by postmodernism and social constructionism.
What I believe remains at issue about his account is that it exchanges one form of externally constituted self ('fixed identity') for another (that produced by the 'modes of information'), and therefore remains locked in a logic of deterministic constitution. (There is a parallel here with Altieri's point about Derrida's inability to escape representation.) Furthermore, theorists like Poster may be too quickly generalising from the experience of adults in 'textualised environments'. Until such time as human beings are born directly into virtual reality environments, each will, for a formative period of time, experience the world in the way described by Bakhtin -- through 'a unified perception of bodily and personal being . . . characterised . . . as a loving gift mutually exchanged between self and other across the borderzone of their two consciousnesses' (cited in Morris 6). I suggest it is very unlikely that this emergent sense of being can ever be completely erased even when people subsequently encounter each other in electronic networked environments. It is clearly not the role of a brief survey like this to attempt any resolution of these matters. Indeed, my review has made all the more apparent how far from being settled the question of consciousness, and by extension the question of selfhood, remains. Even the classical notion of the homunculus (the 'little inner man' or the 'ghost in the machine') has been put back into play with Francis Crick and Christof Koch's (2000) neurobiological conception of the 'unconscious homunculus'. The balance of contemporary evidence and argument suggests that the best thing to do right now is to keep the questions open against any form of reductionism -- whether social or biological. One way to do this is to explore the notions of self and consciousness as implicated in ongoing processes of complex co--adaptation between biology and culture -- or their individual level equivalents, brain and mind (Taylor Ch. 7). References Altieri, C. "Wittgenstein on Consciousness and Language: a Challenge to Derridean Literary Theory." Wittgenstein, Theory and the Arts. Ed. Richard Allen and Malcolm Turvey. New York: Routledge, 2001. Bourdieu, P. In Other Words: Essays Towards a Reflexive Sociology. Trans. Matthew Adamson. Stanford: Stanford University Press, 1990. Crick, F. and Koch, C. "The Unconscious Homunculus." Neural Correlates of Consciousness: Empirical and Conceptual Questions. Ed. Thomas Metzinger. Cambridge, Mass.: MIT Press, 2000. Deleuze, G. Foucault. Trans. Sean Hand. Minneapolis: University of Minnesota Press, 1988. Metzinger, T. "The Subjectivity of Subjective Experience: A Representationalist Analysis of the First-Person Perspective." Neural Correlates of Consciousness: Empirical and Conceptual Questions. Ed. Thomas Metzinger. Cambridge, Mass.: MIT Press, 2000. Morris, P. (ed.). The Bakhtin Reader: Selected Writings of Bakhtin, Medvedev, Voloshinov. London: Edward Arnold, 1994. Mowitt, J. Text: The Genealogy of an Interdisciplinary Object. Durham: Duke University Press, 1992. Poster, M. Cultural History and Modernity: Disciplinary Readings and Challenges. New York: Columbia University Press, 1997. Shepherdson, C. Vital Signs: Nature, Culture, Psychoanalysis. New York: Routledge, 2000. Taylor, M. C. The Moment of Complexity: Emerging Network Culture. Chicago: University of Chicago Press, 2001.
APA, Harvard, Vancouver, ISO, and other styles
48

Burns, Alex. "Select Issues with New Media Theories of Citizen Journalism." M/C Journal 10, no. 6 (2008). http://dx.doi.org/10.5204/mcj.2723.

Full text
Abstract:
“Journalists have to begin a new type of journalism, sometimes being the guide on the side of the civic conversation as well as the filter and gatekeeper.” (Kolodzy 218) “In many respects, citizen journalism is simply public journalism removed from the journalism profession.” (Barlow 181) 1. Citizen Journalism — The Latest Innovation? New Media theorists such as Dan Gillmor, Henry Jenkins, Jay Rosen and Jeff Howe have recently touted Citizen Journalism (CJ) as the latest innovation in 21st century journalism. “Participatory journalism” and “user-driven journalism” are other terms to describe CJ, which its proponents argue is a disruptive innovation (Christensen) to the agenda-setting media institutions, news values and “objective” reportage. In this essay I offer a “contrarian” view, informed by two perspectives: (1) a three-stage model of theory-building (Carlile & Christensen) to evaluate the claims made about CJ; and (2) self-reflexive research insights (Etherington) from editing the US-based news site Disinformation between November 1999 and February 2008. New media theories can potentially create “cognitive dissonance” (Festinger) when their explanations of CJ practices are compared with what actually happens (Feyerabend). First I summarise Carlile & Christensen’s model and the dangers of “bad theory” (Ghoshal). Next I consider several problems in new media theories about CJ: the notion of ‘citizen’, new media populism, parallels in event-driven and civic journalism, and mergers and acquisitions. Two ‘self-reflexive’ issues are considered: ‘pro-ams’ or ‘professional amateurs’ as a challenge to professional journalists, and CJ’s deployment in new media operations and production environments. Finally, some exploratory questions are offered for future researchers. 2. An Evaluative Framework for New Media Theories on Citizen Journalism Paul Carlile and Clayton M. Christensen’s model offers one framework with which to evaluate new media theories on CJ. This framework is used below to highlight select issues and gaps in CJ’s current frameworks and theories. Carlile & Christensen suggest that robust theory-building emerges via three stages: Descriptive, Categorisation and Normative (Carlile & Christensen). There are three sub-stages in Descriptive theory-building; namely, the observation of phenomena, inductive classification into schemas and taxonomies, and correlative relationships to develop models (Carlile & Christensen 2-5). Once causation is established, Normative theory evolves through deductive logic which is subject to Kuhnian paradigm shifts and Popperian falsifiability (Carlile & Christensen 6). Its proponents situate CJ as a Categorisation or new journalism agenda that poses a Normative challenge and Kuhnian paradigm shift to traditional journalism. Existing CJ theories jump from the Descriptive phase of observations like “smart mobs” in Japanese youth subcultures (Rheingold) to make broad claims for Categorisation, such as that IndyMedia, blogs and wiki publishing systems are new media alternatives to traditional media. CJ theories then underpin normative beliefs, values and worldviews. Correlative relationships are also used to differentiate CJ from the demand side of microeconomic analysis, from the top-down editorial models of traditional media outlets, and to adopt a vanguard stance.
To support this, CJ proponents cite research on emergent collective behaviour such as the “wisdom of crowds” hypothesis (Surowiecki) or peer-to-peer network “swarms” (Pesce) to provide scientific justification for their Normative theories. However, further evaluative research is needed for three reasons: the emergent collective behaviour hypothesis may not actually inform CJ practices, existing theories may have “correlation not cause” errors, and the link may be due to citation network effects between CJ theorists. Collectively, this research base also frames CJ as an “ought to” Categorisation and then proceeds to Normative theory-building (Carlile & Christensen 7). However, I argue below that this Categorisation may be premature: its observations and correlative relationships might reinforce a ‘weak’ Normative theory with limited generalisation. CJ proponents seem to imply that it can be applied anywhere and under any condition—a “statement of causality” that almost makes it a fad (Carlile & Christensen 8). CJ that relies on Classification and Normative claims will be problematic without a strong grounding in Descriptive observation. To understand what’s potentially at stake for CJ’s future, consider the parallel debate about curricula renewal for the Masters of Business Administration in the wake of high-profile corporate collapses such as Enron, Worldcom, HIH and OneTel. The MBA evolved as a sociological and institutional construct to justify management as a profession that is codified, differentiated and has entry barriers (Khurana). This process might partly explain the pushback that some media professionals have to CJ as one alternative. MBA programs faced criticism if they had student cohorts with little business know-how or experiential learning (Mintzberg). Enron’s collapse illustrated the ethical dilemmas and unintended consequences that occurred when “bad theories” were implemented (Ghoshal). Professional journalists are aware of this: MBA-educated managers challenged the “craft” tradition in the early 1980s (Underwood). This meant that journalism’s ‘self-image’ (Morgan; Smith) is intertwined with managerial anxieties about media conglomerates in highly competitive markets. Ironically, as noted below, Citizen Journalists who adopt a vanguard position vis-a-vis media professionals step into a more complex game with other players. However, current theories have a naïve idealism about CJ’s promise of normative social change in the face of Machiavellian agency in business, the media and politics. 3. Citizen Who? Who is the “citizen” in CJ? What is their self-awareness as a political agent? CJ proponents who use the ‘self-image’ of ‘citizen’ draw on observations from the participatory vision of open source software, peer-to-peer networks, and case studies such as Howard Dean’s 2004 bid to be the Democrat Party nominee in the US Presidential election campaign (Trippi). Recent theorists note Alexander Hamilton’s tradition of civic activism (Barlow 178) which links contemporary bloggers with the Federalist Papers and early newspaper pamphlets. One unsurfaced assumption in these observations and correlations is that most bloggers will adopt a coherent political philosophy as informed citizens: a variation on Lockean utilitarianism, Rawlsian liberalism or Nader consumer activism.
To date there is little discussion about how political philosophy could deepen CJ’s ‘self-image’: how to critically evaluate sources, audit and investigation processes, or strategies to deal with elites, deterrence and power. For example, although bloggers kept Valerie Plame’s ‘outing’ as a covert intelligence operative highly visible in the issues-attention cycle, it was agenda-setting media like The New York Times who the Bush Administration targeted to silence (Pearlstine). To be viable, CJ needs to evolve beyond a new media populism, perhaps into a constructivist model of agency, norms and social change (Finnemore). 4. Citizen Journalism as New Media Populism Several “precursor trends” foreshadowed CJ, notably the mid-1990s interest in “cool-hunting” by new media analysts and subculture marketeers (Gibson; Gladwell). Whilst this audience focus waned with the 1995-2000 dotcom bubble, it resurfaced in CJ and publisher Tim O’Reilly’s Web 2.0 vision. Thus, CJ might be viewed as new media populism that has flourished with the Web 2.0 boom. Yet if the boom becomes a macroeconomic bubble (Gross; Spar) then CJ could be written off as a “silver bullet” that ultimately failed to deliver on its promises (Brooks, Jr.). The reputations of uncritical proponents who adopted a “true believer” stance would also be damaged (Hoffer). This risk is evident if CJ is compared with a parallel trend that shares its audience focus and populist view: day traders and technical analysts who speculate on financial markets. This parallel trend provides an alternative discipline in which the populism surfaced in an earlier form (Carlile & Christensen 12). Fidelity’s Peter Lynch argues that stock pickers can use their Main Street knowledge to beat Wall Street by exploiting information asymmetries (Lynch & Rothchild). Yet Lynch’s examples came from the mid-1970s to early 1980s when indexed mutual fund strategies worked, before deregulation and macroeconomic volatility. A change in the Web 2.0 boom might similarly trigger a reconsideration of Citizen Journalism. Hedge fund maven Victor Niederhoffer contends that investors who rely on technical analysis are practicing a Comtean religion (Niederhoffer & Kenner 72-74) instead of Efficient Market Hypothesis traders who use statistical arbitrage to deal with ‘random walks’ or Behavioural Finance experts who build on Amos Tversky and Daniel Kahneman’s Prospect Theory (Kahneman & Tversky). Niederhoffer’s deeper point is that technical analysts’ belief that the “trend is your friend” is no match for the other schools, despite a mini-publishing industry and computer trading systems. There are also ontological and epistemological differences between the schools. Similarly, CJ proponents who adopt a ‘Professional Amateur’ or ‘Pro-Am’ stance (Leadbeater & Miller) may face a similar gulf when making comparisons with professional journalists and the production environments in media organisations. CJ also thrives as new media populism because of institutional vested interests. When media conglomerates cut back on cadetships and internships, CJ might fill the market demand as one alternative. New media programs at New York University and others can use CJ to differentiate themselves from “hyperlocal” competitors (Christensen; Slywotzky; Christensen, Curtis & Horn). This transforms CJ from new media populism to new media institution.
5. Parallels: Event-driven & Civic Journalism For new media programs, CJ builds on two earlier traditions: the Event-driven journalism of crises like the 1991 Gulf War (Wark) and the Civic Journalism school that emerged in the 1960s social upheavals. Civic Journalism’s awareness of minorities and social issues provides the character ethic and political philosophy for many Citizen Journalists. Jay Rosen and others suggest that CJ is the next-generation heir to Civic Journalism, tracing a thread from the 1968 Chicago Democratic Convention to IndyMedia’s coverage of the 1999 “Battle in Seattle” (Rosen). Rosen’s observation could yield an interesting historiography or genealogy. Events such as the Southeast Asian tsunami on 26 December 2004 or Al Qaeda’s London bombings on 7 July 2005 are cited as examples of CJ as event-driven journalism and “pro-am collaboration” (Kolodzy 229-230). Having covered these events and Al Qaeda’s attacks on 11th September 2001, I have a slightly different view: this was more a variation on “first responder” status and handicam video footage that journalists have sourced for the past three decades when covering major disasters. This different view means that the “salience of categories” used to justify CJ and “pro-am collaboration” in these events does not completely hold. Furthermore, when Citizen Journalism proponents tout Flickr and Wikipedia as models of real-time media they are building on a broader phenomenon that includes CNN’s Gulf War coverage and Bloomberg’s dominance of financial news (Loomis). 6. The Mergers & Acquisitions Scenario CJ proponents often express anxieties about the resilience of their outlets in the face of predatory venture capital firms who initiate Mergers & Acquisitions (M&A) activities. Ironically, these venture capital firms have core competencies and expertise in the event-driven infrastructure and real-time media that CJ aspires to. Sequoia Capital and other venture capital firms have evaluative frameworks that likely surpass Carlile & Christensen in sophistication, and they exploit parallels, information asymmetries and market populism. Furthermore, although venture capital firms such as Union Street Ventures have funded Web 2.0 firms, they are absent from the explanations of some theorists, whose examples of Citizen Journalism and Web 2.0 success may be the result of survivorship bias. Thus, the venture capital market remains an untapped data source for researchers who want to evaluate the impact of CJ outlets and institutions. The M&A scenario further problematises CJ in several ways. First, CJ is framed as “oppositional” to traditional media, yet this may be used as a stratagem in a game theory framework with multiple stakeholders. Drexel Burnham Lambert’s financier Michael Milken used market populism to sell ‘high-yield’ or ‘junk’ bonds to investors whilst disrupting the Wall Street establishment in the late 1980s (Curtis), and CJ could fulfil a similar tactical purpose. Second, the M&A goal of some Web 2.0 firms could undermine the participatory goals of a site’s community if post-merger integration fails. Jason Calacanis’s sale of Weblogs, Inc. to America Online in 2005 and MSNBC’s acquisition of Newsvine on 5 October 2007 (Newsvine) might be success stories. However, this raises issues of digital “property rights” if you contribute to a community that is then sold in an M&A transaction—an outcome closer to business process outsourcing.
Third, media “buzz” can create an unrealistic vision when a CJ site fails to grow beyond its start-up phase. Backfence.com’s demise as a “hyperlocal” initiative (Caverly) is one cautionary event that recalls the 2000 dotcom crash. The M&A scenarios outlined above are market dystopias for CJ purists. The major lesson for CJ proponents is to include other market players in hypotheses about causation and correlation factors. 7. ‘Pro-Ams’ & Professional Journalism’s Crisis CJ emerged during a period when Professional Journalism faced a major crisis of ‘self-image’. The Demos report The Pro-Am Revolution (Leadbeater & Miller) popularised the notion of ‘professional amateurs’ which some CJ theorists adopt to strengthen their categorisation. In turn, this triggers a response from cultural theorists who fear bloggers are new media’s barbarians (Keen). I concede Leadbeater and Miller have identified an important category. However, how some CJ theorists then generalise from ‘Pro-Ams’ illustrates the danger of ‘weak’ theory referred to above. Leadbeater and Miller’s categorisation does not really include a counter-view on the strengths of professionals, as illustrated in humanistic consulting (Block), professional service firms (Maister; Maister, Green & Galford), and software development (McConnell). The signs of professionalism these authors mention include a commitment to learning and communal verification, mastery of a discipline and domain application, awareness of methodology creation, participation in mentoring, and cultivation of ethical awareness. Two key differences are discernment and quality of attention, as illustrated in how the legendary Hollywood film editor Walter Murch used Apple’s Final Cut Pro software to edit the 2003 film Cold Mountain (Koppelman). ‘Pro-Ams’ might not aspire to these criteria but Citizen Journalists shouldn’t throw out these standards, either. Doing so would be making the same mistake of overconfidence that technical analysts make against statistical arbitrageurs. Key processes—fact-checking, sub-editing and editorial decision-making—are invisible to the end-user, even if traceable in a blog or wiki publishing system, because of the judgments involved. One post-mortem insight from Assignment Zero was that these processes were vital to create the climate of authenticity and trust to sustain a Citizen Journalist community (Howe). CJ’s trouble with “objectivity” might also overlook some complexities, including the similarity of many bloggers to “noise traders” in financial markets and to op-ed columnists. Methodologies and reportage practices have evolved to deal with the objections that CJ proponents raise, from New Journalism’s radical subjectivity and creative non-fiction techniques (Wolfe & Johnson) to Precision Journalism that used descriptive statistics (Meyer). Finally, journalism frameworks could be updated with current research on how phenomenological awareness shapes our judgments and perceptions (Thompson). 8. Strategic Execution For me, one of CJ’s major weaknesses as a new media theory is its lack of “rich description” (Geertz) about the strategic execution of projects. As Disinfo.com site editor I encountered situations ranging from ‘denial of service’ attacks and spam to site migration, publishing systems that go offline, and ensuring an editorial consistency. Yet the messiness of these processes is missing from CJ theories and accounts. 
Theories that included this detail as “second-order interactions” (Carlile & Christensen 13) would offer a richer view of CJ. Many CJ and Web 2.0 projects fall into the categories of mini-projects, demonstration prototypes and start-ups, even when using a programming language such as Ajax or Ruby on Rails. Whilst the “bootstrap” process is a benefit, more longitudinal analysis and testing needs to occur, to ensure these projects are scalable and sustainable. For example, South Korea’s OhmyNews is cited as an exemplar that started with “727 citizen reporters and 4 editors” and now has “38,000 citizen reporters” and “a dozen editors” (Kolodzy 231). How does OhmyNews’s mix of hard and soft news change over time? Or, how does OhmyNews deal with a complex issue that might require major resources, such as security negotiations between North and South Korea? Such examples could do with further research. We need to go beyond “the vision thing” and look at the messiness of execution for deeper observations and counterintuitive correlations, to build new descriptive theories. 9. Future Research This essay argues that CJ needs re-evaluation. Its immediate legacy might be to splinter ‘journalism’ into micro-trends: Washington University’s Steve Boriss proclaims “citizen journalism is dead. Expert journalism is the future.” (Boriss; Mensching). The half-lives of such micro-trends demand new categorisations, which in turn prematurely feeds the theory-building cycle. Instead, future researchers could reinvigorate 21st century journalism if they ask deeper questions and return to the observation stage of building descriptive theories. In closing, below are some possible questions that future researchers might explore: Where are the “rich descriptions” of journalistic experience—“citizen”, “convergent”, “digital”, “Pro-Am” or otherwise in new media? How could practice-based approaches inform this research instead of relying on espoused theories-in-use? What new methodologies could be developed for CJ implementation? What role can the “heroic” individual reporter or editor have in “the swarm”? Do the claims about OhmyNews and other sites stand up to longitudinal observation? Are the theories used to justify Citizen Journalism’s normative stance (Rheingold; Surowiecki; Pesce) truly robust generalisations for strategic execution or do they reflect the biases of their creators? How could developers tap the conceptual dimensions of information technology innovation (Shasha) to create the next Facebook, MySpace or Wikipedia? References Argyris, Chris, and Donald Schon. Theory in Practice. San Francisco: Jossey-Bass Publishers, 1976. Barlow, Aaron. The Rise of the Blogosphere. Westport, CN: Praeger Publishers, 2007. Block, Peter. Flawless Consulting. 2nd ed. San Francisco, CA: Jossey-Bass/Pfeiffer, 2000. Boriss, Steve. “Citizen Journalism Is Dead. Expert Journalism Is the Future.” The Future of News. 28 Nov. 2007. 20 Feb. 2008 http://thefutureofnews.com/2007/11/28/citizen-journalism-is-dead- expert-journalism-is-the-future/>. Brooks, Jr., Frederick P. The Mythical Man-Month: Essays on Software Engineering. Rev. ed. Reading, MA: Addison-Wesley Publishing Company, 1995. Campbell, Vincent. Information Age Journalism: Journalism in an International Context. New York: Arnold, 2004. Carlile, Paul R., and Clayton M. Christensen. “The Cycles of Building Theory in Management Research.” Innosight working paper draft 6. 6 Jan. 2005. 19 Feb. 2008 http://www.innosight.com/documents/Theory%20Building.pdf>. Caverly, Doug. 
“Hyperlocal News Site Takes A Hit.” WebProNews.com 6 July 2007. 19 Feb. 2008 http://www.webpronews.com/topnews/2007/07/06/hyperlocal-news- sites-take-a-hit>. Chenoweth, Neil. Virtual Murdoch: Reality Wars on the Information Superhighway. Sydney: Random House Australia, 2001. Christensen, Clayton M. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press, 1997. Christensen, Clayton M., Curtis Johnson, and Michael Horn. Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns. New York: McGraw-Hill, 2008. Curtis, Adam. The Mayfair Set. London: British Broadcasting Corporation, 1999. Etherington, Kim. Becoming a Reflexive Researcher: Using Ourselves in Research. London: Jessica Kingsley Publishers, 2004. Festinger, Leon. A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press, 1962. Feyerabend, Paul. Against Method. 3rd ed. London: Verso, 1993. Finnemore, Martha. National Interests in International Society. Ithaca, NY: Cornell University Press, 1996. Geertz, Clifford. The Interpretation of Cultures. New York: Basic Books, 1973. Ghoshal, Sumantra. “Bad Management Theories Are Destroying Good Management Practices.” Academy of Management Learning & Education 4.1 (2005): 75-91. Gibson, William. Pattern Recognition. London: Viking, 2003. Gladwell, Malcolm. “The Cool-Hunt.” The New Yorker Magazine 17 March 1997. 20 Feb. 2008 http://www.gladwell.com/1997/1997_03_17_a_cool.htm>. Gross, Daniel. Pop! Why Bubbles Are Great for the Economy. New York: Collins, 2007. Hoffer, Eric. The True Believer. New York: Harper, 1951. Howe, Jeff. “Did Assignment Zero Fail? A Look Back, and Lessons Learned.” Wired News 16 July 2007. 19 Feb. 2008 http://www.wired.com/techbiz/media/news/2007/07/assignment_ zero_final?currentPage=all>. Kahneman, Daniel, and Amos Tversky. Choices, Values and Frames. Cambridge: Cambridge UP, 2000. Keen, Andrew. The Cult of the Amateur. New York: Doubleday Currency, 2007. Khurana, Rakesh. From Higher Aims to Hired Hands. Princeton, NJ: Princeton UP, 2007. Kolodzy, Janet. Convergence Journalism: Writing and Reporting across the News Media. Oxford: Rowman & Littlefield, 2006. Koppelman, Charles. Behind the Seen: How Walter Murch Edited Cold Mountain Using Apple’s Final Cut Pro and What This Means for Cinema. Upper Saddle River, NJ: New Rider, 2004. Leadbeater, Charles, and Paul Miller. “The Pro-Am Revolution”. London: Demos, 24 Nov. 2004. 19 Feb. 2008 http://www.demos.co.uk/publications/proameconomy>. Loomis, Carol J. “Bloomberg’s Money Machine.” Fortune 5 April 2007. 20 Feb. 2008 http://money.cnn.com/magazines/fortune/fortune_archive/2007/04/16/ 8404302/index.htm>. Lynch, Peter, and John Rothchild. Beating the Street. Rev. ed. New York: Simon & Schuster, 1994. Maister, David. True Professionalism. New York: The Free Press, 1997. Maister, David, Charles H. Green, and Robert M. Galford. The Trusted Advisor. New York: The Free Press, 2004. Mensching, Leah McBride. “Citizen Journalism on Its Way Out?” SFN Blog, 30 Nov. 2007. 20 Feb. 2008 http://www.sfnblog.com/index.php/2007/11/30/940-citizen-journalism- on-its-way-out>. Meyer, Philip. Precision Journalism. 4th ed. Lanham, MD: Rowman & Littlefield, 2002. McConnell, Steve. Professional Software Development. Boston, MA: Addison-Wesley, 2004. Mintzberg, Henry. Managers Not MBAs. San Francisco, CA: Berrett-Koehler, 2004. Morgan, Gareth. Images of Organisation. Rev. ed. Thousand Oaks, CA: Sage, 2006. Newsvine. “Msnbc.com Acquires Newsvine.” 7 Oct. 
2007. 20 Feb. 2008 http://blog.newsvine.com/_news/2007/10/07/1008889-msnbccom- acquires-newsvine>. Niederhoffer, Victor, and Laurel Kenner. Practical Speculation. New York: John Wiley & Sons, 2003. Pearlstine, Norman. Off the Record: The Press, the Government, and the War over Anonymous Sources. New York: Farrar, Straus & Giroux, 2007. Pesce, Mark D. “Mob Rules (The Law of Fives).” The Human Network 28 Sep. 2007. 20 Feb. 2008 http://blog.futurestreetconsulting.com/?p=39>. Rheingold, Howard. Smart Mobs: The Next Social Revolution. Cambridge MA: Basic Books, 2002. Rosen, Jay. What Are Journalists For? Princeton NJ: Yale UP, 2001. Shasha, Dennis Elliott. Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists. New York: Copernicus, 1995. Slywotzky, Adrian. Value Migration: How to Think Several Moves Ahead of the Competition. Boston, MA: Harvard Business School Press, 1996. Smith, Steve. “The Self-Image of a Discipline: The Genealogy of International Relations Theory.” Eds. Steve Smith and Ken Booth. International Relations Theory Today. Cambridge, UK: Polity Press, 1995. 1-37. Spar, Debora L. Ruling the Waves: Cycles of Discovery, Chaos and Wealth from the Compass to the Internet. New York: Harcourt, 2001. Surowiecki, James. The Wisdom of Crowds. New York: Doubleday, 2004. Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Belknap Press, 2007. Trippi, Joe. The Revolution Will Not Be Televised. New York: ReganBooks, 2004. Underwood, Doug. When MBA’s Rule the Newsroom. New York: Columbia University Press, 1993. Wark, McKenzie. Virtual Geography: Living with Global Media Events. Bloomington IN: Indiana UP, 1994. Wolfe, Tom, and E.W. Johnson. The New Journalism. New York: Harper & Row, 1973. 
 
 
 
 Citation reference for this article
 
 MLA Style
 Burns, Alex. "Select Issues with New Media Theories of Citizen Journalism." M/C Journal 10.6/11.1 (2008). echo date('d M. Y'); ?> <http://journal.media-culture.org.au/0804/10-burns.php>. APA Style
 Burns, A. (Apr. 2008) "Select Issues with New Media Theories of Citizen Journalism," M/C Journal, 10(6)/11(1). Retrieved echo date('d M. Y'); ?> from <http://journal.media-culture.org.au/0804/10-burns.php>. 
APA, Harvard, Vancouver, ISO, and other styles
49

Burns, Alex. "Select Issues with New Media Theories of Citizen Journalism." M/C Journal 11, no. 1 (2008). http://dx.doi.org/10.5204/mcj.30.

Full text
Abstract:
“Journalists have to begin a new type of journalism, sometimes being the guide on the side of the civic conversation as well as the filter and gatekeeper.” (Kolodzy 218) “In many respects, citizen journalism is simply public journalism removed from the journalism profession.” (Barlow 181) 1. Citizen Journalism — The Latest Innovation? New Media theorists such as Dan Gillmor, Henry Jenkins, Jay Rosen and Jeff Howe have recently touted Citizen Journalism (CJ) as the latest innovation in 21st century journalism. “Participatory journalism” and “user-driven journalism” are other terms to describe CJ, which its proponents argue is a disruptive innovation (Christensen) to the agenda-setting media institutions, news values and “objective” reportage. In this essay I offer a “contrarian” view, informed by two perspectives: (1) a three-stage model of theory-building (Carlile & Christensen) to evaluate the claims made about CJ; and (2) self-reflexive research insights (Etherington) from editing the US-based news site Disinformation between November 1999 and February 2008. New media theories can potentially create “cognitive dissonance” (Festinger) when their explanations of CJ practices are compared with what actually happens (Feyerabend). First I summarise Carlile & Christensen’s model and the dangers of “bad theory” (Ghoshal). Next I consider several problems in new media theories about CJ: the notion of ‘citizen’, new media populism, parallels in event-driven and civic journalism, and mergers and acquisitions. Two ‘self-reflexive’ issues are considered: ‘pro-ams’ or ‘professional amateurs’ as a challenge to professional journalists, and CJ’s deployment in new media operations and production environments. Finally, some exploratory questions are offered for future researchers. 2. An Evaluative Framework for New Media Theories on Citizen Journalism Paul Carlile and Clayton M. Christensen’s model offers one framework with which to evaluate new media theories on CJ. This framework is used below to highlight select issues and gaps in CJ’s current frameworks and theories. Carlile & Christensen suggest that robust theory-building emerges via three stages: Descriptive, Categorisation and Normative (Carlile & Christensen). There are three sub-stages in Descriptive theory-building; namely, the observation of phenomena, inductive classification into schemas and taxonomies, and correlative relationships to develop models (Carlile & Christensen 2-5). Once causation is established, Normative theory evolves through deductive logic which is subject to Kuhnian paradigm shifts and Popperian falsifiability (Carlile & Christensen 6). Its proponents situate CJ as a Categorisation or new journalism agenda that poses a Normative challenge and a Kuhnian paradigm shift to traditional journalism. Existing CJ theories jump from the Descriptive phase of observations like “smart mobs” in Japanese youth subcultures (Rheingold) to broad Categorisation claims, such as positioning IndyMedia, blogs and wiki publishing systems as new media alternatives to traditional media. CJ theories then underpin normative beliefs, values and worldviews. Correlative relationships are also used to differentiate CJ from the demand side of microeconomic analysis, from the top-down editorial models of traditional media outlets, and to adopt a vanguard stance. 
To support this, CJ proponents cite research on emergent collective behaviour such as the “wisdom of crowds” hypothesis (Surowiecki) or peer-to-peer network “swarms” (Pesce) to provide scientific justification for their Normative theories. However, further evaluative research is needed for three reasons: the emergent collective behaviour hypothesis may not actually inform CJ practices, existing theories may have “correlation not cause” errors, and the link may be due to citation network effects between CJ theorists. Collectively, this research base also frames CJ as an “ought to” Categorisation and then proceeds to Normative theory-building (Carlile & Christensen 7). However, I argue below that this Categorisation may be premature: its observations and correlative relationships might reinforce a ‘weak’ Normative theory with limited generalisation. CJ proponents seem to imply that it can be applied anywhere and under any condition—a “statement of causality” that almost makes it a fad (Carlile & Christensen 8). CJ that relies on Categorisation and Normative claims will be problematic without a strong grounding in Descriptive observation. To understand what’s potentially at stake for CJ’s future, consider the parallel debate about curricula renewal for the Master of Business Administration in the wake of high-profile corporate collapses such as Enron, Worldcom, HIH and OneTel. The MBA evolved as a sociological and institutional construct to justify management as a profession that is codified, differentiated and has entry barriers (Khurana). This process might partly explain the pushback that some media professionals have against CJ as one alternative. MBA programs faced criticism if they had student cohorts with little business know-how or experiential learning (Mintzberg). Enron’s collapse illustrated the ethical dilemmas and unintended consequences that occurred when “bad theories” were implemented (Ghoshal). Professional journalists are aware of this: MBA-educated managers challenged the “craft” tradition in the early 1980s (Underwood). This meant that journalism’s ‘self-image’ (Morgan; Smith) is intertwined with managerial anxieties about media conglomerates in highly competitive markets. Ironically, as noted below, Citizen Journalists who adopt a vanguard position vis-à-vis media professionals step into a more complex game with other players. However, current theories have a naïve idealism about CJ’s promise of normative social change in the face of Machiavellian agency in business, the media and politics. 3. Citizen Who? Who is the “citizen” in CJ? What is their self-awareness as a political agent? CJ proponents who use the ‘self-image’ of ‘citizen’ draw on observations from the participatory vision of open source software, peer-to-peer networks, and case studies such as Howard Dean’s 2004 bid for the Democratic Party nomination in the US Presidential election campaign (Trippi). Recent theorists note Alexander Hamilton’s tradition of civic activism (Barlow 178), which links contemporary bloggers with the Federalist Papers and early newspaper pamphlets. One unsurfaced assumption in these observations and correlations is that most bloggers will adopt a coherent political philosophy as informed citizens: a variation on Lockean utilitarianism, Rawlsian liberalism or Nader consumer activism. 
To date there is little discussion about how political philosophy could deepen CJ’s ‘self-image’: how to critically evaluate sources, audit and investigation processes, or strategies to deal with elites, deterrence and power. For example, although bloggers kept Valerie Plame’s ‘outing’ as a covert intelligence operative highly visible in the issues-attention cycle, it was agenda-setting media like The New York Times that the Bush Administration targeted to silence (Pearlstine). To be viable, CJ needs to evolve beyond a new media populism, perhaps into a constructivist model of agency, norms and social change (Finnemore). 4. Citizen Journalism as New Media Populism Several “precursor trends” foreshadowed CJ, notably the mid-1990s interest in “cool-hunting” by new media analysts and subculture marketeers (Gibson; Gladwell). Whilst this audience focus waned with the 1995-2000 dotcom bubble, it resurfaced in CJ and publisher Tim O’Reilly’s Web 2.0 vision. Thus, CJ might be viewed as new media populism that has flourished with the Web 2.0 boom. Yet if the boom becomes a macroeconomic bubble (Gross; Spar) then CJ could be written off as a “silver bullet” that ultimately failed to deliver on its promises (Brooks, Jr.). The reputations of uncritical proponents who adopted a “true believer” stance would also be damaged (Hoffer). This risk is evident if CJ is compared with a parallel trend that shares its audience focus and populist view: day traders and technical analysts who speculate on financial markets. This parallel trend provides an alternative discipline in which the populism surfaced in an earlier form (Carlile & Christensen 12). Fidelity’s Peter Lynch argues that stock pickers can use their Main Street knowledge to beat Wall Street by exploiting information asymmetries (Lynch & Rothchild). Yet Lynch’s examples came from the mid-1970s to early 1980s, when indexed mutual fund strategies worked, before deregulation and macroeconomic volatility. A change in the Web 2.0 boom might similarly trigger a reconsideration of Citizen Journalism. Hedge fund maven Victor Niederhoffer contends that investors who rely on technical analysis are practicing a Comtean religion (Niederhoffer & Kenner 72-74), in contrast to Efficient Market Hypothesis traders who use statistical arbitrage to deal with ‘random walks’ or Behavioural Finance experts who build on Amos Tversky and Daniel Kahneman’s Prospect Theory (Kahneman & Tversky). Niederhoffer’s deeper point is that technical analysts’ belief that the “trend is your friend” is no match for the other schools, despite a mini-publishing industry and computer trading systems. There are also ontological and epistemological differences between the schools. Similarly, CJ proponents who adopt a ‘Professional Amateur’ or ‘Pro-Am’ stance (Leadbeater & Miller) may face a similar gulf when making comparisons with professional journalists and the production environments in media organisations. CJ also thrives as new media populism because of institutional vested interests. When media conglomerates cut back on cadetships and internships, CJ might fill the market demand as one alternative. New media programs at New York University and others can use CJ to differentiate themselves from “hyperlocal” competitors (Christensen; Slywotzky; Christensen, Curtis & Horn). This transforms CJ from new media populism to new media institution. 5. 
Parallels: Event-driven & Civic Journalism For new media programs, CJ builds on two earlier traditions: the Event-driven journalism of crises like the 1991 Gulf War (Wark) and the Civic Journalism school that emerged in the 1960s social upheavals. Civic Journalism’s awareness of minorities and social issues provides the character ethic and political philosophy for many Citizen Journalists. Jay Rosen and others suggest that CJ is the next-generation heir to Civic Journalism, tracing a thread from the 1968 Chicago Democratic Convention to IndyMedia’s coverage of the 1999 “Battle in Seattle” (Rosen). Rosen’s observation could yield an interesting historiography or genealogy. Events such as the Southeast Asian tsunami on 26 December 2004 or Al Qaeda’s London bombings on 7 July 2005 are cited as examples of CJ as event-driven journalism and “pro-am collaboration” (Kolodzy 229-230). Having covered these events and Al Qaeda’s attacks on 11 September 2001, I have a slightly different view: this was more a variation on “first responder” status and handicam video footage that journalists have sourced for the past three decades when covering major disasters. This different view means that the “salience of categories” used to justify CJ and “pro-am collaboration” in these events does not completely hold. Furthermore, when Citizen Journalism proponents tout Flickr and Wikipedia as models of real-time media, they are building on a broader phenomenon that includes CNN’s Gulf War coverage and Bloomberg’s dominance of financial news (Loomis). 6. The Mergers & Acquisitions Scenario CJ proponents often express anxieties about the resilience of their outlets in the face of predatory venture capital firms that initiate Mergers & Acquisitions (M&A) activities. Ironically, these venture capital firms have core competencies and expertise in the event-driven infrastructure and real-time media that CJ aspires to. Sequoia Capital and other venture capital firms have evaluative frameworks that likely surpass Carlile & Christensen in sophistication, and they exploit parallels, information asymmetries and market populism. Furthermore, although venture capital firms such as Union Square Ventures have funded Web 2.0 firms, they are absent from the explanations of some theorists, whose examples of Citizen Journalism and Web 2.0 success may be the result of survivorship bias. Thus, the venture capital market remains an untapped data source for researchers who want to evaluate the impact of CJ outlets and institutions. The M&A scenario further problematises CJ in several ways. First, CJ is framed as “oppositional” to traditional media, yet this may be used as a stratagem in a game theory framework with multiple stakeholders. Drexel Burnham Lambert’s financier Michael Milken used market populism to sell ‘high-yield’ or ‘junk’ bonds to investors whilst disrupting the Wall Street establishment in the late 1980s (Curtis), and CJ could fulfil a similar tactical purpose. Second, the M&A goal of some Web 2.0 firms could undermine the participatory goals of a site’s community if post-merger integration fails. Jason Calacanis’s sale of Weblogs, Inc. to America Online in 2005 and MSNBC’s acquisition of Newsvine on 7 October 2007 (Newsvine) might be success stories. However, this raises issues of digital “property rights” if you contribute to a community that is then sold in an M&A transaction—an outcome closer to business process outsourcing. 
Third, media “buzz” can create an unrealistic vision when a CJ site fails to grow beyond its start-up phase. Backfence.com’s demise as a “hyperlocal” initiative (Caverly) is one cautionary event that recalls the 2000 dotcom crash. The M&A scenarios outlined above are market dystopias for CJ purists. The major lesson for CJ proponents is to include other market players in hypotheses about causation and correlation factors. 7. ‘Pro-Ams’ & Professional Journalism’s Crisis CJ emerged during a period when Professional Journalism faced a major crisis of ‘self-image’. The Demos report The Pro-Am Revolution (Leadbeater & Miller) popularised the notion of ‘professional amateurs’, which some CJ theorists adopt to strengthen their categorisation. In turn, this triggers a response from cultural theorists who fear bloggers are new media’s barbarians (Keen). I concede Leadbeater and Miller have identified an important category. However, how some CJ theorists then generalise from ‘Pro-Ams’ illustrates the danger of ‘weak’ theory referred to above. Leadbeater and Miller’s categorisation does not really include a counter-view on the strengths of professionals, as illustrated in humanistic consulting (Block), professional service firms (Maister; Maister, Green & Galford), and software development (McConnell). The signs of professionalism these authors mention include a commitment to learning and communal verification, mastery of a discipline and domain application, awareness of methodology creation, participation in mentoring, and cultivation of ethical awareness. Two key differences are discernment and quality of attention, as illustrated in how the legendary Hollywood film editor Walter Murch used Apple’s Final Cut Pro software to edit the 2003 film Cold Mountain (Koppelman). ‘Pro-Ams’ might not aspire to these criteria but Citizen Journalists shouldn’t throw out these standards, either. Doing so would repeat the same overconfident mistake that technical analysts make against statistical arbitrageurs. Key processes—fact-checking, sub-editing and editorial decision-making—are invisible to the end-user, even if traceable in a blog or wiki publishing system, because of the judgments involved. One post-mortem insight from Assignment Zero was that these processes were vital to create the climate of authenticity and trust needed to sustain a Citizen Journalist community (Howe). CJ’s trouble with “objectivity” might also overlook some complexities, including the similarity of many bloggers to “noise traders” in financial markets and to op-ed columnists. Methodologies and reportage practices have evolved to deal with the objections that CJ proponents raise, from New Journalism’s radical subjectivity and creative non-fiction techniques (Wolfe & Johnson) to Precision Journalism that used descriptive statistics (Meyer). Finally, journalism frameworks could be updated with current research on how phenomenological awareness shapes our judgments and perceptions (Thompson). 8. Strategic Execution For me, one of CJ’s major weaknesses as a new media theory is its lack of “rich description” (Geertz) about the strategic execution of projects. As Disinfo.com site editor I encountered situations ranging from ‘denial of service’ attacks and spam to site migration, publishing systems that go offline, and ensuring editorial consistency. Yet the messiness of these processes is missing from CJ theories and accounts. 
Theories that included this detail as “second-order interactions” (Carlile & Christensen 13) would offer a richer view of CJ. Many CJ and Web 2.0 projects fall into the categories of mini-projects, demonstration prototypes and start-ups, even when built with web technologies such as Ajax or Ruby on Rails. Whilst the “bootstrap” process is a benefit, more longitudinal analysis and testing needs to occur, to ensure these projects are scalable and sustainable. For example, South Korea’s OhmyNews is cited as an exemplar that started with “727 citizen reporters and 4 editors” and now has “38,000 citizen reporters” and “a dozen editors” (Kolodzy 231). How does OhmyNews’s mix of hard and soft news change over time? Or, how does OhmyNews deal with a complex issue that might require major resources, such as security negotiations between North and South Korea? Such examples could do with further research. We need to go beyond “the vision thing” and look at the messiness of execution for deeper observations and counterintuitive correlations, to build new descriptive theories. 9. Future Research This essay argues that CJ needs re-evaluation. Its immediate legacy might be to splinter ‘journalism’ into micro-trends: Washington University’s Steve Boriss proclaims “citizen journalism is dead. Expert journalism is the future” (Boriss; Mensching). The half-lives of such micro-trends demand new categorisations, which in turn prematurely feeds the theory-building cycle. Instead, future researchers could reinvigorate 21st century journalism if they ask deeper questions and return to the observation stage of building descriptive theories. In closing, below are some possible questions that future researchers might explore: Where are the “rich descriptions” of journalistic experience—“citizen”, “convergent”, “digital”, “Pro-Am” or otherwise in new media? How could practice-based approaches inform this research instead of relying on espoused theories-in-use? What new methodologies could be developed for CJ implementation? What role can the “heroic” individual reporter or editor have in “the swarm”? Do the claims about OhmyNews and other sites stand up to longitudinal observation? Are the theories used to justify Citizen Journalism’s normative stance (Rheingold; Surowiecki; Pesce) truly robust generalisations for strategic execution or do they reflect the biases of their creators? How could developers tap the conceptual dimensions of information technology innovation (Shasha) to create the next Facebook, MySpace or Wikipedia? References Argyris, Chris, and Donald Schon. Theory in Practice. San Francisco: Jossey-Bass Publishers, 1976. Barlow, Aaron. The Rise of the Blogosphere. Westport, CT: Praeger Publishers, 2007. Block, Peter. Flawless Consulting. 2nd ed. San Francisco, CA: Jossey-Bass/Pfeiffer, 2000. Boriss, Steve. “Citizen Journalism Is Dead. Expert Journalism Is the Future.” The Future of News. 28 Nov. 2007. 20 Feb. 2008 <http://thefutureofnews.com/2007/11/28/citizen-journalism-is-dead-expert-journalism-is-the-future/>. Brooks, Jr., Frederick P. The Mythical Man-Month: Essays on Software Engineering. Rev. ed. Reading, MA: Addison-Wesley Publishing Company, 1995. Campbell, Vincent. Information Age Journalism: Journalism in an International Context. New York: Arnold, 2004. Carlile, Paul R., and Clayton M. Christensen. “The Cycles of Building Theory in Management Research.” Innosight working paper draft 6. 6 Jan. 2005. 19 Feb. 2008 <http://www.innosight.com/documents/Theory%20Building.pdf>. Caverly, Doug. 
“Hyperlocal News Site Takes A Hit.” WebProNews.com 6 July 2007. 19 Feb. 2008 <http://www.webpronews.com/topnews/2007/07/06/hyperlocal-news-sites-take-a-hit>. Chenoweth, Neil. Virtual Murdoch: Reality Wars on the Information Superhighway. Sydney: Random House Australia, 2001. Christensen, Clayton M. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Boston, MA: Harvard Business School Press, 1997. Christensen, Clayton M., Curtis Johnson, and Michael Horn. Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns. New York: McGraw-Hill, 2008. Curtis, Adam. The Mayfair Set. London: British Broadcasting Corporation, 1999. Etherington, Kim. Becoming a Reflexive Researcher: Using Ourselves in Research. London: Jessica Kingsley Publishers, 2004. Festinger, Leon. A Theory of Cognitive Dissonance. Stanford, CA: Stanford University Press, 1962. Feyerabend, Paul. Against Method. 3rd ed. London: Verso, 1993. Finnemore, Martha. National Interests in International Society. Ithaca, NY: Cornell University Press, 1996. Geertz, Clifford. The Interpretation of Cultures. New York: Basic Books, 1973. Ghoshal, Sumantra. “Bad Management Theories Are Destroying Good Management Practices.” Academy of Management Learning & Education 4.1 (2005): 75-91. Gibson, William. Pattern Recognition. London: Viking, 2003. Gladwell, Malcolm. “The Cool-Hunt.” The New Yorker Magazine 17 March 1997. 20 Feb. 2008 <http://www.gladwell.com/1997/1997_03_17_a_cool.htm>. Gross, Daniel. Pop! Why Bubbles Are Great for the Economy. New York: Collins, 2007. Hoffer, Eric. The True Believer. New York: Harper, 1951. Howe, Jeff. “Did Assignment Zero Fail? A Look Back, and Lessons Learned.” Wired News 16 July 2007. 19 Feb. 2008 <http://www.wired.com/techbiz/media/news/2007/07/assignment_zero_final?currentPage=all>. Kahneman, Daniel, and Amos Tversky. Choices, Values and Frames. Cambridge: Cambridge UP, 2000. Keen, Andrew. The Cult of the Amateur. New York: Doubleday Currency, 2007. Khurana, Rakesh. From Higher Aims to Hired Hands. Princeton, NJ: Princeton UP, 2007. Kolodzy, Janet. Convergence Journalism: Writing and Reporting across the News Media. Oxford: Rowman & Littlefield, 2006. Koppelman, Charles. Behind the Seen: How Walter Murch Edited Cold Mountain Using Apple’s Final Cut Pro and What This Means for Cinema. Upper Saddle River, NJ: New Riders, 2004. Leadbeater, Charles, and Paul Miller. “The Pro-Am Revolution”. London: Demos, 24 Nov. 2004. 19 Feb. 2008 <http://www.demos.co.uk/publications/proameconomy>. Loomis, Carol J. “Bloomberg’s Money Machine.” Fortune 5 April 2007. 20 Feb. 2008 <http://money.cnn.com/magazines/fortune/fortune_archive/2007/04/16/8404302/index.htm>. Lynch, Peter, and John Rothchild. Beating the Street. Rev. ed. New York: Simon & Schuster, 1994. Maister, David. True Professionalism. New York: The Free Press, 1997. Maister, David, Charles H. Green, and Robert M. Galford. The Trusted Advisor. New York: The Free Press, 2004. McConnell, Steve. Professional Software Development. Boston, MA: Addison-Wesley, 2004. Mensching, Leah McBride. “Citizen Journalism on Its Way Out?” SFN Blog, 30 Nov. 2007. 20 Feb. 2008 <http://www.sfnblog.com/index.php/2007/11/30/940-citizen-journalism-on-its-way-out>. Meyer, Philip. Precision Journalism. 4th ed. Lanham, MD: Rowman & Littlefield, 2002. Mintzberg, Henry. Managers Not MBAs. San Francisco, CA: Berrett-Koehler, 2004. Morgan, Gareth. Images of Organisation. Rev. ed. Thousand Oaks, CA: Sage, 2006. Newsvine. 
“Msnbc.com Acquires Newsvine.” 7 Oct. 2007. 20 Feb. 2008 <http://blog.newsvine.com/_news/2007/10/07/1008889-msnbccom-acquires-newsvine>. Niederhoffer, Victor, and Laurel Kenner. Practical Speculation. New York: John Wiley & Sons, 2003. Pearlstine, Norman. Off the Record: The Press, the Government, and the War over Anonymous Sources. New York: Farrar, Straus & Giroux, 2007. Pesce, Mark D. “Mob Rules (The Law of Fives).” The Human Network 28 Sep. 2007. 20 Feb. 2008 <http://blog.futurestreetconsulting.com/?p=39>. Rheingold, Howard. Smart Mobs: The Next Social Revolution. Cambridge, MA: Basic Books, 2002. Rosen, Jay. What Are Journalists For? New Haven, CT: Yale UP, 2001. Shasha, Dennis Elliott. Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists. New York: Copernicus, 1995. Slywotzky, Adrian. Value Migration: How to Think Several Moves Ahead of the Competition. Boston, MA: Harvard Business School Press, 1996. Smith, Steve. “The Self-Image of a Discipline: The Genealogy of International Relations Theory.” Eds. Steve Smith and Ken Booth. International Relations Theory Today. Cambridge, UK: Polity Press, 1995. 1-37. Spar, Debora L. Ruling the Waves: Cycles of Discovery, Chaos and Wealth from the Compass to the Internet. New York: Harcourt, 2001. Surowiecki, James. The Wisdom of Crowds. New York: Doubleday, 2004. Thompson, Evan. Mind in Life: Biology, Phenomenology, and the Sciences of Mind. Cambridge, MA: Belknap Press, 2007. Trippi, Joe. The Revolution Will Not Be Televised. New York: ReganBooks, 2004. Underwood, Doug. When MBA’s Rule the Newsroom. New York: Columbia University Press, 1993. Wark, McKenzie. Virtual Geography: Living with Global Media Events. Bloomington, IN: Indiana UP, 1994. Wolfe, Tom, and E.W. Johnson. The New Journalism. New York: Harper & Row, 1973.
APA, Harvard, Vancouver, ISO, and other styles
50

Goldman, Jonathan E. "Double Exposure." M/C Journal 7, no. 5 (2004). http://dx.doi.org/10.5204/mcj.2414.

Full text
Abstract:
I. Happy Endings Chaplin’s Modern Times features one of the most subtly strange endings in Hollywood history. It concludes with the Tramp (Chaplin) and the Gamin (Paulette Goddard) walking away from the camera, down the road, toward the sunrise. (Figure 1.) They leave behind the city, their hopes for employment, and, it seems, civilization itself. The iconography deployed is clear: it is 1936, millions are unemployed, and to walk penniless into the Great Depression means destitution if not death. Chaplin invokes a familiar trope of 1930s texts, the “marginal men,” for whom “life on the road is not romanticized” and who “do not participate in any culture,” as Warren Susman puts it (171). The Tramp and the Gamin seem destined for this non-existence. For the duration of the film they have tried to live and work within society, but now they are outcasts. This is supposed to be a happy ending, though. Before marching off into poverty, the Tramp whistles a tune and tells the Gamin to “buck up” and smile; the string section swells around them. (Little-known [or discussed] fact: Chaplin later added lyrics to this music, resulting in the song “Smile,” now part of the repertoire of countless torch singers and jazz musicians. Standout recordings include those by Nat King Cole and Elvis Costello.) It seems like a great day to be alive. Why is that? In this narrative of despair, what is there to “buck up” about? The answer lies outside of the narrative. There is another iconography at work here: the rear-view silhouette of the Tramp strolling down the road, foregrounded against a wide vista, complete with bowler hat, baggy pants, and pigeon-toed walk, recalls previous Chaplin films. By invoking similar moments in his oeuvre, Chaplin signals that the Tramp, more than a mere movie character, is the mass-reproduced trademark image of Charlie Chaplin, multimillionaire entertainer and worldwide celebrity. The film doubles Chaplin with the Tramp. This double exposure, figuratively speaking, reconciles the contradictions between the cheerful atmosphere and the grim story. The celebrity’s presence alleviates the suspicion that the protagonists are doomed. Rather than being reduced to one of the “marginal men,” the Tramp is heading for the Hollywood hills, where Chaplin participates in quite a bit of culture, making hit movies for huge audiences. Nice work if you can get it, indeed. Chaplin resolves the plot by supplanting narrative logic with celebrity logic. Chaplin’s celebrity diverges somewhat from the way Hollywood celebrity functions generally. Miriam Hansen provides a popular understanding of celebrity: “The star’s presence in a particular film blurs the boundary between diegesis and discourse, between an address relying on the identification with fictional characters and an activation of the viewer’s familiarity with the star on the basis of production and publicity” (246). That is, celebrity images alter films by enlisting what Hansen terms “intertexts,” which include journalism and studio publicity. According to Hansen, celebrity invites these intertexts to inform and multiply the meaning of the narrative. By contrast, Modern Times disregards the diegesis altogether, switching focus to the celebrity. Meaning is not multiplied. It is replaced. Filmic resolution depends not only on recognizing Chaplin’s image, but also on abandoning plot and leaving the Tramp and the Gamin to their fates. 
This explicit use of celebrity culminates Chaplin’s reworking of early twentieth-century celebrity, his negotiations with fame that continue to reverberate today. In what follows, I will argue that Chaplin weds visual celebrity with strategies of author-production often attributed to modernist literature, strategies that parallel Michel Foucault’s theory of the “author function.” Like his modernist contemporaries, Chaplin deploys narrative techniques that gesture toward the text’s creator, not as a person who is visible in a so-called real world, but as an idealized consciousness who resides in the film and controls its meaning. While Chaplin’s Hollywood counterparts rely on images to connote individual personalities, Chaplin resists locating his self within a body, instead using the Tramp as a sign, rather than an embodiment, of his celebrity, and turning his filmmaking into an aesthetic space to contain his subjectivity. Creating himself as author, Chaplin reckons with the fact that his image remains on display. Chaplin recuperates the Tramp image, mobilizing it as a signifier of his mass audience. The Tramp’s universal recognizability, Chaplin suggests, authorizes the image to represent an entire historical moment. II. An Author Is Born Chaplin produces himself as an author residing in his texts, rather than a celebrity on display. He injects himself into Modern Times to resolve the narrative (and by extension assuage the social unrest the film portrays). This gesture insists that the presence of the author generates and controls signification. Chaplin thus echoes Foucault’s account of the author function: “The author is . . . the principle of a certain unity of writing – all differences having to be resolved” by reference to the author’s subjectivity (215). By reconciling narrative contradictions through the author, Chaplin proposes himself as the key to his films’ coherence of meaning. Foucault reminds us, however, that such positioning of the author is illusory: “We are used to thinking that the author is so different from all other men, and so transcendent . . . that, as soon as he speaks, meaning begins to proliferate, to proliferate indefinitely. The truth is quite the contrary: the author does not precede the works. The text contains a number of signs referring to the author” (221). In this formulation, authors do not create meaning. Rather, texts exercise formal attributes to produce their authors. So Modern Times, by enlisting Chaplin’s celebrity to provide closure, produces a controlling consciousness, a special class of being who “proliferates” meaning. Chaplin’s films in general contain signs of the author such as displays of cinematic tricks. These strategies, claiming affinity with objects of high culture, inevitably evoke the author. Chaplin’s author is not a physical entity. Authorship, Foucault writes, “does not refer purely and simply to a real individual,” meaning that the author is composed of text, not flesh and blood (216). Chaplin resists imbuing the image of the Tramp with the sort of subjectivity reserved for the author. In this way Chaplin again departs from usual accounts of Hollywood stars. In Chaplin’s time, according to Richard Dyer, “The roles and/or the performance of a star in a film were taken as revealing the personality of the star” (20). (Moreover, Chaplin achieves all that fame without relying on close-ups. Critics typically cite the close-up as the device most instrumental to Hollywood celebrity. Scott J. 
Juengel writes of the close-up as “a fetishization of the face” that creates “an intense manifestation of subjectivity” [353; also see Dyer, 14-15, and Susman, 282]. The one true close-up I have found in Chaplin’s early films occurs in “A Woman” [1915], when Chaplin goes in drag. It shows Chaplin’s face minus the trademark fake mustache, as if to de-familiarize his recognizability.) Dyer represents the standard view: Hollywood movies propose that stars’ public images directly reflect their private personalities. Chaplin’s celebrity contradicts that model. Chaplin’s initial fame stems from his 1914 performances in Mack Sennett’s Keystone productions, consummate examples of the slapstick genre, in which the Tramp and his regalia first become recognizable trademarks. Far from offering roles that reveal “personality,” slapstick treats both people and things as objects, equally at the mercy of apparently unpredictable physical laws. Within this genre the Tramp remains an object, subject to the chaos of slapstick just like the other bodies on the screen. Chaplin’s celebrity emerges without the suggestion that his image contains a unique subject or stands out among other slapstick objects. The disinclination to treat the image as container of the subject – shared with literary modernism – sets up the Tramp as a sign that connotes Chaplin’s presence elsewhere. Gradually, Chaplin turns his image into an emblem that metonymically refers to the author. When he begins to direct, Chaplin manipulates the generic features of slapstick to reconstruct his image, establishing the Tramp in a central position. For example, in “The Vagabond” (1916), the Tramp becomes embroiled in a barroom brawl and runs toward the saloon’s swinging doors, neatly sidestepping before reaching them. The pursuer’s momentum, naturally, carries him through the doorway. Other characters exist in a slapstick dimension that turns bodies into objects, but not the Tramp. He exploits his liberation from slapstick by exacerbating the other characters’ lack of control. Such moments grant the Tramp a degree of physical control that enhances his value in relation to the other images. The Tramp, bearing the celebrity image and referring to authorial control, becomes a signifier of Chaplin’s combination of authorship and celebrity. Chaplin devises a metonymic relationship between author and image; the Tramp cannot encompass the author, only refer to him. Maintaining his subjectivity separate from the image, Chaplin imagines his films as an aesthetic space where signification is contingent on the author. He attempts to delimit what he, his name and image, signify – in opposition to intertexts that might mobilize meanings drawn from outside the text. Writing of celebrity intertexts, P. David Marshall notes that “the descriptions of the connections between celebrities’ ‘real’ lives and their working lives . . . are what configure the celebrity status” (58). For Chaplin, to situate the subject in a celebrity body would be to allow other influences – uses of his name or image in other texts – to determine the meaning of the celebrity sign. His separation of image and author reveals an anxiety about identifying one specific body or image as location of the subject, about putting the actual subject on display and in circulation. The opening moment of “Shoulder Arms” (1918) illustrates Chaplin’s uneasy alliance of celebrity, author, and image. The title card displays a cartoon sketch of the Tramp in doughboy garb. 
Alongside, print lettering conveys the film title and the words “written and produced by” above a blank area. A real hand appears, points to the drawing, and elaborately signs “Charles Chaplin” in the blank space. It then pantomimes shooting a gun at the Tramp. The film announces itself as a product of one author, represented by a giant, disembodied hand. The hand provides an inimitable signature of the author, while the Tramp, disfigured by the uniform but still identifiable, provides an inimitable signature of the celebrity. The relationship between the image and the “writer” is co-dependent but antagonistic; the same hand signs Chaplin’s name and mimes shooting the Tramp. Author-production merges with resistance to the image as representation of the subject. III. The Image Is History “Shoulder Arms” reminds us that despite Chaplin’s conception of himself as an incorporeal author, the Tramp remains present, and not quite accounted for. Here Foucault’s author function finds its limitations, failing to explain author-production that relies on the image even as it situates the author in the text. The Tramp remains visible in Modern Times while the film has made it clear that the author is present to engender significance. To Slavoj Zizek the Tramp is “the remainder” of the text, existing on a separate plane from the diegesis (6). Zizek watches City Lights (1931) and finds that the Tramp, who is continually shifting between classes and characters, acts as “an intercessor, middleman, purveyor.” He is continually mistaken for something he is not, and when the mistake is recognized, “he turns into a disturbing stain one tries to get rid of as quickly as possible” (4). Zizek points out that the Tramp is often positioned outside of social institutions, set slightly apart from the diegesis. Modern Times follows this pattern as well. For example, throughout the film the Tramp continually shifts from one side of the law to the other. He endures two prison sentences, prevents a jailbreak, and becomes a security guard. The film doesn’t quite know what to do with him. Chaplin takes up this remainder and transforms it into an emblem of his mass popularity. The Tramp has always floated somewhat above the narrative; in Modern Times that narrative occurs against a backdrop of historical turmoil. Chaplin, therefore, superimposes the Tramp onto scenes of historical change. The film actually withholds the Tramp image during the first section of the movie, as the character is working in a factory and does not appear in his trademark regalia until he emerges from a stay in the “hospital.” His appearance engenders a montage of filmmaking techniques: abrupt cross-cutting between shots at tilted angles, superimpositions, and crowds of people and cars moving rapidly through the city, all set to (Chaplin’s) jarring, brass-wind music. The Tramp passes before a closed factory and accidentally marches at the head of a left-wing demonstration. The sequence combines signs of social upheaval, technological advancement, and Chaplin’s own technical achievements, to indicate that the film has entered “modern times” – all spurred by the appearance of the Tramp in his trademark attire, thus implicating the Tramp in the narration of historical change. By casting his image as a universally identifiable sign of Chaplin’s mass popularity, Chaplin authorizes it to function as a sign of the historical moment. 
The logic behind Chaplin’s treating the Tramp as an emblem of history is articulated by Walter Benjamin’s concept of the dialectical image. Benjamin explains how culture identifies itself through images, writing that “Every present day is determined by the images that are synchronic with it: each ‘now’ is of a particular recognizability” (462-3). Benjamin proposes that the image, achieving a “particular recognizability,” puts temporality in stasis. This illuminates the dynamic by which Chaplin elevates the mass-reproduced icon to transcendent historical symbol. The Tramp image crystallizes that passing of time into a static unit. Indeed, Chaplin instigates the way the twentieth century, according to Richard Schickel, registers its history. Schickel writes that “In the 1920s, the media, newly abustle, had discovered techniques whereby anyone could be wrested out of whatever context had originally nurtured him and turned into images . . . for no previous era is it possible to make a history out of images . . . for no subsequent era is it possible to avoid doing so. For most of us, now, this is history” (70-1). From Schickel, Benjamin, and Chaplin, a picture of the far-reaching implications of Chaplin’s celebrity emerges. By gesturing beyond the boundary of the text, toward Chaplin’s audience, the Tramp image makes legible that significant portion of the masses unified in recognition of Chaplin’s celebrity, affirming that the celebrity sign depends on its wide circulation to attain significance. As Marshall writes, “The celebrity’s power is derived from the collective configuration of its meaning.” The image’s connotative function requires collaboration with the audience. The collective configuration Chaplin mobilizes is the Tramp’s recognizability as it moves through scenes of historical change, whatever other discourses may attach to it. Chaplin thrusts the image into this role because of its status as remainder, which stems from Chaplin’s rejection of the body as a location of the subject. Chaplin has incorporated the modernist desire to situate subjectivity in the text rather than the body. Paradoxically, this impulse expands the role of visuality, turning the celebrity image into a principal figure by which our culture understands itself. References Benjamin, Walter. The Arcades Project. Trans. Howard Eiland and Kevin McLaughlin. Cambridge: The Belknap Press of Harvard UP, 1999. Chaplin, Charles, dir. City Lights. RBC Films, 1931. –––. Modern Times. Perf. Chaplin and Paulette Goddard, United Artists, 1936. –––. “Shoulder Arms.” First National, 1918. –––. “The Vagabond.” Mutual, 1916. Dyer, Richard. Stars. London: BFI, 1998. Foucault, Michel. Aesthetics, Method, and Epistemology. Ed. James D. Faubion. New York: The New Press, 1998. Hansen, Miriam. Babel and Babylon. Cambridge: Harvard UP, 1991. Juengel, Scott J. “Face, Figure and Physiognomics: Mary Shelley’s Frankenstein and the Moving Image.” Novel 33.3 (Summer 2000): 353-67. Marshall, P. David. Celebrity and Power. Minneapolis: U of Minnesota P, 1998. Schickel, Richard. Intimate Strangers. New York: Fromm International Publishing Company, 1986. Susman, Warren I. Culture as History. New York: Pantheon Books, 1973. Zizek, Slavoj. Enjoy Your Symptom! Jacques Lacan in Hollywood and Out. New York: Routledge, 1992. Citation reference for this article MLA Style Goldman, Jonathan. "Double Exposure: Charlie Chaplin as Author and Celebrity." M/C Journal 7.5 (2004). <http://journal.media-culture.org.au/0411/05-goldman.php>. 
APA Style Goldman, J. (Nov. 2004) "Double Exposure: Charlie Chaplin as Author and Celebrity," M/C Journal, 7(5). Retrieved from <http://journal.media-culture.org.au/0411/05-goldman.php>.
APA, Harvard, Vancouver, ISO, and other styles