Journal articles on the topic 'Computational-hard real-life problem'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 44 journal articles for your research on the topic 'Computational-hard real-life problem.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and the bibliographic reference to the chosen work will be generated automatically in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Iantovics, Laszlo Barna, László Kovács, and Corina Rotar. "MeasApplInt - a novel intelligence metric for choosing the computing systems able to solve real-life problems with a high intelligence." Applied Intelligence 49 (April 1, 2019): 3491–511. https://doi.org/10.1007/s10489-019-01440-5.

Abstract:
Intelligent agent-based systems are applied to many difficult real-life problem-solving tasks in domains such as transport and healthcare. For many classes of difficult real-life problems, it is important to make an efficient selection of the computing systems that are able to solve the problems most intelligently. The selection of the appropriate computing systems should be based on an intelligence metric that is able to measure a system's intelligence for real-life problem solving. In this paper, we propose a novel universal metric called MeasApplInt that is able to measure and compare the real-life problem-solving machine intelligence of cooperative multiagent systems (CMASs). Based on their measured intelligence levels, two studied CMASs can be classified into the same or into different classes of intelligence. MeasApplInt is compared with a recent state-of-the-art metric called MetrIntPair. The comparison was based on the same principle of difficult problem-solving intelligence and the same pairwise/matched problem-solving intelligence evaluations. Our analysis shows that the main advantage of MeasApplInt versus the compared metric is its robustness. For evaluation purposes, we performed an illustrative case study considering two CMASs composed of simple reactive agents providing problem-solving intelligence at the systems' level. The two CMASs were designed for solving an NP-hard problem with many applications, in its standard, modified and generalized formulations. The conclusion of the case study, using the MeasApplInt metric, is that the studied CMASs have the same real-life problem-solving intelligence level. An additional experimental evaluation of the proposed metric is attached as an Appendix.
2

Konstantakopoulos, Grigorios D., Sotiris P. Gayialis, Evripidis P. Kechagias, Georgios A. Papadopoulos, and Ilias P. Tatsiopoulos. "A Multiobjective Large Neighborhood Search Metaheuristic for the Vehicle Routing Problem with Time Windows." Algorithms 13, no. 10 (2020): 243. http://dx.doi.org/10.3390/a13100243.

Abstract:
The Vehicle Routing Problem with Time Windows (VRPTW) is an NP-hard optimization problem which has been intensively studied by researchers due to its applications in real-life cases in the distribution and logistics sector. In this problem, customers define a time slot within which they must be served by vehicles of a standard capacity. The aim is to define cost-effective routes, minimizing both the number of vehicles and the total traveled distance. When we seek to minimize both attributes at the same time, the problem is considered multiobjective. Although numerous exact, heuristic and metaheuristic algorithms have been developed to solve the various vehicle routing problems, including the VRPTW, only a few of them treat these problems as multiobjective. In the present paper, a Multiobjective Large Neighborhood Search (MOLNS) algorithm is developed to solve the VRPTW. The algorithm is implemented in the Python programming language and is evaluated on Solomon's 56 benchmark instances with 100 customers, as well as on Gehring and Homberger's benchmark instances with 1000 customers. The results obtained from the algorithm are compared to the best published ones in order to validate the algorithm's efficiency and performance. The algorithm proves to be efficient both in the quality of results, as it offers three new optimal solutions on Solomon's dataset and produces near-optimal results in most instances, and in terms of computational time, as, even in cases with up to 1000 customers, good-quality results are obtained in less than 15 min. Since it has the potential to effectively solve real-life distribution problems, the present paper also discusses a practical real-life application of this algorithm.
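For readers unfamiliar with the metaheuristic family used here, a large neighborhood search alternates a destroy step that removes part of the solution with a repair step that rebuilds it. The following minimal Python sketch shows only that generic skeleton, under the assumption of user-supplied destroy, repair and cost routines; it is not the paper's MOLNS algorithm.

```python
def large_neighborhood_search(initial_routes, cost, destroy, repair, iters=1000):
    """Generic LNS skeleton: repeatedly remove some customers (destroy)
    and reinsert them (repair), keeping the best solution found."""
    best = current = initial_routes
    for _ in range(iters):
        partial, removed = destroy(current)   # drop a few customers from the routes
        candidate = repair(partial, removed)  # reinsert them, e.g. greedily
        if cost(candidate) <= cost(current):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best
```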
3

Joshi, Rajendra Prasad. "Analysis of Metaheuristic Solutions to the Response Time Variability Problem." Api Journal of Science 1 (December 31, 2024): 81–83. https://doi.org/10.3126/ajs.v1i1.75493.

Abstract:
The problem of variation in response time is known as the response time variability problem (RTVP). It is a combinatorial NP-hard problem with a broad range of real-life applications. The RTVP arises whenever events, jobs, clients or products need to be sequenced so as to minimize the variability of the time they wait for their next turn in obtaining the resources they need to advance. In the RTVP, the concern is to find a near-optimal sequence of jobs with the objective of minimizing the response time variability. Metaheuristic approaches to solve the RTVP include Multi-start (MS), the Greedy Randomized Adaptive Search Procedure (GRASP) and Particle Swarm Optimization (PSO). In this paper, the computational results of MS and GRASP are analyzed.
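For context, the response time variability of a cyclic sequence is commonly defined as the sum, over all items, of the squared deviations of the consecutive (cyclic) gaps between occurrences from the ideal gap D/d_i. The Python helper below illustrates this common textbook formulation; it is an illustrative sketch, not code from the paper.

```python
def response_time_variability(seq):
    """RTV of a cyclic sequence: for each symbol, sum the squared deviations
    of its consecutive (cyclic) gaps from the ideal gap D/d_i."""
    D = len(seq)
    positions = {}
    for idx, s in enumerate(seq):
        positions.setdefault(s, []).append(idx)
    rtv = 0.0
    for s, pos in positions.items():
        d = len(pos)
        if d < 2:
            continue  # a single occurrence has one cyclic gap equal to D, so it adds 0
        ideal = D / d
        gaps = [(pos[(k + 1) % d] - pos[k]) % D for k in range(d)]
        rtv += sum((g - ideal) ** 2 for g in gaps)
    return rtv

# Example: two copies of 'A' and one 'B' in a three-slot cycle gives RTV = 0.5
print(response_time_variability(list("AAB")))
```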
4

Hidri, Lotfi, and Ahmed M. Elsherbeeny. "Optimal Solution to the Two-Stage Hybrid Flow Shop Scheduling Problem with Removal and Transportation Times." Symmetry 14, no. 7 (2022): 1424. http://dx.doi.org/10.3390/sym14071424.

Abstract:
The two-stage hybrid flow shop scheduling problem with removal and transportation times is addressed in this paper. The maximum completion time is the objective function to be minimized. This scheduling problem models real-life situations encountered in manufacturing and industrial areas. On the other hand, the studied problem is challenging from a theoretical point of view, since it is NP-hard in the strong sense. In addition, the problem is symmetric in the following sense: scheduling from the second stage to the first provides the same optimal solution as the studied problem. This property allows extending all the proposed procedures to the symmetric problem in order to improve the quality of the obtained solution. Based on the existing literature and to the best of our knowledge, this study is the first one addressing the removal time and the transportation time in the hybrid flow shop environment simultaneously. In order to solve the studied problem optimally, a heuristic composed of two phases is proposed, and a new family of lower bounds is developed. In addition, an exact branch-and-bound algorithm is presented to solve the hard test problems, i.e., those instances left unsolved by the proposed heuristic. In order to evaluate the performance of the proposed procedures, an extensive experimental study is carried out over benchmark test problems with sizes of up to 200 jobs. The obtained computational results provide strong evidence that the presented procedures are very effective, since 90% of the test problems are solved optimally within a moderate time of 47.44 s. Furthermore, the unsolved test problems present a relative gap of only 2.4%.
5

Zhang, Wenze, and Chenyang Xu. "A comparative study between SA and GA in solving MTSP." Theoretical and Natural Science 18, no. 1 (2023): 61–70. http://dx.doi.org/10.54254/2753-8818/18/20230321.

Abstract:
The multiple traveling salesman problem (MTSP) is a combinatorial optimization and NP-hard problem. In practice, the computational resources required to solve such problems are usually prohibitive, and, in most cases, using heuristic algorithms is the only practical option. This paper implements genetic algorithms (GA) and simulated annealing (SA) to solve the MTSP and conducts an experimental study based on a benchmark from the TSPLIB instances to compare the performance of the two algorithms in practice. The results show that GA can achieve an acceptable solution in a shorter time for any of the MTSP cases and is more accurate when the data size is small. Meanwhile, SA is more robust and achieves a better solution than GA for complex MTSP cases, but it takes more time to converge. Therefore, the results indicate that it is hard to identify which algorithm is comprehensively superior to the other. However, the study provides an essential reference for developers who want to choose an algorithm to solve the MTSP in real life, helping them balance the algorithms' performance on the different metrics they value.
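The key behavioural difference between the two metaheuristics compared here is the acceptance rule: a GA evolves a population of tours, while SA walks from one tour to a neighbouring tour and occasionally accepts worse moves with a temperature-dependent probability. A minimal, generic SA loop, shown as an illustrative sketch rather than the paper's implementation, looks like this:

```python
import math
import random

def simulated_annealing(init, neighbor, cost, T0=100.0, alpha=0.995, iters=20000):
    """Minimal SA loop: always accept improvements, accept worse solutions
    with probability exp(-delta / T), and cool the temperature geometrically."""
    current = best = init
    T = T0
    for _ in range(iters):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = cand
            if cost(current) < cost(best):
                best = current
        T *= alpha
    return best
```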
6

Kwan, Raymond S. K., Ann S. K. Kwan, and Anthony Wren. "Evolutionary Driver Scheduling with Relief Chains." Evolutionary Computation 9, no. 4 (2001): 445–60. http://dx.doi.org/10.1162/10636560152642869.

Abstract:
Public transport driver scheduling problems are well known to be NP-hard. Although some mathematically based methods are being used in the transport industry, there is room for improvement. A hybrid approach incorporating a genetic algorithm (GA) is presented. The role of the GA is to derive a small selection of good shifts to seed a greedy schedule construction heuristic. A group of shifts called a relief chain is identified and recorded. The relief chain is then inherited by the offspring and used by the GA for schedule construction. The new approach has been tested using real-life data sets, some of which represent very large problem instances. The results are generally better than those compiled by experienced schedulers and are comparable to solutions found by integer linear programming (ILP). In some cases, solutions were obtained when the ILP failed within practical computational limits.
7

Yaşar, Abdurrahman, Muhammed Fatih Balin, Xiaojing An, Kaan Sancak, and Ümit V. Çatalyürek. "On Symmetric Rectilinear Partitioning." ACM Journal of Experimental Algorithmics 27 (December 31, 2022): 1–26. http://dx.doi.org/10.1145/3523750.

Abstract:
Even distribution of irregular workload to processing units is crucial for efficient parallelization in many applications. In this work, we are concerned with a spatial partitioning called rectilinear partitioning (also known as generalized block distribution). More specifically, we address the problem of symmetric rectilinear partitioning of two-dimensional domains, and utilize sparse matrices to model them. By symmetric, we mean both dimensions (i.e., the rows and columns of the matrix) are identically partitioned, yielding a tiling where the diagonal tiles (blocks) will be squares. We first show that finding an optimal solution to this problem is NP-hard, and we propose four heuristics to solve two different variants of this problem. To make the proposed techniques more applicable in real-life application scenarios, we further reduce their computational complexities by utilizing effective sparsification strategies together with an efficient sparse prefix-sum data structure. We experimentally show the proposed algorithms are efficient and effective on more than six hundred test matrices/graphs.
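The prefix-sum structure mentioned in the abstract is what makes evaluating candidate partitions cheap: once a prefix-sum table is built, the load of any rectangular tile can be read in constant time. A dense-matrix illustration in Python follows; the paper uses a sparse counterpart of this idea, so this sketch only conveys the principle.

```python
def prefix_sums(A):
    """P[i][j] = sum of A[0..i-1][0..j-1]; any rectangular tile load
    can then be read off in O(1)."""
    n, m = len(A), len(A[0])
    P = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            P[i + 1][j + 1] = A[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    return P

def tile_load(P, r0, r1, c0, c1):
    """Total weight of the tile covering rows r0..r1-1 and columns c0..c1-1."""
    return P[r1][c1] - P[r0][c1] - P[r1][c0] + P[r0][c0]
```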
8

Yang, Yajun, Zhongfei Li, Xin Wang, and Qinghua Hu. "Finding the Shortest Path with Vertex Constraint over Large Graphs." Complexity 2019 (February 19, 2019): 1–13. http://dx.doi.org/10.1155/2019/8728245.

Abstract:
Graphs are an important complex network model for describing the relationships among various entities in real applications, including knowledge graphs, social networks, and traffic networks. The shortest path query is an important problem over graphs and has been well studied. This paper studies a special case of the shortest path problem: finding the shortest path passing through a set of vertices specified by the user, which is NP-hard. Most existing methods calculate all permutations of the given vertices and then find the shortest one among these permutations. However, the computational cost is extremely expensive when the size of the graph or of the given set of vertices is large. In this paper, we first propose a novel exact heuristic algorithm working in a best-first search manner and then give two optimizing techniques to improve efficiency. Moreover, we propose an approximate heuristic algorithm in polynomial time for this problem over large graphs. We prove that the ratio bound of our approximate algorithm is 3. We confirm the efficiency of our algorithms by extensive experiments on real-life datasets. The experimental results validate that our algorithms always outperform the existing methods even when the size of the graph or of the given set of vertices is large.
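The permutation-enumeration baseline criticized in the abstract can be written in a few lines once pairwise shortest-path distances are available; its factorial growth in the number of required vertices is exactly what the proposed heuristics avoid. A hypothetical sketch, assuming a precomputed all-pairs distance table dist:

```python
from itertools import permutations

def shortest_route_through(dist, s, t, must_pass):
    """Baseline: try every visiting order of the required vertices and
    chain precomputed pairwise shortest-path distances dist[u][v]."""
    best = float("inf")
    for order in permutations(must_pass):
        hops = [s, *order, t]
        length = sum(dist[u][v] for u, v in zip(hops, hops[1:]))
        best = min(best, length)
    return best
```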
9

Peng, Liwen, and Yongguo Liu. "Feature Selection and Overlapping Clustering-Based Multilabel Classification Model." Mathematical Problems in Engineering 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/2814897.

Abstract:
Multilabel classification (MLC) learning, which is widely applied in real-world applications, is a very important problem in machine learning. Some studies show that a clustering-based MLC framework performs effectively compared to a nonclustering framework. In this paper, we explore the clustering-based MLC problem. Multilabel feature selection also plays an important role in classification learning because many redundant and irrelevant features can degrade performance and a good feature selection algorithm can reduce computational complexity and improve classification accuracy. In this study, we consider feature dependence and feature interaction simultaneously, and we propose a multilabel feature selection algorithm as a preprocessing stage before MLC. Typically, existing cluster-based MLC frameworks employ a hard cluster method. In practice, the instances of multilabel datasets are distinguished in a single cluster by such frameworks; however, the overlapping nature of multilabel instances is such that, in real-life applications, instances may not belong to only a single class. Therefore, we propose a MLC model that combines feature selection with an overlapping clustering algorithm. Experimental results demonstrate that various clustering algorithms show different performance for MLC, and the proposed overlapping clustering-based MLC model may be more suitable.
10

Withers, P. J., and T. M. Holden. "Diagnosing Engineering Problems with Neutrons." MRS Bulletin 24, no. 12 (1999): 17–23. http://dx.doi.org/10.1557/s0883769400053677.

Abstract:
In the past, many unexpected failures of components were due to poor quality control or a failure to calculate—or to miscalculate—the stresses or fatigue stresses a component would experience in service. Today, improved manufacturing, fracture mechanics, and computational finite element methods combine to provide a solid framework for reducing safety factors, enabling leaner design. In this context, residual stress—that is, stress that equilibrates within the structure and is always present at some level due to manufacturing—presents a real problem. It is difficult to predict and as hard to measure. If unaccounted for in design, these stresses can superimpose upon in-service stresses to result in unexpected failures. Neutron diffraction is one of the few methods able to provide maps of residual stress distributions deep within crystalline materials and engineering components. Neutron strain scanning, as the technique is called, is becoming an increasingly important tool for the materials scientist and engineer alike. Point, line-scan, area-scan, and full three-dimensional (3D) maps are being used to design new materials, optimize engineering processes, validate finite element models, predict component life, and diagnose engineering failures.
11

Xu, Dong, and Nazrul I. Shaikh. "A Heuristic Approach for Ranking Items Based on Inputs from Multiple Experts." International Journal of Information Systems and Social Change 9, no. 3 (2018): 1–22. http://dx.doi.org/10.4018/ijissc.2018070101.

Abstract:
This article describes how rank aggregation focuses on synthesizing a single ranked list based on rankings supplied by multiple judges. Such aggregations are widely applied in the areas of information retrieval, web search, and data mining. The problem of rank aggregation has been shown to be NP-hard, and this article presents a heuristic approach to create an aggregated ranking score for all items on the lists. The proposed heuristic is scalable and performs well. A computational study, as well as a real-life study involving the ranking of 147 engineering colleges in the US, is presented to elucidate its performance. The authors' key finding is that the quality of the solution is sensitive to (a) the number of judges available to rank, (b) how the items are assigned to judges, and (c) how consistent/inconsistent the judges are. All these factors are generally considered exogenous in most of the rank aggregation algorithms in the extant literature.
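As a point of reference for score-based heuristics of this kind, the simplest positional approach averages the rank each judge assigns to an item and sorts by that average. The toy sketch below is purely illustrative and is not the authors' heuristic.

```python
from collections import defaultdict

def mean_rank_aggregate(rankings):
    """Average the position each judge gives an item (judges may rank only
    a subset of the items) and order items by that average position."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ranking in rankings:                      # each ranking lists items, best first
        for pos, item in enumerate(ranking, start=1):
            totals[item] += pos
            counts[item] += 1
    return sorted(totals, key=lambda item: totals[item] / counts[item])

# Example with three judges over three items
print(mean_rank_aggregate([["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]))
```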
12

Park, Dongjoo, Laurence R. Rilett, and Changho Choi. "A class of multicriteria shortest path problems for real-time in-vehicle routing." Canadian Journal of Civil Engineering 34, no. 9 (2007): 1096–109. http://dx.doi.org/10.1139/l07-013.

Abstract:
In route guidance systems, fastest-path routing has typically been adopted because of its simplicity. However, empirical studies on route choice behavior have shown that drivers use numerous criteria in choosing a route. The objective of this paper is to develop computationally efficient algorithms for identifying a manageable subset of the nondominated (i.e., Pareto optimal) paths for real-time in-vehicle routing. The basic notion of the proposed approach is that (i) enumerating all nondominated paths is computationally too expensive, (ii) obtaining a stable mathematical representation of the driver's utility function is theoretically difficult and impractical, and (iii) identifying the optimal path given a nonlinear utility function is a nondeterministic polynomial time (NP)-hard problem. Consequently, a heuristic two-stage strategy that identifies multiple routes and then selects the near-optimal path may be effective and practical. As the first stage, we relax the uniqueness of the utility function by measuring the context-dependent preference using an entropy model and propose a branch-and-bound technique that discards most of the nondominated paths. To make sure that the paths identified are dissimilar in terms of links used, the portion of shared links between routes is limited. The test of the algorithm in a large real-life traffic network shows that the algorithm can significantly reduce computational complexity while identifying reasonable alternative paths. Key words: real-time vehicle routing, multiple routes, utility function, optimal path.
13

Boixel, Arthur, and Ronald De Haan. "On the Complexity of Finding Justifications for Collective Decisions." Proceedings of the AAAI Conference on Artificial Intelligence 35, no. 6 (2021): 5194–201. http://dx.doi.org/10.1609/aaai.v35i6.16656.

Abstract:
In a collective decision-making process, having the possibility to provide non-expert agents with a justification for why a target outcome is a good compromise given their individual preferences is an appealing idea. Such questions have recently been addressed in the computational social choice community at large, whether to explain the outcomes of a specific rule in voting theory or to seek transparency and accountability in multi-criteria decision making. Ultimately, the development of real-life applications based on these notions depends on their practical feasibility and on the scalability of the approach taken. In this paper, we provide computational complexity results that address the problem of finding and verifying justifications for collective decisions. In particular, we focus on the recent development of a general notion of justification for outcomes in voting theory. Such a justification consists of a step-by-step explanation, grounded in a normative basis, showing how the selection of the target outcome follows from the normative principles considered. We consider a language in which normative principles can be encoded, either as an explicit list of instances of the principles (by means of quantifier-free sentences) or in a succinct fashion (using quantifiers). We then analyse the computational complexity of identifying and checking justifications. For the case where the normative principles are given in the form of a list of instances, verifying the correctness of a justification is DP-complete and deciding on the existence of such a justification is complete for Σ₂ᵖ. For the case where the normative principles are given succinctly, deciding whether a justification is correct is in NEXP ∧ coNEXP and is NEXP-hard, and deciding whether a justification exists is in EXP with access to an NP oracle and is NEXP-hard.
14

Gaar, Elisabeth, Melanie Siebenhofer, and Angelika Wiegele. "An SDP-based approach for computing the stability number of a graph." Mathematical Methods of Operations Research 95, no. 1 (2022): 141–61. http://dx.doi.org/10.1007/s00186-022-00773-1.

Abstract:
Finding the stability number of a graph, i.e., the maximum number of vertices of which no two are adjacent, is a well-known NP-hard combinatorial optimization problem. Since this problem has several applications in real life, there is a need to find efficient algorithms to solve it. Recently, Gaar and Rendl enhanced semidefinite programming approaches to tighten the upper bound given by the Lovász theta function. This is done by carefully selecting some so-called exact subgraph constraints (ESC) and adding them to the semidefinite program of computing the Lovász theta function. First, we provide two new relaxations that allow to compute the bounds faster without substantial loss of the quality of the bounds. One of these two relaxations is based on including violated facets of the polytope representing the ESCs, the other one adds separating hyperplanes for that polytope. Furthermore, we implement a branch and bound (B&B) algorithm using these tightened relaxations in our bounding routine. We compare the efficiency of our B&B algorithm using the different upper bounds. It turns out that already the bounds of Gaar and Rendl drastically reduce the number of nodes to be explored in the B&B tree as compared to the Lovász theta bound. However, this comes with a high computational cost. Our new relaxations improve the run time of the overall B&B algorithm, while keeping the number of nodes in the B&B tree small.
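For reference, the Lovász theta function that these relaxations tighten is itself the optimal value of a semidefinite program; a standard formulation from the general literature (not quoted from this paper) is

```latex
\vartheta(G) \;=\; \max\; \langle J, X \rangle
\quad \text{s.t.} \quad \operatorname{tr}(X) = 1,\;\;
X_{ij} = 0 \;\; \forall \{i,j\} \in E,\;\; X \succeq 0,
```

where J is the all-ones matrix; the stability number always satisfies α(G) ≤ ϑ(G), which is why tightening ϑ(G) yields better upper bounds.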
15

Eberbach, Eugene. "$-Calculus of Bounded Rational Agents: Flexible Optimization as Search under Bounded Resources in Interactive Systems." Fundamenta Informaticae 68, no. 1-2 (2005): 47–102. https://doi.org/10.3233/fun-2005-681-203.

Abstract:
This paper presents a novel model for resource bounded computation based on process algebras. Such a model is called the $-calculus (cost calculus). Resource bounded computation attempts to find the best answer possible given operational constraints. The $-calculus provides a uniform representation for optimization in the presence of limited resources. It uses cost-optimization to find the best quality solutions while using a minimal amount of resources. A unique aspect of the approach is to propose a resource bounded process algebra as a generic problem solving paradigm targeting interactive AI applications. The goal of the $-calculus is to propose a computational model with a built-in performance measure as its central element. This measure allows not only the expression of solutions, but also provides the means to incrementally construct solutions for computationally hard, real-life problems. This is a dramatic contrast with other models like Turing machines, λ-calculus, or conventional process algebras. This highly expressive model must therefore be able to express approximate solutions. This paper describes the syntax and operational cost semantics of the calculus. A standard cost function has been defined for strongly and weakly congruent cost expressions. Example optimization problems are given which take into account the incomplete knowledge and the amount of resources used by an agent. The contributions of the paper are twofold: firstly, some necessary conditions for achieving global optimization by performing local optimization in time and/or space are found; this deals with incomplete information and complexity during problem solving. Secondly, developing an algebra which expresses current practices, e.g., neural nets, cellular automata, dynamic programming, evolutionary computation, or mobile robotics as limiting cases, provides a tool for exploring the theoretical underpinnings of these methods. As a result, hybrid methods can be naturally expressed and developed using the algebra.
16

Abdelmaguid, Tamer F. "Bi-Objective, Dynamic, Multiprocessor Open-Shop Scheduling: A Hybrid Scatter Search–Tabu Search Approach." Algorithms 17, no. 8 (2024): 371. http://dx.doi.org/10.3390/a17080371.

Abstract:
This paper presents a novel, multi-objective scatter search algorithm (MOSS) for a bi-objective, dynamic, multiprocessor open-shop scheduling problem (Bi-DMOSP). The considered objectives are the minimization of the maximum completion time (makespan) and the minimization of the mean weighted flow time. Both are particularly important for improving machines’ utilization and customer satisfaction level in maintenance and healthcare diagnostic systems, in which the studied Bi-DMOSP is mostly encountered. Since the studied problem is NP-hard for both objectives, fast algorithms are needed to fulfill the requirements of real-life circumstances. Previous attempts have included the development of an exact algorithm and two metaheuristic approaches based on the non-dominated sorting genetic algorithm (NSGA-II) and the multi-objective gray wolf optimizer (MOGWO). The exact algorithm is limited to small-sized instances; meanwhile, NSGA-II was found to produce better results compared to MOGWO in both small- and large-sized test instances. The proposed MOSS in this paper attempts to provide more efficient non-dominated solutions for the studied Bi-DMOSP. This is achievable via its hybridization with a novel, bi-objective tabu search approach that utilizes a set of efficient neighborhood search functions. Parameter tuning experiments are conducted first using a subset of small-sized benchmark instances for which the optimal Pareto front solutions are known. Then, detailed computational experiments on small- and large-sized instances are conducted. Comparisons with the previously developed NSGA-II metaheuristic demonstrate the superiority of the proposed MOSS approach for small-sized instances. For large-sized instances, it proves its capability of producing competitive results for instances with low and medium density.
17

Iantovics, László Barna. "Black-Box-Based Mathematical Modelling of Machine Intelligence Measuring." Mathematics 9, no. 6 (2022): 681. https://doi.org/10.3390/math9060681.

Abstract:
Current machine intelligence metrics rely on different philosophies, hindering their effective comparison. There is no standardization of what machine intelligence is or of what should be measured to quantify it. In this study, we investigate the measurement of intelligence from the viewpoint of real-life difficult-problem-solving abilities, and we highlight the importance of being able to make accurate and robust comparisons between multiple cooperative multiagent systems (CMASs) using a novel metric. A recent metric presented in the scientific literature, called MetrIntPair, is capable of comparing the intelligence of only two CMASs in an application. In this paper, we propose a generalization of that metric called MetrIntPairII. MetrIntPairII is based on pairwise problem-solving intelligence comparisons (for the same problem, the problem-solving intelligence of the studied CMASs is evaluated experimentally in pairs). The pairwise intelligence comparison is proposed to decrease the necessary number of experimental intelligence measurements. MetrIntPairII has the same properties as MetrIntPair, with the main advantage that it can be applied to any number of CMASs while conserving the accuracy of the comparison and exhibiting enhanced robustness. An important property of the proposed metric is its universality, as it can be applied as a black-box method to intelligent agent-based systems (IABSs) in general, without depending on the IABS architecture. To demonstrate the effectiveness of the MetrIntPairII metric, we provide a representative experimental study comparing the intelligence of several CMASs composed of agents specialized in solving an NP-hard problem.
18

Iantovics, László Barna, Roumen Kountchev, and Gloria Cerasela Crișan. "ExtrIntDetect—A New Universal Method for the Identification of Intelligent Cooperative Multiagent Systems with Extreme Intelligence." Symmetry 11, no. 9 (2019): 1123. https://doi.org/10.3390/sym11091123.

Abstract:
In this research, we define a specific type of performance of intelligent agent-based systems (IABSs) in terms of a difficult-problem-solving intelligence measure. Many studies present the successful application of intelligent cooperative multiagent systems (ICMASs) for efficient, flexible and robust solving of difficult real-life problems. Based on a comprehensive study of the scientific literature, we conclude that there is no unanimous view in the scientific literature on machine intelligence, or on what an intelligence metric must measure. Metrics presented in the scientific literature are based on diverse paradigms. In our approach, we assume that the measurement of intelligence is based on the ability to solve difficult problems. In our opinion, the measurement of intelligence in this context is important, as it allows the differentiation between ICMASs based on their degree of intelligence in problem-solving. The recent OutIntSys method presented in the scientific literature can identify systems with outlier high and outlier low intelligence from a set of studied ICMASs. In this paper, a novel universal method called ExtrIntDetect, defined on the basis of a specific series of computing processes and analyses, is proposed for the detection of the ICMASs with statistical outlier low and high problem-solving intelligence from a given set of studied ICMASs. ExtrIntDetect eliminates the disadvantage of the OutIntSys method with respect to its limited robustness. The recent symmetric MetrIntSimil metric presented in the literature is capable of measuring and comparing the intelligence of large numbers of ICMASs based on their respective problem-solving intelligence in order to classify them into intelligence classes. Systems whose intelligence does not statistically differ are classified as belonging to the same class of intelligent systems. Systems classified in the same intelligence class are therefore able to solve difficult problems using similar levels of intelligence. One disadvantage of the symmetric MetrIntSimil lies in the fact that it is not able to detect outlier intelligence. Based on this fact, the ExtrIntDetect method could be used as an extension of the MetrIntSimil metric. To validate and evaluate the ExtrIntDetect method, an experimental evaluation study on six ICMASs is presented and discussed.
19

Bhaskaran, S., Raja Marappan, and B. Santhi. "Design and Comparative Analysis of New Personalized Recommender Algorithms with Specific Features for Large Scale Datasets." Mathematics 8, no. 7 (2020): 1106. http://dx.doi.org/10.3390/math8071106.

Abstract:
Nowadays, because of the tremendous amount of information that humans and machines produce every day, it has become increasingly hard to choose the more relevant content across a broad range of choices. This research focuses on the design of two different intelligent optimization methods using Artificial Intelligence and Machine Learning for real-life applications that are used to improve the process of generating recommendations. In the first method, modified cluster-based intelligent collaborative filtering is applied with sequential clustering that operates on the values of the dataset, the user's neighborhood set, and the size of the recommendation list. This strategy splits the given data set into different subsets or clusters, and the recommendation list is extracted from each group for constructing a better recommendation list. In the second method, a specific-features-based customized recommender works in the training and recommendation steps by applying a split-and-conquer strategy on the problem datasets, which are clustered into a minimum number of clusters, and a better recommendation list is created among all the clusters. This strategy automatically tunes the tuning parameter λ, which serves the role of supervised learning in generating a better recommendation list for large datasets. The quality of the proposed recommenders for some of the large-scale datasets is improved compared to some of the well-known existing methods. The proposed methods work well when λ = 0.5 with the size of the recommendation list |L| = 30 and the size of the neighborhood |S| < 30. For a large value of |S|, the significant difference of the root mean square error becomes smaller in the proposed methods. For large-scale datasets, when simulating the proposed methods with varying user sizes and when the user size exceeds 500, the experimental results show that better values of the metrics are obtained and that proposed method 2 performs better than proposed method 1. The significant differences are obtained in these methods because the structure of computation of the methods depends on the number of user attributes, λ, the number of bipartite graph edges, and |L|. The better values of the (Precision, Recall) metrics obtained with size 3000 for the large-scale Book-Crossing dataset in the proposed methods are (0.0004, 0.0042) and (0.0004, 0.0046), respectively. The average computational time of the proposed methods is less than 10 seconds for the large-scale datasets, and they yield better performance compared to the well-known existing methods.
20

Cicerone, Serafino, and Gabriele Di Stefano. "Special Issue on “Graph Algorithms and Applications”." Algorithms 14, no. 5 (2021): 150. http://dx.doi.org/10.3390/a14050150.

Abstract:
Much of the data encountered in real life naturally exhibits structural or connectivity properties. Typical examples include biological data, communication network data, image data, etc. Graphs provide a natural way to represent and analyze these types of data and their relationships. For instance, more recently, graphs have found new applications in solving problems for emerging research fields such as social network analysis, design of robust computer network topologies, frequency allocation in wireless networks, and bioinformatics. Unfortunately, the related algorithms usually suffer from high computational complexity, since some of these problems are NP-hard. Therefore, in recent years, many graph models and optimization algorithms have been proposed to achieve a better balance between efficacy and efficiency. The aim of this Special Issue is to provide an opportunity for researchers and engineers from both academia and industry to publish their latest and original results on graph models, algorithms, and applications to problems in the real world, with a focus on optimization and computational complexity.
21

Sbai, Ines, and Saoussen Krichen. "A Novel Adaptive Genetic Algorithm for Dynamic Vehicle Routing Problem With Backhaul and Two-Dimensional Loading Constraints." International Journal of Applied Metaheuristic Computing 13, no. 1 (2022): 1–34. http://dx.doi.org/10.4018/ijamc.2022010103.

Abstract:
In this paper, we consider an extension of the Dynamic Vehicle Routing Problem with Backhauls integrated with a two-dimensional loading problem, called the DVRPB with 2D loading constraints (2L-DVRPB). In the VRPB, a vehicle can deliver goods to customers (linehaul) and then collect goods from customers (backhaul) to bring back to the depot. When customer demand is formed by a set of two-dimensional items, the problem is treated as a 2L-VRPB. The 2L-VRPB has been studied in the static case. However, in most real-life applications, new backhaul customer requests can arise over time and thus perturb the optimal routing schedule that was originally devised. This problem has not been analysed so far in the literature. The 2L-DVRPB is an NP-hard problem, so we propose to use a genetic algorithm for the routing and packing problems. We applied our approach in a real case study of the Regional Post Office of the city of Jendouba in the north of Tunisia. Results indicate that the AGA approach is the best approach in terms of solution quality for a real-world routing system.
22

Rico, Noelia, Camino R. Vela, Raúl Pérez-Fernández, and Irene Díaz. "Reducing the Computational Time for the Kemeny Method by Exploiting Condorcet Properties." Mathematics 9, no. 12 (2021): 1380. http://dx.doi.org/10.3390/math9121380.

Abstract:
Preference aggregation, and in particular ranking aggregation, is mainly studied in the field of social choice theory but extensively applied in a variety of contexts. Among the most prominent methods for ranking aggregation, the Kemeny method has been proved to be the only one that satisfies desirable properties such as neutrality, consistency and the Condorcet condition at the same time. Unfortunately, the problem of finding a Kemeny ranking is NP-hard, which prevents practitioners from using it in real-life problems. The state of the art of exact algorithms for the computation of the Kemeny ranking experienced a major boost last year with the presentation of an algorithm that provides a searching-time guarantee for up to 13 alternatives. In this work, we propose an enhanced version of this algorithm based on pruning the search space when certain Condorcet properties hold. This enhanced version greatly improves the performance in terms of runtime.
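The hardness referred to above stems from the fact that an exact Kemeny consensus minimizes the total Kendall tau distance over all permutations of the alternatives. The naive Python illustration below conveys the combinatorial explosion; the paper's contribution is precisely to prune this search using Condorcet properties.

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of item pairs on which two rankings over the same items disagree."""
    pos1 = {x: i for i, x in enumerate(r1)}
    pos2 = {x: i for i, x in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def kemeny_consensus(rankings):
    """Exhaustive Kemeny consensus: feasible only for a handful of alternatives."""
    items = rankings[0]
    return min(permutations(items),
               key=lambda cand: sum(kendall_tau(list(cand), r) for r in rankings))
```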
23

Cheikh, Salmi, and Jessie J. Walker. "Solving Task Scheduling Problem in the Cloud Using a Hybrid Particle Swarm Optimization Approach." International Journal of Applied Metaheuristic Computing 13, no. 1 (2022): 1–25. http://dx.doi.org/10.4018/ijamc.2022010105.

Abstract:
The synergistic confluence of pervasive sensing, computing, and networking is generating heterogeneous data at unprecedented scale and complexity. Cloud computing has emerged in the last two decades as a unique storage and computing resource to support a diverse assortment of applications, and numerous organizations are migrating to the cloud to store and process their information. When the cloud infrastructure and resources are insufficient to satisfy end-users' requests, scheduling mechanisms are required. Task scheduling, especially in a distributed and heterogeneous system, is an NP-hard problem, since various task parameters must be considered for an appropriate schedule. In this paper we propose a hybrid PSO and extremal-optimization-based approach to task scheduling in the cloud. The algorithm optimizes the makespan, which is an important criterion when scheduling a number of tasks on different virtual machines. Experiments on synthetic and real-life workloads show the capability of the method to successfully schedule tasks, and it outperforms many known methods from the state of the art.
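Makespan, the criterion optimized here, is simply the completion time of the most heavily loaded virtual machine under a given task-to-VM assignment. A small illustrative computation follows; the task-length and VM-speed representation is an assumption for the sketch, not the paper's data model.

```python
def makespan(assignment, task_length, vm_speed):
    """Completion time of the busiest VM: assignment[i] is the VM index of task i,
    task lengths are in work units and VM speeds in work units per second."""
    finish = [0.0] * len(vm_speed)
    for task, vm in enumerate(assignment):
        finish[vm] += task_length[task] / vm_speed[vm]
    return max(finish)

# Two VMs, four tasks: VM 0 finishes at 7.0, VM 1 at 6.0, so the makespan is 7.0
print(makespan([0, 1, 0, 1], [8.0, 4.0, 6.0, 2.0], [2.0, 1.0]))
```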
24

Cattafi, Massimiliano, Marco Gavanelli, Maddalena Nonato, Stefano Alvisi, and Marco Franchini. "Optimal placement of valves in a water distribution network with CLP(FD)." Theory and Practice of Logic Programming 11, no. 4-5 (2011): 731–47. http://dx.doi.org/10.1017/s1471068411000275.

Abstract:
This paper presents a new application of logic programming to a real-life problem in hydraulic engineering. The work is developed as a collaboration of computer scientists and hydraulic engineers, and applies Constraint Logic Programming to solve a hard combinatorial problem. This application deals with one aspect of the design of a water distribution network, i.e., the valve isolation system design. We take the formulation of the problem by Giustolisi and Savić (2008. Optimal design of isolation valve system for water distribution networks. In Proceedings of the 10th Annual Water Distribution Systems Analysis Conference WDSA2008, J. Van Zyl, A. Ilemobade, and H. Jacobs, Eds.) and show how, thanks to constraint propagation, we can get better solutions than the best solution known in the literature for the Apulian distribution network. We believe that the area of so-called hydroinformatics can benefit from the techniques developed in Constraint Logic Programming and possibly from other areas of logic programming, such as Answer Set Programming.
25

Juan, Angel Alejandro, Canan Gunes Corlu, Rafael David Tordecilla, Rocio de la Torre, and Albert Ferrer. "On the Use of Biased-Randomized Algorithms for Solving Non-Smooth Optimization Problems." Algorithms 13, no. 1 (2019): 8. http://dx.doi.org/10.3390/a13010008.

Abstract:
Soft constraints are quite common in real-life applications. For example, in freight transportation, the fleet size can be enlarged by outsourcing part of the distribution service and some deliveries to customers can be postponed as well; in inventory management, it is possible to consider stock-outs generated by unexpected demands; and in manufacturing processes and project management, it is frequent that some deadlines cannot be met due to delays in critical steps of the supply chain. However, capacity-, size-, and time-related limitations are included in many optimization problems as hard constraints, while it would usually be more realistic to consider them as soft ones, i.e., they can be violated to some extent by incurring a penalty cost. Most of the time, this penalty cost will be nonlinear and even noncontinuous, which might transform the objective function into a non-smooth one. Despite their many practical applications, non-smooth optimization problems are quite challenging, especially when the underlying optimization problem is NP-hard in nature. In this paper, we propose the use of biased-randomized algorithms as an effective methodology to cope with NP-hard and non-smooth optimization problems in many practical applications. Biased-randomized algorithms extend constructive heuristics by introducing a nonuniform randomization pattern into them. Hence, they can be used to explore promising areas of the solution space without the limitations of gradient-based approaches, which assume the existence of smooth objective functions. Moreover, biased-randomized algorithms can be easily parallelized, thus employing short computing times while exploring a large number of promising regions. This paper discusses these concepts in detail, reviews existing work in different application areas, and highlights current trends and open research lines.
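A common way to realize the nonuniform randomization pattern described above is to sort the candidate moves of a constructive heuristic by cost and then sample an index from a quasi-geometric distribution, so the best candidates are favoured without being chosen deterministically. A minimal sketch of that selection step, with the parameter name beta assumed for illustration:

```python
import math
import random

def biased_pick(sorted_candidates, beta=0.3):
    """Pick from a cost-sorted candidate list with a quasi-geometric bias:
    small indices (good candidates) are likely, but any index can be drawn."""
    n = len(sorted_candidates)
    u = 1.0 - random.random()                        # u in (0, 1], avoids log(0)
    idx = int(math.log(u) / math.log(1.0 - beta)) % n
    return sorted_candidates[idx]
```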
26

Valouxis, Christos, Christos Gogos, Angelos Dimitsas, Petros Potikas, and Anastasios Vittas. "A Hybrid Exact–Local Search Approach for One-Machine Scheduling with Time-Dependent Capacity." Algorithms 15, no. 12 (2022): 450. http://dx.doi.org/10.3390/a15120450.

Abstract:
Machine scheduling is a hard combinatorial problem with many manifestations in real life. Depending on the schedule followed, there is a high possibility that machine installations operate sub-optimally. In this work, we examine the problem of a single machine with time-dependent capacity that performs jobs of deterministic durations, while for each job its due time is known in advance. The objective is to minimize the aggregated tardiness over all tasks. The problem was motivated by the need to schedule the charging times of electric vehicles effectively. We formulate an integer programming model that clearly describes the problem and a constraint programming model capable of effectively solving it. Due to the usage of interval variables, global constraints, a powerful constraint programming solver, and a heuristic we have identified, which we call the "due times rule", the constraint programming model can reach excellent solutions. Furthermore, we employ a hybrid approach that exploits three local search improvement procedures in a schema where the constraint programming part of the solver plays a central role. These improvement procedures exhaustively enumerate portions of the search space by exchanging consecutive jobs with a single job of the same duration, moving cost-incurring jobs to earlier times in a consecutive sequence of jobs, or even exploiting periods where capacity is not fully utilized to rearrange jobs. On the other hand, subproblems are given to the exact constraint programming solver, allowing freedom of movement only in certain parts of the schedule, either in vertical ribbons of the time axis or in groups of consecutive sequences of jobs. Experiments on publicly available data show that our approach is highly competitive and achieves new best results in many problem instances.
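The aggregated-tardiness objective is easy to state: process the jobs in some order and, for each job, add the amount by which its completion time exceeds its due time. The simplified Python illustration below assumes a unit-capacity machine and deliberately ignores the time-dependent capacity that makes the paper's problem hard.

```python
def total_tardiness(sequence, duration, due):
    """Sum of max(0, completion - due) over a job sequence on one machine."""
    t, tardiness = 0, 0
    for job in sequence:
        t += duration[job]
        tardiness += max(0, t - due[job])
    return tardiness

# Jobs finish at times 3, 5 and 9 against due times 3, 4 and 7: tardiness 0 + 1 + 2 = 3
print(total_tardiness([0, 1, 2], [3, 2, 4], [3, 4, 7]))
```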
27

Ahmid, Ali, Thien-My Dao, and Ngan Van Le. "Enhanced Hyper-Cube Framework Ant Colony Optimization for Combinatorial Optimization Problems." Algorithms 14, no. 10 (2021): 286. http://dx.doi.org/10.3390/a14100286.

Abstract:
Solving combinatorial optimization problems is a common practice in real-life engineering applications. Trusses, cranes, and composite laminated structures are some good examples that fall under this category of optimization problems. These examples share the common feature of a discrete design domain, which turns them into a set of NP-hard optimization problems. Determining the right optimization algorithm for such problems is a crucial point that tends to impact the overall cost of the design process. Furthermore, reinforcing the performance of a prospective optimization algorithm reduces the design cost. In the current study, a comprehensive assessment criterion has been developed to assess the performance of meta-heuristic (MH) solutions in the domain of structural design. Thereafter, the proposed criterion was employed to compare five different variants of Ant Colony Optimization (ACO), using the well-known structural optimization problem of laminate Stacking Sequence Design (SSD). The initial results of the comparison study reveal that the Hyper-Cube Framework (HCF) ACO variant outperforms the others. Consequently, an investigation of further improvement led to introducing an enhanced version of HCFACO (or EHCFACO). Eventually, the performance assessment of the EHCFACO variant showed that the average practical reliability became more than twice that of the standard ACO, and the normalized price decreased from 51.17 to 28.92.
28

Chitre, Vidya. "Exploring Machine Learning Techniques for Predictive Analytics in Computational Mathematics." Panamerican Mathematical Journal 34, no. 2 (2024): 1–19. http://dx.doi.org/10.52783/pmj.v34.i2.919.

Abstract:
Predictive analytics based on machine learning (ML) has become a useful tool in computational mathematics: it allows building models that forecast future outcomes from historical data. This paper surveys several ML methods used for predictive analytics within computational mathematics, focusing on how they work, what they can be used for, and how well they perform. We examine regression analysis, neural networks, decision trees, support vector machines (SVM), and ensemble methods in depth, considering both their theoretical foundations and their use in real-life applications. Regression analysis, including linear and polynomial regression, makes it possible to understand how factors are related, but its simplicity can limit its applicability to complex, non-linear situations. Neural networks, inspired by biological systems, are very good at making predictions, especially on large datasets with complex patterns, but their training is computationally intensive and requires expertise to avoid overfitting. Decision trees work well for classification and regression tasks because they are simple and easy to interpret, yet they can become unstable when small changes occur in the data. Support vector machines, grounded in strong theory, work best in high-dimensional spaces and are especially good at classification problems. Ensemble methods, such as random forests and gradient boosting, combine multiple models to make predictions more accurate and reliable, although they can be resource-intensive and hard to tune. By comparing the pros and cons of each method, this paper gives a full picture of the state of predictive analytics for computational mathematics. The findings are intended to help academics and practitioners choose the right machine learning methods for their specific predictive modeling needs, ultimately making predictive analytics in computational mathematics more accurate and efficient.
29

Hammad, Mohamed, Samia Allaoua Chelloug, Reem Alkanhel, et al. "Automated Detection of Myocardial Infarction and Heart Conduction Disorders Based on Feature Selection and a Deep Learning Model." Sensors 22, no. 17 (2022): 6503. http://dx.doi.org/10.3390/s22176503.

Abstract:
An electrocardiogram (ECG) is an essential piece of medical equipment that helps diagnose various heart-related conditions in patients. An automated diagnostic tool is required to detect significant episodes in long-term ECG records. It is a very challenging task for cardiologists to analyze long-term ECG records in a short time. Therefore, a computer-based diagnosis tool is required to identify crucial episodes. Myocardial infarction (MI) and conduction disorders (CDs), sometimes known as heart blocks, are medical diseases that occur when a coronary artery becomes fully or suddenly stopped or when blood flow in these arteries slows dramatically. As a result, several researchers have utilized deep learning methods for MI and CD detection. However, there are one or more of the following challenges when using deep learning algorithms: (i) struggles with real-life data, (ii) the time after the training phase also requires high processing power, (iii) they are very computationally expensive, requiring large amounts of memory and computational resources, and it is not easy to transfer them to other problems, (iv) they are hard to describe and are not completely understood (black box), and (v) most of the literature is based on the MIT-BIH or PTB databases, which do not cover most of the crucial arrhythmias. This paper proposes a new deep learning approach based on machine learning for detecting MI and CDs using large PTB-XL ECG data. First, all challenging issues of these heart signals have been considered, as the signal data are from different datasets and the data are filtered. After that, the MI and CD signals are fed to the deep learning model to extract the deep features. In addition, a new custom activation function is proposed, which has fast convergence to the regular activation functions. Later, these features are fed to an external classifier, such as a support vector machine (SVM), for detection. The efficiency of the proposed method is demonstrated by the experimental findings, which show that it improves satisfactorily with an overall accuracy of 99.20% when using a CNN for extracting the features with an SVM classifier.
30

Comakli Sokmen, Özlem, and Mustafa Yılmaz. "The new approaches for solving hierarchical Chinese postman problem with stochastic travel times." Journal of Intelligent & Fuzzy Systems, February 28, 2023, 1–22. http://dx.doi.org/10.3233/jifs-222097.

Abstract:
The hierarchical Chinese postman problem (HCPP) aims to find the shortest tour or tours passing through arcs that are classified according to a precedence relationship. HCPP, which has a wide application area in real-life problems such as snow removal and the routing of patrol vehicles, where precedence relations are important, belongs to the NP-hard problem class. In real-life problems, the travel time between two locations in city traffic varies due to factors such as traffic jams, weather conditions, etc. Therefore, travel times are uncertain. In this study, the HCPP was handled with a chance-constrained stochastic programming approach, and a new type of problem, the hierarchical Chinese postman problem with stochastic travel times, was introduced. Due to the NP-hard nature of the problem, the developed mathematical model with stochastic parameter values cannot find proper solutions for large-size problems within an appropriate time interval. Therefore, two new solution approaches, a heuristic method based on the Greedy Search algorithm and a meta-heuristic method based on ant colony optimization, were proposed in this study. These new algorithms were tested on modified benchmark instances and on randomly generated problem instances with 817 edges. The performance of the algorithms was compared in terms of solution quality and computational time.
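In chance-constrained programs of this kind, a constraint on uncertain travel times only has to hold with a prescribed probability α. Under the common textbook assumption of independent, normally distributed travel times (not necessarily the exact distributional model used in the paper), such a constraint and its deterministic equivalent read

```latex
\Pr\Big(\sum_{(i,j)} \tilde{t}_{ij}\, x_{ij} \le T\Big) \ge \alpha
\quad \Longleftrightarrow \quad
\sum_{(i,j)} \mu_{ij}\, x_{ij} \;+\; z_{\alpha} \sqrt{\sum_{(i,j)} \sigma_{ij}^{2}\, x_{ij}} \;\le\; T,
```

where the x_ij are binary arc-traversal variables, μ_ij and σ²_ij are the mean and variance of the travel time on arc (i, j), and z_α is the α-quantile of the standard normal distribution.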
31

Crisan, Gloria Cerasela, Laszlo Barna Iantovics, and László Kovács. "On the neutrality of two symmetric TSP solvers toward instance specification." SCIENCE CHINA Information Sciences 62, no. 219103 (2019). https://doi.org/10.1007/s11432-018-9829-5.

Abstract:
Currently, many computationally difficult problems can be solved using very efficient methods. Some of these state-of-the-art methods have online implementations. The research question addressed herein is: how sensitive are such implementations if the input data are preprocessed in a specific manner? The symmetric traveling salesman problem (sTSP), which is an NP-hard problem with many real-life applications, is studied. The proposed method includes systematic transformation, using rotations and reflections, of the vertex order of sTSP instances. This model was used for investigating the neutrality of Concorde [1] (currently the best exact sTSP solver) and a Lin–Kernighan implementation [2], both from NEOS [3] (the state-of-the-art collection of online tools in computational optimization).
32

Gillen, Colin P., Alexander Veremyev, Oleg A. Prokopyev, and Eduardo L. Pasiliao. "Fortification Against Cascade Propagation Under Uncertainty." INFORMS Journal on Computing, March 9, 2021. http://dx.doi.org/10.1287/ijoc.2020.0992.

Full text
Abstract:
Network cascades represent a number of real-life applications: social influence, electrical grid failures, viral spread, and so on. The commonality between these phenomena is that they begin from a set of seed nodes and spread to other regions of the network. We consider a variant of a critical node detection problem dubbed the robust critical node fortification problem, wherein the decision maker wishes to fortify nodes (within a budget) to limit the spread of cascading behavior under uncertain conditions. In particular, the arc weights (how much influence one node has on another in the cascade process) are uncertain but are known to lie in some range bounded by a worst-case budget uncertainty. This problem is shown to be NP-hard even in the deterministic case. We formulate a mixed-integer program (MIP) to solve the deterministic problem and improve its continuous relaxation via nonlinear constraints and convexification. The robust problem is computationally more difficult, and we present an MIP-based expand-and-cut exact solution algorithm, in which the expansion is enhanced by cutting planes, which are themselves tied to the expansion process. Insights from these exact solutions motivate two novel (interrelated) centrality measures, and a centrality-based heuristic that obtains high-quality solutions within a few seconds. Finally, extensive computational results are given to validate our theoretical developments as well as provide insights into structural properties of the robust problem and its solution.
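As a rough illustration of cascade propagation with fortified nodes, the sketch below simulates a generic weighted-threshold cascade in which fortified nodes never activate. The model, weights, and thresholds are assumptions for illustration and do not reproduce the paper's MIP formulation or its uncertainty set.

```python
# Illustrative weighted-threshold cascade with fortified nodes (a generic model,
# not the paper's exact formulation). Fortified nodes never activate, which is
# how fortification limits the spread from the seed set.
def cascade(adj, weights, thresholds, seeds, fortified):
    """adj[u] = neighbours of u; weights[(u, v)] = influence of u on v."""
    active = set(seeds) - set(fortified)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v in active or v in fortified:
                continue
            pressure = sum(weights[(u, v)] for u in adj[v] if u in active)
            if pressure >= thresholds[v]:
                active.add(v)
                changed = True
    return active

adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
weights = {(1, 2): 0.6, (2, 1): 0.6, (2, 3): 0.7, (3, 2): 0.7, (3, 4): 0.9, (4, 3): 0.9}
thresholds = {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}
print(cascade(adj, weights, thresholds, seeds={1}, fortified=set()))   # {1, 2, 3, 4}
print(cascade(adj, weights, thresholds, seeds={1}, fortified={3}))     # {1, 2}
```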
APA, Harvard, Vancouver, ISO, and other styles
33

Salemi, Hosseinali, and Austin Buchanan. "Solving the Distance-Based Critical Node Problem." INFORMS Journal on Computing, January 19, 2022. http://dx.doi.org/10.1287/ijoc.2021.1136.

Full text
Abstract:
In critical node problems, the task is to identify a small subset of so-called critical nodes whose deletion maximally degrades a network’s “connectivity” (however that is measured). Problems of this type have been widely studied, for example, for limiting the spread of infectious diseases. However, existing approaches for solving them have typically been limited to networks having fewer than 1,000 nodes. In this paper, we consider a variant of this problem in which the task is to delete b nodes so as to minimize the number of node pairs that remain connected by a path of length at most k. With the techniques developed in this paper, instances with up to 17,000 nodes can be solved exactly. We introduce two integer programming formulations for this problem (thin and path-like) and compare them with an existing recursive formulation. Although the thin formulation generally has an exponential number of constraints, it admits an efficient separation routine. Also helpful is a new, more general preprocessing procedure that, on average, fixes three times as many variables as before. Summary of Contribution: In this paper, we consider a distance-based variant of the critical node problem in which the task is to delete b nodes so as to minimize the number of node pairs that remain connected by a path of length at most k. This problem is motivated by applications in social networks, telecommunications, and transportation networks. In our paper, we aim to solve large-scale instances of this problem. Standard out-of-the-box approaches are unable to solve such instances, requiring new integer programming models, methodological contributions, and other computational insights. For example, we propose an algorithm for finding a maximum independent set of simplicial nodes that runs in time O(nm), which we use in a preprocessing procedure; we also prove that the separation problem associated with one of our integer programming models is NP-hard. We apply our branch-and-cut implementation to real-life networks from a variety of domains and observe speedups over previous approaches.
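The objective described here is easy to evaluate for a candidate deletion set with breadth-first search truncated at depth k; the snippet below does exactly that on a toy path graph. It is evaluation code only, unrelated to the paper's formulations and branch-and-cut machinery.

```python
# Sketch of the objective used above: after deleting a set of nodes, count the
# node pairs that remain connected by a path of length at most k (illustrative
# evaluation code only, not the paper's exact algorithm).
from collections import deque

def close_pairs_after_deletion(adj, deleted, k):
    nodes = [v for v in adj if v not in deleted]
    pairs = 0
    for s in nodes:
        # breadth-first search truncated at depth k in the surviving graph
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if dist[u] == k:
                continue
            for w in adj[u]:
                if w not in deleted and w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        pairs += sum(1 for v in dist if v != s)
    return pairs // 2            # each unordered pair counted twice

path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}   # toy path graph
print(close_pairs_after_deletion(path, deleted=set(), k=2))   # 7
print(close_pairs_after_deletion(path, deleted={3}, k=2))     # 2
```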
APA, Harvard, Vancouver, ISO, and other styles
34

Molenbruch, Yves, Kris Braekers, Ohad Eisenhandler, and Mor Kaspi. "The Electric Dial-a-Ride Problem on a Fixed Circuit." Transportation Science, April 25, 2023. http://dx.doi.org/10.1287/trsc.2023.1208.

Full text
Abstract:
Shared mobility services involving electric autonomous shuttles have increasingly been implemented in recent years. Because of various restrictions, these services are currently offered on fixed circuits and operated with fixed schedules. This study introduces a service variant with flexible stopping patterns and schedules. Specifically, in the electric dial-a-ride problem on a fixed circuit (eDARP-FC), a fleet of capacitated electric shuttles operates on a given circuit consisting of a recharging depot and a sequence of stations where passengers can be picked up and dropped off. The shuttles may perform multiple laps, between which they may need to recharge. The goal of the problem is to determine the vehicles’ stopping sequences and schedules, including recharging plans, so as to minimize a weighted sum of the total passenger excess time and the total number of laps. The eDARP-FC is formulated as a nonstandard lap-based mixed integer linear programming and is shown to be NP-Hard. Efficient polynomial time algorithms are devised for two special scheduling subproblems. These algorithms and several heuristics are then applied as subroutines within a large neighborhood search metaheuristic. Experiments on instances derived from a real-life system demonstrate that the flexible service results in a 32%–75% decrease in the excess time at the same operational costs. Funding: This work was supported by the Fonds Wetenschappelijk Onderzoek [Project Data-Driven Logistics: Grant S007318N; Project Optimizing the Design of a Hybrid Urban Mobility System: Grant G020222N; and Grant OR4Logistics]. Y. Molenbruch is partially funded by the Fonds Wetenschappelijk Onderzoek [Grant 1202719N]. The computational resources and services used in this work were provided by the Flemish Supercomputer Center funded by the Fonds Wetenschappelijk Onderzoek and the Flemish Government. Supplemental Material: The electronic companion is available at https://doi.org/10.1287/trsc.2023.1208 .
APA, Harvard, Vancouver, ISO, and other styles
35

Boeckling, Toon, and Antoon Bronselaer. "Cleaning data with Swipe." Journal of Data and Information Quality, February 14, 2025. https://doi.org/10.1145/3712205.

Full text
Abstract:
The repair problem for functional dependencies is the problem where an input database needs to be modified such that all functional dependencies are satisfied and the difference with the original database is minimal. The output database is then called a minimal-cost repair. If the allowed modifications are value updates, finding a minimal-cost repair is NP-hard. A well-known approach to find approximations of minimal-cost repairs builds a Chase tree in which each internal node resolves violations of one functional dependency and leaf nodes represent repairs. A key property of this approach is that controlling the branching factor of the Chase tree allows one to control the trade-off between repair quality and computational efficiency. In this paper, we explore an extreme variant of this idea in which the Chase tree has only one path. To construct this path, we first create an ordered partition of attributes (i.e., a partition of which the classes are totally ordered) such that classes can be repaired sequentially. We repair each class only once and do so by fixing the order in which dependencies are repaired. This principle is called priority repairing and we provide a simple heuristic to determine priority. The techniques for attribute partitioning and priority repair are combined in an algorithm called Swipe. An empirical study on four real-life data sets shows that Swipe is one to three orders of magnitude faster than Llunatic and HoloClean, whereas the quality of repairs is comparable or better. A scalability analysis shows that Swipe scales linearly for an increasing number of tuples and quadratically for an increasing number of FDs.
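The core repair step that approaches like Swipe build on can be sketched as follows: for one functional dependency X → A, every group of tuples agreeing on X gets its A-values set to the group's most frequent value. The example data and function name below are hypothetical, and the real algorithm additionally orders attribute classes and prioritizes dependencies; this is only the single-dependency core idea.

```python
# Minimal sketch of a single-pass repair step for one functional dependency
# X -> A by value updates: within every group of tuples that agree on X, the
# A-values are set to the group's most frequent value.
from collections import Counter, defaultdict

def repair_fd(rows, lhs, rhs):
    groups = defaultdict(list)
    for i, row in enumerate(rows):
        groups[tuple(row[a] for a in lhs)].append(i)
    for indices in groups.values():
        most_common = Counter(rows[i][rhs] for i in indices).most_common(1)[0][0]
        for i in indices:
            rows[i][rhs] = most_common      # value update, few changes per group
    return rows

data = [
    {"zip": "9000", "city": "Ghent"},
    {"zip": "9000", "city": "Gent"},
    {"zip": "9000", "city": "Ghent"},
    {"zip": "1000", "city": "Brussels"},
]
print(repair_fd(data, lhs=["zip"], rhs="city"))   # all zip-9000 rows get city "Ghent"
```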
APA, Harvard, Vancouver, ISO, and other styles
36

Jiang, Shan, Shu-Cherng Fang, and Qingwei Jin. "Sparse Solutions by a Quadratically Constrained ℓq (0 &lt; q &lt; 1) Minimization Model." INFORMS Journal on Computing, September 30, 2020. http://dx.doi.org/10.1287/ijoc.2020.1004.

Full text
Abstract:
Finding sparse solutions to a system of equations and/or inequalities is an important topic in many application areas such as signal processing, statistical regression and nonparametric modeling. Various continuous relaxation models have been proposed and widely studied to deal with the discrete nature of the underlying problem. In this paper, we propose a quadratically constrained ℓq (0 &lt; q &lt; 1) minimization model for finding sparse solutions to a quadratic system. We prove that solving the proposed model is strongly NP-hard. To tackle the computational difficulty, a first-order necessary condition for local minimizers is derived. Various properties of the proposed model are studied for designing an active-set-based descent algorithm to find candidate solutions satisfying the proposed condition. In addition to providing a theoretical convergence proof, we conduct extensive computational experiments using synthetic and real-life data to validate the effectiveness of the proposed algorithm and to show the proposed model's superior capability in finding sparse solutions compared with other known models in the literature. We also extend our results to a quadratically constrained ℓq (0 &lt; q &lt; 1) minimization model with multiple convex quadratic constraints for further potential applications. Summary of Contribution: In this paper, we propose and study a quadratically constrained ℓq (0 &lt; q &lt; 1) minimization model for finding sparse solutions to a quadratic system which has wide applications in sparse signal recovery, image processing and machine learning. The proposed quadratically constrained ℓq minimization model extends the linearly constrained ℓq and unconstrained ℓ2-ℓq models. We study various properties of the proposed model with the aim of designing an efficient algorithm. In particular, we propose an unrelaxed KKT condition for local/global minimizers. Building on the properties studied, an active-set-based descent algorithm is then proposed with its convergence proof being given. Extensive numerical experiments with synthetic and real-life Sparco datasets are conducted to show that the proposed algorithm works very effectively and efficiently. Its sparse recovery capability is superior to that of other known models in the literature.
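For reference, a generic form of the model named in the title can be written as below. The matrix A, vector b, and tolerance ε are placeholders for the quadratic system, and the paper's extension admits several convex quadratic constraints of this type.

```latex
% A generic form of the quadratically constrained \ell_q model discussed above
% (0 < q < 1); A, b and \epsilon are placeholders, not the paper's notation:
\[
  \min_{x \in \mathbb{R}^n} \;\; \|x\|_q^q \;=\; \sum_{i=1}^{n} |x_i|^q
  \qquad \text{s.t.} \qquad \|Ax - b\|_2^2 \;\le\; \epsilon .
\]
```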
APA, Harvard, Vancouver, ISO, and other styles
37

Cicerone, Serafino, and Gabriele Di Stefano. "Special Issue on "Graph Algorithms and Applications"." May 10, 2021. https://doi.org/10.3390/a14050150.

Full text
Abstract:
The mixture of data in real life exhibits structure or connection property in nature. Typical data include biological data, communication network data, image data, etc. Graphs provide a natural way to represent and analyze these types of data and their relationships. For instance, more recently, graphs have found new applications in solving problems for emerging research fields such as social network analysis, design of robust computer network topologies, frequency allocation in wireless networks, and bioinformatics. Unfortunately, the related algorithms usually suffer from high computational complexity, since some of these problems are NP-hard. Therefore, in recent years, many graph models and optimization algorithms have been proposed to achieve a better balance between efficacy and efficiency. The aim of this Special Issue is to provide an opportunity for researchers and engineers from both academia and the industry to publish their latest and original results on graph models, algorithms, and applications to problems in the real world, with a focus on optimization and computational complexity.
APA, Harvard, Vancouver, ISO, and other styles
38

Maddox, Alexia, and Luke J. Heemsbergen. "Digging in Crypto-Communities’ Future-Making." M/C Journal 24, no. 2 (2021). http://dx.doi.org/10.5204/mcj.2755.

Full text
Abstract:
Introduction This article situates the dark as a liminal and creative space of experimentation where tensions are generative and people tinker with emerging technologies to create alternative futures. Darkness need not mean chaos and fear of violence – it can mean privacy and protection. We define dark as an experimental space based upon uncertainties rather than computational knowns (Bridle) and then demonstrate via a case study of cryptocurrencies the contribution of dark and liminal social spaces to future(s)-making. Cryptocurrencies are digital cash systems that use decentralised (peer-to-peer) networking to enable irreversible payments (Maurer, Nelms, and Swartz). Cryptocurrencies are often clones or variations on the ‘original’ Bitcoin payment systems protocol (Trump et al.) that was shared with the cryptographic community through a pseudonymous and still unknown author(s) (Nakamoto), creating a founder mystery. Due to the open creation process, a new cryptocurrency is relatively easy to make. However, many of them are based on speculative bubbles that mirror Bitcoin, Ethereum, and ICOs’ wealth creation. Examples of cryptocurrencies now largely used for speculation due to their volatility in holding value are rampant, with online clearing houses competing to trade hundreds of different assets from AAVE to ZIL. Many of these altcoins have little to no following or trading volume, leading to their obsolescence. Others enjoy immense popularity among dedicated communities of backers and investors. Consequently, while many cryptocurrency experiments fail or lack adoption and drop from the purview of history, their constant variation also contributes to the undertow of the future that pulls against more visible surface waves of computational progress. The article is structured to first define how we understand and leverage ‘dark’ against computational cultures. We then apply thematic and analytical tactics to articulate future-making socio-technical experiments in the dark. Based on past empirical work of the authors (Maddox "Netnography") we focus on crypto-cultures’ complex emancipatory and normative tensions via themes of construction, disruption, contention, redirection, obsolescence, and iteration. Through these themes we illustrate the mutation and absorption of dark experimental spaces into larger social structures. The themes we identify are not meant as a complete or necessarily serial set of occurrences, but nonetheless contribute a new vocabulary for students of technology and media to see into and grapple with the dark. Embracing the Dark: Prework &amp; Analytical Tactics for Outside the Known To frame discussion of the dark here as creative space for alternative futures, we focus on scholars who have deeply engaged with notions of socio-technical darkness. This allows us to explore outside the blinders of computational light and, with a nod to Sassen, dig in the shadows of known categories to evolve the analytical tactics required for the study of emerging socio-technical conditions. We understand the Dark Web to usher shifting and multiple definitions of darkness, from a moral darkness to a technical one (Gehl). From this work, we draw the observation of how technologies that obfuscate digital tracking create novel capacities for digital cultures in spaces defined by anonymity for both publisher and user. Darknets accomplish this by overlaying open internet protocols (e.g. TCP/IP) with non-standard protocols that encrypt and anonymise information (Pace). 
Pace traces concepts of darknets to networks in the 1970s that were 'insulated’ from the internet’s predecessor ARPANET by air gap, and then reemerged as software protocols similarly insulated from cultural norms around intellectual property. ‘Darknets’ can also be considered in ternary as opposed to binary terms (Gehl and McKelvey) that push to make private that which is supposed to be public infrastructure, and push private platforms (e.g. a Personal Computer) to make public networks via common bandwidth. In this way, darknets feed new possibilities of communication from both common infrastructures and individual’s platforms. Enabling new potentials of community online and out of sight serves to signal what the dark accomplishes for the social when measured against an otherwise unending light of computational society. To this point, a new dark age can be welcomed insofar it allows an undecided future outside of computational logics that continually define and refine the possible and probable (Bridle). This argument takes von Neumann’s 1945 declaration that “all stable processes we shall predict. All unstable processes we shall control” (in Bridle 21) as a founding statement for computational thought and indicative of current society. The hope expressed by Bridle is not an absence of knowledge, but an absence of knowing the future. Past the computational prison of total information awareness within an accelerating information age (Castells) is the promise of new formations of as yet unknowable life. Thus, from Bridle’s perspective, and ours, darkness can be a place of freedom and possibility, where the equality of being in the dark, together, is not as threatening as current privileged ways of thinking would suggest (Bridle 15). The consequences of living in a constant glaring light lead to data hierarchies “leaching” (Bridle) into everything, including social relationships, where our data are relationalised while our relations are datafied (Maddox and Heemsbergen) by enforcing computational thinking upon them. Darkness becomes a refuge that acknowledges the power of unknowing, and a return to potential for social, equitable, and reciprocal relations. This is not to say that we envision a utopian life without the shadow of hierarchy, but rather an encouragement to dig into those shadows made visible only by the brightest of lights. The idea of digging in the shadows is borrowed from Saskia Sassen, who asks us to consider the ‘master categories’ that blind us to alternatives. According to Sassen (402), while master categories have the power to illuminate, their blinding power keeps us from seeing other presences in the landscape: “they produce, then, a vast penumbra around that center of light. It is in that penumbra that we need to go digging”. We see darkness in the age of digital ubiquity as rejecting the blinding ‘master category’ of computational thought. Computational thought defines social/economic/political life via what is static enough to predict or unstable enough to render a need to control. Otherwise, the observable, computable, knowable, and possible all follow in line. Our dig in the shadows posits a penumbra of protocols – both of computational code and human practice – that circle the blinding light of known digital communications. We use the remainder of this short article to describe these themes found in the dark that offer new ways to understand the movements and moments of potential futures that remain largely unseen. 
Thematic Resonances in the Dark This section considers cryptocultures of the dark. We build from a thematic vocabulary that has been previously introduced from empirical examples of the crypto-market communities which tinker with and through the darkness provided by encryption and privacy technologies (Maddox "Netnography"). Here we refine these future-making themes through their application to events surrounding community-generated technology aimed at disrupting centralised banking systems: cryptocurrencies (Maddox, Singh, et al.). Given the overlaps in collective values and technologies between crypto-communities, we find it useful to test the relevance of these themes to the experimental dynamics surrounding cryptocurrencies. We unpack these dynamics as construction, rupture and disruption, redirection, and the flip-sided relationship between obsolescence and iteration leading to mutation and absorption. This section provides a working example for how these themes adapt in application to a community dwelling at the edge of experimental technological possibilities. The theme of construction is both a beginning and a materialisation of a value field. It originates within the cyberlibertarians’ ideological stance towards using technological innovations to ‘create a new world in the shell of the old’ (van de Sande) which has been previously expressed through the concept of constructive activism (Maddox, Barratt, et al.). This libertarian ideology is also to be found in the early cultures that gave rise to cryptocurrencies. Through their interest in the potential of cryptography technologies related to social and political change, the Cypherpunks mailing list formed in 1992 (Swartz). The socio-cultural field surrounding cryptocurrencies, however, has always consisted of a diverse ecosystem of vested interests building collaborations from “goldbugs, hippies, anarchists, cyberpunks, cryptographers, payment systems experts, currency activists, commodity traders, and the curious” (Maurer, Nelms, and Swartz 262). Through the theme of construction we can consider architectures of collaboration, cooperation, and coordination developed by technically savvy populations. Cryptocurrencies are often developed as code by teams who build in mechanisms for issuance (e.g. ‘mining’) and other controls (Conway). Thus, construction and making of cryptocurrencies tend to be collective yet decentralised. Cryptocurrencies arose during a time of increasing levels of distrust in governments and global financial instability from the Global Financial Crisis (2008-2013), whilst gaining traction through their usefulness in engaging in illicit trade (Saiedi, Broström, and Ruiz). It was through this rupture in the certainties of ‘the old system’ that this technology, and the community developing it, sought to disrupt the financial system (Maddox, Singh, et al.; Nelms et al.). Here we see the utility of the second theme of rupture and disruption to illustrate creative experimentation in the liminal and emergent spaces cryptocurrencies afford. While current crypto crazes (e.g. NFTs, ICOs) have their detractors, Cohen suggests, somewhat ironically, that the momentum for change of the crypto current was “driven by the grassroots, and technologically empowered, movement to confront the ills perceived to be powered and exacerbated by market-based capitalism, such as climate change and income inequality” (Cohen 739). 
Here we can start to envision how subterranean currents that emerge from creative experimentations in the dark impact global social forces in multifaceted ways – even as they are dragged into the light. Within a disrupted environment characterised by rupture, contention and redirection is rife (Maddox "Disrupting"). Contention and redirection illustrate how competing agendas bump and grind to create a generative tension around a deep collective desire for social change. Contention often emerges within an environment of hacks and scams, of which there are many stories in the cryptocurrency world (see Bartlett for an example of OneCoin, for instance; Kavanagh, Miscione, and Ennis). Other aspects of contention emerge around how the technology works to produce (mint) cryptocurrencies, including concern over the environmental impact of producing cryptocurrencies (Goodkind, Jones, and Berrens) and the production of non-fungible tokens for the sale of digital assets (Howson). Contention also arises through the gendered social dynamics of brogramming culture skewing inclusive and diverse engagement (Bowles). Shifting from the ideal of inclusion to the actual practice of crypto-communities begs the question of whose futures are being made. Contention and redirections are also evidenced by ‘hard forks’ in cryptocurrency. The founder mystery resulted in the gifting of this technology to a decentralised and leaderless community, materialised through the distributed consensus processes to approve software updates to a cryptocurrency. This consensus system consequently holds within it the seeds for governance failures (Trump et al.), the first of which occurred with the ‘hard forking’ of Bitcoin into Bitcoin cash in 2017 (Webb). Hard forks occur when developers and miners no longer agree on a proposed change to the software: one group upgraded to the new software while the others operated on the old rules. The resulting two separate blockchains and digital currencies concretised the tensions and disagreements within the community. This forking resulted initially in a shock to the market value of, and trust in, the Bitcoin network, and the dilution of adoption networks across the two cryptocurrencies. The ongoing hard forks of Bitcoin Cash illustrate the continued contention occurring within the community as crypto-personalities pit against each other (Hankin; Li). As these examples show, not all experiments in cryptocurrencies are successful; some become obsolete through iteration (Arnold). Iteration engenders mutations in the cultural framing of socio-technical experiments. These mutations of meaning and signification then facilitate their absorption into novel futures, showing the ternary nature of how what happens in the dark works with what is known by the light. As a rhetorical device, cryptocurrencies have been referred to as a currency (a payment system) or a commodity (an investment or speculation vehicle; Nelms et al. 21). However, new potential applications for the underlying technologies continue emerge. For example, Ethereum, the second-most dominant cryptocurrency after Bitcoin, now offers smart contract technology (decentralised autonomous organisations, DAO; Kavanagh, Miscione, and Ennis) and is iterating technology to dramatically reduce the energy consumption required to mine and mint the non-fungible tokens (NFTs) associated with crypto art (Wintermeyer). 
Here we can see how these rhetorical framings may represent iterative shifts and meaning-mutation that is as pragmatic as it is cultural. While we have considered here the themes of obsolescence and iteration threaded through the technological differentiations amongst cryptocurrencies, what should we make of these rhetorical or cultural mutations? This cultural mutation, we argue, can be seen most clearly in the resurgence of Dogecoin. Dogecoin is a cryptocurrency launched in 2013 that takes its name and logo from a Shiba Inu meme that was popular several years ago (Potts and Berg). We can consider Dogecoin as a playful infrastructure (Rennie) and cultural product that was initially designed to provide a low bar for entry into the market. Its affordability is kept in place by the ability for miners to mint an unlimited number of coins. Dogecoin had a large resurgence of value and interest just after the meme-centric Reddit community Wallstreetbets managed to drive the share price of video game retailer GameStop to gain 1,500% (Potts and Berg). In this instance we see the mutation of a cryptocurrency into memecoin, or cultural product, for which the value is a prism to the wild fluctuations of internet culture itself, linking cultural bubbles to financial ones. In this case, technologies iterated in the dark mutated and surfaced as cultural bubbles through playful infrastructures that intersected with financial systems. The story of dogecoin articulates how cultural mutation articulates the absorption of emerging techno-potentials into larger structures. Conclusion From creative experiments digging in the dark shadows of global socio-economic forces, we can see how the future is formed beneath the surface of computational light. Yet as we write, cryptocurrencies are being absorbed by centralising and powerful entities to integrate them into global economies. Examples of large institutions hoarding Bitcoin include the crypto-counterbalancing between the Chinese state through its digital currency DCEP (Vincent) and Facebook through the Libra project. Vincent observes that the state-backed DCEP project is the antithesis of the decentralised community agenda for cryptocurrencies to enact the separation of state and money. Meanwhile, Facebook’s centralised computational control of platforms used by 2.8 billion humans provide a similarly perverse addition to cryptocurrency cultures. The penumbra fades as computational logic shifts its gaze. Our thematic exploration of cryptocurrencies highlights that it is only in their emergent forms that such radical creative experiments can dwell in the dark. They do not stay in the dark forever, as their absorption into larger systems becomes part of the future-making process. The cold, inextricable, and always impending computational logic of the current age suffocates creative experimentations that flourish in the dark. Therefore, it is crucial to tend to the uncertainties within the warm, damp, and dark liminal spaces of socio-technical experimentation. References Arnold, Michael. "On the Phenomenology of Technology: The 'Janus-Faces' of Mobile Phones." Information and Organization 13.4 (2003): 231-56. Bartlett, Jamie. "Missing Cryptoqueen: Why Did the FCA Drop Its Warning about the Onecoin Scam?" BBC News 11 Aug. 2020. 19 Feb. 2021 &lt;https://www.bbc.com/news/technology-53721017&gt;. Bowles, Nellie. "Women in Cryptocurrencies Push Back against ‘Blockchain Bros’." New York Times 25 Feb. 2018. 21 Apr. 
2021 &lt;https://www.nytimes.com/2018/02/25/business/cryptocurrency-women-blockchain-bros.html&gt;. Bridle, James. New Dark Age: Technology, Knowledge and the End of the Future. London: Verso, 2018. Castells, Manuel. The Information Age: Economy, Society and Culture. 2nd ed. Oxford: Blackwell, 2000. Cohen, Boyd. "The Rise of Alternative Currencies in Post-Capitalism." Journal of Management Studies 54.5 (2017): 739-46. Conway, Luke. "The 10 Most Important Cryptocurrencies Other than Bitcoin." Investopedia Jan. 2021. 19 Feb. 2021 &lt;https://www.investopedia.com/tech/most-important-cryptocurrencies-other-than-bitcoin/&gt;. Gehl, Robert, and Fenwick McKelvey. "Bugging Out: Darknets as Parasites of Large-Scale Media Objects." Media, Culture &amp; Society 41.2 (2019): 219-35. Goodkind, Andrew L., Benjamin A. Jones, and Robert P. Berrens. "Cryptodamages: Monetary Value Estimates of the Air Pollution and Human Health Impacts of Cryptocurrency Mining." Energy Research &amp; Social Science 59 (2020): 101281. Hankin, Aaron. "What You Need to Know about the Bitcoin Cash ‘Hard Fork’." MarketWatch 13 Nov. 2018. 21 Apr. 2021 &lt;https://www.marketwatch.com/story/what-you-need-to-know-about-the-bitcoin-cash-hard-fork-2018-11-13&gt;. Howson, Peter. "NFTs: Why Digital Art Has Such a Massive Carbon Footprint." The Conversation April 2021. 21 Apr. 2021 &lt;https://theconversation.com/nfts-why-digital-art-has-such-a-massive-carbon-footprint-158077&gt;. Kavanagh, Donncha, Gianluca Miscione, and Paul J. Ennis. "The Bitcoin Game: Ethno-Resonance as Method." Organization (2019): 1-20. Li, Shine. "Bitcoin Cash (Bch) Hard Forks into Two New Blockchains Following Disagreement on Miner Tax." Blockchain.News Nov. 2020. 19 Feb. 2021 &lt;https://blockchain.news/news/bitcoin-cash-bch-hard-forks-two-new-blockchains-disagreement-on-miner-tax&gt;. Maddox, Alexia. "Disrupting the Ethnographic Imaginarium: Challenges of Immersion in the Silk Road Cryptomarket Community." Journal of Digital Social Research 2.1 (2020): 31-51. ———. "Netnography to Uncover Cryptomarkets." Netnography Unlimited: Understanding Technoculture Using Qualitative Social Media Research. Eds. Rossella Gambetti and Robert V. Kozinets. London: Routledge, 2021: 3-23. Maddox, Alexia, Monica J. Barratt, Matthew Allen, and Simon Lenton. "Constructive Activism in the Dark Web: Cryptomarkets and Illicit Drugs in the Digital ‘Demimonde’." Information Communication and Society 19.1 (2016): 111-26. Maddox, Alexia, and Luke Heemsbergen. "The Electrified Social: A Policing and Politics of the Dark." Continuum (forthcoming). Maddox, Alexia, Supriya Singh, Heather Horst, and Greg Adamson. "An Ethnography of Bitcoin: Towards a Future Research Agenda." Australian Journal of Telecommunications and the Digital Economy 4.1 (2016): 65-78. Maurer, Bill, Taylor C. Nelms, and Lana Swartz. "'When Perhaps the Real Problem Is Money Itself!': The Practical Materiality of Bitcoin." Social Semiotics 23.2 (2013): 261-77. Nakamoto, Satoshi. "Bitcoin: A Peer-to-Peer Electronic Cash System." Bitcoin.org 2008. 21 Apr. 2021 &lt;https://bitcoin.org/bitcoin.pdf&gt;. Nelms, Taylor C., et al. "Social Payments: Innovation, Trust, Bitcoin, and the Sharing Economy." Theory, Culture &amp; Society 35.3 (2018): 13-33. Pace, Jonathan. "Exchange Relations on the Dark Web." Critical Studies in Media Communication 34.1 (2017): 1-13. Potts, Jason, and Chris Berg. "After Gamestop, the Rise of Dogecoin Shows Us How Memes Can Move Market." The Conversation Feb. 2021. 21 Apr. 
2021 &lt;https://theconversation.com/after-gamestop-the-rise-of-dogecoin-shows-us-how-memes-can-move-markets-154470&gt;. Rennie, Ellie. "The Governance of Degenerates Part II: Into the Liquidityborg." Medium Nov. 2020. 21 Apr. 2021 &lt;https://ellierennie.medium.com/the-governance-of-degenerates-part-ii-into-the-liquidityborg-463889fc4d82&gt;. Saiedi, Ed, Anders Broström, and Felipe Ruiz. "Global Drivers of Cryptocurrency Infrastructure Adoption." Small Business Economics (Mar. 2020). Sassen, Saskia. "Digging in the Penumbra of Master Categories." British Journal of Sociology 56.3 (2005): 401-03. Swartz, Lana. "What Was Bitcoin, What Will It Be? The Techno-Economic Imaginaries of a New Money Technology." Cultural Studies 32.4 (2018): 623-50. Trump, Benjamin D., et al. "Cryptocurrency: Governance for What Was Meant to Be Ungovernable." Environment Systems and Decisions 38.3 (2018): 426-30. Van de Sande, Mathijs. "Fighting with Tools: Prefiguration and Radical Politics in the Twenty-First Century." Rethinking Marxism 27.2 (2015): 177-94. Vincent, Danny. "'One Day Everyone Will Use China's Digital Currency'." BBC News Sep. 2020. 19 Feb. 2021 &lt;https://www.bbc.com/news/business-54261382&gt;. Webb, Nick. "A Fork in the Blockchain: Income Tax and the Bitcoin/Bitcoin Cash Hard Fork." North Carolina Journal of Law &amp; Technology 19.4 (2018): 283-311. Wintermeyer, Lawrence. "Climate-Positive Crypto Art: The Next Big Thing or NFT Overreach." Forbes 19 Mar. 2021. 21 Apr. 2021 &lt;https://www.forbes.com/sites/lawrencewintermeyer/2021/03/19/climate-positive-crypto-art-the-next-big-thing-or-nft-overreach/&gt;.
APA, Harvard, Vancouver, ISO, and other styles
39

Lackner, Marie-Louise, Christoph Mrkvicka, Nysret Musliu, Daniel Walkiewicz, and Felix Winter. "Exact methods for the Oven Scheduling Problem." Constraints, July 4, 2023. http://dx.doi.org/10.1007/s10601-023-09347-2.

Full text
Abstract:
The Oven Scheduling Problem (OSP) is a new parallel batch scheduling problem that arises in the area of electronic component manufacturing. Jobs need to be scheduled to one of several ovens and may be processed simultaneously in one batch if they have compatible requirements. The scheduling of jobs must respect several constraints concerning eligibility and availability of ovens, release dates of jobs, setup times between batches as well as oven capacities. Running the ovens is highly energy-intensive and thus the main objective, besides finishing jobs on time, is to minimize the cumulative batch processing time across all ovens. This objective distinguishes the OSP from other batch processing problems which typically minimize objectives related to makespan, tardiness or lateness. We propose to solve this NP-hard scheduling problem using exact techniques and present two different modelling approaches, one based on batch positions and another on representative jobs for batches. These models are formulated as constraint programming (CP) and integer linear programming (ILP) models and implemented both in the solver-independent modeling language MiniZinc and using interval variables in CP Optimizer. An extensive experimental evaluation of our solution methods is performed on a diverse set of problem instances. We evaluate the performance of several state-of-the-art solvers on the different models and on three variants of the objective function that reflect different real-life scenarios. We show that our models can find feasible solutions for instances of realistic size, many of those being provably optimal or nearly optimal solutions.
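A toy illustration of the objective that distinguishes the OSP, the cumulative batch processing time, is given below: compatible jobs are grouped greedily into capacity-feasible batches, and each batch costs the longest processing time among its jobs. The greedy grouping and the job tuples are assumptions made for illustration, not the paper's CP or ILP models.

```python
# Toy illustration of the OSP objective: jobs with the same (compatible) attribute
# are packed greedily into capacity-feasible batches, and the cost is the sum over
# batches of the longest processing time in the batch. Illustrative only.
def greedy_batches(jobs, capacity):
    """jobs: list of (attribute, size, proc_time); one attribute per batch, split by capacity."""
    batches = []
    for attr in sorted({j[0] for j in jobs}):
        group = sorted((j for j in jobs if j[0] == attr), key=lambda j: -j[2])
        current, load = [], 0
        for job in group:
            if load + job[1] > capacity:
                batches.append(current)
                current, load = [], 0
            current.append(job)
            load += job[1]
        if current:
            batches.append(current)
    return batches

def cumulative_batch_time(batches):
    return sum(max(job[2] for job in batch) for batch in batches)

jobs = [("A", 2, 40), ("A", 3, 35), ("A", 4, 50), ("B", 5, 20), ("B", 2, 25)]
batches = greedy_batches(jobs, capacity=6)
print(batches)
print("cumulative batch processing time:", cumulative_batch_time(batches))
```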
APA, Harvard, Vancouver, ISO, and other styles
40

Salah, Sara, Abdel-Rahman Hedar, and Marghny H. Mohammed. "ENHANCED POPULATION BASED ANT COLONY FOR THE 3D HYDROPHOBIC POLAR PROTEIN STRUCTURE PREDICTION PROBLEM." September 23, 2013. https://doi.org/10.5281/zenodo.1402500.

Full text
Abstract:
International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 DOI: 10.5121/ijbb.2013.3304 41 ENHANCED POPULATION BASED ANT COLONY FOR THE 3D HYDROPHOBIC POLAR PROTEIN STRUCTURE PREDICTION PROBLEM Sara Salah1 , Abdel-Rahman Hedar2 and Marghny. H. Mohammed3 1 Computer Science Department, Faculty of Science, Assiut University 2 Computer Science Department, Faculty of Computers and Information, Assiut University 3Computer Science Department, Faculty of Computers and Information, Assiut University ABSTRACT Population-based Ant Colony algorithm is stochastic local search algorithm that mimics the behavior of real ants, simulating pheromone trails to search for solutions to combinatorial optimization problems. This paper introduces population-based Ant Colony algorithm to solve 3D Hydrophobic Polar Protein structure Prediction Problem then introduces a new enhanced approach of population-based Ant Colony algorithm called Enhanced Population-based Ant Colony algorithm (EP-ACO) to avoid stagnation problem in population-based Ant Colony algorithm and increase exploration in the search space escaping from local optima, The experiments show that our approach appears more efficient results than state of art method. KEYWORDS Population based ACO, Ant Colony Optimization, HP Model, Protein Structure Prediction 1. INTRODUCTION Recent breakthroughs in DNA and protein sequencing have unlocked many secrets of molecular biology. A complete understanding of gene function, however, requires a protein structure in addition to its sequence. Accordingly, the better we understand how proteins are built, the better we can deal with many common diseases. In particular, information on structural properties of proteins can give insight into the way they work and therefore help and influence modern medicine and drug development [1]. The protein structure prediction problem (PSP) is that of computationally predicting the three dimensional structure of protein from the sequence of amino acids alone. This has been an open problem for more than 30 years and developing a practical solution is widely considered the &#39;holy grail&#39; of computational biology. The various approaches to the problem can be classified into two categories: knowledge based methods building the structure based on knowledge of a good template structure [16]; Ab initio methods building the structure from scratch using primary principles. Ab initio do not rely on known structures in the PDB [2] as International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 42 knowledge based methods, instead, they predict the 3D structure of proteins given their primary sequences only. The underlying strategy is to find the best stable structure based on a chosen energy function. According to Anfinsen famous hypothesis, a protein native structure is determined by its sequence which corresponds to minimum Gibbs energy [3]. The main challenge of this approach is to search for the most stable structure in a huge search space. In general, Ab initio PSP can be reduced to the following three steps: 1) Design a simple model with a desired level of accuracy to represents the protein structure, when we approach PSP problem, probably the first thing is to represent protein structure in the problem space we can represent protein structures using two categories: All-atom Model and Simplified Models.  All-atom model: protein structures are represented by lists of 3D coordinates of all atoms in a protein. 
Although an accurate all-atom model is desired in the structure prediction, it causes too huge a computation overhead even for very small proteins.  Simplified Models: simplified models can be classified into lattice models and offlattice models. Lattice models adopted lattice environment which is a grid and structural elements are positioned only at grid intersections; whereas off-lattice models use off-lattice environment in which structural elements are positioned in a continuous space In this paper we concern on lattice models, perhaps the simplest lattice protein model is Hydrophobic Polar (HP Model). It was proposed by Dill [4] and is widely studied for Ab initio prediction. 2) Define an energy function that can effectively discriminate native states from nonnative states. The HP model is based on the observation that the Hydrophobic Force is the main force for protein folding more about energy function we will discuss in section 2. 3) Design an efficient algorithm to find minimal energy conformations easily. Ant Colony Optimization (ACO) [5] [6], a non-deterministic algorithm, aims to mimic the behaviors of real ant colonies to solve real-world optimization problems. ACO algorithms are a class of constructive heuristic algorithms, which build solutions to a given optimization problem, one solution component at a time, according to a defined set of rules (heuristics), starting with an empty solution add solution components until a complete solution is built. One of the main characteristics of an ACO algorithm is the pheromone information which stores information on good solutions that have been found by ants of former iterations. The pheromone information is what is transferred from one iteration of the algorithm to the next. An alternative scheme was introduced called Population based ACO (P-ACO) instead of pheromone information as in ACO, in P-ACO a population of solutions is transferred from one iteration of the algorithm to the next. In this paper we provide a description of P-ACO algorithm and applying it for the first time to solve 3D HP lattice protein structure prediction problem then introduce new approach of population based ant colony algorithm called Enhanced Population Ant Colony (EP-ACO) to avoid stagnation problem in P-ACO algorithm. The experimental results based on different test cases of the PSP show that our algorithm enhances the performance of P- ACO. International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 43 The paper is organized as follows: Section 2 describes the HP model and mentions some heuristics algorithms form the literature for the protein structure prediction problem in the 3D HP model. An introduction to Population based ACO is given in Section 3. Our algorithm EP-ACO and application in the 3D HP Protein structure prediction is described in Section 4. The experiments and the results are presented in section 5. Conclusions are given in Section 6. 2. The HP model The HP model is considered as the simplest abstraction of the PSP problem. This model divides the 20 standard amino acids into only two classes, according to their affinity to water: Hydrophobic amino acid is represented by (H) and Polar amino acid is represented by (P) as shown in Table 1. Table 1. The used Hydrophobic-Polar classification of amino acids takes from [7]. 
The folding of amino acid sequences is represented in a lattice, usually used in either square lattice (for the bi-dimensional model- 2D HP) or cubic lattice (for the three-dimensional model3D HP). Thus, each amino acid is occupies one lattice site, connected to its chain neighbors. After each amino acid takes one site on lattice, then it will form a shape that is considered to be the conformation (structure), this confirmation must be self avoiding walk to be valid. An example for a protein conformation under the 3D HP model is shown in Figure 1. There are several common ways to represent protein sequence ݏ} &ni; ܪ {ܲ ,on lattice, like Cartesian Coordinates, Internal Coordinate and Distance Geometry [8]. We concern on Internal Coordinate where a conformation is represented as a string of moving steps on the lattice from one amino acid to next one. There are two types of Internal Coordinate relative encoding and absolute encoding, in our study we use absolute encoding where the protein sequence is encoded as a string of character of absolute direction. The coordination number of the 3D lattice model is six, (each point has six neighbors). Thus there are six possible absolute moves from a given location. When we use absolute encoding the candidate solutions are represented as a string of characters {ܦ,ܷ,ܨ ,ܤ ,ܮ ,ܴ} ௡ିଵ representing the six directions: Right, Left, Backward, Forward, Up and Down, where n is the length of the protein sequence. The example in Figure 1(a) shows a confirmation of protein sequence S1 in Table 2. Its string representation would be BUBRFRDLDFURULULDFD. In the HP model, the energy of a conformation is defined as a number of topological contacts between H amino acids that are neighbors in the conformation but not successive on the protein International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 44 sequence, more specifically, a confirmation ܿ that has number ݊ of H-H contacts has free energy E(ܿ) = ݊. (&minus;1) as shown in Figure 1(b). Figure 1. (a) A sample protein conformation on the 3D HP model which corresponds to protein sequence S1 in Table 2, green circle represents H amino acids while white circle symbolizes P amino acids, (b)The energy of this conformation is the number of H-H contacts that are neighbors on conformation and not successive on sequence, indicted in the Figure by dashed lines so the energy will be -11 The PSP problem can be formally defined as follows given an amino acid sequence ݏ= {ݏ1, ݏ2, &hellip; , ݏ ,{݊where each amino acid in sequence ݏ is one of two classes H or P, find an energy minimizing conformation of ݏ ,i.e. find ܿ &lowast; &isin; ܥ)ݏ (such that E &lowast; = E(ܿ &lowast;) = min{E(c)|c &isin; ܥ ,{where ܥ)ݏ (is a set of all valid conformations for ݏ ,It was recently proved that this problem and several variations of it are NP-hard combinatorial optimization problem [9] [10]. A number of well known heuristic optimization methods have been applied to solve PSP in 3D HP lattice model. These include: Cutello and Nicosia [8] introduce an Immune Algorithm (IA) based on the clonal selection principle; they employ a new aging operator and specific mutation operators. Shmygelska and Hoos [11] use Ant Colony Optimization (ACO) with Local Search which consists of long-range mutation moves to improve diversity on the solutions. Lin [12] introduces a hybrid of Genetic Algorithm and Particle Swarm Optimization in order to solve PSP on 3D HP lattice. 
Lin [15] presents a modified artificial bee colony algorithm for protein structure prediction on lattice models. 3. Population Based ACO (P-ACO) P-ACO has been proposed by Guntsch and Middendorf [13] and it is introduce a new way for updating pheromone matrix, (where as genetic algorithm) a population of solutions is directly transferred to next iteration, these solutions are then used to compute pheromone information for ants of new iteration where for every solution in the population some amount of pheromone added to corresponding edges. In more detail the first generation of ants works in the same way as in standard ant colony algorithm i.e. the ants search solutions using the initial pheromone matrix. But no pheromone evaporation is done. The best solution is then put in the (initially empty) population Q After k generations there exactly k solutions in the population. From generation k+1 on: International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 45  One solution Qout must leave population, the solution leaves the population is decided by update strategy and when ever solution Qout leave population the corresponding amount of pheromone is subtracted from the elements of the pheromone matrix which called negative update (i.e. it correspond to evaporation ). rs rs rs        a Qout . (1)  One solution Qin is entering population and some amounts of pheromone are added to the edges presented that solution which called positive update. rs rs rs        a Qin . (2) The amount  which added or subtracted from pheromone matrix is defined as numerical number. The algorithm representation of P-ACO is provided in Figure 2. Figure 2. P-ACO algorithm A subtle difference between ACO and P-ACO is the introduction of the solution storage and the pheromone update process. There are many update strategies to decide which solution will be deleted from the population and which solution will be remain in P-ACO algorithm, in quality update strategy if a new solution is better (in terms of quality) than the worst quality member of the population, the new solution replaces the worst solution, otherwise there is no change to the population. The aim of this strategy is that the population will retain good solutions which may have been found earlier in the search process. A possible weakness of International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 46 this strategy is that there is no way to ensure that the population does not end up with what are essentially multiple copies of the same solution. 4. Our Algorithm Enhanced Population Ant Colony (EP-ACO) P-ACO is developed especially for dynamic problem such as DTSPs, but the stagnation behavior remains unsolved since identical ants may be stored in the population memory and generate high intensity of pheromone to a single trail. In our algorithm we try to avoid early stagnation by maintain a certain level of diversity in the population by adding two main aspects: 1) P-ACO has a strong exploitation capability that allows a fast convergence to a good quality solution. However, its exploration during the search may be insufficient. We add procedure to enhance the exploration of new area of search space called Segmentation where we select best solution in the population and cut it into segments and refold random segments of this solution trying to find new solution on new area of search space. 
2) As in Max-Min Ant System [14] to avoid stagnation, we restart the P-ACO by reinitializing the pheromone values after r iterations without improvement. The basic idea of our algorithm, as follow: after creating the initial population, the main loop repeated until termination condition reached, where m solution is constructed, if the cost of the best of m solution less than the cost of the worst in the population the P-ACO update rule is used, then Segmentation procedure is begin by selecting best individual of the population and repeat for s times; select random point and cut the solution from this random point to segments, select one segment randomly and refold this segment, finally, if the cost of the new solution is less than cost of the worst solution in the population the P-ACO update rule is used. The main EP-ACO algorithm is shown in Figure 3 International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 47 Figure 3. EP-ACO algorithm When applying P-ACO to 3D HP model; we want to know how to construct solution and then follow the storage of solutions and update pheromone rules in Figure 2, in construct solution step: ants start to construct solution. There are six possible positions on the 3D lattice for every amino acid. They are the neighbor positions of the precedence amino acid. Since conformations are rotationally invariant, the position of the first two amino acids can be fixed without loss of generality. During the construction phase, ants fold a protein from the left end of the sequence adding one amino acid at a time based on the two sources of information: pheromone matrix value, and heuristic information. The transition probability to select the position of the next amino acid is given as: = ௗ௜,݌ ఎ೔,೏ ഀ .ఛ ೔,೏ ഁ &sum; ఎ೔,೐ ഀ .ఛ ೔,೐ ഁ ೐&isin;{ೆ,ವ,ಽ,ೃ,ಷ,ಳ} , (3) where and &beta; are parameters that determine the relative influence of pheromone and heuristic information. The pheromone values ߬௜,ௗ indicate the amount of pheromone deposited by each ant on the path (i, d), the heuristic function ߟ,௜ௗ used here as illustrated in section 2. 5. Experiments and Results In this section we apply P-ACO and EP-ACO to solve PSP on 3D HP lattice model, first a comparative study between simple P-ACO approach and EP-ACO is done to show the performance of our algorithm. Then the behavior of EP-ACO is compared with state of art methods used to solve this problem. For the following experiment results, All experiments were performed on PCs with 2 GHz Intel core(TM)2 due CPU and 2 MB RAM, running windows 7 (our reference machine), the program was written using java program language and run-time was measured in terms of CPU time. Table 2 presents 3D HP instances considered for the computational experiments taken from [8]. For each HP sequence, the column Instance represents the sequence number; the Length represents the number of amino acid in the protein sequence. International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 48 Table 2. 
HP instances Instance Length Protein Sequence S1 20 HPHPPHHPHPPHPHHPPHPH S2 24 HHPPHPPHPPHPPHPPHPPHPPHH S3 25 PPHPPHHPPPPHHPPPPHHPPPPHH S4 36 PPPHHPPHHPPPPPHHHHHHHPPHHPPPPHHPPHPP S5 48 PPHPPHHPPHHPPPPPHHHHHHHHHHPPPPPPHHPPH HPPHPPHHHHH S6 50 HHPHPHPHPHHHHPHPPPHPPPHPPPPHPPPHPPPHP HHHHPHPHPHPHH S7 60 PPHHHPHHHHHHHHPPPHHHHHHHHHHPHPPPHHH HHHHHHHHHPPPPHHHHHHPHHPHP S8 64 HHHHHHHHHHHHPHPHPPHHPPHHPPHPPHHPPHH PPHPPHHPPHHPPHPHPHHHHHHHHHHHH Table3, presents comparison between P-ACO and our proposed EP-ACO, The parameters settings for the P-ACO and our proposed EP-ACO are ߙ = 1, ߚ = 3,߬଴ = 1 number of ants ݉ = 100, pop size =10 and ∆ is equal to the energy of that solution that will be added or removed, for EP-ACO ݏ = 50 for small sequence length (n 48) we set ݏ = 100, finally, we reinitialize the pheromone after 3000 iteration. Each run of algorithm ends when the maximum number of evaluation to fitness function is equal to 10଺ . All the experimental results reported in Table 3 are averaged over 30 independent runs. The column Best means the best found energy (Fitness) value; the Mean is the mean of energy found over 30 independent runs. As shown in Table 3, our algorithm EP-ACO achieves best result than P-ACO. Table 3. Comparison between P-ACO algorithm and EP-ACO algorithm Instance P-ACO EP-ACO Best Mean Best Mean S1 -11 -11 -11 -11 S2 -13 -12.6 -13 -13 S3 -9 -9 -9 -9 S4 -18 -17.8 -18 -18 International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 49 S5 -30 -29.4 -31 -31 S6 -28 -27 -32 31.2 S7 -52 -50.8 -54 -53.2 S8 -54 50.6 -58 -57.3 The performance of the proposed model is compared to the best results obtained by other algorithms for protein structure prediction in 3D HP model. Table 4 presents the results as follows: the best energy found by the proposed method (in the last column), the results of Protein 3D HP Model Folding Simulation Using a Hybrid of Genetic Algorithm and Particle Swarm Optimization (HGAPSO) [12], Immune Algorithm for Protein Structure Prediction (IA) [8] and Artificial Bee Colony Algorithm For Protein Structure Prediction On Lattice Models (MABC) [15]. As shown in Table 4, the EP-ACO model is able to identify the protein configurations having the best Fitness Energy for sequences S5 to S8. The structure of 8 protein sequences can be clearly seen in Figure 4. Table 4. The simulation results obtained from the proposed algorithm compared with the methods given in the literature. Figures in bold indicate the lowest energy Instance HGA-PSO IA MABC EP-ACO Best Best Best Best S1 -11 -11 -11 -11 S2 -13 -13 -13 -13 S3 -9 -9 -9 -9 S4 -18 -18 -18 -18 S5 -29 -29 -29 -31 S6 -26 -23 -26 -32 S7 -49 -41 -49 -54 S8 - -42 - -58 . International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 50 (S1) (S2) (S1) (S2) (S3) (S4) (S5) (S6) (S7) (S8) Figure 4. Results of the structure of 8 protein sequence International Journal on Bioinformatics &amp; Biosciences (IJBB) Vol.3, No.3, September 2013 51 6. Conclusion In this paper, we presented the Population Based Ant Colony algorithm for 3D HP Protein Structure Prediction Problem then introduce a new approach called Enhanced Population-based Ant Colony algorithm (EP-ACO) to avoid stagnation problem in population-based Ant Colony algorithm and increase exploration in the search space escaping from local optima. 
6. Conclusion

In this paper, we presented the Population-Based Ant Colony (P-ACO) algorithm for the 3D HP protein structure prediction problem and then introduced a new approach, called the Enhanced Population-based Ant Colony algorithm (EP-ACO), to avoid the stagnation problem in the population-based ant colony algorithm and to increase exploration of the search space, escaping from local optima. It is shown experimentally that our algorithm EP-ACO achieves results comparable to other state-of-the-art algorithms on nearly all test sequences and is much better than the simple P-ACO algorithm.

References
[1] Qatawneh, S., Alneaimi, A., Rawashdeh, T., Muhairat, M., Qahwaji, R. and Ipson, S. (2012) 'Efficient Prediction of DNA-Binding Proteins Using Machine Learning', International Journal on Bioinformatics & Biosciences, arXiv preprint arXiv:1207.2600.
[2] Berman, H., Henrick, K. and Nakamura, H. (2003) 'Announcing the worldwide Protein Data Bank', Nature Structural & Molecular Biology, 10(12), 980-980.
[3] Anfinsen, C. B. (1973) 'Principles that govern the folding of protein chains', Science, 181(96), 223-230.
[4] Dill, K. A., Bromberg, S., Yue, K., Fiebig, K. M., Yee, D. P., Thomas, P. D. and Chan, H. S. (1995) 'Principles of protein folding--a perspective from simple exact models', Protein Science: A Publication of the Protein Society, 4(4), 561.
[5] Dorigo, M., Caro, G. D. and Gambardella, L. M. (1999) 'Ant algorithms for discrete optimization', Artificial Life, 5(2), 137-172.
[6] Dorigo, M. and Gambardella, L. M. (1997) 'Ant colony system: A cooperative learning approach to the traveling salesman problem', IEEE Transactions on Evolutionary Computation, 1(1), 53-66.
[7] Ullah, A. D., Kapsokalivas, L., Mann, M. and Steinhöfel, K. (2009) 'Protein folding simulation by two-stage optimization', in Computational Intelligence and Intelligent Systems, Springer Berlin Heidelberg, 138-145.
[8] Cutello, V., Nicosia, G., Pavone, M. and Timmis, J. (2007) 'An immune algorithm for protein structure prediction on lattice models', IEEE Transactions on Evolutionary Computation, 11(1), 101-117.
[9] Unger, R. and Moult, J. (1993) 'Finding the lowest free energy conformation of a protein is an NP-hard problem: proof and implications', Bulletin of Mathematical Biology, 55(6), 1183-1198.
[10] Moult, J. (1993) 'A genetic algorithm for 3D protein folding simulations', in Genetic Algorithms, Morgan Kaufmann Publishers, 581.
[11] Shmygelska, A. and Hoos, H. H. (2005) 'An ant colony optimization algorithm for the 2D and 3D hydrophobic polar protein folding problem', BMC Bioinformatics, 6(1), 30.
[12] Lin, C. J. and Su, S. C. (2011) 'Protein 3D HP Model Folding Simulation Using a Hybrid of Genetic Algorithm and Particle Swarm Optimization', International Journal of Fuzzy Systems, 13(2), 140-147.
[13] Guntsch, M. and Middendorf, M. (2002) 'A population based approach for ACO', in Applications of Evolutionary Computing, Springer Berlin Heidelberg, 72-81.
[14] Stützle, T. and Hoos, H. H. (2000) 'MAX–MIN Ant System', Future Generation Computer Systems, 16(8), 889-914.
[15] Lin, C. J. and Su, S. C. (2012) 'Using an efficient artificial bee colony algorithm for protein structure prediction on lattice models', International Journal of Innovative Computing, Information and Control, 8, 2049-2064.
[16] Jehangir, M. and Ahmad, S. F. (2013) 'Structural studies of aspartic endopeptidase pep2 from Neosartorya fisherica using homology modeling techniques', International Journal on Bioinformatics & Biosciences.

Abdel-Rahman Hedar received his Ph.D. degree in computer science from Kyoto University, Japan, in 2004, and his M.Sc. and B.Sc. from Assiut University, Assiut, Egypt, in 1997 and 1993, respectively.
He is the Director of the Quality Assurance Unit and an Associate Professor in the Computer Science Department, Faculty of Computers and Information, Assiut University. Marghny H. Mohamed received his Ph.D. degree in computer science from Kyushu University, Japan, in 2001, and his M.Sc. and B.Sc. from Assiut University, Assiut, Egypt, in 1993 and 1988, respectively. He is currently an Associate Professor in the Department of Computer Science and Vice Dean for Education and Student Affairs of the Faculty of Computers and Information Systems, Assiut University, Egypt.
APA, Harvard, Vancouver, ISO, and other styles
41

Behnia, Bardia, Babak Shirazi, Iraj Mahdavi, and Mohammad Mahdi Paydar. "Nested bi-level metaheuristic algorithms for cellular manufacturing systems considering workers’ interest." RAIRO - Operations Research, August 29, 2019. http://dx.doi.org/10.1051/ro/2019075.

Full text
Abstract:
Due to the competitive nature of the market and the requirement to produce various products with short life cycles, cellular manufacturing systems have found a special role in manufacturing environments. Creativity and innovation in products are the results of the mental effort of the workforce, in addition to the allocation of machinery and parts. Assignment of the workforce to cells based on interest and ability indices is a tactical decision, while cell formation is a strategic decision. To make the correct decision, these two problems should be solved separately while considering their impact on each other. For this reason, a novel bi-level model is designed to make decentralized decisions. Because of the importance of minimizing voids and exceptional elements in the cellular manufacturing system, cell formation is considered the leader at the first level, and the assignment of human resources is considered the follower at the second level. To achieve product innovation and synergy among staff, the objective function at the second level also considers increasing the workers' interest in cooperating with each other. Given the NP-hard nature of cell formation and bi-level programming, a nested bi-level genetic algorithm and particle swarm optimization are developed to solve the mathematical model. Various test problems have been solved by applying these two methods, and the validated results show the efficiency of the proposed model. Real experimental comparisons are also presented. In contrast with previous works, these results show that the new method achieves the minimum computational time, cell load variation, total intercellular movements, and total intracellular movements. These effects play an important role in improving cellular manufacturing behavior.
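To make the nested structure described in this abstract easier to picture, the following is a small, purely illustrative Java sketch of a nested bi-level loop: an outer (leader) search over cell-formation candidates, each of which is scored only after an inner (follower) search over worker assignments has reacted to it. None of the names, objective functions, or search steps below come from the paper; they are placeholders standing in for the authors' genetic algorithm and particle swarm optimization at the two levels.

```java
import java.util.Comparator;
import java.util.Random;
import java.util.stream.Stream;

/** Illustrative skeleton of a nested bi-level metaheuristic (all names are placeholders). */
public class NestedBiLevelSketch {

    record Leader(long cellFormationSeed) {}      // stand-in for a cell-formation solution
    record Follower(double workersInterest) {}    // stand-in for a worker-assignment solution

    static final Random RNG = new Random(42);

    /** Follower level: for a fixed cell formation, search for the worker assignment
     *  that maximises workers' interest (here a toy random search). */
    static Follower solveFollower(Leader leader, int followerIterations) {
        return Stream.generate(() -> new Follower(RNG.nextDouble()))
                .limit(followerIterations)
                .max(Comparator.comparingDouble(Follower::workersInterest))
                .orElseThrow();
    }

    /** Leader objective (e.g. voids and exceptional elements), evaluated only after
     *  the follower has reacted; here a toy function of both levels. */
    static double leaderCost(Leader leader, Follower reaction) {
        return Math.abs(leader.cellFormationSeed() % 100) - reaction.workersInterest();
    }

    public static void main(String[] args) {
        int leaderIterations = 20, followerIterations = 50;
        Leader best = null;
        double bestCost = Double.POSITIVE_INFINITY;
        for (int i = 0; i < leaderIterations; i++) {
            Leader candidate = new Leader(RNG.nextLong());           // leader move (GA/PSO step in the paper)
            Follower reaction = solveFollower(candidate, followerIterations);
            double cost = leaderCost(candidate, reaction);           // leader scored on the reacted solution
            if (cost < bestCost) { bestCost = cost; best = candidate; }
        }
        System.out.println("best leader cost: " + bestCost + " for " + best);
    }
}
```

In the paper's algorithms, the random candidate generation at each level would be replaced by genetic or particle swarm operators; the sketch only shows why every leader evaluation requires a complete inner search, which is what makes the bi-level problem computationally demanding.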
APA, Harvard, Vancouver, ISO, and other styles
42

Khan, Sulaiman, Habib Ullah Khan, and Shah Nazir. "Systematic analysis of healthcare big data analytics for efficient care and disease diagnosing." Scientific Reports 12, no. 1 (2022). http://dx.doi.org/10.1038/s41598-022-26090-5.

Full text
Abstract:
Big data has revolutionized the world by providing tremendous opportunities for a variety of applications. It contains a gigantic amount of data, and especially a plethora of data types, which has been significantly useful in diverse research domains. In the healthcare domain, researchers use computational devices to extract enriched, relevant information from this data and develop smart applications to solve real-life problems in a timely fashion. Electronic health (eHealth) and mobile health (mHealth) facilities, along with the availability of new computational models, have enabled doctors and researchers to extract relevant information and visualize healthcare big data in a new spectrum. The digital transformation of healthcare systems through the use of information systems, medical technology, and handheld and smart wearable devices has posed many challenges to researchers and caretakers in the form of storage, minimizing treatment cost, and processing time (to extract enriched information and minimize error rates in order to make optimum decisions). In this research work, the existing literature is analysed and assessed to identify gaps that affect the overall performance of the available healthcare applications, and to suggest enhanced solutions to address these gaps. In this comprehensive systematic research work, the existing literature reported from 2011 to 2021 is thoroughly analysed to identify the efforts made to facilitate doctors and practitioners in diagnosing diseases using healthcare big data analytics. A set of research questions is formulated to analyse the relevant articles, identify the key features and optimum management solutions, and later use these analyses to achieve effective outcomes. The results of this systematic mapping conclude that, despite the hard efforts made in the domain of healthcare big data analytics, newer hybrid machine learning and cloud computing-based models should be adopted to reduce treatment cost and simulation time and to achieve improved quality of care. This systematic mapping will also enhance the capability of doctors, practitioners, researchers, and policymakers to use this study as evidence for future research.
APA, Harvard, Vancouver, ISO, and other styles
43

Shaw, Janice Marion. "The Curious Transformation of Boy to Computer." M/C Journal 19, no. 4 (2016). http://dx.doi.org/10.5204/mcj.1130.

Full text
Abstract:
Mark Haddon’s The Curious Incident of the Dog in the Night-Time has achieved success as “the new Rain Man” or “the new definitive, popular account of the autistic condition” (Burks-Abbott 294). Integral to its favourable reception is the way it conflates the autistic main character, the fifteen-year-old narrator Christopher Boone, with the savant, or individual who exhibits both neurological problems and giftedness, thereby engaging with the way autism is presented in popular culture. In a variety of contemporary films and television series, autism has been transformed from a disability to a form of giftedness by relating it to abilities associated in contemporary media with a genius, in particular by invoking the metaphor of an autistic mind as a type of computer. As a result, the book engages with the current association of giftedness in mathematics and science with social awkwardness and isolation as constructed in popular culture: in idiomatic terms, the genius “nerd” figure characterised by an uncertain, adolescent approach to social contact (Kendall 353). The disablement of the character is, then, lessened so that the idea of being “special,” continually evoked throughout the text, has a transformative function that is related less to the special needs of those with a disability and more to the common element in adolescent fiction of longing for extraordinary power and control through being a special, gifted individual. The Curious Incident of the Dog in the Night-Time relates the protagonist, Christopher, to Sherlock Holmes and his methods of detection, specifically through the title being taken from a story by Conan Doyle, “Silver Blaze,” in which the “curious incident” referred to is that the dog did nothing in the night. In the original story, that the dog did not bark or react to an intruder was a clue that the person was known to the animal, so allowing Holmes to solve the crime by a process of deduction. Christopher copies these traditional methods of the classical detective to solve his personal mystery, that of who killed a neighbour’s dog, Wellington. The adoption of this title allows a double irony to emerge. Christopher’s attempts to emulate Holmes in his approach to crime are predicated on his assumption of his likeness to the model of the classical detective as he states, “I think that if I were a proper detective he is the kind of detective I would be,” pointing out the similarity of their powers of observation and his ability, like Holmes, to “detach his mind at will” as well as his capacity to find patterns in events (92). Through the novel, these attributes are aligned with his autism, constructing a trope of his disability conferring extraordinary abilities that are predicated on a computer-like detachment and precision in his method of thinking. The accessible narrative of the autistic Christopher gives the reader the impression of being able to understand the perspective of an individual with a spectrum disorder. In this way, the text not only engages with, but contributes to the construction of this disability in current popular culture as merely an extension of giftedness, especially in mathematics, and an associated unwillingness to communicate. Indeed, according to Raoul Eshelman, “one of its most engaging narrative devices is to make us identify with a mentally impaired narrator who is manifestly not interested in identifying either with us or anyone else” (1). 
The main character’s reference to mathematical and scientific ideas exploits an interest in giftedness already established by popular literature and film, and engages with a transformation effected in popular culture of the genius as autistic, and its corollary of an autistic person as potentially a genius. Such a construction ranges from fictional characters like Sheldon in The Big Bang Theory, Charlie and his physicist colleagues in Numb3rs, and Raymond Babbitt in Rain Man, to real life characters or representative figures in reality series and feature films such as x + y, The Imitation Game, The Big Short, and the television program Beauty and the Geek. While never referring specifically to autism, all the real or fictional representations contribute to the construction of a stereotype in which behaviours on the autistic spectrum are linked to a talent in mathematics and the sciences. In addition to this, detectives in the classical crime fiction alluded to in the novel typically exhibit traits of superhuman powers of deduction, pattern making, and problem solving that engage with the popular notion of genius in general and mathematics in particular by possessing a mind like a computer. Such detectives from current television series as Saga from The Bridge and Spencer Reid from Criminal Minds exhibit distance, coldness, and lack of social awareness or empathy with others, and this is presented as the basis of their extraordinary ability to discern patterns and solve crime. Spencer Reid, for example, has three PhDs in Science disciplines and Mathematics. Charlie in the television series Numb3rs is also a genius who uses his mathematical abilities to not only find the solution to crime but also explain the maths behind it to his FBI colleagues, and, in conjunction, the audience. But the character with the clearest association to Christopher is, naturally, Sherlock Holmes, both as constructed in Conan Doyle’s original text and the current adaptations and transformations of it. The television series Sherlock and Elementary, as well as the films Sherlock Holmes and Sherlock Holmes: A Game of Shadows all invoke a version of Holmes in which his powers of deduction are associated with symptoms to be found in a spectrum disorder.Like Christopher, the classical detective is characterised by being cold, emotionless, distant, socially inept, and isolated, but also keenly observant, analytical, and scientific; one who approaches the crime as a puzzle to be solved (Cawelti 43) with computer-like precision. In what is considered to be the original detective story, The Murders in the Rue Morgue, Poe included a “pseudo-mathematical logic in his literary scenario” (Platten 255). In Conan Doyle’s stories, Holmes, too, adopts a mathematical and scientific approach to construct patterns from clues that he alone can discern, and thereby solve the crime. The depiction of investigators in contemporary media such as Charlie in Numb3rs engages with these origins so that he is objective, dispassionate, and able to relate to real-world problems only through the filter of mathematical formulae. 
Christopher is presented similarly by engaging with the idea of the detective as implied savant and relying on an ability to discern patterns for successful crime solving.The book links the disabling behaviours of autism with the savant, so that the stereotype of the mystic displaying both disability and giftedness in fiction of earlier ages has been transformed in contemporary literature to a figure with extraordinary powers related both to autism and to the contemporary form of mysticism: innate mathematical ability and computer-style calculation. Allied with what Murray terms the “unknown and ambiguous nature” of autism, it is characterised as “the alien within the human, the mystical within the rational, the ultimate enigma” (25) in a way that is in keeping with the current fascination with the nature of genius and its association with being “special,” a term continually evoked and discussed throughout the book by the main character. The chapters on scientific ideas relate to Christopher’s world view, filtered through a mathematical and analytical approach to life and relationships with other people. Christopher examines beliefs such as the concept of humanity as superior to other animals, and the idea of religion and creationism, that is, the idea of humanity itself as special, with a cold and logical approach. He similarly discusses the idea of the individual person as special, linking this to a metaphor of the human mind being a computer (203, 148). Christopher’s narrow perspective as a result of his autism is not presented as disabling so much as protective, because the metaphorical connection of his viewpoint to a computer provides him with distance. Although initially Christopher fails to realise the significance of events, this allows him to be “switched off” (103) from events that he finds traumatising.The transformative metaphor of an autistic individual thinking like a computer is also invoked through Christopher’s explanation of “why people think that their brains are special, and different from computers” (147). Indeed, both in terms of his tendency to retreat or by “pressing CTRL + ALT + DEL and shutting down programs and turning the computer off and rebooting” (178) in times of stress, Christopher metaphorically views himself as a computer. Such a perspective invokes yet another popular cultural reference through the allusion to the human brain as “Captain Jean-Luc Picard in Star Trek: The Next Generation, sitting in his captain’s seat looking at a big screen” (147). But more importantly, the explanation refers to the basic premise of the book, that the text offers access to a condition that is inherently unknowable, but able to be understood by the reader through metaphor, often based on computers or technology as a result of a popular construction of autism that “the condition is the product of a brain in which the hard drive is incorrectly formatted” (Murray 25).Throughout the novel, the notion of “special” is presented as a trope for those with a disability, but as the protagonist, Christopher, points out, everyone is special in some way, so the whole idea of a disability as disabling is problematised throughout the text, while its associations of giftedness are upheld. Christopher’s disability, never actually designated as Asperger’s Syndrome or any type of spectrum disorder, is transformed into a protective mechanism that shields him from problematic social relationships of which he is unaware, but that the less naïve reader can well discern. 
In this way, rather than a limitation, the main character’s disorder protects him from a harsh reality. Even Christopher’s choice of Holmes as a role model is indicative of his desire to impose an eccentric order on his world, since this engages with a character in popular fiction who is famous not simply for his abilities, but for his eccentricity bordering on a form of autism. His aloof personality and cold logic not only fail to hamper him in his investigations, but these traits actually form the basis of them. The majority of recent adaptations of Conan Doyle’s stories, especially the BBC series Sherlock, depict Holmes with symptoms associated with spectrum disorder such as lack of empathy, difficulty in communication, and limited social skills, and these are clearly shown as contributing to his problem-solving ability. The trope of Christopher as detective also allows a parodic, postmodern comment on the classical detective form, because typically this fiction has a detective that knows more than the reader, and therefore the goal for the reader is to find the solution to the crime before it is revealed by the investigator in the final stages of the text (Rzepka 14). But the narrative works ironically in the novel since the non-autistic reader knows more than a narrator who is hampered by a limited worldview. From the beginning of the book, the narrative as focalised through Christopher’s narrow perspective allows a more profound view of events to be adopted by the reader, who is able to read clues that elude the protagonist. Christopher is well aware of this as he explains his attraction to the murder mystery novel, even though he has earlier stated he does not like novels since his inability to imagine or empathise means he is unable to relate to their fiction. For him, the genre of murder mystery is more akin to the books on maths and science that he finds comprehensible, because, like the classical detective, he views the crime as primarily a puzzle to be solved: as he states, “In a murder mystery novel someone has to work out who the murderer is and then catch them. It is a puzzle. If it is a good puzzle you can sometimes work out the answer before the end of the book” (5). But unlike Christopher, Holmes invariably knows more about the crime, can interpret the clues, and find the pattern, before other characters such as Watson, and especially the reader. In contrast, in The Curious Incident of the Dog in the Night-Time, the reader has more awareness of the probable context and significance of events than Christopher because, like a computer, he can calculate but not imagine. The reader can interpret clues within the plot of the story, such as the synchronous timing of the “death” of Christopher’s mother with the breakdown of the marriage of a neighbour, Mrs Shears. The astute reader is able to connect these events and realise that his mother has not died, but is living in a relationship with the neighbour’s husband. The construction of this pattern is denied Christopher, since he fails to determine their significance due to his limited imagination. Such a failure is related to Simon Baron-Cohen’s Theory of Mind, in which he proposes that autistic individuals have difficulty with social behaviour because they lack the capacity to comprehend that other people have individual mental states, or as Christopher terms it, “when I was little I didn’t understand about other people having minds” (145). 
Haddon utilises fictional licence when he allows Christopher to overcome such a limitation by a conscious shift in perspective, despite the specialist teacher within the text claiming that he would “always find this very difficult” (145). Christopher has here altered his view of events through his modelling both on the detective genre and on his affinity with mathematics, since he states, “I don’t find this difficult now. Because I decided that it was a kind of puzzle, and if something is a puzzle there is always a way of solving it” (145). In this way, the main character is shown as transcending symptoms of autism through the power of his giftedness in mathematics to ultimately discern a pattern in human relationships thereby adopting a computational approach to social problems.Haddon similarly explains the perspective of an individual with autism through a metaphor of Christopher’s memory being like a DVD recording. He is able to distance himself from his memories, choosing “Rewind” and then “Fast Forward” (96) to retrieve his recollection of events. This aspect of the precision of his memory relates to his machine-like coldness and lack of empathy for the feelings of others. But it also refers to the stereotype of the nerd figure in popular culture, where the nerd is able to relate more to a computer than to other people, exemplified in Sheldon from the television series The Big Bang Theory. Thus the presentation of Christopher’s autism relates to his giftedness in maths and science more than to areas that relate to his body. In general, descriptions of inappropriate or distressing bodily functions associated with disorders are mainly confined to other students at Christopher’s school. His references to his fellow students, such as Joseph eating his poo and playing in it (129) and his unsympathetic evaluation of Steve as not as clever or interesting as a dog because he “needs help to eat his food and could not even fetch a stick” (6), make a clear distinction between him and the other children, who despite being termed “special needs” are “special” in a different way from Christopher, because, according to him, “All the other children at my school are stupid” (56). While some reference is made to Christopher’s inappropriate behaviour in times of stress, such as punching a fellow student, wetting himself while on the train, and vomiting outside the school, in the main the emphasis is on his giftedness as a result of his autism, as displayed in the many chapters where he explains scientific and mathematical concepts. This is extrapolated into a further mathematical metaphor underlying the book, that he is like one of the prime numbers he finds so fascinating, because prime numbers do not fit neatly into the pattern of the number system, but they are essential and special nevertheless. Moreover, as James Berger suggests, prime numbers can “serve as figures for the autistic subject,” because like autistic individuals “they do not mix; they are singular, indivisible, unfactorable” yet “Mathematics could not exist without these singular entities that [. . .] are only apparent anomalies” (271).Haddon therefore offers a transformation by confounding autism with a computer-like ability to solve mathematical problems, so that the text is, as Haddon concedes, “as much about a gifted boy with behavior problems as it is about anyone on the autism spectrum” (qtd. in Burks-Abbott 291). 
Indeed, the word “autism” does not even appear in the book, while the terms “genius,” (140) “clever,” (32, 65, 252) and the like are continually being invoked in descriptions of Christopher, even if ironically. More importantly, the reader is constantly being shown his giftedness through the reiteration of his study of A Level Mathematics, and his explanation of scientific concepts. Throughout, Christopher explains aspects of mathematics, astrophysics, and other sciences, referring to such well-known puzzles in popular culture as the Monty Hall problem, as well as more obscure formulae and their proofs. They function to establish Christopher’s intuitive grasp of complex mathematical and scientific principles, as well as providing the reader with insight into both his perspective and the paradoxical nature of an individual who is at once able to solve quadratic equations in his head, yet is incapable of understanding the simple instruction, “Take the tube to Willesden Junction” (211). The presentation of Christopher is that of an individual who displays an extension of the social problems established in popular literature as connected to a talent for mathematics, therefore engaging with a depiction already existing in popular mythology: the isolated and analytical nerd or genius social introvert. Indeed, much of Christopher’s autistic behaviour functions to protect him from unsettling or traumatic information, since he fails to realise the significance of the information he collects or the clues he is given. His disability is therefore presented as not limiting so much as protective, and so the notion of disability is subsumed by the idea of the savant. The book, then, engages with a contemporary representation within popular culture that has transformed spectrum disability into mathematical giftedness, thereby metaphorically associating the autistic mind with the computer. References Baron-Cohen, Simon. Mindblindness: An Essay on Autism and Theory of Mind. Cambridge MA: MIT Press, 1995. Berger, James. “Alterity and Autism: Mark Haddon’s Curious Incident in the Neurological Spectrum.” Autism and Representation. Ed. Mark Osteen. Hoboken: Routledge, 2007. 271–88. Burks-Abbott, Gyasi. “Mark Haddon’s Popularity and Other Curious Incidents in My Life as an Autistic.” Autism and Representation. Ed. Mark Osteen. Hoboken: Routledge, 2007. 289–96. Cawelti, John G. Adventure, Mystery, and Romance: Formula Stories as Art and Popular Culture. Chicago: U of Chicago P, 1976. Eshelman, Raoul. “Transcendence and the Aesthetics of Disability: The Case of The Curious Incident of the Dog in the Night-Time.” Anthropoetics: The Journal of Generative Anthropology 15.1 (2009). Haddon, Mark. The Curious Incident of the Dog in the Night-Time. London: Random House Children’s Books, 2004. Kendall, Lori. “The Nerd Within: Mass Media and the Negotiation of Identity among Computer-Using Men.” Journal of Men’s Studies 3 (1999): 353–67. Murray, Stuart. “Autism and the Contemporary Sentimental: Fiction and the Narrative Fascination of the Present.” Literature and Medicine 25.1 (2006): 24–46. Platten, David. “Reading Glasses, Guns and Robots: A History of Science in French Crime Fiction.” French Cultural Studies 12 (2001): 253–70. Rzepka, Charles J. Detective Fiction. Cambridge, UK: Polity Press, 2005.
APA, Harvard, Vancouver, ISO, and other styles
44

Dieter, Michael. "Amazon Noir." M/C Journal 10, no. 5 (2007). http://dx.doi.org/10.5204/mcj.2709.

Full text
Abstract:
&#x0D; &#x0D; &#x0D; There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. 
When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organised and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. 
In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, as productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrn – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (i.e. Kembrew McLeod’s patenting of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action. 
Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™ then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the amount of technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com not to publicise the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). 
To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statue, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts. 
Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This later point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spillover and feedback on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? 
As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. 
As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics. As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. To the extent that we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound weaknesses to master the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles. References “Amazon Noir – The Big Book Crime.” http://www.amazon-noir.com/&gt;. Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006. Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39. Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214. Deleuze, Gilles. Foucault. Trans. Seán Hand. Minneapolis: U of Minnesota P, 1986. ———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176. ———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182. Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime 25 Aug 2007. Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004. Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007. Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999. McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006. ———. 
“The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212. Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308. “Piracy Online.” Recording Industry Association of America. 28 Aug 2007. http://www.riaa.com/physicalpiracy.php. Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi, Sarai Media Lab, 2001. 93-99. http://www.sarai.net. Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73. ———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38. Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306.
APA, Harvard, Vancouver, ISO, and other styles