Academic literature on the topic 'Computational-hard real-life problem'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Computational-hard real-life problem.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Computational-hard real-life problem"

1

Iantovics, Laszlo Barna, László Kovács, and Corina Rotar. "MeasApplInt - a novel intelligence metric for choosing the computing systems able to solve real-life problems with a high intelligence." Applied Intelligence 49 (April 1, 2019): 3491–511. https://doi.org/10.1007/s10489-019-01440-5.

Abstract:
Intelligent agent-based systems are applied to many real-life difficult problem-solving tasks in domains like transport and healthcare. For many classes of real-life difficult problems, it is important to make an efficient selection of the computing systems that are able to solve the problems very intelligently. The selection of the appropriate computing systems should be based on an intelligence metric that is able to measure the systems' intelligence for real-life problem solving. In this paper, we propose a novel universal metric called MeasApplInt able to measure and compare the real-life problem-solving machine intelligence of cooperative multiagent systems (CMASs). Based on their measured intelligence levels, two studied CMASs can be classified to the same or to different classes of intelligence. MeasApplInt is compared with a recent state-of-the-art metric called MetrIntPair. The comparison was based on the same principle of difficult problem-solving intelligence and the same pairwise/matched problem-solving intelligence evaluations. Our analysis shows that the main advantage of MeasApplInt versus the compared metric is its robustness. For evaluation purposes, we performed an illustrative case study considering two CMASs composed of simple reactive agents providing problem-solving intelligence at the systems' level. The two CMASs have been designed for solving an NP-hard problem with many applications in the standard, modified and generalized formulation. The conclusion of the case study, using the MeasApplInt metric, is that the studied CMASs have the same real-life problem-solving intelligence level. An additional experimental evaluation of the proposed metric is attached as an Appendix.
2

Konstantakopoulos, Grigorios D., Sotiris P. Gayialis, Evripidis P. Kechagias, Georgios A. Papadopoulos, and Ilias P. Tatsiopoulos. "A Multiobjective Large Neighborhood Search Metaheuristic for the Vehicle Routing Problem with Time Windows." Algorithms 13, no. 10 (2020): 243. http://dx.doi.org/10.3390/a13100243.

Abstract:
The Vehicle Routing Problem with Time Windows (VRPTW) is an NP-Hard optimization problem which has been intensively studied by researchers due to its applications in real-life cases in the distribution and logistics sector. In this problem, customers define a time slot, within which they must be served by vehicles of a standard capacity. The aim is to define cost-effective routes, minimizing both the number of vehicles and the total traveled distance. When we seek to minimize both attributes at the same time, the problem is considered as multiobjective. Although numerous exact, heuristic and metaheuristic algorithms have been developed to solve the various vehicle routing problems, including the VRPTW, only a few of them face these problems as multiobjective. In the present paper, a Multiobjective Large Neighborhood Search (MOLNS) algorithm is developed to solve the VRPTW. The algorithm is implemented using the Python programming language, and it is evaluated in Solomon’s 56 benchmark instances with 100 customers, as well as in Gehring and Homberger’s benchmark instances with 1000 customers. The results obtained from the algorithm are compared to the best-published, in order to validate the algorithm’s efficiency and performance. The algorithm is proven to be efficient both in the quality of results, as it offers three new optimal solutions in Solomon’s dataset and produces near optimal results in most instances, and in terms of computational time, as, even in cases with up to 1000 customers, good quality results are obtained in less than 15 min. Having the potential to effectively solve real life distribution problems, the present paper also discusses a practical real-life application of this algorithm.
3

Joshi, Rajendra Prasad. "Analysis of Metaheuristic Solutions to the Response Time Variability Problem." Api Journal of Science 1 (December 31, 2024): 81–83. https://doi.org/10.3126/ajs.v1i1.75493.

Abstract:
The problem of variation in response time is known as the response time variability problem (RTVP). It is a combinatorial NP-hard problem with a broad range of real-life applications. The RTVP arises whenever events, jobs, clients or products need to be sequenced so as to minimize the variability of the time they wait for their next turn in obtaining the resources they need to advance. In the RTVP, the concern is to find a near-optimal sequence of jobs with the objective of minimizing the response time variability. Metaheuristic approaches to solving the RTVP include Multi-start (MS), the Greedy Randomized Adaptive Search Procedure (GRASP) and Particle Swarm Optimization (PSO). In this paper, the computational results of MS and GRASP are analyzed.
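The multi-start idea referenced in this abstract can be illustrated with a minimal sketch (an illustrative toy, not the authors' implementation): generate many random sequences, score each by its response time variability, and keep the best. The `rtv` and `multi_start` names, the parameters, and the tiny two-product instance below are hypothetical.

```python
import random

def rtv(seq, products):
    """Response time variability of a cyclic sequence: for each product,
    the sum of squared deviations of the gaps between consecutive
    occurrences from the ideal gap D / d_i."""
    D = len(seq)
    total = 0.0
    for p, d in products.items():
        pos = [i for i, s in enumerate(seq) if s == p]
        ideal = D / d
        for k in range(d):
            # cyclic gap to the next occurrence (a lone occurrence gets gap D)
            gap = (pos[(k + 1) % d] - pos[k] - 1) % D + 1
            total += (gap - ideal) ** 2
    return total

def multi_start(products, starts=200, seed=0):
    """Multi-start heuristic: score many random sequences, keep the best."""
    rng = random.Random(seed)
    base = [p for p, d in products.items() for _ in range(d)]
    best_seq, best_val = None, float("inf")
    for _ in range(starts):
        seq = base[:]
        rng.shuffle(seq)
        val = rtv(seq, products)
        if val < best_val:
            best_seq, best_val = seq, val
    return best_seq, best_val
```

A GRASP variant would replace the uniform shuffle with a greedy randomized construction; the evaluation function stays the same.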
4

Hidri, Lotfi, and Ahmed M. Elsherbeeny. "Optimal Solution to the Two-Stage Hybrid Flow Shop Scheduling Problem with Removal and Transportation Times." Symmetry 14, no. 7 (2022): 1424. http://dx.doi.org/10.3390/sym14071424.

Abstract:
The two-stage hybrid flow shop scheduling problem with removal and transportation times is addressed in this paper. The maximum completion time is the objective function to be minimized. This scheduling problem models real-life situations encountered in manufacturing and industrial areas. On the other hand, the studied problem is a challenging one from a theoretical point of view, since it is NP-hard in the strong sense. In addition, the problem is symmetric in the following sense: scheduling from the second stage to the first provides the same optimal solution as the studied problem. This property allows extending all the proposed procedures to the symmetric problem in order to improve the quality of the obtained solution. Based on the existing literature and to the best of our knowledge, this study is the first one addressing the removal time and the transportation time in the hybrid flow shop environment simultaneously. In order to solve the studied problem optimally, a heuristic composed of two phases is proposed, and a new family of lower bounds is developed. In addition, an exact branch-and-bound algorithm is presented to solve the hard test problems, i.e., the instances left unsolved by the proposed heuristic. In order to evaluate the performance of the proposed procedures, an extensive experimental study is carried out over benchmark test problems with sizes of up to 200 jobs. The obtained computational results provide strong evidence that the presented procedures are very effective, since 90% of the test problems are solved optimally within a moderate time of 47.44 s. Furthermore, the unsolved test problems present a relative gap of only 2.4%.
5

Zhang, Wenze, and Chenyang Xu. "A comparative study between SA and GA in solving MTSP." Theoretical and Natural Science 18, no. 1 (2023): 61–70. http://dx.doi.org/10.54254/2753-8818/18/20230321.

Abstract:
The multiple traveling salesman problem (MTSP) is a combinatorial optimization and NP-hard problem. In practice, the computational resources required to solve such problems are usually prohibitive, and, in most cases, using heuristic algorithms is the only practical option. This paper implements genetic algorithms (GA) and simulated annealing (SA) to solve the MTSP and conducts an experimental study based on a benchmark from the TSPLIB instances to compare the performance of the two algorithms in reality. The results show that GA can achieve an acceptable solution in a shorter time for any of the MTSP cases and is more accurate when the data size is small. Meanwhile, SA is more robust and achieves a better solution than GA for complex MTSP cases, but it takes more time to converge. Therefore, the result indicates that it is hard to identify which algorithm is comprehensively superior to the other. However, it also provides an essential reference for developers who want to choose algorithms to solve the MTSP in real life, helping them balance the algorithms' performance on the different metrics they value.
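For readers weighing the two metaheuristics, here is a minimal simulated annealing sketch for the single-salesman TSP (the simpler core of the MTSP), using 2-opt segment reversals as the neighborhood move. This is an illustrative toy, not the paper's implementation; the cooling schedule and parameter values are arbitrary assumptions.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a cyclic tour given a distance matrix."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def simulated_annealing(dist, t0=10.0, cooling=0.995, iters=20000, seed=0):
    """SA with 2-opt moves: accept worsening moves with probability exp(-delta/t)."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    best = tour[:]
    t = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(tour, dist) < tour_length(best, dist):
                best = tour[:]
        t *= cooling  # geometric cooling toward pure greedy descent
    return best, tour_length(best, dist)
```

A GA would instead maintain a population of tours and recombine them; the shared ingredient is the tour-length objective above.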
6

Kwan, Raymond S. K., Ann S. K. Kwan, and Anthony Wren. "Evolutionary Driver Scheduling with Relief Chains." Evolutionary Computation 9, no. 4 (2001): 445–60. http://dx.doi.org/10.1162/10636560152642869.

Abstract:
Public transport driver scheduling problems are well known to be NP-hard. Although some mathematically based methods are being used in the transport industry, there is room for improvement. A hybrid approach incorporating a genetic algorithm (GA) is presented. The role of the GA is to derive a small selection of good shifts to seed a greedy schedule construction heuristic. A group of shifts called a relief chain is identified and recorded. The relief chain is then inherited by the offspring and used by the GA for schedule construction. The new approach has been tested using real-life data sets, some of which represent very large problem instances. The results are generally better than those compiled by experienced schedulers and are comparable to solutions found by integer linear programming (ILP). In some cases, solutions were obtained when the ILP failed within practical computational limits.
7

Yaşar, Abdurrahman, Muhammed Fatih Balin, Xiaojing An, Kaan Sancak, and Ümit V. Çatalyürek. "On Symmetric Rectilinear Partitioning." ACM Journal of Experimental Algorithmics 27 (December 31, 2022): 1–26. http://dx.doi.org/10.1145/3523750.

Abstract:
Even distribution of irregular workload to processing units is crucial for efficient parallelization in many applications. In this work, we are concerned with a spatial partitioning called rectilinear partitioning (also known as generalized block distribution). More specifically, we address the problem of symmetric rectilinear partitioning of two dimensional domains, and utilize sparse matrices to model them. By symmetric, we mean both dimensions (i.e., the rows and columns of the matrix) are identically partitioned yielding a tiling where the diagonal tiles (blocks) will be squares. We first show that this problem is NP-hard, and we propose four heuristics to solve two different variants of this problem. To make the proposed techniques more applicable in real life application scenarios, we further reduce their computational complexities by utilizing effective sparsification strategies together with an efficient sparse prefix-sum data structure. We experimentally show the proposed algorithms are efficient and effective on more than six hundred test matrices/graphs.
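The prefix-sum data structure the abstract mentions can be illustrated in a simplified dense form: with a 2D prefix sum, the load of any block of a symmetric rectilinear tiling is evaluated in O(1), which is what makes heuristic search over cut points cheap. The function names and the dense-matrix representation below are illustrative assumptions (the paper works with sparse matrices and a sparse prefix-sum variant).

```python
def prefix_sum_2d(mat):
    """ps[i][j] = sum of mat[0:i][0:j]; a zero border simplifies queries."""
    n, m = len(mat), len(mat[0])
    ps = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(m):
            ps[i + 1][j + 1] = mat[i][j] + ps[i][j + 1] + ps[i + 1][j] - ps[i][j]
    return ps

def block_load(ps, r0, r1, c0, c1):
    """Total load of the half-open block [r0, r1) x [c0, c1), in O(1)."""
    return ps[r1][c1] - ps[r0][c1] - ps[r1][c0] + ps[r0][c0]

def partition_loads(mat, cuts_inner):
    """Block loads of the symmetric tiling of a square matrix that uses the
    same cut points on rows and columns (so diagonal tiles are square)."""
    ps = prefix_sum_2d(mat)
    cuts = [0, *cuts_inner, len(mat)]
    k = len(cuts) - 1
    return [[block_load(ps, cuts[i], cuts[i + 1], cuts[j], cuts[j + 1])
             for j in range(k)]
            for i in range(k)]
```

A partitioning heuristic can then score candidate cut vectors by the maximum entry of `partition_loads` without rescanning the matrix.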
8

Yang, Yajun, Zhongfei Li, Xin Wang, and Qinghua Hu. "Finding the Shortest Path with Vertex Constraint over Large Graphs." Complexity 2019 (February 19, 2019): 1–13. http://dx.doi.org/10.1155/2019/8728245.

Abstract:
Graph is an important complex network model to describe the relationship among various entities in real applications, including knowledge graph, social network, and traffic network. Shortest path query is an important problem over graphs and has been well studied. This paper studies a special case of the shortest path problem to find the shortest path passing through a set of vertices specified by user, which is NP-hard. Most existing methods calculate all permutations for given vertices and then find the shortest one from these permutations. However, the computational cost is extremely expensive when the size of graph or given set of vertices is large. In this paper, we first propose a novel exact heuristic algorithm in best-first search way and then give two optimizing techniques to improve efficiency. Moreover, we propose an approximate heuristic algorithm in polynomial time for this problem over large graphs. We prove the ratio bound is 3 for our approximate algorithm. We confirm the efficiency of our algorithms by extensive experiments on real-life datasets. The experimental results validate that our algorithms always outperform the existing methods even though the size of graph or given set of vertices is large.
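The permutation-based baseline that this abstract says existing methods rely on can be sketched directly: run Dijkstra from each relevant vertex, then try every visiting order of the required vertices and chain the shortest-path distances. This is the exact but exponential baseline the paper improves upon; the function names and the toy graph are illustrative.

```python
import heapq
from itertools import permutations

def dijkstra(graph, src):
    """Single-source shortest path distances; graph maps u -> [(v, weight)]."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def shortest_via(graph, src, dst, must_visit):
    """Exact baseline: minimum src->dst cost passing through all of must_visit,
    by enumerating every visiting order (factorial in |must_visit|)."""
    d = {u: dijkstra(graph, u) for u in [src, *must_visit]}
    best = float("inf")
    for order in permutations(must_visit):
        hops = [src, *order, dst]
        cost = sum(d[a].get(b, float("inf")) for a, b in zip(hops, hops[1:]))
        best = min(best, cost)
    return best
```

The cost of the enumeration is what motivates the paper's best-first heuristic and its 3-approximation.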
9

Peng, Liwen, and Yongguo Liu. "Feature Selection and Overlapping Clustering-Based Multilabel Classification Model." Mathematical Problems in Engineering 2018 (2018): 1–12. http://dx.doi.org/10.1155/2018/2814897.

Abstract:
Multilabel classification (MLC) learning, which is widely applied in real-world applications, is a very important problem in machine learning. Some studies show that a clustering-based MLC framework performs effectively compared to a nonclustering framework. In this paper, we explore the clustering-based MLC problem. Multilabel feature selection also plays an important role in classification learning because many redundant and irrelevant features can degrade performance and a good feature selection algorithm can reduce computational complexity and improve classification accuracy. In this study, we consider feature dependence and feature interaction simultaneously, and we propose a multilabel feature selection algorithm as a preprocessing stage before MLC. Typically, existing cluster-based MLC frameworks employ a hard cluster method. In practice, the instances of multilabel datasets are distinguished in a single cluster by such frameworks; however, the overlapping nature of multilabel instances is such that, in real-life applications, instances may not belong to only a single class. Therefore, we propose a MLC model that combines feature selection with an overlapping clustering algorithm. Experimental results demonstrate that various clustering algorithms show different performance for MLC, and the proposed overlapping clustering-based MLC model may be more suitable.
10

Withers, P. J., and T. M. Holden. "Diagnosing Engineering Problems with Neutrons." MRS Bulletin 24, no. 12 (1999): 17–23. http://dx.doi.org/10.1557/s0883769400053677.

Abstract:
In the past, many unexpected failures of components were due to poor quality control, or to a failure to calculate, or a miscalculation of, the stresses or fatigue stresses a component would experience in service. Today, improved manufacturing, fracture mechanics, and computational finite element methods combine to provide a solid framework for reducing safety factors, enabling leaner design. In this context, residual stress, that is, stress that equilibrates within the structure and is always present at some level due to manufacturing, presents a real problem. It is difficult to predict and as hard to measure. If unaccounted for in design, these stresses can superimpose upon in-service stresses to result in unexpected failures. Neutron diffraction is one of the few methods able to provide maps of residual stress distributions deep within crystalline materials and engineering components. Neutron strain scanning, as the technique is called, is becoming an increasingly important tool for materials scientists and engineers alike. Point, line-scan, area-scan, and full three-dimensional (3D) maps are being used to design new materials, optimize engineering processes, validate finite element models, predict component life, and diagnose engineering failures.

Book chapters on the topic "Computational-hard real-life problem"

1

Schwanen, Christopher T., Wied Pakusa, and Wil M. P. van der Aalst. "A Dynamic Programming Approach for Alignments on Process Trees." In Lecture Notes in Business Information Processing. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82225-4_7.

Abstract:
A fundamental task in conformance checking is to compute optimal alignments between a given event log and a process model. In general, it is known that this unavoidably incurs high computational costs which, in turn, leads to poor scalability in practice. One angle to attack the complexity is to develop alignment algorithms that exploit particular syntactic restrictions of the underlying process models. In this article, we study alignments for process trees with unique labels. These models are the output of the Inductive Miner, a family of state-of-the-art process discovery algorithms also used by the leading process mining tools. Our main contribution is a novel algorithm that constructs optimal alignments for process trees with unique labels efficiently, i.e., in polynomial time. This is in contrast with general process trees where the problem is NP-complete and general workflow nets where the problem is PSPACE-hard. We give a proof-of-concept implementation of our algorithm in PM4Py and evaluate it on a collection of real-life event logs.
2

Corominas, Albert, Alberto García-Villoria, and Rafael Pastor. "Solving the Response Time Variability Problem by means of Multi-start and GRASP metaheuristics." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2008. https://doi.org/10.3233/978-1-58603-925-7-128.

Abstract:
The Response Time Variability Problem (RTVP) is an NP-hard scheduling optimization problem that has recently appeared in the literature. This problem has a wide range of real-life applications in, for example, manufacturing, hard real-time systems, operating systems and network environments. The RTVP occurs whenever models, clients or jobs need to be sequenced to minimize variability in the time between the instants at which they receive the necessary resources. The RTVP has been already solved in the literature with a multi-start and a GRASP algorithm. We propose an improved multi-start and an improved GRASP algorithm to solve the RTVP. The computational experiment shows that, on average, the results obtained with our proposed algorithms improve on the best obtained results to date.
3

Venkateswarlu, S. China, Shiva Shankar J, and S. Palanivel. "Deploying Artificial Intelligence into Daily Life: Artificial Intelligence for Cyber Security with More Opportunities." In Artificial Intelligence and their Applications. Iterative International Publishers, Selfypage Developers Pvt Ltd, 2024. http://dx.doi.org/10.58532/nbennurch311.

Abstract:
The rapid advancement of information technology has led to an upsurge in cybercriminal activities. As technology continues to evolve, so do the tactics used by individuals involved in digital offenses. Trends in complex, distributed, and internet-based computing have raised significant concerns regarding information security and privacy. Cyber infrastructures, in particular, are highly susceptible to intrusions and various threats. Traditional security measures like sensors and detectors are inadequate for safeguarding these infrastructures, necessitating the development of more sophisticated IT solutions capable of modeling normal behaviors and identifying anomalies. To address these challenges effectively, cyber defense systems must exhibit traits such as flexibility, adaptability, and robustness. They should be able to detect a wide range of threats while making intelligent real-time decisions. Given the sheer volume and speed of cyberattacks, relying solely on human intervention is inadequate for prompt analysis and response. Many of these attacks are orchestrated by intelligent agents like computer worms and viruses, making it essential to combat them using intelligent, semi-autonomous agents that can promptly detect, evaluate, and respond to cyber threats. These computer-generated forces must manage the entire process of responding to attacks, encompassing the identification of the attack type, its targets, the appropriate response, and the prioritization of secondary attack prevention. Furthermore, cyber intrusions are not confined to a single location; they represent a global menace to computer systems worldwide. The expansion of the internet has made knowledge and tools for cybercrime readily accessible to a wide audience, no longer limited to educated specialists. Traditional, rigid algorithms with hard-wired logic have proven ineffective in countering dynamically evolving cyberattacks.
This underscores the importance of innovative approaches, particularly the application of Artificial Intelligence (AI), to enhance our capability to combat cybercrimes. AI introduces flexibility and learning capabilities to software, thereby assisting humans in the fight against cybercrimes. Various AI techniques, inspired by nature, including Computational Intelligence, Neural Networks, Intelligent Agents, Artificial Immune Systems, Machine Learning, Data Mining, Pattern Recognition, Fuzzy Logic, and Heuristics, are playing an increasingly crucial role in the detection and prevention of cybercrimes. AI empowers the design of autonomic computing solutions that can adapt to their usage context, employing methods such as self-management, self-tuning, self-configuration, self-diagnosis, and self-healing. In the realm of information security, AI represents a promising area of research with a focus on enhancing cybersecurity measures in cyberspace. The term "Artificial Intelligence" is used to describe a machine's ability to emulate human-like activities, including problem solving and learning, a concept often referred to as machine learning. The next generation of cybersecurity products is increasingly incorporating Artificial Intelligence and Machine Learning technologies. By analyzing extensive datasets of cybersecurity, network, and physical information, providers of cybersecurity solutions aim to identify and thwart abnormal behavior. Various approaches are employed to utilize AI for cybersecurity. Some applications analyze raw network data to detect irregularities, while others focus on user-entity behavior to identify deviations from the norm. The choice of approach depends on the type of data streams and the level of effort required by analysts.

Conference papers on the topic "Computational-hard real-life problem"

1

Docherty, David, Dale Erickson, and Scott Henderson. "Using AI to Optimize the Use of Gas Lift in Oil Wells." In ADIPEC. SPE, 2022. http://dx.doi.org/10.2118/211028-ms.

Abstract:
Optimization of gas lift rates in an oil field with hundreds of wells is a complex challenge. Typically, traditional control systems and operating strategies fail to optimize the problem due to this complexity. Typical gas lift optimization challenges in a field include the following: real-time flow rate prediction of the various phases, real-time well performance, and real-time pipeline network performance. The dynamic nature of the problem and the variability of the solution space make it extremely hard for traditional simulation-based solutions to locate the optimal performance point in real time. The computational requirement is massive and makes it difficult to perform these calculations at the "edge". This is where combining simulations, human expertise, and machine learning technologies such as Deep Reinforcement Learning helps build AI that can excel in rapidly computing optimized setpoints in complex domains. Using pioneering machine teaching methods combined with multiphase simulations, this paper presents the solution of using Artificial Intelligence (AI) to optimize gas lift rates in real time, finding the optimum gas lift rates for a 4-well pad, 200-well system, such that net profit is increased by 5%-25% for different baselines as the reservoir conditions change over field life.
