Academic literature on the topic 'Void search algorithm'

Below are lists of relevant articles, books, theses, conference papers, and other scholarly sources on the topic 'Void search algorithm.'

Journal articles on the topic "Void search algorithm"

1

Zhang, Yalong, Wei Yu, Xuan Ma, Hisakazu Ogura, and Dongfen Ye. "Multi-Objective Optimization for High-Dimensional Maximal Frequent Itemset Mining." Applied Sciences 11, no. 19 (2021): 8971. http://dx.doi.org/10.3390/app11198971.

Abstract:
The solution space of a frequent itemset generally presents exponential explosive growth because of the high-dimensional attributes of big data. However, the premise of big data association rule analysis is to mine the frequent itemsets in high-dimensional transaction sets. Traditional and classical algorithms such as the Apriori and FP-Growth algorithms, as well as their derivative algorithms, are unacceptable in practical big data analysis in an explosive solution space because of their huge consumption of storage space and running time. A multi-objective optimization algorithm was proposed to mine the frequent itemsets of high-dimensional data. First, all frequent 2-itemsets were generated by scanning the transaction sets, and new items were then added in as the objects of population evolution. The algorithm searches for the maximal frequent itemsets in order to capture as many non-void subsets as possible, since every non-void subset of a frequent itemset is itself a frequent itemset. During the operation of the algorithm, lethal gene fragments in individuals were recorded and eliminated so that individuals could recover. Finally, the set of Pareto optimal solutions of the frequent itemset problem was obtained: all non-void subsets of these solutions are frequent itemsets, and all of their supersets are non-frequent itemsets. The practicability and validity of the proposed algorithm for big data were proven by experiments.
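The downward-closure (a priori) property the abstract relies on, that every non-void subset of a frequent itemset is itself frequent, can be illustrated with a minimal Python sketch over a toy transaction set (the data and function names are illustrative, not from the paper):

```python
from itertools import combinations

# Toy transaction set (illustrative only)
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b", "c"},
]

def support(itemset, transactions):
    """Fraction of transactions containing every item of `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def is_frequent(itemset, transactions, min_support=0.5):
    return support(itemset, transactions) >= min_support

# Downward closure: if an itemset is frequent, so is every non-void subset.
frequent = {"a", "b"}
assert is_frequent(frequent, transactions)
for k in range(1, len(frequent)):
    for sub in combinations(frequent, k):
        assert is_frequent(set(sub), transactions)
```

This is why mining only the maximal frequent itemsets, as the paper proposes, implicitly captures all of their non-void subsets.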
2

Zhong, Deyun, Benyu Li, Tiandong Shi, Zhaopeng Li, Liguan Wang, and Lin Bi. "Repair of Voids in Multi-Labeled Triangular Mesh." Applied Sciences 11, no. 19 (2021): 9275. http://dx.doi.org/10.3390/app11199275.

Abstract:
In this paper, we propose a novel mesh repairing method that repairs voids across several meshes to ensure the desired topological correctness. The input to our method is several closed and manifold meshes without labels. The basic idea of the method is to search for and repair voids based on a multi-labeled mesh data structure and ideas from graph theory. We propose judgment rules for voids between the input meshes and a method of void repairing based on specified model priorities. It consists of three steps: (a) converting the input meshes into a multi-labeled mesh; (b) searching for quasi-voids using the breadth-first search algorithm and determining true voids via the judgment rules; (c) repairing voids by modifying mesh labels. The method can repair the voids accurately, and only a few invalid triangular facets are removed. In general, the method can repair meshes with one hundred thousand facets in approximately one second on very modest hardware. Moreover, it can be easily extended to process large-scale polygon models with millions of polygons. The experimental results on several data sets show the reliability and performance of the void repairing method based on the multi-labeled triangular mesh.
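As a rough analogue of the quasi-void search described above, the breadth-first idea can be sketched on a plain 2D occupancy grid instead of the authors' multi-labeled mesh structure; connected empty regions that do not reach the border are reported as candidate voids (the grid and all names are illustrative):

```python
from collections import deque

def find_voids(grid):
    """Return connected components of empty cells (0) that do not touch
    the grid border, i.e. enclosed 'voids', via breadth-first search."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    voids = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0 or seen[r][c]:
                continue
            # BFS over this connected empty component
            comp, touches_border = [], False
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                comp.append((y, x))
                if y in (0, rows - 1) or x in (0, cols - 1):
                    touches_border = True
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols \
                            and grid[ny][nx] == 0 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            if not touches_border:
                voids.append(comp)
    return voids

grid = [
    [1, 1, 1, 1],
    [1, 0, 0, 1],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
]
# The three enclosed empty cells form one candidate void.
```

The real method applies the analogous search to labels on triangular facets and then repairs the detected voids by relabeling; the pixel grid is only a toy stand-in.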
3

Frommholz, D. "Image Interpolation on the CPU and GPU Using Line Run Sequences." ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences V-2-2022 (May 17, 2022): 53–60. http://dx.doi.org/10.5194/isprs-annals-v-2-2022-53-2022.

Abstract:
This paper describes an efficient implementation of an image interpolation algorithm based on inverse distance weighting (IDW). The time-consuming search for support pixels bordering the voids to be filled is facilitated through gapless sweeps of different directions over the image. The scanlines needed for the sweeps are constructed from a path prototype per orientation whose regular substructures are reused and shifted to produce aligned duplicates covering the entire input bitmap. The line set is followed concurrently to detect existing samples around nodata patches and compute the distance to the pixels to be newly set. Since the algorithm relies on integer line rasterization only and does not need auxiliary data structures beyond the output image and a weight aggregation bitmap for intensity normalization, it will run on multi-core central and graphics processing units (CPUs and GPUs). Also, occluded support pixels of non-convex void patches are ignored, and over- or undersampling of nearby and distant valid neighbors is compensated for. Runtime and accuracy compared to generated IDW ground truth are evaluated for the CPU and GPU implementations of the algorithm on single-channel and multispectral bitmaps of various filling degrees.
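The inverse distance weighting at the core of the method can be sketched as a naive baseline that scans every valid pixel for each void pixel, rather than using the paper's efficient line-run sweeps (names and data are illustrative):

```python
import math

NODATA = None

def idw_fill(image, power=2.0):
    """Fill NODATA pixels with an inverse-distance-weighted average of
    all valid pixels. O(voids * valid): a reference baseline, not the
    paper's line-run-sequence sweep."""
    rows, cols = len(image), len(image[0])
    valid = [(y, x, image[y][x]) for y in range(rows) for x in range(cols)
             if image[y][x] is not NODATA]
    out = [row[:] for row in image]
    for y in range(rows):
        for x in range(cols):
            if image[y][x] is not NODATA:
                continue
            num = den = 0.0
            for vy, vx, value in valid:
                dist = math.hypot(y - vy, x - vx)
                weight = 1.0 / dist ** power
                num += weight * value
                den += weight
            out[y][x] = num / den
    return out

img = [
    [10.0, NODATA, 30.0],
]
filled = idw_fill(img)
# The void pixel is equidistant from 10.0 and 30.0, so it becomes 20.0.
```

The quadratic cost of this baseline is exactly what the paper's sweep construction avoids, and the baseline also lacks the occlusion handling described in the abstract.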
4

Chaitanya, G. V. A., and G. S. Gupta. "Liquid flow in heap leaching using the discrete liquid flow model and graph-based void search algorithm." Hydrometallurgy 221 (August 2023): 106151. http://dx.doi.org/10.1016/j.hydromet.2023.106151.

5

Zhu, Xiaoyuan, and Changhe Yuan. "Exact Algorithms for MRE Inference." Journal of Artificial Intelligence Research 55 (March 22, 2016): 653–83. http://dx.doi.org/10.1613/jair.4867.

Abstract:
Most Relevant Explanation (MRE) is an inference task in Bayesian networks that finds the most relevant partial instantiation of target variables as an explanation for given evidence by maximizing the Generalized Bayes Factor (GBF). No exact MRE algorithm has been developed previously except exhaustive search. This paper fills the void by introducing two Breadth-First Branch-and-Bound (BFBnB) algorithms for solving MRE based on novel upper bounds of GBF. One upper bound is created by decomposing the computation of GBF using a target blanket decomposition of evidence variables. The other upper bound improves the first bound in two ways. One is to split the target blankets that are too large by converting auxiliary nodes into pseudo-targets so as to scale to large problems. The other is to perform summations instead of maximizations on some of the target variables in each target blanket. Our empirical evaluations show that the proposed BFBnB algorithms make exact MRE inference tractable in Bayesian networks that could not be solved previously.
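Breadth-first branch-and-bound of the kind developed in the paper can be illustrated on a toy 0/1 knapsack problem, where a crude optimistic bound prunes the search; the GBF upper bounds in the paper are far more involved (this example is illustrative, not the authors' algorithm):

```python
from collections import deque

def bfbnb_knapsack(values, weights, capacity):
    """Breadth-first branch-and-bound: expand partial solutions level by
    level, pruning any node whose optimistic upper bound (sum of all
    remaining values) cannot beat the best complete solution so far."""
    n = len(values)
    best = 0
    # Each node: (next item index, value so far, weight so far)
    queue = deque([(0, 0, 0)])
    while queue:
        i, value, weight = queue.popleft()
        best = max(best, value)
        if i == n:
            continue
        # Optimistic bound: add every remaining item regardless of weight.
        if value + sum(values[i:]) <= best:
            continue  # pruned: this branch cannot improve on `best`
        # Branch: skip item i, or take it if it fits.
        queue.append((i + 1, value, weight))
        if weight + weights[i] <= capacity:
            queue.append((i + 1, value + values[i], weight + weights[i]))
    return best
```

Because the bound only ever overestimates the achievable value, pruning never discards the optimum; tighter bounds (like the GBF decompositions in the paper) prune more and matter most on large instances.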
6

Okamoto, Yoshifumi, Yusuke Tominaga, Shinji Wakao, and Shuji Sato. "Topology optimization of magnetostatic shielding using multistep evolutionary algorithms with additional searches in a restricted design space." COMPEL: The International Journal for Computation and Mathematics in Electrical and Electronic Engineering 33, no. 3 (2014): 894–913. http://dx.doi.org/10.1108/compel-10-2012-0202.

Abstract:
Purpose – The purpose of this paper is to improve the multistep algorithm using an evolutionary algorithm (EA) for the topology optimization of magnetostatic shielding, and the paper reveals the effectiveness of the methodology by comparison with a conventional optimization method. Furthermore, the design target is to obtain a novel shape of magnetostatic shielding. Design/methodology/approach – EAs based on random search allow engineers to define general-purpose objectives with various constraint conditions; however, many FEA iterations are required for the evaluation of the objective function, and it is difficult to realize a practical solution without island and void distributions. The authors therefore proposed a multistep algorithm with design space restriction, and improved it in order to obtain a better solution than the previous one. Findings – A variant model of the optimized topology derived from the improved multistep algorithm is defined to clarify the effectiveness of the optimized topology. The upper curvature of the inner shielding contributed to the reduction of magnetic flux density in the target domain. Research limitations/implications – Because the converged topology has much pixel-element unevenness, a special smoother to remove the unevenness will play an important role in the realization of practical magnetostatic shielding. Practical implications – The optimized topology will give a useful, detailed structure of magnetostatic shielding. Originality/value – First, while the conventional algorithm could not find a reasonable shape, the improved multistep optimization can capture one. Second, an additional search is attached to the multistep optimization procedure. It is shown that the performance of the improved multistep algorithm is better than that of the conventional algorithm.
7

Savulionienė, Loreta, and Leonidas Sakalauskas. "Statistinis dažnų posekių paieškos algoritmas" [A statistical algorithm for frequent subsequence search]. Informacijos mokslai 58 (January 1, 2011): 126–43. http://dx.doi.org/10.15388/im.2011.0.3118.

Abstract:
Modern life involves large amounts of data and information, and search is one of the basic operations a computer performs. The goal of a search is to find a given element or sequence in a large body of data, or to confirm that it is absent. Amounts of data in databases have reached terabytes, so data retrieval, analysis, and rapid decision-making become increasingly complicated, and large quantities of information contain both important and void information. The main goal of data mining is to find meaning in data, i.e., relationships between data, their recurrence, and so on; this technology applies to business, medicine and other areas where large amounts of information are processed and relationships among data are detected, i.e., new information is obtained from large amounts of data. The paper proposes a new statistical algorithm for frequent subsequence search, together with experimental results and conclusions. The essence of the algorithm is to identify frequent subsequences quickly: it does not check the entire contents of the file several times, but scans the file once, sampling according to a chosen probability p. The algorithm is inaccurate, but its execution time is much shorter than that of exact algorithms. It can be applied to structure search problems where it matters which subsequence is the most frequent, but the exact number of frequent subsequences is not important. Keywords: subsequence, candidate sequence, data set, frequent element, itemset generation, hash function, type I error, type II error, confidence interval.
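The single-pass sampling idea, inspecting the input once with a chosen probability p and scaling counts back up, can be sketched for single elements as follows (a simplified analogue, not the authors' subsequence algorithm):

```python
import random
from collections import Counter

def sampled_frequencies(sequence, p, seed=0):
    """One pass over `sequence`; each position is inspected with
    probability p. Estimated count = sampled count / p."""
    rng = random.Random(seed)
    counts = Counter()
    for item in sequence:
        if rng.random() < p:
            counts[item] += 1
    return {item: c / p for item, c in counts.items()}

data = ["a"] * 600 + ["b"] * 300 + ["c"] * 100
est = sampled_frequencies(data, p=0.2)
# Estimates are close to the true counts (600, 300, 100) but inexact,
# which is the accuracy/runtime trade-off the abstract describes.
```

Lowering p shortens the scan but widens the confidence interval of the estimates, which is why the paper characterizes the method via type I/II errors rather than exact counts.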
8

Fiorini, Bartolomeo, Kazuya Koyama, and Albert Izard. "Studying large-scale structure probes of modified gravity with COLA." Journal of Cosmology and Astroparticle Physics 2022, no. 12 (2022): 028. http://dx.doi.org/10.1088/1475-7516/2022/12/028.

Abstract:
We study the effect of two Modified Gravity (MG) theories, f(R) and nDGP, on three probes of large-scale structure (the real space power spectrum estimator Q0, the bispectrum, and voids), and validate fast approximate COLA simulations against full N-body simulations for the prediction of these probes. We find that using the first three even multipoles of the redshift space power spectrum to estimate Q0 is enough to reproduce the MG boost factors of the real space power spectrum for both halo and galaxy catalogues. By analysing the bispectrum and reduced bispectrum of Dark Matter (DM), we show that the strong MG signal present in the DM bispectrum is mainly due to the enhanced power spectrum. We warn about adopting screening approximations in simulations, as this neglects non-linear contributions that can source a significant component of the MG bispectrum signal at the DM level, but we argue that this is not a problem for the bispectrum of galaxies in redshift space, where the signal is dominated by the non-linear galaxy bias. Finally, we search for voids in our mock galaxy catalogues using the ZOBOV watershed algorithm. To apply a linear model for Redshift-Space Distortion (RSD) in the void-galaxy cross-correlation function, we first examine the effects of MG on the void profiles entering into the RSD model. We find relevant MG signals in the integrated-density, velocity dispersion and radial velocity profiles in the nDGP theory. Fitting the RSD model for the linear growth rate, we recover the linear theory prediction in an nDGP model, which is larger than the ΛCDM prediction at the 3σ level. In f(R) theory we cannot naively compare the results of the fit with the linear theory prediction, as this is scale-dependent, but we obtain results that are consistent with the ΛCDM prediction.
9

Zhao, Zhiyue, Baohui Wang, Jing Wang, et al. "Liquid film characteristics measurement based on NIR in gas–liquid vertical annular upward flow." Measurement Science and Technology 33, no. 6 (2022): 065014. http://dx.doi.org/10.1088/1361-6501/ac57ed.

Abstract:
Liquid film plays a crucial role in the void fraction, friction pressure drop, and momentum and heat transfer of two-phase flow. Film thickness measurement experiments on annular flow at four pressure conditions have been conducted using a near-infrared sensor. The signal is processed by variational mode decomposition, whose parameters are optimized using the sparrow search algorithm. The envelope spectrum and the Pearson correlation coefficient were adopted as judgment criteria for signal reconstruction, and the value of the liquid film thickness is obtained. The effects of flow rate, pressure, entrainment, etc. on the liquid film thickness are analyzed theoretically. The characterization parameters We_g″, We_l, N_μl and X_mod have been extracted and optimized, and a new average liquid film thickness correlation is proposed. The laboratory results indicate that the mean absolute percentage error of the predictive correlation is 4.35% (current data) and 12.02% (literature data), respectively.
10

Chida, Nariyoshi, and Tachio Terauchi. "Repairing Regular Expressions for Extraction." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 1633–56. http://dx.doi.org/10.1145/3591287.

Abstract:
While synthesizing and repairing regular expressions (regexes) based on Programming-by-Examples (PBE) methods have seen rapid progress in recent years, all existing works only support synthesizing or repairing regexes for membership testing, and support for extraction is still an open problem. This paper fills the void by proposing the first PBE-based method for synthesizing and repairing regexes for extraction. Our work supports regexes that have real-world extensions such as backreferences and lookarounds. The extensions significantly affect the PBE-based synthesis and repair problem. In fact, we show that there are unsolvable instances of the problem if the synthesized regexes are not allowed to use the extensions, i.e., there is no regex without the extensions that correctly classifies the given set of examples, whereas every problem instance is solvable if the extensions are allowed. This is in stark contrast to the case for membership, where every instance is guaranteed to have a solution expressible by a pure regex without the extensions. The main contribution of the paper is an algorithm to solve the PBE-based synthesis and repair problem for extraction. Our algorithm builds on existing methods for synthesizing and repairing regexes for membership testing, i.e., the enumerative search algorithms with SMT constraint solving. However, significant extensions are needed because the SMT constraints in the previous works are based on a non-deterministic semantics of regexes. Non-deterministic semantics is sound for membership but not for extraction, because which substrings are extracted depends on the deterministic behavior of actual regex engines. To address the issue, we propose a new SMT constraint generation method that respects the deterministic behavior of regex engines.
For this, we first define a novel formal semantics of an actual regex engine as a deterministic big-step operational semantics, and use it as a basis to design the new SMT constraint generation method. The key idea to simulate the determinism in the formal semantics and the constraints is to consider continuations of regex matching and use them for disambiguation. We also propose two new search space pruning techniques called approximation-by-pure-regex and approximation-by-backreferences that make use of the extraction information in the examples. We have implemented the synthesis and repair algorithm in a tool called R3 (Repairing Regex for extRaction) and evaluated it on 50 regexes that contain real-world extensions. Our evaluation shows the effectiveness of the algorithm and that our new pruning techniques substantially prune the search space.