Academic literature on the topic 'Equivalent partitioning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Equivalent partitioning.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Equivalent partitioning"

1

Richter, Christian. "Partitioning Balls into Topologically Equivalent Pieces." Elemente der Mathematik 53, no. 4 (1998): 149–58. http://dx.doi.org/10.1007/s000170050046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Schmerl, James H. "Partitioning large vector spaces." Journal of Symbolic Logic 68, no. 4 (2003): 1171–80. http://dx.doi.org/10.2178/jsl/1067620179.

Abstract:
The theme of this paper is the generalization of theorems about partitions of the sets of points and lines of finite-dimensional Euclidean spaces ℝ^d to vector spaces over ℝ of arbitrary dimension and, more generally still, to arbitrary vector spaces over other fields so long as these fields are not too big. These theorems have their origins in the following striking theorem of Sierpiński [12] which appeared a half century ago. Sierpiński's Theorem. The Continuum Hypothesis is equivalent to: There is a partition {X, Y, Z} of ℝ^3 such that if ℓ is a line parallel to the x-axis [respectively: y-axis, z-axis] then X ∩ ℓ [respectively: Y ∩ ℓ, Z ∩ ℓ] is finite. The history of this theorem and some of its subsequent developments are discussed in the very interesting article by Simms [13]. Sierpiński's Theorem was generalized by Kuratowski [9] to partitions of ℝ^(n+2) into n + 2 sets obtaining an equivalence with . The geometric character that Sierpiński's Theorem and its generalization by Kuratowski appear to have is bogus, since the lines parallel to coordinate axes are essentially combinatorial, rather than geometric, objects. The following version of Kuratowski's theorem emphasizes its combinatorial character. Kuratowski's Theorem. Let n < ω and A be any set. Then ∣A∣ ≤ ℵ_n if and only if there is a partition P: A^(n+2) → n + 2 such that if i ≤ n + 1 and ℓ is a line parallel to the i-th coordinate axis, then {x ∈ ℓ: P(x) = i} is finite.
3

Chopra, Sandeep, Lata Nautiyal, and M. K. Sharma. "Boundary Analysis for Equivalent Class Partitioning by using Binary Search." International Journal of Computer Sciences and Engineering 7, no. 2 (2019): 601–5. http://dx.doi.org/10.26438/ijcse/v7i2.601605.

4

Wang, Feng Ying, Li Ming Du, and Zi Yang Han. "Two Partitioning Algorithms for Generating of M Sets of the Frieze Group." Applied Mechanics and Materials 336-338 (July 2013): 2238–41. http://dx.doi.org/10.4028/www.scientific.net/amm.336-338.2238.

Abstract:
Symmetric features of the frieze group equivalent mappings are analysed, and two partitioning algorithms are given for constructing generalized Mandelbrot sets of frieze group equivalent mappings, in order to study the characteristics of generalized M sets. Based on generating the parameter space of the dynamical system, many patterns of generalized Mandelbrot sets are produced.
5

Guofan, Liu, and M. L. Ellzey. "Finding the terms of configurations of equivalent electrons by partitioning total spins." Journal of Chemical Education 64, no. 9 (1987): 771. http://dx.doi.org/10.1021/ed064p771.

6

Smerchinskaya, Svetlana O., and Nina P. Yashina. "Preference levels for clusters of alternatives." International Journal of Modeling, Simulation, and Scientific Computing 10, no. 04 (2019): 1950019. http://dx.doi.org/10.1142/s1793962319500193.

Abstract:
In decision-making tasks that rank alternatives and choose the best ones, procedures on digraphs are often used. A digraph of the aggregated relation on a set of alternatives is preconstructed from the information of experts or criteria. If the digraph does not contain any cycles, then the Demukron algorithm for partitioning the digraph into levels can be used to order the alternatives by preference. This algorithm cannot be applied if there are clusters consisting of equivalent alternatives. In this paper, an algorithm for partitioning an arbitrary digraph into preference levels is proposed. In contrast to the standard procedure, the digraph of the aggregated relation admits the presence of cycles and, consequently, of equivalent vertex-alternatives. The vertices in any cycle of the digraph belong to one level of preference.
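As a rough illustration of the Demukron-style leveling this abstract builds on, here is a sketch of the standard acyclic case only (not the authors' extended algorithm; their contribution is handling cycles, e.g. by first collapsing each strongly connected component into a single vertex):

```python
def preference_levels(vertices, edges):
    """Demukron-style leveling of an acyclic preference digraph:
    level 0 holds the undominated vertices (no incoming arcs);
    removing them exposes the next level, and so on."""
    indeg = {v: 0 for v in vertices}
    for _, v in edges:
        indeg[v] += 1
    levels, remaining = [], set(vertices)
    while remaining:
        level = [v for v in remaining if indeg[v] == 0]
        if not level:
            raise ValueError("digraph contains a cycle")
        for v in level:
            remaining.remove(v)
            for u, w in edges:
                if u == v:
                    indeg[w] -= 1
        levels.append(sorted(level))
    return levels

# a preferred to b and c, both preferred to d:
print(preference_levels("abcd", [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]))
# → [['a'], ['b', 'c'], ['d']]
```

Handling equivalent alternatives as in the paper then amounts to running the same leveling on the condensation of the digraph, so that every cycle's members share one level.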
7

Moghaddam, Saeed Nasehi, Mehdi Ghazanfari, and Babak Teimourpour. "Social Structure Discovery Using Genetic Algorithm." International Journal of Applied Metaheuristic Computing 8, no. 4 (2017): 1–26. http://dx.doi.org/10.4018/ijamc.2017100101.

Abstract:
As a way of simplifying, reducing the size of, and making the structure of a social network comprehensible, blockmodeling consists of two major, essential components: partitioning actors into equivalent classes, called positions, and clarifying relations between and within positions. While actor partitioning in conventional blockmodeling is performed using several equivalence definitions, generalized blockmodeling searches locally for the partition vector that best satisfies a predetermined structure. The need for a known predefined structure, together with the use of a local search procedure, restricts generalized blockmodeling. In this paper, the authors formulate the blockmodel problem and employ a genetic algorithm to search for the partition vector that best fits the original relational data in terms of the known indices. In addition, across multiple samples and situations such as dichotomous, signed, ordinal and interval-valued, and multiple relations, the quality of the results shows better fitness than classic and generalized blockmodeling.
8

Fountoukis, C., A. Nenes, A. Sullivan, et al. "Thermodynamic characterization of Mexico City aerosol during MILAGRO 2006." Atmospheric Chemistry and Physics Discussions 7, no. 3 (2007): 9203–33. http://dx.doi.org/10.5194/acpd-7-9203-2007.

Abstract:
Fast measurements of aerosol and gas-phase constituents coupled with the ISORROPIA-II thermodynamic equilibrium model are used to study the partitioning of semivolatile inorganic species and the phase state of Mexico City aerosol sampled at the T1 site during the MILAGRO 2006 campaign. Overall, predicted semivolatile partitioning agrees well with measurements. PM2.5 is insensitive to changes in ammonia but is sensitive to acidic semivolatile species. Semivolatile partitioning equilibrates on a timescale between 6 and 20 min. When the aerosol sulfate-to-nitrate molar ratio is less than 1, predictions improve substantially if the aerosol is assumed to follow the deliquescent phase diagram. Treating crustal species as "equivalent sodium" (rather than explicitly) in the thermodynamic equilibrium calculations introduces important biases in predicted aerosol water uptake, nitrate, and ammonium; neglecting crustals increases errors further. This suggests that explicitly considering crustals in the thermodynamic calculations is required to accurately predict the partitioning and phase state of aerosols.
9

Pan, Zhong Liang, and Ling Chen. "A New Verification Method of Digital Circuits Based on Cone-Oriented Partitioning and Decision Diagrams." Applied Mechanics and Materials 29-32 (August 2010): 1040–45. http://dx.doi.org/10.4028/www.scientific.net/amm.29-32.1040.

Abstract:
Formal verification can check whether the implementation of a circuit design is functionally equivalent to an earlier version described at the same level of abstraction, and can thus show the correctness of a circuit design. A new circuit verification method based on cone-oriented circuit partitioning and decision diagrams is presented in this paper. First, the structure level of every signal line in a circuit is computed. Second, the circuit is partitioned into many cone structures, and the multiple-valued decision diagram corresponding to every cone structure is generated. The verification procedure then compares the equivalence of the multiple-valued decision diagrams of the two circuits' cone structures. Experimental results on many benchmark circuits show that the method presented in this paper can effectively perform equivalence checking of circuits.
10

Daymond, A. J., P. Hadley, R. C. R. Machado, and E. Ng. "Genetic Variability in Partitioning to the Yield Component of Cacao (Theobroma cacao L.)." HortScience 37, no. 5 (2002): 799–801. http://dx.doi.org/10.21273/hortsci.37.5.799.

Abstract:
Biomass partitioning of cacao (Theobroma cacao L.) was studied in seven clones and five hybrids in a replicated experiment in Bahia, Brazil. Over an 18-month period, a 7-fold difference in dry bean yield was demonstrated between genotypes, ranging from the equivalent of 200 to 1389 kg·ha⁻¹. During the same interval, the increase in trunk cross-sectional area ranged from 11.1 cm² for clone EEG-29 to 27.6 cm² for hybrid PA-150 × MA-15. Yield efficiency increment (the ratio of cumulative yield to the increase in trunk circumference), which indicated partitioning between the vegetative and reproductive components, ranged from 0.008 kg·cm⁻² for clone CP-82 to 0.08 kg·cm⁻² for clone EEG-29. An examination of biomass partitioning within the pod of the seven clones revealed that the beans accounted for between 32.0% (CP-82) and 44.5% (ICS-9) of the pod biomass. The study demonstrated the potential for yield improvement in cacao by selectively breeding for more efficient partitioning to the yield component.

Dissertations / Theses on the topic "Equivalent partitioning"

1

Zhakipbayev, Olzhas, and Aisulu Bekey. "Effectiveness of operational profile-based testing." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-53451.

Abstract:
Operational profile-based testing is currently not a well-studied topic, and there are no specific instructions for writing test cases to test a program with it. In our thesis, we presented our idea of the data on whose basis test cases can be written. In addition, to show the effectiveness of operational profile-based testing, we also described the equivalent partitioning testing technique. The software for this experiment was taken from the open-source SIR repository: we selected the "Account" program, which was tested with the two different testing methods. The test results of both techniques were compared, and it was determined that the operational profile-based testing technique is more effective.
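For readers unfamiliar with the technique the thesis compares against, equivalence partitioning divides an input domain into classes whose members the program should handle alike, then tests one representative per class. A minimal sketch with a hypothetical withdrawal rule (our own toy example, not the SIR "Account" program itself):

```python
def withdraw_allowed(balance, amount):
    """Toy rule: a withdrawal is permitted when 0 < amount <= balance."""
    return 0 < amount <= balance

# Three equivalence classes of `amount` for a balance of 100,
# one representative value tested per class:
#   amount <= 0        -> invalid
#   0 < amount <= 100  -> valid
#   amount > 100       -> invalid
for amount, expected in [(-5, False), (50, True), (200, False)]:
    assert withdraw_allowed(100, amount) == expected
print("all class representatives behave as expected")
```

The payoff is coverage with few cases: any value in a class stands for the whole class, so three tests probe the entire input range of this rule.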
2

Olorisade, Babatunde Kazeem. "Summarizing the Results of a Series of Experiments : Application to the Effectiveness of Three Software Evaluation Techniques." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3799.

Abstract:
Software quality has become, and persistently remains, a big issue among software users and developers, so the importance of software evaluation cannot be overemphasized. An accepted fact in software engineering is that software must undergo an evaluation process during development to ascertain and improve its quality level. There are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects. It may therefore not be realistic or cost-effective to remove all software defects prior to product release. So it is crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and most likely to yield optimum quality results for different products; it boils down to choosing the most appropriate technique for each situation. However, not much knowledge is available on the strengths and weaknesses of the available evaluation techniques. Most of the information about them focuses on how to apply the techniques, not on their applicability conditions: practical information, suitability, strengths, weaknesses, etc. This research contributes to the available applicability knowledge of software evaluation techniques. More precisely, it focuses on code reading by stepwise abstraction as a representative of the static techniques, and on equivalence partitioning (a functional technique) and decision coverage (a structural technique) as representatives of the dynamic techniques. The specific focus of the research is to summarize the results of a series of experiments conducted to investigate, among other factors, the effectiveness of these techniques.
By effectiveness we mean the potential of each of the dynamic techniques to generate test cases capable of revealing software faults, or the ability of the static technique to generate abstractions that aid the detection of faults. The experiments used two versions of three different programs, with seven different faults seeded into each program. This work uses the results of the eight experiments, previously performed and analyzed separately, to explore this question. The analysis results were pooled together and jointly summarized in this research to extract common knowledge from the experiments, using a qualitative deduction approach created in this work, as it was decided not to use formal aggregation at this stage. Since the experiments were performed by different researchers, in different years, and in some cases at different sites, several problems had to be tackled in order to summarize the results. Among them: the data files exist in different languages, the structures of the files differ, different names are used for data fields, and the analyses were done using different confidence levels. The first step, taken at the inception of this research, was to apply all the techniques to the programs used in the experiments in order to detect the faults. The purpose of this personal experience with the experiments was to become acquainted with the faults, failures, programs, and experimental situations in general, and to better understand the data as recorded from the experiments. Afterwards, the data files were recreated to conform to a uniform language, data meaning, file style, and structure. A well-structured directory was created to keep all the data, analysis, and experiment files for all the experiments in the series. These steps paved the way for a feasible synthesis of results.
Using our method, technique, program, fault, program-technique, program-fault, and technique-fault were selected as the main and interaction effects carrying knowledge relevant to the summary of the analyses. The results, as reported in this thesis, indicate that the functional technique and the structural technique are equally effective as far as the programs and faults in these experiments are concerned; both perform better than code review. The analysis also revealed that the effectiveness of the techniques is influenced by the fault type and the program type: some faults were better detected in certain programs, some were better detected with certain techniques, and the techniques yield different results in different programs.
3

Saeed, Umar, and Ansur Mahmood Amjad. "ISTQB : Black Box testing Strategies used in Financial Industry for Functional testing." Thesis, Blekinge Tekniska Högskola, Sektionen för ingenjörsvetenskap, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3237.

Abstract:
Black box testing techniques are important for testing the functionality of a system without knowing its inner details, ensuring correct, consistent, complete, and accurate behavior of the system. Black box testing strategies are used to test logical, data, or behavioral dependencies, to generate test data, and to produce quality test cases with the potential to reveal more defects. Black box testing strategies play a pivotal role in detecting possible defects in a system and can help in the successful completion of the system according to its functionality. Studies of five companies regarding important black box testing strategies are presented in this thesis. This study explores the black box testing techniques present in the literature as well as those practiced in industry. Interview studies were conducted in companies in Pakistan providing solutions to the finance industry, in an attempt to find the usage of these techniques. The advantages and disadvantages of the identified black box testing strategies are discussed, along with a comparison of the different techniques with respect to defect-finding potential, dependencies, sophistication, effort, and cost.

Book chapters on the topic "Equivalent partitioning"

1

Agarwal, Pratyush, Krishnendu Chatterjee, Shreya Pathak, Andreas Pavlogiannis, and Viktor Toman. "Stateless Model Checking Under a Reads-Value-From Equivalence." In Computer Aided Verification. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-81685-8_16.

Abstract:
Stateless model checking (SMC) is one of the standard approaches to the verification of concurrent programs. As scheduling non-determinism creates exponentially large spaces of thread interleavings, SMC attempts to partition this space into equivalence classes and explore only a few representatives from each class. The efficiency of this approach depends on two factors: (a) the coarseness of the partitioning, and (b) the time to generate representatives in each class. For this reason, the search for coarse partitionings that are efficiently explorable is an active research challenge. In this work we present RVF-SMC, a new SMC algorithm that uses a novel reads-value-from (RVF) partitioning. Intuitively, two interleavings are deemed equivalent if they agree on the value obtained in each read event, and read events induce consistent causal orderings between them. The RVF partitioning is provably coarser than recent approaches based on Mazurkiewicz and "reads-from" partitionings. Our experimental evaluation reveals that RVF is quite often a very effective equivalence, as the underlying partitioning is exponentially coarser than other approaches. Moreover, RVF-SMC generates representatives very efficiently, as the reduction in the partitioning is often met with significant speed-ups in the model checking task.
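To make the idea concrete, here is a tiny illustration of our own (simplified: it ignores the paper's causal-ordering side condition) showing how a reads-value-from signature collapses the interleavings of two threads into fewer classes:

```python
from itertools import combinations

# Each thread writes its id to shared variable x, then reads x once.
THREADS = {"A": [("w", 1), ("r", None)], "B": [("w", 2), ("r", None)]}

def interleavings():
    """All merges of the two threads' events, preserving per-thread order."""
    a = [("A", op, v) for op, v in THREADS["A"]]
    b = [("B", op, v) for op, v in THREADS["B"]]
    n = len(a) + len(b)
    for pos in combinations(range(n), len(a)):
        ai, bi = iter(a), iter(b)
        yield tuple(next(ai) if i in pos else next(bi) for i in range(n))

def rvf_signature(trace):
    """The value each thread's read event observes (one read per thread here)."""
    x, sig = None, {}
    for thread, op, val in trace:
        if op == "w":
            x = val
        else:
            sig[thread] = x
    return frozenset(sig.items())

classes = {}
for t in interleavings():
    classes.setdefault(rvf_signature(t), []).append(t)
print(len(classes), "RVF classes cover", sum(map(len, classes.values())), "interleavings")
# → 3 RVF classes cover 6 interleavings
```

An SMC explorer that visits one representative per class would run 3 schedules instead of 6 here, and the gap grows exponentially with more events.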
2

Krafczyk, Niklas, and Jan Peleska. "Effective Infinite-State Model Checking by Input Equivalence Class Partitioning." In Testing Software and Systems. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-67549-7_3.

3

Whitesides, Clayton J., and Matthew H. Connolly. "Estimating Fractional Snow Cover in Mountain Environments with Fuzzy Classification." In Geographic Information Systems. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2038-4.ch116.

Abstract:
The disproportionate amount of water runoff from mountains to surrounding arid and semiarid lands has generated much research on snow water equivalent (SWE) modeling. A primary input to SWE models is snow covered area (SCA), which is generally obtained via satellite imagery. Mixed pixels in alpine snow studies complicate SCA measurements and can reduce accuracy. A simple method was developed to estimate fractional snow cover using freely available Landsat imagery and data derived from DEMs, commercial and free software, fuzzy classification, and recursive partitioning. The authors attempted to develop a cost-effective technique for estimating fractional snow cover for resource and recreation managers confined by limited budgets and resources. Results indicated that the method was insensitive (P = 0.426) to differences in leaf area index and solar radiation between 4 March 2000 and 13 March 2003. Fractional snow cover was predicted consistently despite variation in model parameters between years, indicating that the developed method may be a viable way of monitoring fractional snow cover in mountainous areas where capital and resources are limited.
4

Gray, John S., and Michael Elliott. "Functional diversity of benthic assemblages." In Ecology of Marine Sediments. Oxford University Press, 2009. http://dx.doi.org/10.1093/oso/9780198569015.003.0009.

Abstract:
Now that we have discussed how assemblages of marine soft sediments are structured, we need to consider functional aspects. There are a few main interrelationships that need to be discussed here: inter- and intraspecific competition, feeding and predator–prey interactions, the production of biomass, and the production and delivery of recruiting stages. Other functional aspects, such as the effects of pathogens and parasites and the benefits of association (mutualism, parasitism, symbiosis, etc.), are of less importance in the present discussion. By function we mean the rate processes (i.e. those involving time) that either affect (extrinsic processes) or are inside (intrinsic processes and responses) the organisms that live in sediments. Hence these include primary and secondary production and processes that are mediated by the organisms that live in sediments, such as nutrient and contaminant fluxes into and out of the sediment. We begin with the historical development of the field, since such aspects are often overlooked in these days of electronic searches for references. Functional studies of ecosystems really began with Lindeman's classic paper (1942) on trophic dynamics. Rather than regarding food merely as particulate matter, Lindeman expressed it in terms of the energy it contained, thereby enabling comparisons to be made between different systems. For example, 1 g of the bivalve Ensis is not equivalent in food value to 1 g of the planktonic copepod Calanus, so the two animals cannot be compared in terms of weight, but they can be compared in terms of the energy units that each gram dry weight contains. The energy unit originally used was the calorie, but this has now been superseded by the joule (J), 1 calorie being equivalent to 4.2 joules. Ensis contains 14 654 J g⁻¹ dry wt and Calanus 30 982 J g⁻¹ dry wt. The basic trophic system is well understood and can be summarized as we showed earlier in Fig. I.8, which gives the links between various trophic levels and the role of competition, organic matter transport, and resource partitioning. In systems fuelled by photosynthesis (so excluding the chemosynthetic deep-sea vent systems), the primary source of energy for any community is sunlight, which is fixed and stored in plant material, which thus constitutes the first trophic level in the ecosystem.

Conference papers on the topic "Equivalent partitioning"

1

Yousef, Mohamed, and Khaled F. Hussain. "Fast Exhaustive-Search equivalent pattern matching through hierarchical partitioning." In 2013 20th IEEE International Conference on Image Processing (ICIP). IEEE, 2013. http://dx.doi.org/10.1109/icip.2013.6738908.

2

Patel, Rushabh, Paolo Frasca, and Francesco Bullo. "Centroidal Area-Constrained Partitioning for Robotic Networks." In ASME 2013 Dynamic Systems and Control Conference. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/dscc2013-3742.

Abstract:
We consider the problem of optimal coverage with area-constraints in a mobile multi-agent system. For a planar environment with an associated density function, this problem is equivalent to dividing the environment into optimal subregions such that each agent is responsible for the coverage of its own region. In this paper, we design a continuous-time distributed policy which allows a team of agents to achieve a convex area-constrained partition of a convex workspace. Our work is related to the classic Lloyd algorithm, and makes use of generalized Voronoi diagrams. We also discuss practical implementation for real mobile networks. Simulation methods are presented and discussed.
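For orientation, the classic Lloyd iteration this abstract relates to alternates two steps: assign each point to its nearest centroid, then move each centroid to the mean of its region. A minimal discrete, unconstrained sketch of that iteration (the paper's continuous, area-constrained, distributed version is substantially more involved):

```python
import random

def lloyd(points, k, iters=20, seed=0):
    """Plain Lloyd iteration on a finite planar point set."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    regions = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's region.
        regions = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k),
                    key=lambda j: (x - centroids[j][0]) ** 2 + (y - centroids[j][1]) ** 2)
            regions[i].append((x, y))
        # Update step: each centroid moves to the mean of its region.
        centroids = [(sum(x for x, _ in r) / len(r), sum(y for _, y in r) / len(r))
                     if r else c
                     for r, c in zip(regions, centroids)]
    return centroids, regions

# A 10x10 grid of points partitioned among 4 agents:
pts = [(x * 0.1, y * 0.1) for x in range(10) for y in range(10)]
centroids, regions = lloyd(pts, 4)
print([len(r) for r in regions], "points per region")
```

In the multi-agent coverage setting, each region would be one agent's responsibility; the area-constrained variant additionally forces each region toward a prescribed measure, which this sketch does not attempt.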
3

Noda, Taku. "Implementation of the frequency-partitioning fitting method for linear equivalent identification from frequency response data." In 2016 IEEE Power and Energy Society General Meeting (PESGM). IEEE, 2016. http://dx.doi.org/10.1109/pesgm.2016.7741729.

4

Lee, Byungwoo, and Kazuhiro Saitou. "Assembly Synthesis With Subassembly Partitioning for Optimal In-Process Dimensional Adjustability." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/dac-48729.

Abstract:
Achieving the dimensional integrity for a complex structural assembly is a demanding task due to the manufacturing variations of parts and the tolerance relationship between them. While assigning tight tolerances to all parts would solve the problem, an economical solution is taking advantage of small motions that joints allow, such that critical dimensions are adjusted during assembly processes. This paper presents a systematic method that decomposes product geometry at an early stage of design, selects joint types, and generates subassembly partitioning to achieve the adjustment of the critical dimensions during assembly processes. A genetic algorithm (GA) generates candidate assemblies based on a joint library specific for an application domain. Each candidate assembly is evaluated by an internal optimization routine that computes the subassembly partitioning for optimal in-process adjustability, by solving an equivalent minimum cut problem on weighted graphs. A case study on a 3D automotive space frame with the accompanying joint library is presented.
5

Josserand, E., and F. Billon. "Application of Mixed Equivalent Solid and Explicit Hole Models to Analysis of Thick Perforated Plates." In ASME 2008 Pressure Vessels and Piping Conference. ASMEDC, 2008. http://dx.doi.org/10.1115/pvp2008-61885.

Abstract:
Confronted with the problem of how to conduct a complete fatigue analysis of the Tube Plate (TP) of tubular-and-shell heat exchangers, and particularly of the Steam Generators equipping nuclear power plants of the Pressurized Water Reactor (PWR) type, analysts have developed a method to analyse stress in perforated flat and thick Tube Plates with square penetration (crate) patterns, and in particular to analyse several specific zones, such as the Interface Zones, and various effects, such as the Secondary (or Shell) Thermal Gradient Effect (STG Effect), the Thermal Gradient in the No-Tube Lane Effect (TGL Effect), and their interactions. The benefit of the approach is that it makes it possible to analyse mechanical and thermal stress calculated using a full 3D Finite Element model that incorporates an equivalent solid and the different Interface Zones, and allows the specific thermo-mechanical effects to be simulated. The Interface Zones (IZs) are those between the perforated and non-perforated areas; the STG Effect is due to the strong gradient near the Secondary (or Shell) Side surface; the TGL Effect is produced by a temperature gradient across the No-Tube Lane. The method used for the fatigue analysis is based on a "Partitioning Stress Method" by means of which the stresses induced by the various load types (mechanical loads, global thermal loads, local thermal effects (STG and TGL Effects), and local geometrical effects (IZs)) are first treated separately and then recombined with their appropriate Stress Multiplier Functions.
6

Prakash, D. Ceglarek, and M. K. Tiwari. "Fault Failure Diagnosis for Compliant Assembly Structures Using Statistical Modal Analysis." In ASME 2006 International Manufacturing Science and Engineering Conference. ASMEDC, 2006. http://dx.doi.org/10.1115/msec2006-21086.

Abstract:
This paper develops a new diagnostics methodology for N-2-1 fixtures used in assembly processes with compliant parts. The developed methodology includes: (i) a predetermined CAD-based dimensional variation fault pattern model; (ii) a data-based dimensional variation fault model; and (iii) a fault mapping procedure isolating the unknown fault. The CAD-based variation fault pattern model is based on the piece-wise linear bi-partitioning of the compliant part into deformed (faulty) and un-deformed regions. The data-based dimensional variation fault models are based on statistical modal analysis (SMA), which allows part deformation to be modeled with a varying number of deformation modes. It is proved in the paper that these independent deformation modes are equivalent to the CAD-based fault models obtained in (i). The fault mapping procedure diagnoses the unknown fault by comparing the unknown fault variation pattern obtained from the SMA model with one of the predetermined CAD-based fault patterns. An industrial case study from an automotive roof framing assembly illustrates the proposed method.
7

Mukherjee, Rajdeep, Daniel Kroening, Tom Melham, and Mandayam Srivas. "Equivalence Checking Using Trace Partitioning." In 2015 IEEE Computer Society Annual Symposium on VLSI (ISVLSI). IEEE, 2015. http://dx.doi.org/10.1109/isvlsi.2015.110.

8

Jahanbin, Sorour, and Bahman Zamani. "Test Model Generation Using Equivalence Partitioning." In 2018 8th International Conference on Computer and Knowledge Engineering (ICCKE). IEEE, 2018. http://dx.doi.org/10.1109/iccke.2018.8566335.

9

Fang, Ling, and Guoqiang Li. "Test Selection with Equivalence Class Partitioning." In 2015 2nd International Symposium on Dependable Computing and Internet of Things (DCIT). IEEE, 2015. http://dx.doi.org/10.1109/dcit.2015.14.

10

Salamon, Todd, Roger Kempers, Brian Lynch, Kevin Terrell, and Elina Simon. "Partitioned Heat Sinks for Improved Natural Convection." In ASME 2020 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems. American Society of Mechanical Engineers, 2020. http://dx.doi.org/10.1115/ipack2020-2553.

Abstract:
The main drivers contributing to the continued growth of network traffic include video, mobile broadband and machine-to-machine communication (Internet of Things, cloud computing, etc.). Two primary technologies that next-generation (5G) networks are using to increase capacity to meet these future demands are massive MIMO (Multi-Input Multi-Output) antenna arrays and new frequency spectrum. The massive MIMO antenna arrays have significant thermal challenges due to the presence of large arrays of active antenna elements coupled with a reliance on natural convection cooling using vertical plate-finned heat sinks. The geometry of vertical plate-finned heat sinks can be optimized (for example, by choosing the fin pitch and thickness that minimize the thermal resistance of the heat sink to ambient air) and enhanced (for example, by embedding heat pipes within the base to improve heat spreading) to improve convective heat transfer. However, heat transfer performance often suffers as the sensible heat rise of the air flowing through the heat sink can be significant, particularly near the top of the heat sink; this issue can be especially problematic for the relatively large or high-aspect-ratio heat sinks associated with massive MIMO arrays. In this study a vertical plate-finned natural convection heat sink was modified by partitioning the heat sink along its length into distinct sections, where each partitioned section ejects heated air and entrains cooler air. This approach increases overall heat sink effectiveness as the net sensible heat rise of the air in any partitioned section is less than that observed in the unpartitioned heat sink. Experiments were performed using a standard heat sink and equivalent heat sinks partitioned into two and three sections for the cases of ducted and un-ducted natural convection with a uniform heat load applied to the rear of the heat sink.
Numerical models were developed which compare well to the experimental results and observed trends. The numerical models also provide additional insight regarding the airflow and thermal performance of the partitioned heat sinks. The combined experimental and numerical results show that for relatively tall natural convection cooled heat sinks, the partitioning approach significantly improves convective heat transfer and overall heat sink effectiveness.