Academic literature on the topic 'Sample size dimensioning'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Sample size dimensioning.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Sample size dimensioning"

1

Novak, Rosilei S., and Jair M. Marques. "A Study Verifying the Dimensioning of a Multivariate Dichotomized Sample in Exploratory Factor Analysis." Journal of Modern Applied Statistical Methods 18, no. 1 (2020): 2–12. http://dx.doi.org/10.22237/jmasm/1556669760.

Full text
Abstract:
The dichotomized sample size was related to the measure of sampling adequacy, considering the explained variance provided by the factors and the communalities. Monte Carlo simulation generated multivariate normal samples and, varying the number of observations, factor analysis was applied to each dichotomized sample. Results were modeled by polynomial regression as a function of sample size.
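The simulation loop the abstract describes can be sketched as follows (an illustration only: it uses continuous rather than dichotomized data, and an arbitrary equicorrelated covariance, so it is not the study's design): generate multivariate normal samples of growing size and track the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy.

```python
import numpy as np

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Partial correlations come from the inverse correlation matrix.
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    off = ~np.eye(corr.shape[0], dtype=bool)
    r2 = np.sum(corr[off] ** 2)       # squared simple correlations
    p2 = np.sum(partial[off] ** 2)    # squared partial correlations
    return r2 / (r2 + p2)

# Monte Carlo sketch: KMO as the sample size grows, for correlated data.
rng = np.random.default_rng(0)
cov = 0.5 * np.ones((6, 6)) + 0.5 * np.eye(6)   # assumed equicorrelation
for n in (30, 100, 500):
    sample = rng.multivariate_normal(np.zeros(6), cov, size=n)
    print(n, round(kmo(sample), 3))
```

The per-size KMO values could then be fed to a polynomial regression, as in the study.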
2

Abdullah, Azhar, R. Abdullah, S. K. E. Shariff, and N. Haliza. "Permeability Number for Various Grain Size of Tin Mine Tailing Sand for Greensand Casting Mould." Applied Mechanics and Materials 465-466 (December 2013): 1093–97. http://dx.doi.org/10.4028/www.scientific.net/amm.465-466.1093.

Full text
Abstract:
Tin tailing sand is one of the residues from tin extraction. Tailing sand for sampling was taken from Batu Gajah, one of the active tin mining locations in Malaysia. The silica content of tailing sand from Batu Gajah is between 95.9 and 98.9%. This research determines the effect of grain size, with increasing water content, on the permeability number. Grain size is a major determinant of mould and core permeability and of the surface finish of the casting. The research involved mechanical sieve grading to identify the grain sizes; samples were graded into 425 μm, 297 μm and 149 μm. Experiments were conducted according to American Foundrymen's Society (AFS) standard procedures. Cylindrical test pieces measuring Ø50 mm × 50 mm in height, prepared from various grain-size/water ratios and bonded with 5 wt% clay, were compacted by applying three ramming blows of 6666 g each using a Ridsdale-Dietert metric standard rammer. The test pieces were tested for permeability number with a Ridsdale-Dietert permeability meter. The grain sand size of 297 μm was found to have an appropriate permeability number at a water content of 4%, which is within the requirement for moulding sand.
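The abstract does not restate the formula, but in standard foundry practice the permeability number is P = V·H/(p·A·t), with V the volume of air passed, H the specimen height, p the air pressure, A the cross-sectional area, and t the time. The sketch below uses commonly cited textbook test constants as assumptions, not figures from this study.

```python
def permeability_number(air_volume_cm3: float, height_cm: float,
                        pressure_g_cm2: float, area_cm2: float,
                        time_min: float) -> float:
    """Textbook permeability number: P = (V * H) / (p * A * t)."""
    return (air_volume_cm3 * height_cm) / (pressure_g_cm2 * area_cm2 * time_min)

# Assumed standard-style conditions: 2000 cm^3 of air through a 5.08 cm tall
# specimen of ~20.27 cm^2 cross-section at 10 g/cm^2 pressure in one minute.
p_std = permeability_number(2000.0, 5.08, 10.0, 20.27, 1.0)
```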
3

Johnson, Wayne M., Matthew Rowell, Bill Deason, and Malik Eubanks. "Comparative evaluation of an open-source FDM system." Rapid Prototyping Journal 20, no. 3 (2014): 205–14. http://dx.doi.org/10.1108/rpj-06-2012-0058.

Full text
Abstract:
Purpose – The purpose of this paper is to present a qualitative and quantitative comparison and evaluation of an open-source fused deposition modeling (FDM) additive manufacturing (AM) system with a proprietary FDM AM system based on the fabrication of a custom benchmarking model. Design/methodology/approach – A custom benchmarking model was fabricated using the two AM systems and evaluated qualitatively and quantitatively. The fabricated models were visually inspected and scanned using a 3D laser scanning system to examine their dimensional accuracy and geometric dimensioning and tolerancing (GD&T) performance with respect to the computer-aided design (CAD) model geometry. Findings – The open-source FDM AM system (CupCake CNC) successfully fabricated most of the features on the benchmark, but the model did suffer from greater thermal warping and surface roughness, and limitations in the fabrication of overhang structures, compared to the model fabricated by the proprietary AM system. Overall, the CupCake CNC provides a relatively accurate, low-cost alternative to more expensive proprietary FDM AM systems. Research limitations/implications – This work is limited by the sample size used for the evaluation. Practical implications – This work will provide the public and research AM communities with an improved understanding of the performance and capabilities of an open-source AM system. It may also lead to increased use of open-source systems as research testbeds for the continued improvement of current AM processes, and the development of new AM system designs and processes. Originality/value – This study is one of the first comparative evaluations of an open-source AM system with a proprietary AM system.
4

Doerffel, Christoph, Gábor Jüttner, and Roland Dietze. "Micro Test Specimens for Compound Engineering with Minimum Material Needs." Materials Science Forum 825-826 (July 2015): 928–35. http://dx.doi.org/10.4028/www.scientific.net/msf.825-826.928.

Full text
Abstract:
The use of micro test specimens is a good way to characterize micro injection molding processes and the resulting material properties. The material properties of microparts may differ from those of standard injection molding parts due to an overrepresentation of the surface layers, with high fiber orientation and divergent morphology. In order to characterize the distribution and agglomeration of fibers and particles for the manufacturing of micro injection molded parts from functionalized polymer compounds, it is essential to manufacture the test specimens and the part using the same process. The distribution and size of these particles, e.g. carbon nanotubes (CNT) or piezoceramic particles, depend on the polymer plastication process during injection molding. Therefore, the use of micro test specimens is a requirement for precise material selection and engineering. Due to the minimal material needs, micro test specimens are also useful for comparing the material properties of new polymers and compounds produced in amounts of 20 g to 100 g. Another application is the testing of highly elastic and ductile materials with strains over 100%. By using micro test specimens it is possible to test high strains with low elongations in a short time. A new micro test specimen, especially designed for the testing and dimensioning of plastic microparts weighing less than 0.1 g, has been developed at the Technische Universität Chemnitz in cooperation with the Kunststoff-Zentrum in Leipzig. The main feature of the new specimen and testing process is the combined positive and force-fitted locking, which enables precise positioning of the micro specimen and an even application of the clamping force. In order to achieve reproducible clamping, testing and handling of the sample, the clamping and testing processes are spatially separated. The shape of the test specimen enables parameter optimization for the micro injection molding process.
5

Stojanovic, Zvezdan, and Djordje Babic. "Bandwidth calculation for VoIP networks based on PSTN statistical model." Facta universitatis - series: Electronics and Energetics 23, no. 1 (2010): 73–88. http://dx.doi.org/10.2298/fuee1001073s.

Full text
Abstract:
This paper presents an analysis of how to calculate the proper bandwidth for VoIP calls after proper dimensioning of the PSTN network. For this purpose, we use the Erlang B and extended Erlang B formulae. Further, we have developed a software tool, named Bandwidth Calculator, to calculate the proper number of circuits on the PSTN side and, from that, the IP bandwidth. Traffic analysis is conducted for VoIP networks considering the impact of many factors on the bandwidth, such as voice codecs, sample sizes, VAD, and RTP compression. The results obtained by the Bandwidth Calculator are compared to simulation results and to data obtained by measurements.
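The Erlang B dimensioning step described above can be sketched compactly (my illustration, not the paper's Bandwidth Calculator tool): compute the blocking probability with the standard recurrence, size the circuit group for a target grade of service, then convert circuits to IP bandwidth. The 87.2 kbps per-call figure is an assumed example for G.711 with 20 ms samples over Ethernet; real values depend on the codec, sample size, VAD and RTP compression, as the abstract notes.

```python
def erlang_b(traffic_erlangs: float, circuits: int) -> float:
    """Blocking probability via the standard iterative Erlang B recurrence."""
    b = 1.0
    for n in range(1, circuits + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

def circuits_needed(traffic_erlangs: float, target_blocking: float = 0.01) -> int:
    """Smallest circuit count whose blocking stays at or below the target."""
    n = 1
    while erlang_b(traffic_erlangs, n) > target_blocking:
        n += 1
    return n

def voip_bandwidth_kbps(circuits: int, per_call_kbps: float = 87.2) -> float:
    """Convert a circuit count to IP bandwidth (per-call rate is an assumption)."""
    return circuits * per_call_kbps

# Example: 10 Erlangs of offered traffic at 1% blocking needs 18 circuits.
n = circuits_needed(10.0, 0.01)
```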
6

Saidani, Michael, Harrison Kim, and Jinju Kim. "Designing optimal COVID-19 testing stations locally: A discrete event simulation model applied on a university campus." PLOS ONE 16, no. 6 (2021): e0253869. http://dx.doi.org/10.1371/journal.pone.0253869.

Full text
Abstract:
Providing sufficient testing capacities and accurate results in a time-efficient way are essential to prevent the spread and lower the curve of a health crisis, such as the COVID-19 pandemic. In line with recent research investigating how simulation-based models and tools could contribute to mitigating the impact of COVID-19, a discrete event simulation model is developed to design optimal saliva-based COVID-19 testing stations performing sensitive, non-invasive, and rapid-result RT-qPCR tests processing. This model aims to determine the adequate number of machines and operators required, as well as their allocation at different workstations, according to the resources available and the rate of samples to be tested per day. The model has been built and experienced using actual data and processes implemented on-campus at the University of Illinois at Urbana-Champaign, where an average of around 10,000 samples needed to be processed on a daily basis, representing at the end of August 2020 more than 2% of all the COVID-19 tests performed per day in the USA. It helped identify specific bottlenecks and associated areas of improvement in the process to save human resources and time. Practically, the overall approach, including the proposed modular discrete event simulation model, can easily be reused or modified to fit other contexts where local COVID-19 testing stations have to be implemented or optimized. It could notably support on-site managers and decision-makers in dimensioning testing stations by allocating the appropriate type and quantity of resources.
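The paper's discrete event simulation is far richer, but the queueing core it builds on can be sketched in a few lines (an illustrative toy of mine, not the authors' model): samples arrive at random, wait for the first free machine, and the mean wait indicates whether the station is adequately dimensioned.

```python
import heapq
import random

def simulate_station(n_samples: int, n_machines: int,
                     mean_interarrival: float, service_time: float,
                     seed: int = 42) -> float:
    """Toy discrete-event simulation: Poisson arrivals, FIFO service by the
    first free machine, fixed service time; returns the mean waiting time."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * n_machines        # min-heap of times each machine frees up
    heapq.heapify(free_at)
    total_wait = 0.0
    for _ in range(n_samples):
        t += rng.expovariate(1.0 / mean_interarrival)   # next sample arrives
        start = max(t, heapq.heappop(free_at))          # earliest free machine
        total_wait += start - t
        heapq.heappush(free_at, start + service_time)
    return total_wait / n_samples
```

Comparing, say, 3 and 5 machines at the same arrival rate shows directly how much queueing delay extra capacity removes, which is the kind of question a station-dimensioning model answers.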
7

Boldrin, Fabio, Chiara Taddia, and Gianluca Mazzini. "Web Distributed Computing Systems Implementation and Modeling." International Journal of Adaptive, Resilient and Autonomic Systems 1, no. 1 (2010): 75–91. http://dx.doi.org/10.4018/jaras.2010071705.

Full text
Abstract:
This article proposes a new approach to distributed computing. The main novelty consists in the exploitation of Web browsers as clients, thanks to the availability of JavaScript, AJAX and Flex. The described solution has two main advantages: it is client-free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so client-side computation is not invasive for users. The solution is developed using both AJAX and Adobe® Flex® technologies, embedding a pseudo-client into a Web page that hosts the computation. While users browse the hosting Web page, computation takes place, resolving single sub-problems and sending the solutions to the server-side part of the system. Our client-free solution is an example of a highly resilient and self-administered system that is able to organize the scheduling of the processes and the error management in an autonomic manner. A mathematical model has been developed for this solution. The main goals of the model are to describe and classify different categories of problems on the basis of feasibility, and to find the limits in the dimensioning of the scheduling systems for this approach to be worthwhile. The new architecture has been tested through different performance metrics by implementing two examples of distributed computing: the cracking of an RSA cryptosystem through the factorization of the public key, and the correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real situation.
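The sub-problem decomposition the article relies on can be illustrated with its RSA example (a deliberately naive sketch of mine; real RSA moduli are far beyond trial division): the server splits the search range into independent work units that each browser could solve on its own and report back.

```python
import math

def make_subproblems(n, chunk):
    """Split trial division of n into independent ranges (work units)."""
    limit = math.isqrt(n)
    return [(lo, min(lo + chunk - 1, limit)) for lo in range(2, limit + 1, chunk)]

def solve_unit(n, lo, hi):
    """A single work unit: report a divisor of n in [lo, hi], if any."""
    for d in range(lo, hi + 1):
        if n % d == 0:
            return d
    return None

# Each unit is independent, so they can be handed to different clients
# and the results merged server-side.
factors = [solve_unit(3233, lo, hi) for lo, hi in make_subproblems(3233, 10)]
```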
8

Steger, Jana, Isabella Patzke, Maximilian Berlet, et al. "Design of a force-measuring setup for colorectal compression anastomosis and first ex-vivo results." International Journal of Computer Assisted Radiology and Surgery 16, no. 8 (2021): 1335–45. http://dx.doi.org/10.1007/s11548-021-02371-8.

Full text
Abstract:
Purpose The introduction of novel endoscopic instruments is essential to reduce trauma in visceral surgery. However, endoscopic device development is hampered by challenges in respecting the dimensional restrictions, due to the narrow access route, and in achieving adequate force transmission. As the overall goal of our research is the development of a patient-adaptable, endoscopic anastomosis manipulator, biomechanical and size-related characterization of gastrointestinal organs is needed to determine technical requirements and thresholds defining the functional design and load-compatible dimensioning of devices. Methods We built an experimental setup to measure colon tissue compression piercing forces. We tested 54 parameter sets, including variations of three tissue fixation configurations, three piercing body configurations (four, eight, twelve spikes) and insertion trajectories of constant velocity (5 mm/s, 10 mm/s, 15 mm/s) and constant acceleration (5 mm/s², 10 mm/s², 15 mm/s²), each in 5 samples. Furthermore, anatomical parameters (lumen diameter, tissue thickness) were recorded. Results There was no statistically significant difference in insertion forces either between the trajectory groups or across the tissue fixation configurations. However, we observed a statistically significant increase in insertion forces with increasing number of spikes. The maximum mean peak forces for four, eight and twelve spikes were 6.4 ± 1.5 N, 13.6 ± 1.4 N and 21.7 ± 5.8 N, respectively. The 5th percentiles of specimen lumen diameter and pierced tissue thickness were 24.1 mm and 2.8 mm, and the 95th percentiles 40.1 mm and 4.8 mm, respectively. Conclusion The setup enabled reliable biomechanical characterization of colon material, on the basis of which design specifications for an endoscopic anastomosis device were derived. The axial implant closure unit must enable axial force transmission of at least 28 N (22 ± 6 N). Implant and applicator diameters must cover a range between 24 and 40 mm, and the implant gap, compressing anastomosed tissue, between 2 and 5 mm.
9

Turek, Steven, and Sam Anand. "A Hull Normal Based Approach for Cylindrical Size Assessment." Journal of Manufacturing Science and Engineering 133, no. 1 (2011). http://dx.doi.org/10.1115/1.4003332.

Full text
Abstract:
Digital measurement devices, such as coordinate measuring machines, laser scanning devices, and digital imaging, can provide highly accurate and precise coordinate data representing the sampled surface. However, this discrete measurement process can only account for measured data points, not the entire continuous form, and is heavily influenced by the algorithm that interprets the measured data. The definition of cylindrical size for an external feature as specified by ASME Y14.5.1M-1994 [The American Society of Mechanical Engineers, 1995, Dimensioning and Tolerancing, ASME Standard Y14.5M-1994, ASME, New York, NY; The American Society of Mechanical Engineers, 1995, Mathematical Definition of Dimensioning and Tolerancing Principles, ASME Standard Y14.5.1M-1994, ASME, New York, NY] matches the analytical definition of a minimum circumscribing cylinder (MCC) when rule no. 1 [ASME Y14.5M-1994; ASME Y14.5.1M-1994] is applied to ensure a linear axis. Even though the MCC is a logical choice for size determination, it is highly sensitive to the sampling method and any uncertainties encountered in that process. Determining the least-sum-of-squares solution is an alternative method commonly utilized in size determination. However, the least-squares formulation seeks an optimal solution not based on the cylindrical size definition [ASME Y14.5M-1994; ASME Y14.5.1M-1994] and thus has been shown to be biased [Hopp, 1993, “Computational Metrology,” Manuf. Rev., 6(4), pp. 295–304; Nassef and ElMaraghy, 1999, “Determination of Best Objective Function for Evaluating Geometric Deviations,” Int. J. Adv. Manuf. Technol., 15, pp. 90–95]. This work builds upon previous research in which the hull normal method was presented to determine the size of cylindrical bosses when rule no. 1 is applied [Turek and Anand, 2007, “A Hull Normal Approach for Determining the Size of Cylindrical Features,” ASME, Atlanta, GA]. A thorough analysis of the hull normal method's performance in various circumstances is presented here to validate it as a superior alternative to the least-squares and MCC solutions for size evaluation. The goal of the hull normal method is to recreate the sampled surface using computational geometry methods and to determine the cylinder's axis and radius based upon it. Based on repeated analyses of random samples of data from several measured parts and generated forms, it was concluded that the hull normal method outperformed all traditional solution methods. The hull normal method proved to be robust, having a lower bias and distributions that were skewed toward the true value of the radius, regardless of the amount of form error.
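To make the least-squares versus MCC contrast concrete, here is a minimal 2D analogue (my illustration; the paper's hull normal method itself reconstructs the surface with computational geometry and is not shown here): an algebraic least-squares circle fit, plus the circumscribing radius about the fitted center, which mimics the MCC-style size definition.

```python
import numpy as np

def fit_circle_lsq(x, y):
    """Algebraic (Kasa) least-squares circle fit to 2D points, e.g. one
    cross-section of a measured cylinder. Solves x^2 + y^2 = 2ax + 2by + c
    for the center (a, b), with c = r^2 - a^2 - b^2."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    return a, b, r

def min_circumscribed_radius(x, y, a, b):
    """Radius of the smallest circle centered at (a, b) containing all
    points -- a crude 2D analogue of the MCC size definition."""
    return np.max(np.hypot(x - a, y - b))
```

On noisy data the circumscribed radius rides on the outermost points (hence its sampling sensitivity), while the least-squares radius averages them, which is the bias the abstract discusses.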

Dissertations / Theses on the topic "Sample size dimensioning"

1

Fiorin, Rubens Alex. "Ácaros na cultura de soja: genótipos, danos e tamanho de amostra." Universidade Federal de Santa Maria, 2014. http://repositorio.ufsm.br/handle/1/3239.

Full text
Abstract:
The study aimed to evaluate the influence of soybean genotypes on spider mite populations, quantify the damage caused by spider mite attack, and determine the number of leaflets to collect from different genotypes to estimate the spider mite population. Two experiments were carried out, in São Sepé (20 genotypes) and in Santa Maria (25 genotypes), in a randomized block design with four replications in 4.5 and 5.0 × 25 m experimental units. Weekly samplings collected 25 leaflets from the middle stratum and 25 from the upper stratum of the soybean plants for each genotype, evaluating an area of 20 cm² per leaflet. To determine the sample size, data were used from the evaluations in which at least one genotype had an average population above one spider mite per cm². The number of immature plus adult spider mites was counted, and genotype means were compared with the bootstrap t test. The sample size was estimated for confidence interval amplitudes of 2 and 4 spider mites per 20 cm², and the optimal sample size was calculated. To quantify spider mite damage, infested plots and plots kept uninfested by acaricide applications were maintained for each genotype. The predominant species was Mononychellus planki. Spider mite populations differ among genotypes and concentrate in the upper stratum of the plant. The necessary sample size increases as the population grows: at the beginning of the infestation, 50 leaflets are enough for a maximum 95% confidence interval amplitude (CIA95%, 1 − p = 0.95) of 2 spider mites per 20 cm²; to quantify higher populations, 150 leaflets are needed for a maximum CIA95% of 4 spider mites per 20 cm². Yield variation in response to spider mite attack depends on the genotype, and for all genotypes there is a difference between infested and uninfested plots. Average damage was 493 kg ha⁻¹ in the Santa Maria experiment and 427 kg ha⁻¹ in São Sepé, with an average gain of 33.4%.
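The thesis sizes its samples from the amplitude of bootstrap confidence intervals; a simplified normal-approximation version of that calculation is sketched below (the standard deviation is an illustrative assumption, not a value from the thesis).

```python
import math
from statistics import NormalDist

def sample_size_for_ci(sd: float, ci_amplitude: float,
                       confidence: float = 0.95) -> int:
    """Leaflets needed so the confidence interval for the mean mite count
    has at most the given full amplitude, under a normal approximation."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    half_width = ci_amplitude / 2
    return math.ceil((z * sd / half_width) ** 2)
```

Halving the admissible amplitude quadruples the required sample, which mirrors the thesis finding that larger populations (with larger variances) demand 150 rather than 50 leaflets.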

Book chapters on the topic "Sample size dimensioning"

1

Boldrin, Fabio, Chiara Taddia, and Gianluca Mazzini. "Web Distributed Computing Systems." In Technological Innovations in Adaptive and Dependable Systems. IGI Global, 2012. http://dx.doi.org/10.4018/978-1-4666-0255-7.ch011.

Full text
Abstract:
This article proposes a new approach to distributed computing. The main novelty consists in the exploitation of Web browsers as clients, thanks to the availability of JavaScript, AJAX and Flex. The described solution has two main advantages: it is client-free, so no additional programs have to be installed to perform the computation, and it requires low CPU usage, so client-side computation is not invasive for users. The solution is developed using both AJAX and Adobe® Flex® technologies, embedding a pseudo-client into a Web page that hosts the computation. While users browse the hosting Web page, computation takes place, resolving single sub-problems and sending the solutions to the server-side part of the system. Our client-free solution is an example of a highly resilient and self-administered system that is able to organize the scheduling of the processes and the error management in an autonomic manner. A mathematical model has been developed for this solution. The main goals of the model are to describe and classify different categories of problems on the basis of feasibility, and to find the limits in the dimensioning of the scheduling systems for this approach to be worthwhile. The new architecture has been tested through different performance metrics by implementing two examples of distributed computing: the cracking of an RSA cryptosystem through the factorization of the public key, and the correlation index between samples in genetic data sets. Results have shown good feasibility of this approach both in a closed environment and in an Internet environment, in a typical real situation.

Conference papers on the topic "Sample size dimensioning"

1

Ameta, Gaurav, Joseph K. Davidson, and Jami J. Shah. "The Effects of Different Specifications on the Tolerance-Maps for an Angled Face." In ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2004. http://dx.doi.org/10.1115/detc2004-57199.

Full text
Abstract:
A new mathematical model for representing geometric tolerances is applied to a part with an angled face and is extended to show its sensitivity to different specifications for dimensioning and tolerancing the part. The model is compatible with the ASME/ISO Standards for geometric tolerances. Central to the new model is a Tolerance-Map®, a hypothetical volume of points that corresponds to all possible locations and variations of a segment of a plane which can arise from tolerances on size, position, form, and orientation. Every Tolerance-Map is a convex set. This model is one part of a bi-level model that we are developing for geometric tolerances. The new model makes stackup relations apparent in an assembly, and these can be used to allocate size and orientational tolerances; the same relations also can be used to identify sensitivities for these tolerances. All stackup relations can be met for 100% interchangeability or for a specified probability. This paper develops several Tolerance-Maps for a part with an angled end face under different tolerance specifications: linear size, angularity, angular size, "linear size & angularity", and "linear & angular size" tolerance. Comparison of the Tolerance-Maps for these specifications led to the following conclusions: a) an angular size tolerance alone is not sufficient for tolerancing an angled face; b) if the value of the tolerance remains the same, the allowable variation is greater in a part having only an angularity tolerance than in one having only a size tolerance.
2

Wearring, Colin. "The Functional Feature Model: Bridging the CAD/CAM Gap." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/cie-1653.

Full text
Abstract:
Commercial CAD systems were originally developed to support the generation of 2D engineering drawings, but evolved to support the development of 3D product models. Conceptually, the product model replaces 2D engineering drawings as a means of communicating product information such as size, shape, features, datums, tolerances, and other engineering specifications. Because of their history, the software architectures and data models used by commercial CAD systems do not directly represent all the engineering product information contained in 2D engineering drawings. Computer Assisted Engineering (CAE) tools require engineering product specifications as input. When these tools are integrated directly with the CAD system, a database representation of the product model is required for their efficient operation. Without a direct link to the CAD system, information must be transferred using standard format files, or manually entered into the CAE application. To satisfy the requirements for direct integration of CAE applications with CAD systems, the Functional Feature Model (FFM) was developed. By definition, the FFM contains component geometry, feature definitions, datums, datum features, tolerances and other feature attributes accessed through a standard interface. The FFM was named to distinguish the functional features used by an engineer in the definition of part function, inspection, and assembly from the features employed by CAD systems in the construction of geometry. Today, the FFM is used as the basis for CAE tools which perform analysis of product Geometric Dimensioning and Tolerancing (GD&T), 3D tolerance analysis of assemblies, and CMM programming. Any CAE application which requires the same or similar information as these applications can obtain its input from the FFM.
The FFM is a mature, commercially proven prototype for a standard product model, containing the majority of the engineering product information typically represented using 2D drawings annotated with Geometric Dimensioning and Tolerancing (GD&T) symbols. The FFM can be used instead of 2D drawings to supply the necessary product information to CAE applications. Using the FFM, there is no need to create the 2D engineering drawing, interpret the GD&T annotation, and enter the interpreted product information into the CAE application. It provides a standard interface (independent of the CAD system) for commercial development of CAE applications, and is designed in a fashion which makes it appropriate for use as a basis for emerging product model standards. The FFM provides a prototype for related activities like the Standard for the Exchange of Product Model Data (STEP) initiative, represented by the Product Data Exchange using STEP (PDES) organization in the USA. Corporate and government consortiums such as the Rapid Response Manufacturing (RRM) or Simulation Assessment Validation Environment (SAVE) initiatives could employ the FFM directly to support their objectives of developing the next-generation design and simulation environment.
3

Wearring, Colin. "The Functional Feature Model: Bridging the CAD/CAM Gap." In ASME 1996 Design Engineering Technical Conferences and Computers in Engineering Conference. American Society of Mechanical Engineers, 1996. http://dx.doi.org/10.1115/96-detc/cie-1656.

Full text
Abstract:
Commercial CAD systems were originally developed to support the generation of 2D engineering drawings, but evolved to support the development of 3D product models. Conceptually, the product model replaces 2D engineering drawings as a means of communicating product information such as size, shape, features, datums, tolerances, and other engineering specifications. Because of their history, the software architectures and data models used by commercial CAD systems do not directly represent all the engineering product information contained in 2D engineering drawings. Computer Assisted Engineering (CAE) tools require engineering product specifications as input. When these tools are integrated directly with the CAD system, a database representation of the product model is required for their efficient operation. Without a direct link to the CAD system, information must be transferred using standard format files, or manually entered into the CAE application. To satisfy the requirements for direct integration of CAE applications with CAD systems, the Functional Feature Model (FFM) was developed. By definition, the FFM contains component geometry, feature definitions, datums, datum features, tolerances and other feature attributes accessed through a standard interface. The FFM was named to distinguish the functional features used by an engineer in the definition of part function, inspection, and assembly from the features employed by CAD systems in the construction of geometry. Today, the FFM is used as the basis for CAE tools which perform analysis of product Geometric Dimensioning and Tolerancing (GD&T), 3D tolerance analysis of assemblies, and CMM programming. Any CAE application which requires the same or similar information as these applications can obtain its input from the FFM.
The FFM is a mature, commercially proven prototype for a standard product model, containing the majority of the engineering product information typically represented using 2D drawings annotated with Geometric Dimensioning and Tolerancing (GD&T) symbols. The FFM can be used instead of 2D drawings to supply the necessary product information to CAE applications. Using the FFM, there is no need to create the 2D engineering drawing, interpret the GD&T annotation, and enter the interpreted product information into the CAE application. It provides a standard interface (independent of the CAD system) for commercial development of CAE applications, and is designed in a fashion which makes it appropriate for use as a basis for emerging product model standards. The FFM provides a prototype for related activities like the Standard for the Exchange of Product Model Data (STEP) initiative, represented by the Product Data Exchange using STEP (PDES) organization in the USA. Corporate and government consortiums such as the Rapid Response Manufacturing (RRM) or Simulation Assessment Validation Environment (SAVE) initiatives could employ the FFM directly to support their objectives of developing the next-generation design and simulation environment.