Academic literature on the topic 'Non-parametric DEA method'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Non-parametric DEA method.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Non-parametric DEA method"

1. Thi Vu, Loan, Nga Thu Nguyen, and Linh Hong Dinh. "Measuring banking efficiency in Vietnam: parametric and non-parametric methods." Banks and Bank Systems 14, no. 1 (2019): 55–64. http://dx.doi.org/10.21511/bbs.14(1).2019.06.

Abstract:
The article aims to evaluate the business efficiency of commercial banks in Vietnam using both parametric and non-parametric approaches. In this study, Stochastic Frontier Analysis (SFA), a parametric method, and Data Envelopment Analysis (DEA), a non-parametric approach, are applied to a sample of 30 joint stock commercial banks in Vietnam over the period 2011–2015. Applying a Tobit regression model, the impact of bank size, bank age, and ownership on the efficiency of the banking industry in Vietnam is also investigated. The results show that, in general, Vietnamese banking efficiency improved during the selected period regardless of the technique used. However, there is only a small degree of similarity between the efficiency rankings identified by the SFA and DEA models. In terms of efficiency determinants, the results show that all three variables of size, age, and state ownership have a positive impact on bank efficiency.
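Many of the studies in this list rest on the input-oriented CCR envelopment program. As a minimal, self-contained sketch of how such a score is obtained (the toy two-input, one-output data are invented for illustration and are not the paper's Vietnamese bank sample; SciPy is assumed available):

```python
# Input-oriented CCR (constant returns to scale) DEA score via linear
# programming. Toy data: two inputs, one output, four DMUs -- invented for
# illustration, not the paper's Vietnamese bank sample.
import numpy as np
from scipy.optimize import linprog

X = np.array([[3.0, 2.0, 5.0, 4.0],    # input 1 per DMU (e.g., staff)
              [2.0, 3.0, 4.0, 2.0]])   # input 2 per DMU (e.g., branches)
Y = np.array([[5.0, 4.0, 6.0, 8.0]])   # output per DMU (e.g., loans)

def ccr_efficiency(o, X, Y):
    """min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                           # variables: [theta, lam_1..lam_n]
    A_ub = np.vstack([np.hstack([-X[:, [o]], X]),         # X lam - theta * x_o <= 0
                      np.hstack([np.zeros((s, 1)), -Y])]) # -Y lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

for o in range(X.shape[1]):
    print(f"DMU {o}: CCR efficiency = {ccr_efficiency(o, X, Y):.3f}")
```

A score of 1 places a unit on the efficient frontier; a score of, say, 0.8 means the unit could proportionally contract all inputs by 20% and still produce its outputs.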
2. Młynarski, Wojciech, and Adam Kaliszewski. "Efficiency evaluation in forest management – a literature review." Leśne Prace Badawcze / Forest Research Papers 79, no. 3 (2018): 289–98. https://doi.org/10.2478/frp-2018-0029.

Abstract:
The aim of our work was to give an overview of efficiency evaluation in forest management as described in the literature. Here we present definitions for efficiency and productivity of economic entities as well as categories of efficiency evaluation methods, and discuss ratio analysis and parametric and non-parametric approaches to measuring efficiency in forestry. With regard to ratio analysis, we focused on reports employing this approach in Poland due to the abundant literature on this subject. On the other hand, studies based on parametric and non-parametric approaches for efficiency evaluation in the forest sector have only been used occasionally in Poland, and thus this part of our analysis is based on research done abroad. The most important parametric method is the Stochastic Frontier Approach (SFA), while the most important non-parametric approach involves Data Envelopment Analysis (DEA), which was developed at the end of the 1970s and utilizes a mathematical programming algorithm. Our review shows that efficiency evaluation in forest management in Poland has so far been based mostly on ratio analysis. However, although those methods are of considerable practical importance, in terms of scientific development they are now being replaced by more mathematically and statistically advanced parametric and non-parametric methods, which also open up more opportunities to analyze the efficiency of forest management. The first research employing non-parametric DEA recently published in Poland is a good step towards improving research quality and provides comprehensive results for the efficiency evaluation of forest management.
3. Marković, Branka, Milica Lakić, and Ružica Đervida. "APPLICATION OF PARAMETER STATISTICAL TESTS AND DATA ENVELOPMENT ANALYSIS METHODS IN MODERN BUSINESS." SCIENCE International Journal 3, no. 4 (2024): 29–34. https://doi.org/10.35120/sciencej0304029m.

Abstract:
In the absence of sufficient information for quality business decision-making, i.e. successful performance of activities without unnecessary losses in the consumption of inputs, the non-parametric DEA (Data Envelopment Analysis) method, based on linear programming, has recently been used most often. When company managers have enough information to make business decisions, parametric statistical tests are used that compare the company's current performance with optimal performance, i.e. performance on the efficiency frontier. However, this situation is very rare, so before making business decisions, non-parametric and then parametric statistical tests are carried out in detail. The subject of this paper is primarily the simultaneous application of parametric and non-parametric statistical tests in assessing the economic efficiency of an economic entity. After the conducted research and analysis of the obtained results, it was determined that the null hypothesis, which claims that the relative efficiency of the warehouse obtained by parametric statistical tests and by the DEA method is identical and that the trends have the same direction, could not be fully accepted. Namely, it was found that the results for one and the same economic situation obtained using the two types of analysis differ to the extent that they are not adequate for economic decision-making; however, identical results were obtained in the assessment of the trend. It can be concluded that the simultaneous application of both methods, as well as their implementation over several iterations, can provide enough quality information for effective decision-making. Stochastic effects that occur during the implementation of business decisions using the DEA technique can be minimized through the simultaneous application of parametric statistical methods and tests for evaluating the expected efficiency of DEA. The effectiveness of this method in any case depends on the size of the sample used in the aforementioned statistical analysis. These statistical tests enable the measurement and detection of those input parameters that will most effectively contribute to the efficiency of business systems.
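To make the paper's pairing of test families concrete, the sketch below runs a parametric paired t-test, a non-parametric Wilcoxon signed-rank test, and a rank correlation for trend agreement on two score vectors for the same units; the numbers are invented, not the paper's warehouse data.

```python
# Compare DEA-based and parametric efficiency scores for the same units with
# a parametric (paired t) and a non-parametric (Wilcoxon signed-rank) test.
# Toy numbers: invented for illustration, not the paper's warehouse data.
import numpy as np
from scipy import stats

dea_scores  = np.array([0.92, 0.81, 1.00, 0.74, 0.88, 0.69, 0.95, 0.83])
parm_scores = np.array([0.85, 0.79, 0.97, 0.70, 0.90, 0.66, 0.89, 0.80])

t_stat, t_p = stats.ttest_rel(dea_scores, parm_scores)   # parametric
w_stat, w_p = stats.wilcoxon(dea_scores, parm_scores)    # non-parametric

print(f"paired t-test:  t = {t_stat:.3f}, p = {t_p:.3f}")
print(f"Wilcoxon test:  W = {w_stat:.3f}, p = {w_p:.3f}")
# Agreement of trend direction can be checked with a rank correlation:
rho, rho_p = stats.spearmanr(dea_scores, parm_scores)
print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3f})")
```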
4. Matulova, Marketa, and Jana Rejentova. "Efficiency of European Airports: Parametric Versus Non-parametric Approach." Croatian Operational Research Review 12, no. 1 (2021): 1–14. http://dx.doi.org/10.17535/crorr.2021.0001.

Abstract:
This paper presents a performance evaluation of European airports based on the application of both parametric and non-parametric approaches. We evaluated the 115 busiest airports in Europe according to the number of passengers checked in in 2018. The four inputs we used were the numbers of terminals, runways, boarding gates, and aircraft stands. Three variables were used to describe the outputs, namely passengers, movements, and cargo. The parametric method we chose to apply was Stochastic Frontier Analysis (SFA) with the Cobb-Douglas production function, a half-normal distribution of the inefficiency component, and a normal distribution of the error term. As a basic SFA model only allows for a single output, we employed different methods to get a single efficiency score for every airport. Next, we evaluated airport performance non-parametrically using several Data Envelopment Analysis (DEA) models, including the super-efficiency model. We compared the results obtained by the individual approaches and discussed their pros and cons. Finally, we applied the program evaluation procedure to explore the effect of different forms of airport ownership on performance.
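For reference, the normal/half-normal Cobb-Douglas frontier named in this abstract is conventionally written as follows (standard SFA notation, not reproduced from the paper):

```latex
% Cobb--Douglas stochastic frontier with a normal/half-normal composed error
\ln y_k = \beta_0 + \sum_{i=1}^{m} \beta_i \ln x_{ik} + v_k - u_k ,
\qquad v_k \sim \mathcal{N}(0,\sigma_v^2), \quad
u_k \sim \mathcal{N}^{+}(0,\sigma_u^2), \qquad
\mathrm{TE}_k = e^{-u_k} .
```

Here v_k is symmetric statistical noise, u_k >= 0 is the one-sided inefficiency term, and TE_k is the implied technical efficiency of airport k.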
5. Arsad, Roslah, Zaidi Isa, and Siti Nabilah Mohd Shaari. "Estimating Efficiency Performance of Decision-Making Unit by using SFA and DEA Method: A Cross-Sectional Data Approach." International Journal of Engineering & Technology 7, no. 4.33 (2018): 25. http://dx.doi.org/10.14419/ijet.v7i4.33.23478.

Abstract:
In this paper, a cross-sectional data sample of 115 Malaysian stocks is employed to compare the Data Envelopment Analysis (DEA) method and the Stochastic Frontier Analysis (SFA) method. These approaches are used to provide a review of frontier conceptual measurement and of the strengths and limitations of parametric and non-parametric models. A stochastic frontier production function of the Cobb-Douglas type was utilized for the estimation. The function was estimated using the maximum likelihood estimation technique. Two DEA models, DEA-CCR and DEA-BCC, are applied in this study, and the ranking correlation between the SFA method and both DEA models is determined using the Spearman rank method. The results reveal that, using SFA, the mean technical efficiency of the sampled consumer product companies is 37.5%, which implies that companies operating at the mean level of technical efficiency could produce 80.1% more output for a given level of inputs if they became technically more efficient. From the empirical results of the SFA method, we determined that the deviations from the efficient frontiers of production functions are largely attributed to inefficiency effects (technical inefficiency). Finally, the findings also showed differences in stock performance rankings across the DEA-CCR, DEA-BCC, and SFA methods. The main contribution of the paper is showing the comparative performance of both models, the DEA and SFA methods, using financial ratios.
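The CCR and BCC models named here differ only by a convexity constraint on the intensity vector; a sketch of both, under the same invented toy-data convention as the CCR example earlier in this list, follows.

```python
# DEA-BCC adds the convexity constraint sum(lam) = 1 to the CCR program;
# scale efficiency is CCR/BCC. Same invented toy-data convention as above.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(o, X, Y, vrs=False):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([np.hstack([-X[:, [o]], X]),
                      np.hstack([np.zeros((s, 1)), -Y])])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, n))]) if vrs else None
    b_eq = np.array([1.0]) if vrs else None
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

X = np.array([[3.0, 2.0, 5.0, 4.0], [2.0, 3.0, 4.0, 2.0]])
Y = np.array([[5.0, 4.0, 6.0, 8.0]])
for o in range(X.shape[1]):
    ccr = dea_efficiency(o, X, Y)             # DEA-CCR
    bcc = dea_efficiency(o, X, Y, vrs=True)   # DEA-BCC
    print(f"DMU {o}: CCR={ccr:.3f}  BCC={bcc:.3f}  scale={ccr/bcc:.3f}")
```

The resulting rankings can then be correlated with SFA rankings via scipy.stats.spearmanr, as the paper does with the Spearman rank method.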
6. Supriyono, Supriyono, Ahmad Rodoni, Yacop Suparno, Hermadi Hermadi, and Hilyatun Nafisah. "EFFICIENCY PERFORMANCE ANALYSIS OF PANIN DUBAI SYARIAH BANK IN COLLECTING AND DISTRIBUTING THIRD PARTY FUNDS BEFORE AND AFTER MERGER." I-Finance: a Research Journal on Islamic Finance 5, no. 1 (2019): 46–56. http://dx.doi.org/10.19109/ifinace.v5i1.3716.

Abstract:
In this study, Panin Dubai Syariah Bank is the object of research as an Islamic bank that has carried out a merger and acquisition. The research uses quarterly financial reports to determine the efficiency level of Panin Dubai Syariah Bank, analyzed using the Data Envelopment Analysis (DEA) method, and uses the annual report to find out the extent to which the bank carries out its Shariah values and objectives, analyzed on the basis of the Sharia Maqashid Index. Frontier approaches can be divided into parametric and non-parametric approaches. The parametric approach takes measurements using stochastic econometrics and seeks to eliminate noise from the effects of inefficiency, while the non-parametric linear programming approach performs non-stochastic measurements and tends to combine noise with inefficiency; it is based on the discovery and observation of the population and evaluates efficiency relative to the units observed. Among non-parametric methods, the approaches that can be used are Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH). The measurement results for Panin Dubai Syariah Bank using DEA indicate that the decision to merge Panin Syariah Bank with Dubai Islamic Bank was the right one because, with the merger, Panin Dubai Syariah Bank achieved a near-perfect efficiency value of 99% in 2015. By merging, Panin Dubai Syariah Bank can minimize the inefficiencies that occur in the input variables and thus maximize the efficiency achieved in the output variables.
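FDH, named here alongside DEA, drops convexity entirely, so an input-oriented score can be computed by simple enumeration over dominating peers instead of a linear program. A minimal sketch with invented data:

```python
# Free Disposal Hull (FDH) input efficiency by enumeration: no convex
# combinations, only observed dominating units. Toy data for illustration.
import numpy as np

X = np.array([[3.0, 2.0, 5.0, 4.0],
              [2.0, 3.0, 4.0, 2.0]])   # inputs (rows) x DMUs (columns)
Y = np.array([[5.0, 4.0, 6.0, 8.0]])   # outputs x DMUs

def fdh_input_efficiency(o, X, Y):
    n = X.shape[1]
    # candidate peers: units producing at least as much of every output
    peers = [j for j in range(n) if np.all(Y[:, j] >= Y[:, o])]
    # for each peer, the input contraction needed to be dominated by it
    ratios = [np.max(X[:, j] / X[:, o]) for j in peers]
    return min(ratios)  # the unit itself is always a peer, so min(...) <= 1

for o in range(X.shape[1]):
    print(f"DMU {o}: FDH efficiency = {fdh_input_efficiency(o, X, Y):.3f}")
```

Because only observed units serve as benchmarks, FDH scores are never lower than the corresponding convex DEA scores.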
7. Aifeng, Song, Zhang XiaoYang, Huang Weilai, Yang Xue, and Yang Juan. "Two-stage DEA for Bank Efficiency Evaluation Considering Shared Input and Unexpected Output Factors." E3S Web of Conferences 214 (2020): 01036. http://dx.doi.org/10.1051/e3sconf/202021401036.

Abstract:
With increasingly fierce market competition, only by relying on high-quality products and high customer satisfaction can enterprises survive. Among the many evaluation methods, Data Envelopment Analysis (DEA), as a non-parametric statistical method that effectively deals with multi-input and multi-output problems, has received more and more attention in evaluating the relative efficiency of decision-making units. In bank efficiency evaluation based on the DEA method, situations arise in which banks have both dual-role factors and unexpected (undesirable) output factors. The two-stage DEA model provides an effective way to evaluate the efficiency of banks with complex organizational structures. In order to evaluate the efficiency of unexpected outputs under uncertain information, a stochastic DEA model with unexpected outputs is established.
8. Novaes, Antonio G. N. "RAPID-TRANSIT EFFICIENCY ANALYSIS WITH THE ASSURANCE-REGION DEA METHOD." Pesquisa Operacional 21, no. 2 (2001): 179–97. http://dx.doi.org/10.1590/s0101-74382001000200004.

Abstract:
Rapid-transit services are a relevant part of the transportation network in most cities of the world. An important aspect of transport policy is the supply of public urban transportation. In particular, it is of interest to determine whether rapid-transit operators are working in a technically and scale-efficient way. Production analysis of transit services has been characterized by the econometric study of average-practice technologies. A more recent method to study such production frontiers is Data Envelopment Analysis (DEA). It is a non-parametric method, but its application to rapid transit, where the relations among technological variables are stricter, requires a prior structural analysis of the intervening inputs and outputs. DEA is employed in this paper to investigate the efficiency and returns to scale of 21 rapid-transit properties around the world. DEA was also used for the benchmarking of non-efficient rapid-transit properties, with special emphasis on São Paulo's subway system.
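The assurance-region variant constrains the weights of the DEA multiplier model; in its generic textbook form (not the paper's specific bounds) it reads:

```latex
% CCR multiplier form for DMU o with assurance-region bounds on input weights
\max_{u,v}\ \sum_{r} u_r\, y_{ro}
\quad \text{s.t.} \quad \sum_{i} v_i\, x_{io} = 1, \qquad
\sum_{r} u_r\, y_{rj} - \sum_{i} v_i\, x_{ij} \le 0 \;\; \forall j, \qquad
L_{ii'} \le \frac{v_i}{v_{i'}} \le U_{ii'}, \qquad u, v \ge 0 ,
```

where the bounds L and U encode prior knowledge about acceptable trade-offs between inputs (analogous bounds can be placed on the output weights u).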
9. Mirmozaffari, Mirpouya, Reza Yazdani, Elham Shadkam, Seyed Mohammad Khalili, Leyla Sadat Tavassoli, and Azam Boskabadi. "A Novel Hybrid Parametric and Non-Parametric Optimisation Model for Average Technical Efficiency Assessment in Public Hospitals during and Post-COVID-19 Pandemic." Bioengineering 9, no. 1 (2021): 7. http://dx.doi.org/10.3390/bioengineering9010007.

Abstract:
The COVID-19 pandemic has had a significant impact on hospitals and healthcare systems around the world. The cost of business disruption combined with lingering COVID-19 costs has placed many public hospitals on a course to insolvency. To quickly return to financial stability, hospitals should implement efficiency measures. An average technical efficiency (ATE) model made up of data envelopment analysis (DEA) and stochastic frontier analysis (SFA) is offered for assessing efficiency in public hospitals during and after the COVID-19 pandemic. The DEA method is a non-parametric method that requires no information other than the input and output quantities. SFA is a parametric method that considers stochastic noise in the data and allows statistical testing of hypotheses about production structure and the degree of inefficiency. The rationale for using these two competing approaches is to balance each method's strengths and weaknesses and to introduce a novel integrated approach. To show the applicability and efficacy of the proposed hybrid VRS-CRS-SFA (VCS) model, a case study is presented.

Dissertations / Theses on the topic "Non-parametric DEA method"

1. Dolhikh, Volodymyr Mykolaiovych (Долгіх, Володимир Миколайович). "Исследование относительной эффективности украинских банков" [A study of the relative efficiency of Ukrainian banks]. Thesis, Ukrainian Academy of Banking of the National Bank of Ukraine, 2013. http://essuir.sumdu.edu.ua/handle/123456789/59185.

Abstract:
The effective functioning of a state's economy depends to a large extent on the efficiency of the banking sector, which provides lending to economic entities through the redistribution of household savings and enterprises' working capital. For a bank's clients, owners, and shareholders, information about its efficiency, profitability, reliability, and stability is important. Such information can be obtained by analyzing the financial statements published quarterly by the National Bank of Ukraine.
2. Prando, Giulia. "Non-Parametric Bayesian Methods for Linear System Identification." Doctoral thesis, Università degli Studi di Padova, 2017. http://hdl.handle.net/11577/3426195.

Abstract:
Recent contributions have tackled the linear system identification problem by means of non-parametric Bayesian methods, which are built on largely adopted machine learning techniques, such as Gaussian Process regression and kernel-based regularized regression. Following the Bayesian paradigm, these procedures treat the impulse response of the system to be estimated as the realization of a Gaussian process. Typically, a Gaussian prior accounting for stability and smoothness of the impulse response is postulated, as a function of some parameters (called hyper-parameters in the Bayesian framework). These are generally estimated by maximizing the so-called marginal likelihood, i.e. the likelihood after the impulse response has been marginalized out. Once the hyper-parameters have been fixed in this way, the final estimator is computed as the conditional expected value of the impulse response w.r.t. the posterior distribution, which coincides with the minimum variance estimator. Assuming that the identification data are corrupted by Gaussian noise, the above-mentioned estimator coincides with the solution of a regularized estimation problem, in which the regularization term is the l2 norm of the impulse response, weighted by the inverse of the prior covariance function (a.k.a. kernel in the machine learning literature). Recent works have shown how such Bayesian approaches are able to jointly perform estimation and model selection, thus overcoming one of the main issues affecting parametric identification procedures, that is complexity selection.
While keeping the classical system identification methods (e.g. Prediction Error Methods and subspace algorithms) as a benchmark for numerical comparison, this thesis extends and analyzes some key aspects of the above-mentioned Bayesian procedure. In particular, four main topics are considered. 1. PRIOR DESIGN. Adopting Maximum Entropy arguments, a new type of l2 regularization is derived: the aim is to penalize the rank of the block Hankel matrix built with Markov coefficients, thus controlling the complexity of the identified model, measured by its McMillan degree. By accounting for the coupling between different input-output channels, this new prior is particularly suited to the identification of MIMO systems.
To speed up the estimation algorithm, a tailored version of the Scaled Gradient Projection algorithm is designed to optimize the marginal likelihood. 2. CHARACTERIZATION OF UNCERTAINTY. The confidence sets returned by the non-parametric Bayesian identification algorithm are analyzed and compared with those returned by parametric Prediction Error Methods. The comparison is carried out in the impulse response space, by deriving "particle" versions (i.e. Monte-Carlo approximations) of the standard confidence sets. 3. ONLINE ESTIMATION. The application of the non-parametric Bayesian system identification techniques is extended to an online setting, in which new data become available as time goes by. Specifically, two key modifications of the original "batch" procedure are proposed in order to meet the real-time requirements. In addition, the identification of time-varying systems is tackled by introducing a forgetting factor in the estimation criterion and by treating it as a hyper-parameter. 4. POST PROCESSING: MODEL REDUCTION. Non-parametric Bayesian identification procedures estimate the unknown system in terms of its impulse response coefficients, thus returning a model with high (possibly infinite) McMillan degree. A tailored procedure is proposed to reduce such a model to a lower-degree one, which appears more suitable for filtering and control applications. Different criteria for the selection of the order of the reduced model are evaluated and compared.
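The estimator described in this abstract has a compact closed form. The sketch below uses a first-order stable-spline ("TC") kernel; the input signal, FIR order, and fixed hyper-parameter values are illustrative assumptions, whereas the thesis tunes hyper-parameters by marginal-likelihood maximization.

```python
# Kernel-based regularized FIR estimation: ghat = argmin ||y - Phi g||^2
# + sigma^2 * g' K^{-1} g, with a first-order stable-spline ("TC") prior
# K[i,j] = lam * alpha**max(i,j). Closed form:
#   ghat = K Phi' (Phi K Phi' + sigma^2 I)^{-1} y.
# All signals and hyper-parameters below are illustrative, not from the thesis.
import numpy as np
from scipy.linalg import toeplitz, solve

rng = np.random.default_rng(0)
N, n = 200, 50                      # data length, FIR order
u = rng.standard_normal(N)          # input signal
g_true = 0.8 ** np.arange(n)        # a stable "true" impulse response
Phi = toeplitz(u, np.r_[u[0], np.zeros(n - 1)])  # regression matrix
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

lam, alpha, sigma2 = 1.0, 0.9, 0.01              # fixed here; in practice set
                                                 # by marginal likelihood
idx = np.arange(1, n + 1)
K = lam * alpha ** np.maximum.outer(idx, idx)    # TC kernel
g_hat = K @ Phi.T @ solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
print("relative fit error:", np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))
```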
3. Jin, Qianying. "Ordinal classification with non-parametric frontier methods: overview and new proposals." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1A007.

Abstract:
Following the idea of separating two groups with a hypersurface, the convex (C) frontier generated from the data envelopment analysis (DEA) method is employed as a separating hypersurface in classification. No assumption on the shape of the separating hypersurface is required while using a DEA frontier. Moreover, its reasoning of the membership is quite clear by referring to a benchmark observation. Despite these strengths, the DEA frontier-based classifier does not always perform well in classification. Therefore, this thesis focuses on modifying the existing frontier-based classifiers and proposing novel frontier-based classifiers for the ordinal classification problem. In the classification literature, all axioms used to construct the C DEA frontier are kept in generating a separating frontier, without arguing their correspondence with the related background information. This motivates our work in Chapter 2 where the connections between the axioms and the background information are explored. First, by reflecting on the monotonic relation, both input-type and output-type characteristic variables are incorporated. Moreover, the minimize sum of deviations model is proposed to detect the underlying monotonic relation if this relation is not given a priori. Second, a nonconvex (NC) frontier classifier is constructed by relaxing the commonly used convexity assumption. Third, the directional distance function (DDF) measure is introduced for providing further managerial implications, although it does not change the classification results compared to the radial measure. The empirical results show that the NC frontier classifier has the highest classification accuracy. A comparison with six classic classifiers also reveals the superiority of applying the NC frontier classifier. While the relation of the characteristic variables often suggests consideration of a monotonic relation, its parallel problem of considering a non-monotonic relation is rarely considered. In Chapter 3, a generalized disposal assumption which limits the disposability within a value range is developed for characterizing the non-monotonic relation. Instead of having a single separating frontier, a NC separating hull which consists of several frontiers is constructed to separate the groups. By adding the convexity assumption, a C separating hull is then constructed. An illustrative example is used to test the performance. The NC hull classifier outperforms the C hull classifier. Moreover, a comparison with some existing frontier classifiers also reveals the superiority of applying the proposed NC hull classifier. Chapter 4 proposes novel frontier classifiers for accommodating different mixes of classification information. To be specific, by reflecting on the monotonic relation, a NC classifier is constructed. If there is a priori information of the substitution relation, then a C classifier is generated. Both the NC and C classifiers generate two frontiers where each envelops one group of observations. The intersection of two frontiers is known as the overlap which may lead to misclassifications. The overlap is reduced by allowing the two frontiers to shift inwards to the extent that the total misclassification cost is minimized. The shifted cost-sensitive frontiers are then used to separate the groups. The discriminant rules are also designed to incorporate the cost information. The empirical results show that the NC classifier provides a better separation than the C one does. Moreover, the proposed DDF measure outperforms the commonly used radial measure in providing a reasonable separation.

Book chapters on the topic "Non-parametric DEA method"

1. Cimatti, Alessandro, Alberto Griggio, and Gianluca Redondi. "Universal Invariant Checking of Parametric Systems with Quantifier-free SMT Reasoning." In Automated Deduction – CADE 28. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-79876-5_8.

Abstract:
The problem of invariant checking in parametric systems – which are required to operate correctly regardless of the number and connections of their components – is gaining increasing importance in various sectors, such as communication protocols and control software. Such systems are typically modeled using quantified formulae, describing the behaviour of an unbounded number of (identical) components, and their automatic verification often relies on the use of decidable fragments of first-order logic in order to effectively deal with the challenges of quantified reasoning. In this paper, we propose a fully automatic technique for invariant checking of parametric systems which does not rely on quantified reasoning. Parametric systems are modeled with array-based transition systems, and our method iteratively constructs a quantifier-free abstraction by analyzing, with SMT-based invariant checking algorithms for non-parametric systems, increasingly-larger finite instances of the parametric system. Depending on the verification result in the concrete instance, the abstraction is automatically refined by leveraging candidate lemmas from inductive invariants, or by discarding previously computed lemmas. We implemented the method using a quantifier-free SMT-based IC3 as the underlying verification engine. Our experimental evaluation demonstrates that the approach is competitive with the state of the art, solving several benchmarks that are out of reach for other tools.
2. Arboretti, Rosa, Riccardo Ceccato, and Luigi Salmaso. "Nonparametric methods for stratified C-sample designs: a case study." In Proceedings e report. Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-304-8.05.

Abstract:
Several parametric and nonparametric methods have been proposed to deal with stratified C-sample problems where the main interest lies in evaluating the presence of a certain treatment effect, but the strata effects cannot be overlooked. Stratified scenarios can be found in several different fields. In this paper we focus on a particular case study from the field of education, addressing a typical stochastic ordering problem in the presence of stratification. We are interested in assessing how the performance of students from different degree programs at the University of Padova change, in terms of university credits and grades, when compared with their entry test results. To address this problem, we propose an extension of the Non-Parametric Combination (NPC) methodology, a permutation-based technique (see Pesarin and Salmaso, 2010), as a valuable tool to improve the data analytics for monitoring University students’ careers at the School of Engineering of the University of Padova. This new procedure indeed allows us to assess the efficacy of the University of Padova’s entry tests in evaluating and selecting future students.
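In the spirit of the NPC methodology cited here, partial permutation tests on each endpoint are combined into one global test through a combining function. A minimal two-endpoint sketch with Fisher's combining rule follows; the data are simulated, not the Padova student records.

```python
# Spirit of the Non-Parametric Combination (NPC) methodology: permutation
# tests on several endpoints, combined with Fisher's rule over the same set
# of permutations. Toy two-group data, not the Padova entry-test dataset.
import numpy as np

rng = np.random.default_rng(1)
n1, n2, B = 15, 15, 2000
groupA = rng.normal(0.3, 1.0, size=(n1, 2))     # two endpoints per unit
groupB = rng.normal(0.0, 1.0, size=(n2, 2))
data = np.vstack([groupA, groupB])

def endpoint_stats(perm):
    a, b = data[perm[:n1]], data[perm[n1:]]
    return a.mean(axis=0) - b.mean(axis=0)      # one statistic per endpoint

obs = endpoint_stats(np.arange(n1 + n2))
perm_T = np.array([endpoint_stats(rng.permutation(n1 + n2)) for _ in range(B)])
all_T = np.vstack([obs, perm_T])                # row 0 = observed statistics
# upper-tail partial p-value of every row, endpoint-wise
lam = (all_T[None, :, :] >= all_T[:, None, :]).mean(axis=1)
fisher = -2 * np.log(lam).sum(axis=1)           # Fisher combining function
p_global = (fisher >= fisher[0]).mean()
print(f"partial p-values {lam[0]}, global NPC p = {p_global:.4f}")
```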
3. Zhang, Hongmei. "Non- and semi-parametric methods to select genetic and epigenetic factors." In Analyzing High-Dimensional Gene Expression and DNA Methylation Data with R. Chapman and Hall/CRC, 2020. http://dx.doi.org/10.1201/9780429155192-7.
4. Vannucci, Giulia, Anna Gottard, Leonardo Grilli, and Carla Rampichini. "Random effects regression trees for the analysis of INVALSI data." In Proceedings e report. Firenze University Press, 2021. http://dx.doi.org/10.36253/978-88-5518-304-8.07.

Abstract:
Mixed or multilevel models exploit random effects to deal with hierarchical data, where statistical units are clustered in groups and cannot be assumed as independent. Sometimes, the assumption of linear dependence of a response on a set of explanatory variables is not plausible, and model specification becomes a challenging task. Regression trees can be helpful to capture non-linear effects of the predictors. This method was extended to clustered data by modelling the fixed effects with a decision tree while accounting for the random effects with a linear mixed model in a separate step (Hajjem & Larocque, 2011; Sela & Simonoff, 2012). Random effect regression trees are shown to be less sensitive to parametric assumptions and provide improved predictive power compared to linear models with random effects and regression trees without random effects. We propose a new random effect model, called Tree embedded linear mixed model, where the regression function is piecewise-linear, consisting in the sum of a tree component and a linear component. This model can deal with both non-linear and interaction effects and cluster mean dependencies. The proposal is the mixed effect version of the semi-linear regression trees (Vannucci, 2019; Vannucci & Gottard, 2019). Model fitting is obtained by an iterative two-stage estimation procedure, where both the fixed and the random effects are jointly estimated. The proposed model allows a decomposition of the effect of a given predictor within and between clusters. We will show via a simulation study and an application to INVALSI data that these extensions improve the predictive performance of the model in the presence of quasi-linear relationships, avoiding overfitting, and facilitating interpretability.
5. Shanmugam, Selvanayaki Kolandapalayam, J. Shyamala Devi, R. Suganya, and R. Senthil Kumar. "Artificial Intelligence (AI) and Data Envelopment Analysis (DEA) in Healthcare." In Synergizing Data Envelopment Analysis and Machine Learning for Performance Optimization in Healthcare. IGI Global, 2025. https://doi.org/10.4018/979-8-3373-0081-8.ch001.

Abstract:
The convergence of Data Envelopment Analysis (DEA) and Machine Learning (ML) has sparked growing interest in efficiency analysis, as the two methods offer complementary strengths. DEA is a non-parametric method used to assess the relative efficiency of decision-making units (DMUs) based on input-output relationships. DEA plays a vital role in economic, healthcare, education, and logistics applications, identifying the best-performing entities by benchmarking all units against an efficient frontier. However, DEA lacks predictive capability and struggles with large, complex datasets, limitations that ML algorithms can address. Combining DEA's efficiency measurement with ML's capabilities for learning, detecting, and recognizing data patterns can enhance performance evaluation and decision-making.
6. Koch, Stefan. "Measuring the Efficiency of Free and Open Source Software Projects Using Data Envelopment Analysis." In Software Applications. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-060-8.ch173.

Abstract:
In this chapter, we propose for the first time a method to compare the efficiency of free and open source projects, based on the data envelopment analysis (DEA) methodology. DEA offers several advantages in this context, as it is a non-parametric optimization method without any need for the user to define any relations between different factors or a production function, can account for economies or diseconomies of scale, and is able to deal with multi-input, multi-output systems in which the factors have different scales. Using a data set of 43 large F/OS projects retrieved from SourceForge.net, we demonstrate the application of DEA, and show that DEA indeed is usable for comparing the efficiency of projects. We will also show additional analyses based on the results, exploring whether the inequality in work distribution within the projects, the licensing scheme, or the intended audience have an effect on their efficiency. As this is a first attempt at using this method for F/OS projects, several future research directions are possible. These include additional work on determining input and output factors, comparisons within application areas, and comparison to commercial or mixed-mode development projects.
7. Koch, Stefan. "Measuring the Efficiency of Free and Open Source Software Projects Using Data Envelopment Analysis." In Emerging Free and Open Source Software Practices. IGI Global, 2007. http://dx.doi.org/10.4018/978-1-59904-210-7.ch002.

Abstract:
In this chapter, we propose for the first time a method to compare the efficiency of free and open source projects, based on the data envelopment analysis (DEA) methodology. DEA offers several advantages in this context, as it is a non-parametric optimization method without any need for the user to define any relations between different factors or a production function, can account for economies or diseconomies of scale, and is able to deal with multi-input, multi-output systems in which the factors have different scales. Using a data set of 43 large F/OS projects retrieved from SourceForge.net, we demonstrate the application of DEA, and show that DEA indeed is usable for comparing the efficiency of projects. We will also show additional analyses based on the results, exploring whether the inequality in work distribution within the projects, the licensing scheme or the intended audience have an effect on their efficiency. As this is a first attempt at using this method for F/OS projects, several future research directions are possible. These include additional work on determining input and output factors, comparisons within application areas, and comparison to commercial or mixed-mode development projects.
8. Edeh, Natasha Cecilia. "Advancing Supply Chain Efficiency and Sustainability." In Advances in Business Information Systems and Analytics. IGI Global, 2023. http://dx.doi.org/10.4018/979-8-3693-0255-2.ch009.

Abstract:
This chapter investigates data envelopment analysis (DEA) in the context of supply chain management. DEA, a non-parametric method for assessing productive efficiency with many inputs and outputs, has considerably impacted research and practical implementation. It enables performance analysis across organizations with complicated input-output interactions. DEA provides user-friendly and customizable criterion weighting, simplifies analysis by eliminating the need for production function calculation, and delivers comprehensive efficiency measurements. This chapter examines existing research on the use of DEA in supply chains to assess present practices, recent breakthroughs, and techniques critically. This chapter addresses the central research question, “What are the latest advancements and methodologies in applying DEA to the supply chain?” The findings of this study add to the understanding of current practices at the confluence of DEA and supply chain management, which is critical in today's complex corporate context.
9. Vajjhala, Narasimha Rao, and Philip Eappen. "Data Envelopment Analysis in Healthcare Management." In Advances in Business Information Systems and Analytics. IGI Global, 2023. http://dx.doi.org/10.4018/979-8-3693-0255-2.ch011.

Abstract:
This chapter explores the applications, contributions, limitations, and challenges of data envelopment analysis (DEA) in healthcare management. DEA, a non-parametric method used for evaluating the efficiency of decision-making units, has found extensive applications in healthcare sectors such as hospital management, nursing, and outpatient services. The review consolidates findings from a broad range of studies, highlighting DEA's significant contributions to efficiency measurement, benchmarking, resource allocation and optimization, and performance evaluation. However, despite DEA's robust applications, the chapter also identifies several limitations and challenges, including the selection of inputs and outputs, sensitivity to outliers, inability to handle statistical noise, lack of inherent uncertainty measures, homogeneity assumption, and the static nature of traditional DEA models. These challenges underscore the need for further research and methodological advancements in applying DEA in healthcare management.
10. Savić, Gordana, and Milan Martić. "Composite Indicators Construction by Data Envelopment Analysis." In Advances in Data Mining and Database Management. IGI Global, 2017. http://dx.doi.org/10.4018/978-1-5225-0714-7.ch005.

Abstract:
Composite indicators (CIs) are seen as an aggregation of a set of sub-indicators for measuring multi-dimensional concepts that cannot be captured by a single indicator (OECD, 2008). The indicators of development in different areas are also constructed by aggregating several sub-indicators. Consequently, the construction of CIs includes weighting and aggregation of individual performance indicators. These steps in CI construction are challenging issues as the final results are significantly affected by the method used in aggregation. The main question is whether and how to weigh individual performance indicators. Verifiable information regarding the true weights is typically unavailable. In practice, subjective expert opinions are usually used to derive weights, which can lead to disagreements (Hatefi & Torabi, 2010). The disagreement can appear when the experts from different areas are included in a poll since they can value criteria differently in accordance with their expertise. Therefore, a proper methodology of the derivation of weights and construction of composite indicators should be employed. From the operations research standpoint, the data envelopment analysis (DEA) and the multiple criteria decision analysis (MCDA) are proper methods for the construction of composite indicators (Zhou & Ang, 2009; Zhou, Ang, & Zhou, 2010). All methods combine the sub-indicators according to their weights, except that the MCDA methods usually require a priori determination of weights, while the DEA determines the weights a posteriori, as a result of model solving. This chapter addresses the DEA as a non-parametric technique, introduced by Charnes, Cooper, and Rhodes (1978), for efficiency measurement of different non-profitable and profitable units. It is lately adopted as an appropriate method for the CI construction due to its several features (Shen, Ruan, Hermans, Brijs, Wets, & Vanhoof, 2011). Firstly, individual performance indicators are combined without a priori determination of weights, and secondly, each unit under observation is assessed taking into consideration the performance of all other units, which is known as the 'benefit of the doubt' (BOD) approach (Cherchye, Moesen, Rogge, & van Puyenbroeck, 2007). The methodological and theoretical aspects and the flaws of the DEA application for the construction of CIs will be discussed in this chapter, starting with the issues related to the application procedure, followed by the issues of real data availability, introducing value judgments, qualitative data, and non-desirable performance indicators. The procedure of a DEA-based CI construction will be illustrated by the case of ranking of different regions of Serbia based on their socio-economic development.
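The 'benefit of the doubt' construction named in this chapter is itself a DEA-type program; in its standard form (generic notation, not copied from the chapter), the composite indicator of unit o with normalized sub-indicators I solves:

```latex
% Benefit-of-the-doubt composite indicator for unit o
\mathrm{CI}_o \;=\; \max_{w \ge 0} \sum_{q=1}^{Q} w_q\, I_{qo}
\quad \text{s.t.} \quad \sum_{q=1}^{Q} w_q\, I_{qj} \le 1 \quad \forall j ,
```

so each unit receives the weights most favourable to itself, subject to no unit scoring above 1 under those same weights.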

Conference papers on the topic "Non-parametric DEA method"

1. Jetmar, Marek. "Application of the Non-parametric DEA Method to Analyze the Efficiency of Selected Services Provided by Municipalities, Example of Public Libraries." In Current Trends in Public Sector Research. Masaryk University Press, 2020. http://dx.doi.org/10.5817/cz.muni.p210-9646-2020-7.

Abstract:
The paper deals with the possibility of applying the DEA method to measure the efficiency of local public services provided by municipalities and towns in the Czech Republic. It tests and models data on the effectiveness of local libraries, which for 100 years have provided basic education and disseminated knowledge in municipalities. There are many models in the literature dealing with various problems of efficiency analysis. A particularly suitable and elegant model is the DEA model based on the Chebyshev distance. This model can be formulated both under the assumption of constant returns to scale and under the assumption of variable returns to scale. Similar to the classical DEA model, this method can be formulated as a set of optimization problems looking for weights for given inputs and outputs.
2. Mansouri Kaleibar, Mozhgan, and Evelin Krmac. "RANKING OF ADRIATIC SEA CONTAINER PORTS USING DEA." In Maritime Transport Conference. Universitat Politècnica de Catalunya, Iniciativa Digital Politècnica, 2024. http://dx.doi.org/10.5821/mt.12804.

Abstract:
Ports are important transport hubs and facilitate the movement of goods for businesses in local communities and global markets. Ports on the Adriatic Sea play a special role in European transport due to their shorter distance to Asian and African markets. The ranking of ports is important not only to assess their efficiency, but also to create a competitive environment and to enable port managers and policy makers to recognise and take into account their strengths and weaknesses, leading to an improvement in the performance of ports in general. Data Envelopment Analysis (DEA) is a non-parametric method for evaluating and ranking entities. Cross-efficiency is one of the ranking methods that is able to evaluate all decision-making units (DMUs), including efficient and inefficient ones. In this article, the method is extended to handle uncontrollable inputs and undesirable outputs in an uncertain environment. The article therefore deals with the ranking of Adriatic container ports from an economic and environmental perspective using the new improved cross-efficiency method.
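In its basic form (standard notation; the authors' extension for uncontrollable inputs, undesirable outputs, and uncertainty is not reproduced here), cross-efficiency evaluates DMU j under the optimal multipliers of every DMU d and averages:

```latex
% Peer appraisal: efficiency of DMU j under the optimal weights (u*, v*) of
% DMU d, averaged over all d to rank efficient and inefficient units alike
E_{dj} = \frac{\sum_r u_{rd}^{*}\, y_{rj}}{\sum_i v_{id}^{*}\, x_{ij}} ,
\qquad \bar{E}_j = \frac{1}{n} \sum_{d=1}^{n} E_{dj} .
```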
3. Mihaylova-Borisova, Gergana. "PANDEMIC CRISIS AND ITS EFFECTS ON BULGARIAN BANKING SYSTEM'S EFFICIENCY." In 5th International Scientific Conference – EMAN 2021 – Economics and Management: How to Cope With Disrupted Times. Association of Economists and Managers of the Balkans, Belgrade, Serbia, 2021. http://dx.doi.org/10.31410/eman.2021.95.

Abstract:
Economies are once again facing the challenges of a crisis, this time related to the spread of the coronavirus in 2020. The banking sector, as one of the main intermediaries in the economy, is also affected by the spread of the new crisis, which differs from previous crises such as the global financial crisis of 2008 and the European debt crisis of 2012–2013. The banking sector in Bulgaria suffers from the pandemic crisis due to the decelerated growth rate of loans provided to households and non-financial enterprises, as well as declining profits related to the narrowing spread between interest rates on loans and deposits. The pandemic crisis, which later turned into an economic one, is having a negative impact on the efficiency of the banking system. To prove the negative impact of the pandemic crisis on the efficiency of banks, the non-parametric method for measuring efficiency, the so-called Data Envelopment Analysis (DEA), is used.
4. Silva, Eduardo Augusto de Medeiros, and Ivan da Silva Sendin. "A non-parametric approach to identifying anomalies in Bitcoin mining." In Anais Estendidos do Simpósio Brasileiro de Segurança da Informação e de Sistemas Computacionais. Sociedade Brasileira de Computação - SBC, 2024. http://dx.doi.org/10.5753/sbseg_estendido.2024.241602.

Abstract:
Selfish Mining is an attack on the proof-of-work-based cryptocurrency consensus mechanism, enabling attackers to gain more than their fair share of rewards. Its existence indicates that the Nakamoto consensus is not incentive compatible and could jeopardize blockchain security. Recently, a method employing the Z-Score to detect selfish mining was proposed. This paper introduces a non-parametric statistical technique to identify traces of selfish miners on the blockchain without assuming any specific statistical distribution for the analyzed data. Additionally, the applicability of this type of analysis is discussed.
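The paper's specific statistic is not reproduced in this abstract; purely as an illustration of a distribution-free check in this setting, one might compare a suspect miner's inter-block times against the network's with a Mann-Whitney U test. The data and scenario below are simulated and hypothetical, not the paper's blockchain traces.

```python
# Generic distribution-free check (illustration only; NOT the paper's method):
# compare one miner's inter-block arrival times with everyone else's using a
# Mann-Whitney U test. Bursts of unusually fast consecutive self-built blocks
# are one trace selfish mining can leave. Data below are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
honest_gaps  = rng.exponential(600.0, size=400)   # seconds between blocks
suspect_gaps = np.concatenate([rng.exponential(600.0, size=60),
                               rng.exponential(60.0,  size=40)])  # fast bursts

u_stat, p = stats.mannwhitneyu(suspect_gaps, honest_gaps, alternative="less")
print(f"U = {u_stat:.0f}, one-sided p = {p:.2e}  (small p => atypically fast)")
```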
5. Gonçalves, Victor H., and João L. G. Rosa. "Forecasting economic time series using chaotic neural networks." In XV Encontro Nacional de Inteligência Artificial e Computacional. Sociedade Brasileira de Computação - SBC, 2018. http://dx.doi.org/10.5753/eniac.2018.4470.

Abstract:
This paper describes the application of KIII, a biologically more plausible neural network model, to forecasting economic time series. K-sets are connectionist models based on neural populations and have been used in many machine learning applications. In this paper, the method was applied to the IPCA, a Brazilian consumer price index surveyed by IBGE, with values ranging from August 1994 to June 2017. Experiments were performed using four non-parametric models and seven parametric methods, and the statistical metric RMSE was used to compare the methods' performance. Freeman KIII sets worked well as a filter but were not a good prediction method. This paper contributes the use of non-parametric models for forecasting inflation in a developing country.
6. Fogliatto, Matheus S. S., Luiz Desuó N., Rafael R. M. Ribeiro, et al. "Time to Event Analysis for Failure Causes in Electrical Power Distribution Systems." In Congresso Brasileiro de Automática - 2020. sbabra, 2020. http://dx.doi.org/10.48011/asba.v2i1.1662.

Abstract:
Electricity is a fundamental resource for modern society. However, Electrical Power Distribution Systems, which are responsible for delivering electricity to end consumers, face a number of threats. Analyzing how long these hazards threaten such systems before causing failure events is an essential area of study. Through statistical methods, it is possible to study the behaviour of time until failure, as well as to observe the influence of variables on this time, providing models to predict when a failure event will occur. In this study, Reliability Analysis Regression techniques are used on real data, constructing a model for all failures and for different groups of failures, using non-parametric and parametric methods to estimate the reliability and cumulative hazard curves. An analysis of the failure causes directly linked to weather events, using six weather variables, is also made.
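A common non-parametric estimate of the reliability curve mentioned here is the Kaplan-Meier estimator; below is a small hand-rolled sketch with invented failure and censoring times, not the paper's distribution-system data.

```python
# Non-parametric reliability: a hand-rolled Kaplan-Meier estimator of the
# survival (reliability) curve from failure times with right censoring.
# Times below are invented, not the paper's distribution-system data.
import numpy as np

times    = np.array([ 5.,  8., 12., 12., 20., 25., 31., 40., 41., 60.])
observed = np.array([  1,   1,   0,   1,   1,   0,   1,   1,   0,   1])  # 1=failure, 0=censored

def kaplan_meier(times, observed):
    order = np.argsort(times)
    t, d = times[order], observed[order]
    curve, surv = [], 1.0
    for u in np.unique(t):
        deaths = np.sum((t == u) & (d == 1))
        n_at_risk = np.sum(t >= u)        # units still at risk just before u
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            curve.append((u, surv))
    return curve

for u, s in kaplan_meier(times, observed):
    print(f"t = {u:5.1f}   R(t) = {s:.3f}")
```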
APA, Harvard, Vancouver, ISO, and other styles
7

Mishra, Debakanta, and S. M. Naziur Mahmud. "Effect of Particle Size and Shape Characteristics on Ballast Shear Strength: A Numerical Study Using the Direct Shear Test." In 2017 Joint Rail Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/jrc2017-2322.

Full text
Abstract:
The ballast layer serves as a major structural component in typical ballasted railroad track systems. When subjected to an external load, ballast particles present a complex mechanical response that is strongly dependent on particle-to-particle interactions within this discrete medium. One common test used to study the shear strength characteristics of railroad ballast is the Direct Shear Test (DST). However, it is often not feasible in standard geotechnical engineering laboratories to conduct direct shear tests on ballast particles due to the significantly large specimen and test setup requirements. Even for the limited number of laboratories equipped to accommodate such large specimens, conducting repeated tests for parametric analysis of the effects of different test and specimen parameters on shear strength properties is often not feasible. Numerical modeling efforts are therefore commonly used for such parametric analyses. An ongoing research study at Boise State University is using the Discrete Element Method (DEM) to evaluate the effects of varying particle size and shape characteristics (i.e., flakiness, elongation, roundness, angularity) on the direct shear strength behavior of railroad ballast. A commercially available three-dimensional DEM package (PFC3D®) is being used for this purpose. In numerical modeling, railroad ballast can be simulated using spheres (a simple approach) or non-breakable clumps (a more complex approach). This paper uses both approaches to compare the ballast stress-strain response obtained from the DST. Laboratory test results available in the published literature are used to calibrate the developed numerical models. This paper presents findings from this numerical modeling effort and draws inferences concerning the implications of these findings for the design and construction of railroad ballast layers.
APA, Harvard, Vancouver, ISO, and other styles
8

Da Silva, Maykon Renan Pereira, and Flávio Rocha. "Método de Estimação de Parâmetros para Modelagem no Domínio Wavelet do Tráfego de Redes de Computadores Usando o Algoritmo de Levenberg-Marquardt." In Workshop em Desempenho de Sistemas Computacionais e de Comunicação. Sociedade Brasileira de Computação - SBC, 2020. http://dx.doi.org/10.5753/wperformance.2020.11104.

Full text
Abstract:
Research has shown that analysis and modeling techniques that provide a better understanding of the behavior of network traffic flows are very important in the design and optimization of communication networks. For this reason, this work proposes a multifractal model based on a multiplicative cascade in the wavelet domain to synthesize network traffic samples. In the proposed model, a parametric form based on an exponential function is used for the variance of the multipliers along the stages of the cascade. The exponential function's parameters are obtained by solving a non-linear system with the Levenberg-Marquardt method. The main contribution of the proposed algorithm is the use of a fixed and reduced number of parameters to generate network traffic samples that exhibit characteristics such as self-similarity and a wide Multifractal Spectrum Width (MSW) similar to those of real network traffic traces, without the need for prior adjustment of these parameters.
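As an illustration of the estimation step, the sketch below fits a decaying exponential var(j) ≈ a·exp(−b·j) for the multiplier variance at cascade stage j using SciPy's Levenberg-Marquardt solver; the exponential form shown here and all data are assumptions for the example, not the authors' exact parameterisation.

import numpy as np
from scipy.optimize import least_squares

def residuals(p, stages, observed_var):
    # Residuals of the assumed model a * exp(-b * j) against observed variances.
    a, b = p
    return a * np.exp(-b * stages) - observed_var

stages = np.arange(1.0, 13.0)  # cascade stage indices (hypothetical depth of 12)
rng = np.random.default_rng(7)
observed_var = 0.5 * np.exp(-0.3 * stages) + 0.005 * rng.standard_normal(12)
fit = least_squares(residuals, x0=[1.0, 0.1],
                    args=(stages, observed_var), method="lm")
a_hat, b_hat = fit.x  # estimated exponential parameters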
APA, Harvard, Vancouver, ISO, and other styles
9

Gaskell, Jack G., Matthew McGilvray, and David R. H. Gillespie. "A Discrete Element Methods-Based Model for Particulate Deposition and Rebound in Gas Turbines." In ASME Turbo Expo 2021: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/gt2021-58477.

Full text
Abstract:
The secondary air system and cooling passages of gas turbine components are prone to blockage from sand and dust. Prediction of deposition requires accurate models of particle transport and of the thermo-mechanical interaction with walls. Bounce-stick models predict whether a particle will bounce, stick, or shatter upon impact and calculate rebound trajectories where applicable. This paper proposes an explicit bounce-stick model that uses analytical solutions of adhesion, plastic deformation and viscoelasticity to time-resolve the collision physics. The Discrete Element Methods (DEM) model shows good agreement with experimental studies of micron- and millimetre-scale particle collisions while requiring minimal parametric fitting. Non-physical values of mechanical properties, artifices of previous models, are thus eliminated. Further comparison is made to the best-resolved and industry-standard semi-empirical models available in the literature. In addition to coefficients of restitution, other variables crucial to accurately modelling rebound, for example angular velocity, are predicted. The time-stepping explicit approach allows full coupling between internal processes during contact, and shows that particle deformation and hence viscoelasticity play a significant role in adhesion. Modelling time-dependent internal variables such as the wall-normal force creates functionality for future modelling of arbitrarily shaped particles, the physics of which has been shown by previous work to differ significantly from that of spheres. To date these effects have not been captured well by higher-level energy-based models.
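The contact laws in the paper (adhesion, plasticity, viscoelasticity) are far more elaborate than anything shown here, but the explicit time-resolved idea can be illustrated with the simplest possible case: a linear spring-dashpot particle-wall impact integrated in time, with all parameter values hypothetical.

# Explicit time integration of a single particle-wall impact.
m, k, c = 1e-9, 1e3, 5e-4  # mass [kg], stiffness [N/m], damping [N s/m] (hypothetical)
dt = 1e-9                  # time step far below the contact period pi*sqrt(m/k)
x, v = 0.0, -0.5           # wall at x = 0; x < 0 means overlap
v_in = abs(v)
x += v * dt                # first step brings the particle into contact
while x < 0.0:
    f = -k * x - c * v     # spring-dashpot normal force during overlap
    v += (f / m) * dt      # semi-implicit Euler velocity update
    x += v * dt
print("coefficient of restitution ~", v / v_in)

Replacing this force law with adhesive and viscoelastic terms, as the paper does, lets the same time-stepping loop decide between bounce and stick and yield the rebound kinematics.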
APA, Harvard, Vancouver, ISO, and other styles
10

Albanesi, Luca, Nicolò Damiani, Carlo Filippo Manzini, and Paolo Morandi. "EXPERIMENTAL AND NUMERICAL IN-PLANE SEISMIC BEHAVIOUR OF AN INNOVATIVE STEEL REINFORCEMENT SYSTEM FOR URM WALLS." In 2nd Croatian Conference on Earthquake Engineering. University of Zagreb Faculty of Civil Engineering, 2023. http://dx.doi.org/10.5592/co/2crocee.2023.122.

Full text
Abstract:
An experimental campaign, followed by numerical research, aimed at evaluating the in-plane seismic behaviour of an innovative steel modular system (named "Resisto 5.9", designed by Progetto Sisma s.r.l.) for the reinforcement of load-bearing masonry walls has been performed at the EUCENTRE Foundation in Pavia. Different masonry typologies, selected among the most common solutions in existing Italian buildings, were considered in this study. In this paper, the results related to a solid clay brick masonry, assembled using lime mortar in a "header bond" pattern, are reported. A complete mechanical characterization of the units, mortars, masonry typologies and strengthening system components (i.e. steel elements and anchors) has been carried out. In-plane cyclic pseudo-static tests were then performed on full-scale specimens to investigate the influence of the proposed reinforcement system on the lateral in-plane response of the walls, compared to their unreinforced condition. The main parameters characterizing the cyclic behaviour of the masonry piers, i.e. elastic stiffness, lateral strength and displacement capacity, were analysed in relation to the observed damage mechanism. The numerical part of the research consisted of a series of parametric non-linear analyses on advanced discontinuous models based on the Distinct Element Method (DEM). Different wall dimensions, vertical load levels and boundary conditions, in addition to those tested experimentally, were considered. Moreover, the numerical campaign was extended by varying the bond pattern and the mechanical properties with respect to the experimentally tested solutions. In this paper, the results of the experimental tests on solid brick masonry, together with the calibration of the related numerical DEM models, are reported.
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Non-parametric DEA method"

1

Ludeña, Carlos E. Agricultural Productivity Growth, Efficiency Change and Technical Progress in Latin America and the Caribbean. Inter-American Development Bank, 2010. http://dx.doi.org/10.18235/0010939.

Full text
Abstract:
This paper analyzes total factor productivity growth in agriculture in Latin America and the Caribbean between 1961 and 2007 employing the Malmquist Index, a non-parametric methodology that uses data envelopment analysis (DEA) methods. The results show that among developing regions, Latin America and the Caribbean shows the highest agricultural productivity growth. The highest growth within the region has occurred in the last two decades, especially due to improvements in efficiency and the introduction of new technologies. Within the region, land-abundant countries consistently outperform land-constrained countries. Within agriculture, crops and non-ruminant sectors have displayed the strongest growth between 1961 and 2001, and ruminant production performed the worst. Additional analysis of the cases of Brazil and Cuba illustrates potential effects of policies and external shocks on agricultural productivity; policies that do not discriminate against agricultural sectors and that remove price and production distortions may help improve productivity growth.
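For reference, the DEA-based Malmquist index between periods t and t+1 is the geometric mean of two distance-function ratios, and it decomposes exactly into the two drivers the abstract highlights:

M(x_t, y_t, x_{t+1}, y_{t+1}) = [ (D_t(x_{t+1}, y_{t+1}) / D_t(x_t, y_t)) * (D_{t+1}(x_{t+1}, y_{t+1}) / D_{t+1}(x_t, y_t)) ]^(1/2) = EC * TC

EC = D_{t+1}(x_{t+1}, y_{t+1}) / D_t(x_t, y_t)
TC = [ (D_t(x_{t+1}, y_{t+1}) / D_{t+1}(x_{t+1}, y_{t+1})) * (D_t(x_t, y_t) / D_{t+1}(x_t, y_t)) ]^(1/2)

Here EC captures efficiency change (catching up with the frontier) and TC technical change (the shift of the frontier itself), corresponding to the "improvements in efficiency and the introduction of new technologies" noted above; values above one indicate improvement. Each distance function D is itself computed by a DEA linear program, which keeps the whole index non-parametric.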
APA, Harvard, Vancouver, ISO, and other styles
