Academic literature on the topic 'IDEAL DISTANCE MINIMIZATION METHOD'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'IDEAL DISTANCE MINIMIZATION METHOD.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "IDEAL DISTANCE MINIMIZATION METHOD"

1

Khan, Abdullah, Hashim Hizam, Noor Izzri Abdul-Wahab, and Mohammad Lutfi Othman. "Solution of Optimal Power Flow Using Non-Dominated Sorting Multi Objective Based Hybrid Firefly and Particle Swarm Optimization Algorithm." Energies 13, no. 16 (2020): 4265. http://dx.doi.org/10.3390/en13164265.

Abstract:
In this paper, a multi-objective hybrid firefly and particle swarm optimization (MOHFPSO) algorithm was proposed for different multi-objective optimal power flow (MOOPF) problems. Optimal power flow (OPF) was formulated as a non-linear problem with various objectives and constraints. The Pareto optimal front was obtained by using non-dominated sorting and crowding distance methods. Finally, an optimal compromise solution was selected from the Pareto optimal set by applying an ideal distance minimization method. The efficiency of the proposed MOHFPSO technique was tested on standard IEEE 30-bus and IEEE 57-bus test systems with various conflicting objectives. Simulation results were also compared with non-dominated sorting based multi-objective particle swarm optimization (MOPSO) and different optimization algorithms reported in the current literature. The achieved results revealed the potential of the proposed algorithm for MOOPF problems.
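The selection step named here is compact enough to sketch. A minimal Python/NumPy reading of ideal distance minimization, assuming all objectives are to be minimized and min-max normalization; the paper's exact normalization and tie-breaking may differ:

```python
import numpy as np

def ideal_distance_compromise(objectives):
    """Pick a compromise solution from a Pareto set by minimizing
    the Euclidean distance to the ideal point (componentwise best)."""
    f = np.asarray(objectives, dtype=float)        # (n_solutions, n_objectives)
    f_min, f_max = f.min(axis=0), f.max(axis=0)    # ideal and nadir estimates
    norm = (f - f_min) / (f_max - f_min + 1e-12)   # scale each objective to [0, 1]
    return int(np.argmin(np.linalg.norm(norm, axis=1)))

# Toy Pareto front (cost vs. emissions); index 2 balances both objectives.
print(ideal_distance_compromise([[0.10, 9.0], [0.12, 6.5],
                                 [0.16, 5.0], [0.25, 4.6]]))
```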
2

Restifo, Richard J. "The Pedicled Robertson Mammaplasty: Minimization of Complications in Obese Patients With Extreme Macromastia." Aesthetic Surgery Journal 40, no. 12 (2020): NP666–NP675. http://dx.doi.org/10.1093/asj/sjaa073.

Abstract:
Background Breast reduction for extreme macromastia in obese patients is a potentially high-risk endeavor. Free nipple grafting as well as a variety of pedicled techniques have been advocated for large reductions in obese patients, but the number of different approaches suggests that no single method is ideal. This paper suggests the Robertson Mammaplasty, an inferior pedicle technique characterized by a curvilinear skin extension onto the pedicle, as a potentially favorable approach to this clinical situation. Objectives The author sought to determine the safety of the Pedicled Robertson Mammaplasty for extreme macromastia in obese patients. Methods The records of a single surgeon's practice over a 15-year period were retrospectively reviewed. Inclusion criteria were a Robertson Mammaplasty performed with a >3000-g total resection and a patient weight at least 20% above ideal body weight. Records were reviewed for patient characteristics, operative times, and complications. Results The review yielded 34 bilateral reduction patients who met the inclusion criteria. The mean resection weight was 1859.2 g per breast, the mean body mass index was 36.4 kg/m2, and the mean sternal notch-to-nipple distance was 41.4 cm. Mean operative time was 122 minutes. There were no cases of nipple necrosis and no major complications that required reoperation under general anesthesia. A total of 26.4% of patients had minor complications that required either local wound care or small office procedures, and 4.4% received small revisions under local anesthesia. Conclusions The Pedicled Robertson Mammaplasty is a fast and safe operation that yields good aesthetic results and relatively few complications in the high-risk group of obese patients with extreme macromastia. Level of Evidence: 4
3

Aljohani, Khalid. "Optimizing the Distribution Network of a Bakery Facility: A Reduced Travelled Distance and Food-Waste Minimization Perspective." Sustainability 15, no. 4 (2023): 3654. http://dx.doi.org/10.3390/su15043654.

Abstract:
There are many logistics nuances specific to bakery factories, making the design of their distribution network especially complex. In particular, bakery products typically have a shelf life of under a week. To ensure that products are delivered to end-customers with freshness, speed, quality, health, and safety prioritized, the distribution network, facility location, and ordering system must be optimally designed. This study presents a multi-stage framework for a bakery factory comprising a selection methodology for an optimum facility location, an effective distribution network for delivery operations, and a practical ordering system used by related supply chain actors. The operations function and distribution network are optimized using a multi-criteria decision-making method comprising the Analytic Hierarchy Process (AHP) to establish optimization criteria and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) to select the optimal facility location. The optimal distribution network strategy was found using an optimization technique. This framework was applied to a real-life problem for a bakery supply chain in the Western Region, Saudi Arabia. Using a real-life, quantitative dataset and incorporating qualitative feedback from key stakeholders in the supply chain, the developed framework enabled a 14% reduction in overall distribution costs, a 16% decrease in total travel distance, and a 22% decrease in estimated food waste. This result was primarily achieved by solving the facility location problem in favor of operating two factories without dedicated storage facilities and implementing the distribution network strategy of direct shipment of products from the bakery to customers.
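As a rough companion to the AHP-TOPSIS pipeline described above, here is a minimal TOPSIS sketch in Python; the weights stand in for AHP-derived ones, and the criteria and values are invented for illustration, not taken from the paper:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness of each alternative to the ideal solution (higher is better).

    matrix : (alternatives, criteria) decision matrix
    weights: criterion weights summing to 1 (e.g., from AHP)
    benefit: per-criterion flag, True to maximize, False to minimize
    """
    m = np.asarray(matrix, dtype=float)
    v = np.asarray(weights) * m / np.linalg.norm(m, axis=0)  # weighted, normalized
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal point
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal point
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Three hypothetical sites scored on cost (minimize) and coverage (maximize).
scores = topsis([[200, 0.8], [150, 0.6], [180, 0.9]],
                weights=[0.4, 0.6], benefit=[False, True])
print(scores.round(3), scores.argmax())  # highest closeness wins
```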
4

Göçmen Polat, Elifcan. "Distribution Centre Location Selection for Disaster Logistics with Integrated Goal Programming-AHP based TOPSIS Method at the City Level." Afet ve Risk Dergisi 5, no. 1 (2022): 282–96. http://dx.doi.org/10.35341/afet.1071343.

Abstract:
The importance of disaster logistics and its share in the logistics sector are increasing significantly. Most disasters are difficult to predict; therefore, a set of measures is necessary to reduce the risks. Thus, disaster logistics needs to be designed with both pre-disaster and post-disaster measures. Such disasters are experienced intensely in Turkey, which makes the importance of these measures more evident. Therefore, accurate models are required to develop an effective disaster preparedness system. One of the most important decisions for increasing preparedness is locating the centres that handle material inventory. In this context, this paper analyses the response phase by designing the disaster distribution centres in Turkey at the provincial level. An integration of the AHP (Analytical Hierarchy Process) based TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) method and a goal programming model is used to decide among alternative locations of distribution centres. The TOPSIS method is employed for ranking the locations based on hazard scores, total area, population, and distance to centre. Two conflicting objectives are proposed in the goal programming formulation: maximization of the TOPSIS scores and minimization of the number of distribution centres covering all demands (a set covering model). Although Gecimli has the highest priority in the TOPSIS ranking with a score of 0.8, Altincevre (0.77) and Buzlupınar (0.75) satisfy both the TOPSIS score and the coverage of the demand nodes. The results from this paper confirm that the computational results provide disaster prevention insights, especially in regions with limited data.
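The paper couples TOPSIS scores with a goal programming set covering model. As a loose stand-in for that covering logic, the sketch below uses a simple greedy heuristic rather than the authors' exact formulation; the coverage sets are invented for illustration, and only the three place names and scores come from the abstract:

```python
def greedy_cover(candidates, demand_nodes):
    """Greedily pick centres until all demand nodes are covered,
    preferring a high TOPSIS score covering many still-uncovered nodes.

    candidates: dict name -> (topsis_score, set of covered demand nodes)
    """
    uncovered, chosen = set(demand_nodes), []
    while uncovered and candidates:
        name, (score, cov) = max(
            candidates.items(),
            key=lambda kv: kv[1][0] * len(kv[1][1] & uncovered))
        if not cov & uncovered:
            break                      # nothing left covers new demand
        chosen.append(name)
        uncovered -= cov
        del candidates[name]
    return chosen

sites = {"Gecimli":    (0.80, {1, 2}),        # coverage sets are hypothetical
         "Altincevre": (0.77, {2, 3, 4}),
         "Buzlupinar": (0.75, {1, 5})}
print(greedy_cover(sites, demand_nodes={1, 2, 3, 4, 5}))
# -> ['Altincevre', 'Buzlupinar']: the top TOPSIS score alone need not win
```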
5

Lopez-Perez, Jose J., Uriel H. Hernandez-Belmonte, Juan-Pablo Ramirez-Paredes, Marco A. Contreras-Cruz, and Victor Ayala-Ramirez. "Distributed Multirobot Exploration Based on Scene Partitioning and Frontier Selection." Mathematical Problems in Engineering 2018 (June 20, 2018): 1–17. http://dx.doi.org/10.1155/2018/2373642.

Abstract:
In mobile robotics, the exploration task consists of navigating through an unknown environment and building a representation of it. The mobile robot community has developed many approaches to solve this problem. These methods are mainly based on two key ideas: the selection of promising regions to explore, and the minimization of a cost function involving the distance traveled by the robots, the time it takes them to finish the exploration, and other factors. One option for solving the exploration problem is the use of multiple robots, which reduces the time needed for the task and adds fault tolerance to the system. We propose a new method to explore unknown areas, using a scene partitioning scheme and assigning weights to the frontiers between explored and unknown areas. Energy consumption is always a concern during exploration; for this reason, our method is a distributed algorithm, which helps to reduce the number of communications between robots. By using this approach, we also effectively reduce the time needed to explore unknown regions and the distance traveled by each robot. We performed comparisons of our approach with state-of-the-art methods, obtaining a visible advantage over other works.
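A hedged sketch of the frontier-assignment idea the abstract describes, trading travel distance against a frontier's weight; the cost function, the 0.5 weighting, and the info-gain numbers are illustrative assumptions, not the paper's actual scheme:

```python
import math

def pick_frontier(robot_pos, frontiers):
    """Assign the robot the frontier with the lowest weighted cost.

    frontiers: (x, y, info_gain) tuples; info_gain stands in for the
    frontier weight (e.g., expected newly observed area).
    """
    def cost(f):
        x, y, gain = f
        travel = math.hypot(x - robot_pos[0], y - robot_pos[1])
        return travel - 0.5 * gain     # shorter trips and richer frontiers win

    return min(frontiers, key=cost)

# A distant but information-rich frontier beats a close, poor one here.
print(pick_frontier((0.0, 0.0), [(4.0, 3.0, 20.0), (1.0, 1.0, 4.0)]))
```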
6

Leong, Shu Xuan, Tengku Juhana Tengku Hashim, and Muhamad Najib Kamarudin. "Optimal location and sizing of distributed generation to minimize losses using whale optimization algorithm." Indonesian Journal of Electrical Engineering and Computer Science 29, no. 1 (2022): 15. http://dx.doi.org/10.11591/ijeecs.v29.i1.pp15-23.

Abstract:
Conventional power plants often introduce power quality concerns into the network, for instance high power losses and poor voltage profiles, caused by plant locations sited far away from loads. With proper planning and systematic allocation, the introduction of distributed generation (DG) into the network will enhance the performance and condition of the power system. This paper utilizes the whale optimization algorithm (WOA) to search for the ideal location and size of DG while ensuring the reduction of power losses and the minimization of voltage deviation. WOA is implemented on the IEEE 33-bus radial distribution system (RDS) using MATPOWER and MATLAB for the cases of no DG, one DG, and two DGs installed. The outcomes obtained with WOA were compared with other well-known optimization methods, and WOA proved competent, with the optimal locations found by WOA and the other methods almost identical. The best result was the system with two DGs installed, since its losses were the lowest compared with the one-DG and no-DG cases.
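For readers unfamiliar with WOA, below is a compact sketch of the standard algorithm (Mirjalili and Lewis, 2016) minimizing a toy function; in the DG application, the objective would instead evaluate power flow losses and voltage deviation for a candidate (location, size) pair, which this sketch does not model:

```python
import numpy as np

rng = np.random.default_rng(42)

def woa(obj, dim, bounds, n_whales=20, iters=200):
    """Whale Optimization Algorithm sketch: minimize obj over a box."""
    low, high = bounds
    X = rng.uniform(low, high, (n_whales, dim))
    best = min(X, key=obj).copy()
    for t in range(iters):
        a = 2.0 * (1 - t / iters)                 # control parameter, 2 -> 0
        for i in range(n_whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random(dim)
            if rng.random() < 0.5:                # encircling / searching move
                leader = best if abs(A) < 1 else X[rng.integers(n_whales)]
                X[i] = leader - A * np.abs(C * leader - X[i])
            else:                                 # logarithmic spiral toward best
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
            X[i] = np.clip(X[i], low, high)
            if obj(X[i]) < obj(best):
                best = X[i].copy()
    return best

# Toy objective; a DG study would score candidates via a load-flow solver.
print(woa(lambda x: float(np.sum(x ** 2)), dim=2, bounds=(-10.0, 10.0)))
```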
7

He, Chun, Ke Guo, and Huayue Chen. "An Improved Image Filtering Algorithm for Mixed Noise." Applied Sciences 11, no. 21 (2021): 10358. http://dx.doi.org/10.3390/app112110358.

Abstract:
In recent years, image filtering has been a hot research direction in the field of image processing. Experts and scholars have proposed many methods for noise removal in images, and these methods have achieved quite good denoising results. However, most methods address a single noise type, such as Gaussian noise, salt-and-pepper noise, or multiplicative noise. For mixed noise removal, such as salt-and-pepper noise + Gaussian noise, although some methods are currently available, the denoising effect is not ideal, and there is still much room for improvement. To solve this problem, this paper proposes a filtering algorithm for mixed salt-and-pepper + Gaussian noise that combines an improved median filtering algorithm, an improved wavelet threshold denoising algorithm, and an improved Non-local Means (NLM) algorithm. The algorithm makes full use of the advantages of the median filter in removing salt-and-pepper noise and of the good performance of the wavelet threshold denoising algorithm and the NLM algorithm in filtering Gaussian noise. First, we improved the three algorithms individually, and then combined them in a defined sequence to obtain a new method for removing mixed noise. Specifically, we adjusted the window size of the median filtering algorithm and improved its method of detecting noise points. We improved the threshold function of the wavelet threshold algorithm, analyzed its relevant mathematical characteristics, and finally gave an adaptive threshold. For the NLM algorithm, we improved its Euclidean distance function and the corresponding distance weight function. To test the denoising effect of this method, salt-and-pepper + Gaussian noise at different noise levels was added to the test images, and several state-of-the-art denoising algorithms were selected for comparison, including K-Singular Value Decomposition (KSVD), Non-locally Centralized Sparse Representation (NCSR), Structured Overcomplete Sparsifying Transform Model with Block Cosparsity (OCTOBOS), Trilateral Weighted Sparse Coding (TWSC), Block Matching and 3D Filtering (BM3D), and Weighted Nuclear Norm Minimization (WNNM). Experimental results show that our proposed algorithm achieves a Peak Signal-to-Noise Ratio (PSNR) about 2–7 dB higher than the above algorithms, and also performs better on Root Mean Square Error (RMSE), Structural Similarity (SSIM), and Feature Similarity (FSIM). In general, our algorithm has better denoising performance, better restoration of image details and edge information, and stronger robustness than the above-mentioned algorithms.
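As a much simplified illustration of the first stage only (impulse-noise detection followed by selective median replacement), consider the sketch below; the intensity thresholds and fixed 3x3 window are assumptions, whereas the paper's detector and window sizing are adaptive:

```python
import numpy as np
from scipy.ndimage import median_filter

def remove_salt_pepper(img, low=5, high=250):
    """Replace only suspected salt-and-pepper pixels with the local median."""
    med = median_filter(img, size=3)       # 3x3 median of the whole image
    noisy = (img <= low) | (img >= high)   # crude extreme-intensity detector
    out = img.copy()
    out[noisy] = med[noisy]                # clean pixels stay untouched
    return out

img = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
img[np.random.random(img.shape) < 0.05] = 255    # inject 5% "salt"
print(remove_salt_pepper(img).shape)
```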
8

Jawak, Shridhar D., Sagar F. Wankhede, Alvarinho J. Luis, and Keshava Balakrishna. "Impact of Image-Processing Routines on Mapping Glacier Surface Facies from Svalbard and the Himalayas Using Pixel-Based Methods." Remote Sensing 14, no. 6 (2022): 1414. http://dx.doi.org/10.3390/rs14061414.

Abstract:
Glacier surface facies are valuable indicators of changes experienced by a glacial system. The interplay of accumulation and ablation facies, followed by intermixing with dust and debris, as well as the local climate, all induce observable and mappable changes on the supraglacial terrain. In the absence or lag of continuous field monitoring, remote sensing observations become vital for maintaining a constant supply of measurable data. However, remote satellite observations suffer from atmospheric effects, resolution disparity, and use of a multitude of mapping methods. Efficient image-processing routines are, hence, necessary to prepare and test the derivable data for mapping applications. The existing literature provides an application-centric view for selection of image processing schemes. This can create confusion, as it is not clear which method of atmospheric correction would be ideal for retrieving facies spectral reflectance, nor are the effects of pansharpening examined on facies. Moreover, with a variety of supervised classifiers and target detection methods now available, it is prudent to test the impact of variations in processing schemes on the resultant thematic classifications. In this context, the current study set its experimental goals. Using very-high-resolution (VHR) WorldView-2 data, we aimed to test the effects of three common atmospheric correction methods, viz. Dark Object Subtraction (DOS), Quick Atmospheric Correction (QUAC), and Fast Line-of-Sight Atmospheric Analysis of Hypercubes (FLAASH); and two pansharpening methods, viz. Gram–Schmidt (GS) and Hyperspherical Color Sharpening (HCS), on thematic classification of facies using 12 supervised classifiers. The conventional classifiers included: Mahalanobis Distance (MHD), Maximum Likelihood (MXL), Minimum Distance to Mean (MD), Spectral Angle Mapper (SAM), and Winner Takes All (WTA). The advanced/target detection classifiers consisted of: Adaptive Coherence Estimator (ACE), Constrained Energy Minimization (CEM), Matched Filtering (MF), Mixture-Tuned Matched Filtering (MTMF), Mixture-Tuned Target-Constrained Interference-Minimized Filter (MTTCIMF), Orthogonal Space Projection (OSP), and Target-Constrained Interference-Minimized Filter (TCIMF). This experiment was performed on glaciers at two test sites, Ny-Ålesund, Svalbard, Norway; and Chandra–Bhaga basin, Himalaya, India. The overall performance suggested that the FLAASH correction delivered realistic reflectance spectra, while DOS delivered the least realistic. Spectra derived from HCS sharpened subsets seemed to match the average reflectance trends, whereas GS reduced the overall reflectance. WTA classification of the DOS subsets achieved the highest overall accuracy (0.81). MTTCIMF classification of the FLAASH subsets yielded the lowest overall accuracy of 0.01. However, FLAASH consistently provided better performance (less variable and generally accurate) than DOS and QUAC, making it the more reliable and hence recommended algorithm. While HCS-pansharpened classification achieved a lower error rate (0.71) in comparison to GS pansharpening (0.76), neither significantly improved accuracy nor efficiency. The Ny-Ålesund glacier facies were best classified using MXL (error rate = 0.49) and WTA classifiers (error rate = 0.53), whereas the Himalayan glacier facies were best classified using MD (error rate = 0.61) and WTA (error rate = 0.45). 
The final comparative analysis of classifiers based on the total error rate across all atmospheric corrections and pansharpening methods yielded the following reliability order: MXL > WTA > MHD > ACE > MD > CEM = MF > SAM > MTMF = TCIMF > OSP > MTTCIMF. The findings of the current study suggested that for VHR visible near-infrared (VNIR) mapping of facies, FLAASH was the best atmospheric correction, while MXL may deliver reliable thematic classification. Moreover, an extensive account of the varying exertions of each processing scheme is discussed, and could be transferable when compared against other VHR VNIR mapping methods.
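Of the twelve classifiers compared, the Spectral Angle Mapper is the easiest to state compactly: assign each pixel to the class whose reference spectrum makes the smallest angle with it. A minimal sketch, with invented spectra standing in for facies signatures:

```python
import numpy as np

def sam_classify(pixels, class_means):
    """Spectral Angle Mapper over (n_pixels, n_bands) and (n_classes, n_bands)."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    m = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ m.T, -1.0, 1.0))   # (n_pixels, n_classes)
    return angles.argmin(axis=1)                      # smallest angle wins

# Hypothetical 4-band signatures, e.g., clean ice vs. debris-covered ice.
means = np.array([[0.90, 0.80, 0.70, 0.20], [0.30, 0.35, 0.40, 0.45]])
pix = np.array([[0.85, 0.75, 0.65, 0.25], [0.28, 0.30, 0.42, 0.50]])
print(sam_classify(pix, means))   # -> [0 1]
```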
9

Mohammed, Abbas H., and Khattab S. Abdul-Razzaq. "Optimum Design of Steel Trapezoidal Box-Girders Using Finite Element Method." International Journal of Engineering & Technology 7, no. 4.20 (2018): 325. http://dx.doi.org/10.14419/ijet.v7i4.20.26130.

Abstract:
The goal of structural design is to select member sizes that yield the optimal proportioning of the overall structural geometry. Steel trapezoidal box-girders have been used widely in various engineering fields. The objective of this study is to develop a three-dimensional finite element model for the size optimization of steel trapezoidal box-girders. The finite element software package ANSYS was used to determine the optimal cross-section dimensions for the steel trapezoidal box-girder. Two objective functions were considered in this study: minimization of the strain energy and minimization of the volume. The design variables are the width of the top flange, the width of the bottom flange, the thickness of the top flange, the thickness of the bottom flange, the height of the girder, and the thickness of the webs. The constraints considered in this study are the normal and shear stresses in the steel girder and the displacement at mid-length of the girder. Optimization results show that the optimal cross-sectional area for strain energy minimization is greater than that for volume minimization by 6%. Since the minimum cross-section is the most economical structure, volume minimization is the more relevant objective for steel girder optimization.
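The study's optimization runs inside ANSYS on a full 3D model. Purely as a hedged illustration of size optimization over the same six design variables, the toy sketch below minimizes girder volume under a bending-stress constraint using SciPy; the loads, bounds, and simplified section formulas are all invented for illustration and do not reproduce the authors' model:

```python
import numpy as np
from scipy.optimize import minimize

L, M_LOAD, SIGMA_ALLOW = 20.0, 2.0e6, 250e6    # span [m], moment [N*m], stress [Pa]

def section(x):
    """Idealized box section, x = [b_top, t_top, b_bot, t_bot, h, t_web]."""
    b_t, t_t, b_b, t_b, h, t_w = x
    parts = [(b_t * t_t, t_b + h + t_t / 2),   # (area, centroid height): top flange
             (b_b * t_b, t_b / 2),             # bottom flange
             (2 * h * t_w, t_b + h / 2)]       # two webs
    A = sum(a for a, _ in parts)
    y_bar = sum(a * y for a, y in parts) / A
    I = (b_t * t_t ** 3 + b_b * t_b ** 3 + 2 * t_w * h ** 3) / 12 \
        + sum(a * (y - y_bar) ** 2 for a, y in parts)
    c = max(y_bar, t_b + h + t_t - y_bar)      # distance to extreme fibre
    return A, I, c

volume = lambda x: section(x)[0] * L
stress_ok = lambda x: SIGMA_ALLOW - M_LOAD * section(x)[2] / section(x)[1]

res = minimize(volume, x0=[0.4, 0.02, 0.4, 0.02, 0.8, 0.01], method="SLSQP",
               bounds=[(0.2, 1.0), (0.01, 0.05)] * 2 + [(0.5, 2.0), (0.008, 0.03)],
               constraints={"type": "ineq", "fun": stress_ok})
print(res.x.round(4), round(volume(res.x), 4))
```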
10

Rozylowicz, Laurentiu, Florian P. Bodescu, Cristiana M. Ciocanea, et al. "Empirical analysis and modeling of Argos Doppler location errors in Romania." PeerJ 7 (January 31, 2019): e6362. http://dx.doi.org/10.7717/peerj.6362.

Abstract:
Background Advances in wildlife tracking technology have allowed researchers to understand the spatial ecology of many terrestrial and aquatic animal species. Argos Doppler is a technology that is widely used for wildlife tracking owing to the small size and low weight of the Argos transmitters, which allow them to be fitted to small-bodied species. The longer lifespan of the Argos units in comparison to units outfitted with miniaturized global positioning system (GPS) technology has also favored their use. In practice, large Argos location errors often occur due to communication conditions such as transmitter settings, the local environment, and the behavior of the tracked individual. Methods Considering the geographic specificity of errors and the lack of benchmark studies in Eastern Europe, the research objectives were: (1) to evaluate the accuracy of Argos Doppler technology under various environmental conditions in Romania, (2) to investigate the effectiveness of straightforward destructive filters for improving Argos Doppler data quality, and (3) to provide guidelines for processing Argos Doppler wildlife monitoring data. The errors associated with Argos locations in four geographic locations in Romania were assessed during static, low-speed, and high-speed tests. The effectiveness of the Douglas Argos distance angle filter algorithm was then evaluated to ascertain its effect on the minimization of localization errors. Results Argos locations received in the tests had larger associated horizontal errors than those indicated by the operator of the Argos system, including under ideal reception conditions. Positional errors were similar to those obtained in other studies outside of Europe. The errors were anisotropic, with larger longitudinal errors for the vast majority of the data. Errors were mostly related to the speed of the Argos transmitter at the time of reception, but other factors such as topographical conditions and the orientation of the antenna at the time of transmission also contributed to receiving low-quality data. The Douglas Argos filter successfully excluded the largest errors while retaining a large amount of data when the threshold was set to the local scale (two km). Discussion Filter selection requires knowledge about the movement patterns and behavior of the species of interest, and the parametrization of the selected filter typically requires a trial-and-error approach. Selecting the proper filter reduces the errors while retaining a large amount of data. However, the post-processed data typically include large positional errors; thus, we recommend incorporating Argos error metrics (e.g., error ellipses) or using complex modeling approaches when working with filtered data.
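The Douglas Argos distance angle filter evaluated in the paper is more involved than can be shown briefly, but the destructive-filter idea generalizes: drop fixes that imply implausible movement. A minimal sketch of a speed-threshold filter; the 70 km/h limit and the toy track are illustrative assumptions, not values from the study:

```python
import math

def speed_filter(fixes, vmax_kmh=70.0):
    """Drop fixes implying travel faster than vmax_kmh from the last kept fix.

    fixes: chronological list of (timestamp_s, lat_deg, lon_deg).
    """
    def haversine_km(p, q):
        la1, lo1, la2, lo2 = map(math.radians, (p[1], p[2], q[1], q[2]))
        a = (math.sin((la2 - la1) / 2) ** 2
             + math.cos(la1) * math.cos(la2) * math.sin((lo2 - lo1) / 2) ** 2)
        return 2 * 6371.0 * math.asin(math.sqrt(a))   # Earth radius ~6371 km

    kept = [fixes[0]]
    for fix in fixes[1:]:
        dt_h = (fix[0] - kept[-1][0]) / 3600.0
        if dt_h > 0 and haversine_km(kept[-1], fix) / dt_h <= vmax_kmh:
            kept.append(fix)          # plausible displacement; keep the fix
    return kept

track = [(0, 45.00, 25.00), (3600, 45.01, 25.01), (7200, 46.50, 27.00)]
print(len(speed_filter(track)))  # last fix implies ~230 km/h and is dropped -> 2
```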
More sources