
Dissertations / Theses on the topic 'Optimizing'

Consult the top 50 dissertations / theses for your research on the topic 'Optimizing.'

1

Pagonis, Gust W. "Optimizing strategic sealift." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1995. http://handle.dtic.mil/100.2/ADA304836.

2

Mansour, Ragaa Taha Ahmed. "Optimizing IVF results." Maastricht : Universiteit Maastricht ; University Library, Maastricht University [Host], 2003. http://arno.unimaas.nl/show.cgi?fid=6122.

3

Springer, Alexander D. "Optimizing cycling power." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/105573.

Abstract:
Thesis: S.B., Massachusetts Institute of Technology, Department of Mechanical Engineering, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 29).
In this study we determine a viable bioenergetic model for power allocation during a cycling race. Various models have been proposed to address power allocation in races, with two rising above the others: the Morton-Margaria Three Tank model and the Skiba Energy Balance model. The energy balance model was implemented in MATLAB and compared against the gold-standard implementation in Golden Cheetah to model the depletion of an athlete's energy over the course of a ride. The implementation was successful, as verified by ride data from a cyclist in the 2014 Tour de France. Additionally, the model was tested with sample power profiles in order to understand the depletion of energy over the course of a ride. Two key findings emerged from the investigation. First, we require a better account of exhaustion in the energy balance model, which can be achieved by weighting the time spent below critical power relative to the time spent above critical power; a cyclist becomes more exhausted by efforts at higher power outputs than is offset by recovery at an effort below critical power. Second, energy balance models should use a variable time constant, as rides and races have highly variable recovery periods below critical power, which affects an athlete's ability to reconstitute their energy. Use of a variable time constant could also address the weighting of efforts below critical power identified in the first finding.
by Alexander D. Springer.
S.B.
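A minimal sketch of a Skiba-style W' balance model like the one described above (the thesis implemented it in MATLAB; the critical power, W', time constant, and power profile here are invented for illustration): depletion above critical power is linear, and recovery below it is exponential with time constant tau, the quantity the author argues should be variable.

```python
import math

def w_prime_balance(power, cp, w_prime, tau=300.0, dt=1.0):
    """Discrete-time W' balance: deplete above CP, recover below it.

    power   : power samples [W], one per dt seconds
    cp      : critical power [W]
    w_prime : anaerobic work capacity W' [J]
    tau     : recovery time constant [s]; fixed here, though the thesis
              argues it should vary with the depth of recovery
    """
    balance, history = w_prime, []
    for p in power:
        if p > cp:
            balance -= (p - cp) * dt  # expend W' above CP
        else:
            # exponential reconstitution toward a full W' below CP
            balance += (w_prime - balance) * (1.0 - math.exp(-dt / tau))
        history.append(balance)
    return history

# Hypothetical interval: 5 min at 300 W against a 250 W CP, then 5 min easy
profile = [300] * 300 + [150] * 300
print(w_prime_balance(profile, cp=250, w_prime=20000)[-1])
```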
4

Li, Zhipeng (Zhipeng Simon). "Optimizing Order Promising." Thesis, Massachusetts Institute of Technology, 2007. http://hdl.handle.net/1721.1/40116.

Abstract:
Thesis (M. Eng. in Logistics)--Massachusetts Institute of Technology, Engineering Systems Division, 2007.
Includes bibliographical references (leaves 86-89).
Online purchasing has become popular with the growth of e-business. Retailers ordering online receive an exact delivery date for goods, helping them manage their sales operations, so suppliers must stay competent at order promising to attract customers in an increasingly competitive market. Generally, Order Promising means that a supplier receiving an order must decide whether to accept it; if accepted, the supplier determines the delivery date and returns the necessary information to the ordering customer. Optimizing Order Promising (OOP) is Order Promising (OP) that has been optimized. This thesis examines OP and OOP and summarizes the characteristics and differences of current OP software products on the basis of interviews and an investigation of existing OP software suppliers: i2 Technologies, Oracle and SAP. Backed by a thorough analysis of a particular case study company, the thesis discusses the workflow and model of OOP, incorporating the author's own ideas for improving existing OP workflows. A company can add many new functions to the OOP model designed in this thesis with appropriate adjustments to its existing OP workflows and systems. For example, different customers can be managed in classes according to historical sales; customer trust can be increased through Customer Allocation; every deal of the company can be guaranteed to be profitable; and important customers will not be neglected in favor of unimportant ones. Moreover, to deal with the disruptions that have occurred frequently in recent years, the thesis designs an order promising process for emergencies for manufacturers of public utilities, ensuring that a company fulfills its social responsibility while earning profits.
by Zhipeng Li.
M.Eng. in Logistics
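The accept-or-date decision at the heart of order promising can be sketched as a toy available-to-promise (ATP) check (a hedged illustration, not the thesis's model; the per-period supply figures are assumptions):

```python
def promise_order(quantity, atp_by_period):
    """Return the earliest period whose cumulative available-to-promise
    stock covers the order, or None to reject.

    atp_by_period : uncommitted supply per planning period, e.g. [40, 0, 30, 50]
    """
    cumulative = 0
    for period, atp in enumerate(atp_by_period):
        cumulative += atp
        if cumulative >= quantity:
            return period  # promise delivery in this period
    return None  # not enough supply in the horizon: reject or renegotiate

print(promise_order(60, [40, 0, 30, 50]))  # -> 2
```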
5

Alkozei, Mirwais, and Dipta Subhra Guha. "Optimizing packaging management." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-302449.

Abstract:
This project was carried out at Saint Gobain Sekurit AB, a leading manufacturer of glazing for the automotive industry. The purpose of the project was to create a supply chain model to optimize the packaging management of customer-owned racks, which are used to transport finished products to Saint Gobain's customers. When empty packaging is returned, it is stored until needed for production. The company needs to optimize the management and tracking of these racks, for which it set out to develop a 'cockpit' that provides the information required for better management and tracking. The project entailed defining parameters in the company's IT environment and ensuring that they can be used in practice. The cockpit is divided into three hard-coded modules, built in Excel VBA, and an analysis module. The hard-coded modules do the following:
- give a visual, easy-to-read overview of stocks;
- generate triggers for operational actions to avoid shortages of racks and to minimize costs;
- allow easy reconciliation with movements in the customer portal and ease the reclamation process.
The analysis module of the cockpit serves as a basis for strategic decisions concerning investment, storage layout, and scrap. The design specification of the cockpit was based on the needs of stakeholders in the company, and the cockpit was built from the available reports. Furthermore, user instructions have been developed, and selected personnel have been trained in the use of the implemented cockpit.
6

Tyago, Antonello Rafael. "Optimizing finite automata for DPI engines." Universidade Federal de Pernambuco, 2012. https://repositorio.ufpe.br/handle/123456789/2147.

Abstract:
Over the last 40 years the Internet has become a central component of international electronic commerce, communications, and technical and scientific development. Early Internet research focused on improvements in transmission speed, capacity, and geographic coverage. Today, measurement, modeling, and analysis of computer networks, particularly traffic classification, have become crucial to keeping networks operational, mainly due to the exponential growth of computer networks in size, complexity, and diversity of services. In this context, Deep Packet Inspection (DPI) systems have become an important element of traffic measurement, since port-based application classification has fallen out of use because of protocol tunneling and the misuse of standard ports, for example by P2P software that uses unblocked ports to evade firewall rules. Traditionally, DPI systems classified traffic using string matching techniques, i.e., application signatures were represented as strings, and pattern searching consisted of inspecting packet payloads for those strings. String matching works well for simple patterns but fails to describe more complex ones, e.g., patterns of variable length. To solve this problem, DPI systems have replaced string signatures with patterns described by regular expressions. Although more precise, such DPI systems demand more computational power and generally do not scale well as link speeds increase, which has opened space for research on optimizing them. This thesis proposes a new Deterministic Finite Automaton (DFA) model for pattern matching in DPI systems, the Ranged Compressed DFA (RCDFA). The RCDFA, together with three proposed optimizations, achieves compression levels of up to 98% on well-known signature sets. Moreover, an RCDFA encoded with a new memory layout (ALE) proposed in this work is up to 46 times faster than DPI engines based on traditional DFAs.
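The idea behind range compression can be sketched as follows: consecutive input bytes that lead to the same next state are stored as a single (low, high, target) transition instead of 256 per-byte entries. This is a generic illustration of ranged transitions, not the thesis's RCDFA data structure or its ALE memory layout.

```python
def compress_row(row):
    """Collapse a 256-entry DFA transition row into (lo, hi, target) ranges."""
    ranges = []
    lo = 0
    for c in range(1, 256):
        if row[c] != row[lo]:
            ranges.append((lo, c - 1, row[lo]))
            lo = c
    ranges.append((lo, 255, row[lo]))
    return ranges

def step(ranges, byte):
    """Look up the next state for one input byte (linear scan for clarity)."""
    for lo, hi, target in ranges:
        if lo <= byte <= hi:
            return target
    raise ValueError("byte out of range")

# A row where bytes 'a'..'z' go to state 2 and everything else to state 0
row = [2 if ord('a') <= c <= ord('z') else 0 for c in range(256)]
print(compress_row(row))  # [(0, 96, 0), (97, 122, 2), (123, 255, 0)]
```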
7

Falkenberg, Christiane. "Optimizing Organic Solar Cells." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-89214.

Abstract:
This thesis deals with the characterization and implementation of transparent electron transport materials (ETM) in vacuum deposited p-i-n type organic solar cells (OSC) for substituting the parasitically absorbing standard ETM composed of n-doped C60. In addition to transparency in the visible range of the sun spectrum, the desired material properties include high electron mobility and conductivity, thermal and morphological stability, as well as good energy level alignment relative to the adjacent acceptor layer which is commonly composed of intrinsic C60. In this work, representatives of three different material classes are evaluated with regard to the above mentioned criteria. HATCN (hexaazatriphenylene hexacarbonitrile) is a small discoid molecule with six electron withdrawing nitrile groups at its periphery. It forms smooth thin films with an optical energy gap of 3.3eV, thus being transparent in the visible range of the sun spectrum. Doping with either 5wt% of the cationic n-dopant AOB or 7wt% of the proprietary material NDN1 effectively increases the conductivity to 7.6*10^-6 S/cm or 2.2*10^-4 S/cm, respectively. However, the fabrication of efficient OSC is impeded by the exceptionally high electron affinity (EA ) of approximately 4.8eV that causes the formation of an electron injection barrier between n-HATCN and intrinsic C60 (EA=4.0eV). This work presents a strategy to remove the barrier by introducing doped and undoped C60 intermediate layers, thus demonstrating the importance of energy level matching in a multi-layer structure and the advantages of Fermi level control by doping. Next, a series of six Bis-Fl-NTCDI (N,N-bis(fluorene-2-yl)-naphthalenetetracarboxylic diimide) compounds, which only differ by the length of the alkyl chains attached to the C9 positions of the fluorene side groups, is examined. When increasing the chain length from 0 to 6 carbon atoms, the energy levels remain nearly unchanged: We find EA=3.5eV as estimated from cyclic voltammetry, an ionization potential (IP ) in the range between 6.45eV and 6.63eV, and Eg,opt=3.1eV which means that all compounds form transparent thin films. Concerning thin film morphology, the addition of side chains results in the formation of amorphous layers with a surface roughness <1nm on room temperature glass substrates, and (1.5+/-0.5)nm for deposition onto glass substrates heated to 100°C. In contrast, films composed of the side chain free compound Bis-HFl-NTCDI exhibit a larger surface roughness of (2.5+/-0.5)nm and 9nm, respectively, and are nanocrystalline already at room temperature. Moreover, the conductivity achievable by n-doping is very sensitive to the side chain length: Whereas doping of Bis-HFl-NTCDI with 7wt% NDN1 results in a conductivity in the range of 10^-4 S/cm, the attachment of alkyl chains causes a conductivity which is more than three orders of magnitude smaller despite equal or slightly higher doping concentrations. The insufficient transport properties of the alkylated derivatives lead to the formation of pronounced s-kinks in the jV -characteristics of p-i-n type OSC while the use of n-Bis-HFl-NTCDI results in well performing devices. The last material, HATNA-Cl6 (2,3,8,9,14,15- hexachloro-5,6,11,12,17,18-hexaazatrinaphthylene), exhibits Eg,opt=2.7eV and is therefore not completely transparent in the visible range of the sun spectrum. However, its energy level positions of EA=4.1eV and IP=7.3eV are well suited for the application as ETM in combination with i-C60 as acceptor. 
The compound is dopable with all available n-dopants, resulting in maximum conductivities of sigma=1.6*10^-6, 3.5*10^-3, and 7.5*10^-3 S/cm at 7.5wt% AOB, Cr2(hpp)4, and NDN1, respectively. Applying n-HATNA-Cl6 instead of the reference ETM n-C60 results in a comparable or improved photocurrent density at an ETM thickness d(ETM)=40nm or 120nm, respectively. At d(ETM)=120nm, the efficiency eta is more than doubled as it increases from eta(n-C60)=0.4% to eta(n-HATNA-Cl6)=0.9%. Optical simulations show that the replacement of n-C60 by n-Bis-HFl-NTCDI, n-HATNA-Cl6, or the previously studied n-NTCDA (naphthalenetetracarboxylic dianhydride) in p-i-n or n-i-p type device architectures is expected to result in an increased photocurrent due to reduced parasitic absorption. For quantifying the gain, the performance of p-i-n type OSC with varying ETM type and thickness is evaluated. Special care has to be taken when analyzing devices comprising the reference ETM n-C60, as its conductivity is sufficiently large to extend the area of the aluminum cathode and thus the effective device area, which may lead to distorted results. Overall, the experiment confirms the trends predicted by the optical simulation. At large ETM thickness, in the range between 60 and 120nm, the window layer effect of the ETM is most pronounced. For instance, at d(ETM)=120nm, eta(C60) is more than doubled using n-HATNA-Cl6 and more than tripled using n-Bis-HFl-NTCDI or n-NTCDA. At optimized device geometry the photocurrent gain is slightly less than expected, but the efficiency is nonetheless improved from eta(max)=2.1% for n-C60 and n-HATNA-Cl6 solar cells to eta(max)=2.3 and 2.4% for n-Bis-HFl-NTCDI and n-NTCDA devices, respectively. This improvement is supported by generally higher Voc and FF in solar cells with transparent ETM. Finally, p-i-n type solar cells with varying ETM are aged at a temperature of 50°C and an illumination intensity of approximately 2 suns. Having extrapolated lifetimes t(80) of 36,500 and 14,000h and nearly unchanged jV-characteristics after 2000h, n-C60 and n-Bis-HFl-NTCDI devices exhibit the best stability. In contrast, n-NTCDA devices suffer from a constant decrease in Isc, while n-HATNA-Cl6 solar cells show a rapid degradation of both Isc and FF, associated with a decomposition of the material or a complete de-doping of the ETM. Here, lifetimes of only 4500h and 445h are achieved.
8

Hashemian, Mozhdeh. "Optimizing Police Resources Deployment." Thesis, Université d'Ottawa / University of Ottawa, 2016. http://hdl.handle.net/10393/35378.

Abstract:
The Ottawa Police Service (OPS) deploys its resources based on the needs of predefined zones. However, the current zoning approach has been acknowledged as inefficient due to its negative impact on costs, proficiency, quality of service and time management, as well as its static nature, its inflexibility, and its inability to adjust systematically to the number of currently available police vehicles. It also cannot respond to demand changes throughout the day in order to reduce calls being answered from neighbouring zones. Demand variation can therefore lead to a significant decrease in police efficiency, since officers allocated to other zones are not able to attend events outside their zones without permission. This may cause a high volume of waiting calls and increased response times depending on the time of day, shift, season, etc. Hence, the OPS needs a new model for resource deployment that provides the same coverage with better service quality. Resource allocation has always been a challenge for emergency services such as police, fire, and ambulance services, since it has a direct impact on the efficiency and effectiveness of service activities. Ambulance and fire services have received research attention, while the optimization of police resources remains largely ignored. While there are many similarities between ambulance and police deployment, there are also significant differences, which means the direct transfer of ambulance models to police deployment is not feasible. This research addresses the lack of an effective tool for the deployment of police resources. We develop a simulation model that analyzes potential deployment plans in order to determine their effect on response times. The model has been developed in partnership with the OPS and addresses the obstacles, disadvantages, and geographical constraints of the existing allocation model. The OPS needs to align deployment with service demand and its operational goals (response times, visibility, workload, compliance, etc.). Repositioning police vehicles in real time helps in responding to future calls more effectively without adding more officers.
9

Starzer, Michael. "Optimizing Tor Bridge Distribution." Thesis, Karlstads universitet, Institutionen för matematik och datavetenskap, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-26543.

Abstract:
The Onion Router (Tor) is a good way to preserve privacy and anonymity while using the Internet. However, because Tor can also be used to bypass governmental censorship, which has become a goal of the network, it faces several challenges: several governments and other parties with the capability to do so try to block the network completely using various techniques. One technique is to overwhelm the distribution strategies for bridges, which are an essential part of the Tor network, especially for censored users. Here, a possible approach for distributing bridges via online social networks (OSNs) is presented. It is based on the Proximax distribution scheme but also has the capability to separate and exclude possible adversaries who have managed to join the social group. Moreover, trusted users are rewarded with better status and shorter waiting times for bridges.
10

Kawasaki, Takako. "Coda constraints, optimizing representations." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1999. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape9/PQDD_0010/NQ54507.pdf.

11

Li, Hongjie. "Optimizing drinking water filtration." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0011/MQ60148.pdf.

12

Kawasaki, Takako 1968. "Coda constraints : optimizing representations." Thesis, McGill University, 1998. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=35970.

Abstract:
Languages differ in their sound patterns, but these differences are, to a large extent, systematic. One goal of Universal Grammar (Chomsky 1957, 1965) is to account for the systematic patterns which are attested across languages. Toward this end, Universal Grammar is considered to contain a set of phonological primitives such as features, and some restrictions on their combination. However, in rule-based phonology, it is assumed that rules are part of the grammar of an individual language. By their very nature, rules describe operations. As such, they are not well-suited to express restrictions on the ways in which segments may combine when no overt operation is involved. To account for such restrictions, Chomsky & Halle (Sound Pattern of English (SPE): 1968) supplemented rules with Morpheme Structure Constraints (MSCs) which define the possible morpheme shapes that a particular language allows (see also Halle 1959). Thus, in SPE, both MSCs and rules played a role in accounting for the phonological patterns observed in languages.
13

King, Kevin C. "Optimizing fully homomorphic encryption." Thesis, Massachusetts Institute of Technology, 2016. http://hdl.handle.net/1721.1/113156.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2016.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 50-51).
Fully homomorphic encryption (FHE) presents the possibility of removing the need to trust cloud providers with plaintext data. We present two new FHE scheme variants of BGV'12, both of which remove the need for key switching after a ciphertext multiplication, overall halving the runtime of bootstrapping. We also present multiple implementations of 32-bit integer addition evaluation, the fastest of which spends 16 seconds computing the addition circuit and 278 seconds bootstrapping. We find that bootstrapping consumes approximately 90% of the computation time for integer addition and that secure parameter settings are currently bottlenecked by the memory size of commodity hardware.
by Kevin C. King.
M. Eng.
14

Du, George J. "Interpreting and optimizing data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/112840.

Abstract:
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 28-29).
The goal of this research was twofold. The first goal was to use observational data to propose interventions under various constraints, without explicitly inferring a causal graph. These interventions may be optimized for a single individual within a population, or for an entire population. Under certain assumptions, we found that it is possible to provide theoretical guarantees for the intervention results when we model the data with a Gaussian process. The second goal was to map various data, including sentences and medical images, to a simple, understandable latent space in which an intervention optimization routine may be used to find beneficial interventions. To this end, variational autoencoders were used. We found that while the Gaussian process technique was able to successfully identify interventions in both simulations and practical applications, the variational autoencoder approach did not retain enough information about the input to be competitive with current approaches for classification, such as deep CNNs.
by George J. Du.
M. Eng.
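A minimal sketch of the first idea, proposing an intervention by maximizing a Gaussian process fit to observational data (an illustration using scikit-learn; the toy data, kernel, and candidate grid are assumptions, not the thesis's setup):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy observational data: one feature x, outcome y (made up for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(40, 1))
y = -(X[:, 0] - 0.5) ** 2 + 0.1 * rng.standard_normal(40)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-2).fit(X, y)

# Propose the intervention maximizing the posterior mean over a candidate grid
candidates = np.linspace(-2, 2, 201).reshape(-1, 1)
mean, std = gp.predict(candidates, return_std=True)
best = candidates[np.argmax(mean), 0]
print(f"suggested intervention x = {best:.2f}")  # near the true optimum 0.5
```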
15

Marcato, Robert W. (Robert William) 1975. "Optimizing an inverse warper." Thesis, Massachusetts Institute of Technology, 1998. http://hdl.handle.net/1721.1/47626.

Abstract:
Thesis (S.B. and M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1998.
Includes bibliographical references (p. 50).
by Robert W. Marcato, Jr.
S.B. and M.Eng.
16

Workman, Patrick E. "Optimizing security force generation." Thesis, Monterey, Calif. : Naval Postgraduate School, 2009. http://edocs.nps.edu/npspubs/scholarly/theses/2009/Jun/09Jun%5FWorkman.pdf.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2009.
Thesis Advisor(s): Dell, Robert F. "June 2009." Description based on title screen as viewed on July 13, 2009. Author(s) subject terms: manpower planning, optimization, infinite horizon, variable time model, officer management, enlisted management. Includes bibliographical references (p. 61-64). Also available in print.
17

Blom, Jonas. "Optimizing spare-parts management." Thesis, Högskolan i Gävle, Industriell ekonomi, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-28822.

Abstract:
Purpose: The purpose of the study is to develop a model that will facilitate the choice of maintenance strategy within the Swedish pulp and paper industry. Without compromising system availability, the model aims to reduce inventory holding costs. Methodology: First, a literature review was conducted to create a holistic view of the chosen topic, which in time developed into a literature framework. Second, a case study was conducted to obtain empirical data through interviews and archival records. The literature framework and the empirical data were then cross-analyzed. Findings: A model has been developed based on previously applied and accepted methods. The methods have been identified and described in order to provide a strategy by which inventory levels and value can be lowered. The findings indicate that the organization must assign ABC-classified and VED-analyzed components different maintenance actions in order to reduce the total cost. Theoretical contribution: This thesis contributes to methodology development in spare parts management. It aims to add knowledge to the existing gap regarding spare parts order points and batch sizes, providing a procedure by which systems containing critical and expensive components are evaluated in order to assign them appropriate maintenance. Practical relevance: The model has only been exemplified using a system position from Stora Enso Skutskär, and the numerical values are examples. The model must be tested with real values, and the risk analysis must be carried out by a group of employees with deep insight into the selected component and system position. Limitations: This thesis is delimited to spare parts management and inventory management. The study involves only one Swedish organization, whose spare parts management illustrates the complexity concerning spares. The model is not verified, as the focus is to highlight the research gap and to develop the model.
18

Feldman, Jacob. "Optimizing Restaurant Reservation Scheduling." Scholarship @ Claremont, 2010. https://scholarship.claremont.edu/hmc_theses/22.

Abstract:
We consider a yield-management approach to determine whether a restaurant should accept or reject a pending reservation request. This approach was examined by Bossert (2009), where the decision for each request is evaluated by an approximate dynamic program (ADP) that bases its decision on a realization of future demand. This model only considers assigning requests to their desired time slot. We expand Bossert's ADP model to incorporate an element of flexibility that allows requests to be assigned to a time slot that differs from the customer's initially requested time. To estimate the future seat utilization given a particular decision, a new heuristic is presented which evaluates time-slot/table assignments based on the expected number of unused seats likely to result from a given assignment. When compared against naive seating models, the proposed model produced average gains in seat utilization of 25%.
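The expected-unused-seats idea can be sketched with a toy rule that assigns a party to the feasible table wasting the fewest seats (a simplification; the thesis's heuristic scores time-slot/table assignments against realizations of future demand):

```python
def choose_table(party_size, free_tables):
    """Pick the free table minimizing unused seats; None means reject the
    request or offer a different time slot.

    free_tables : dict of table id -> seat count
    """
    feasible = {t: seats - party_size
                for t, seats in free_tables.items() if seats >= party_size}
    if not feasible:
        return None
    return min(feasible, key=feasible.get)

print(choose_table(3, {"A": 2, "B": 4, "C": 6}))  # -> "B" (wastes 1 seat)
```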
19

Davis, Amber Marie. "Public resource allocation for programs aimed at managing woody plants on the Edwards Plateau: water yield, wildlife habitat, and carbon sequestration." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/3938.

Abstract:
The Edwards Plateau is the drainage area for the Edwards Aquifer, which provides water to over 2.2 million people. The plateau also provides other ecosystem services, such as wildlife habitat and the sequestration of atmospheric carbon dioxide. Public concern for the continued delivery of these ecosystem services is increasing, and private landowners of the plateau region affect that delivery. A geographic information systems (GIS) spatial analysis was conducted for Bandera and Kerr counties, with two components: (1) biophysical and (2) landowner interest. Together these resulted in an overarching map depicting the optimal locations to allocate government assistance to landowners for managing their property to support three ecosystem services: water yield, wildlife habitat, and carbon sequestration. In April 2003, a mail survey of selected landowners was conducted to determine their opinions regarding ecosystem services and cost-share programs (Olenick et al. 2005). In July 2004, a supplemental survey of respondents to the first survey was conducted to follow up on a few questions answered incorrectly and to focus on landowner opinions regarding cost-share assistance programs and land management activities. Overall, five-year performance contracts were the contract type most often chosen by respondents of all property sizes, those earning mid/high annual incomes, and all lengths of ownership. Based on our findings, the publicly funded assistance programs that should be allocated to the optimal ecosystem service locations are five- and ten-year performance contracts based on property size, length of ownership, and income level. The spatial and statistical analyses were successful in that optimal locations and types of cost-share programs were identified for each ecosystem service, allowing the allocation of limited public resources to be prioritized. The patches of ecosystem target areas within the final target-area map can be used as land management demonstration sites to show surrounding landowners the benefits of participating in publicly funded cost-share assistance programs. However, the study is limited by the generality of the GIS statewide wildlife data.
20

Reich, William F. "Optimizing Navy wholesale inventory positioning." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 1999. http://handle.dtic.mil/100.2/ADA378725.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, September 1999.
"September 1999." Thesis advisor(s): Robert F. Dell. Includes bibliographical references (p. 59-61). Also available online.
21

Cai, Wei. "Optimizing cloud gaming service delivery." Thesis, University of British Columbia, 2016. http://hdl.handle.net/2429/57861.

Abstract:
The high-profit digital gaming industry has merged with the increasing interest in transforming everything into cloud services, leading to a novel concept called cloud gaming. In this thesis, we aim to investigate the optimization of quality of experience (QoE) for cloud gaming systems while considering different challenges, system constraints and service requirements. First, we investigate video compression technologies based on existing cloud gaming systems, in which the cloud hosts the game engine and streams rendered gaming videos to players through the Internet. We propose to cooperatively encode cloud gaming videos of different players in the same game session, in order to leverage inter-gamer redundancy. This is based on the observation that game scenes of close-by gamers have non-trivial overlapping areas, so adding inter-gamer predictive video frames may improve coding efficiency. Selected games are analyzed, and trace-driven simulations demonstrate the efficiency of the proposed system. Second, we introduce a novel decomposed cloud gaming paradigm, which supports flexible migration of gaming components between the cloud server and the players' terminals. We present the blueprint of the proposed system and discuss cognitive resource optimization for the decomposed cloud gaming system under distinct targets. This includes the minimization of cloud, network, and terminal resources and response delay, subject to QoE assurance, which is formulated as a graph partitioning problem solved by exhaustive search. Extensive simulation results show the feasibility of cognitive resource management in a cloud gaming system that efficiently adapts itself to variations in the service environment while satisfying different QoE requirements for gaming sessions. Finally, we explore a practical approach to the decomposed cloud gaming paradigm. We design the system framework and seek engineering solutions to practical issues. Following these discussions, we implement the first experimental testbed, called the ubiquitous cloud gaming platform. Three game prototypes are built on our testbed, demonstrating the feasibility and efficiency of our proposal. Experiments show that intelligent partitioning leads to better system performance, such as lower response latency and higher frame rates.
Applied Science, Faculty of
Electrical and Computer Engineering, Department of
Graduate
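The placement decision described above can be illustrated with a brute-force search over cloud/terminal placements of game components (a sketch under invented costs and delays; the thesis's formulation is a graph partitioning problem that also accounts for communication between components, which this toy omits):

```python
from itertools import product

def best_placement(components, cost, delay, max_delay):
    """Exhaustively try cloud/terminal placement of each component,
    minimizing total resource cost subject to a response-delay bound."""
    best, best_cost = None, float("inf")
    for placement in product(("cloud", "terminal"), repeat=len(components)):
        assignment = dict(zip(components, placement))
        total_delay = sum(delay[(c, loc)] for c, loc in assignment.items())
        total_cost = sum(cost[(c, loc)] for c, loc in assignment.items())
        if total_delay <= max_delay and total_cost < best_cost:
            best, best_cost = assignment, total_cost
    return best, best_cost

# Hypothetical components with placeholder per-location costs and delays
comps = ("input", "logic", "render")
locations = ("cloud", "terminal")
cost = {(c, loc): (2 if loc == "cloud" else 1) for c in comps for loc in locations}
delay = {(c, loc): (30 if loc == "cloud" else 5) for c in comps for loc in locations}
print(best_placement(comps, cost, delay, max_delay=60))
```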
22

Förstner, Johannes. "Optimizing Queries in Bayesian Networks." Thesis, Linköpings universitet, Databas och informationsteknik, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-86716.

Abstract:
This thesis explores and compares different methods of optimizing queries in Bayesian networks. Bayesian networks are graph-structured models that model probabilistic variables and their influences on each other; a query poses the question of what probabilities certain variables assume, given observed values on certain other variables. Bayesian inference (calculating these probabilities) is known to be NP-hard in general, but good algorithms exist in practice. Inference optimization traditionally concerns itself with finding and tweaking efficient algorithms, and leaves the choice of algorithms' parameters, as well as the construction of inference-friendly Bayesian network models, as an exercise to the end user. This thesis aims towards a more systematic approach to these topics: We try to optimize the structure of a given Bayesian network for inference, also taking into consideration what is known about the kind of queries that are posed. First, we implement several automatic model modifications that should help to make a model more suitable for inference. Examples of these are the conversion of definitions of conditional probability distributions from table form to noisy gates, and divorcing parents in the graph. Second, we introduce the concepts of usage profiles and query interfaces on Bayesian networks and try to take advantage of them. Finally, we conduct performance measurements of the different options available in the used library for Bayesian networks, to compare the effects of different options on speedup and stability, and to answer the question of which options and parameters represent the optimal choice to perform fast queries in the end product. The thesis gives an overview of what issues are important to consider when trying to optimize an application's query performance in Bayesian networks, and when trying to optimize Bayesian networks for queries. The project uses the SMILE library for Bayesian networks by the University of Pittsburgh, and includes a case study on script-generated Bayesian networks for troubleshooting by Scania AB.
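One of the model rewrites mentioned in the abstract, replacing a conditional probability table with a noisy gate, follows the standard noisy-OR formula; a small sketch (not SMILE's API; the parent strengths and leak value are invented):

```python
def noisy_or(p_parents, active, leak=0.0):
    """P(child = true | parents) under a noisy-OR gate.

    p_parents : per-parent activation probabilities p_i
    active    : booleans, which parents are currently true
    leak      : probability the child fires with no active parent
    """
    q = 1.0 - leak
    for p, on in zip(p_parents, active):
        if on:
            q *= 1.0 - p  # each active parent independently fails with 1 - p
    return 1.0 - q

# Two active causes with strengths 0.8 and 0.6, leak 0.05
print(noisy_or([0.8, 0.6], [True, True], leak=0.05))  # 1 - 0.95*0.2*0.4 = 0.924
```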
23

Aghezzaf, El Houssaine, Thomas L. Magnanti, and Laurence A. Wolsey. "Optimizing Constrained Subtrees of Trees." Massachusetts Institute of Technology, Operations Research Center, 1992. http://hdl.handle.net/1721.1/5407.

Abstract:
Given a tree G = (V, E) and a weight function defined on subsets of its nodes, we consider two associated problems. The first, called the "rooted subtree problem", is to find a maximum weight subtree, with a specified root, from a given set of subtrees. The second problem, called "the subtree packing problem", is to find a maximum weight packing of node disjoint subtrees chosen from a given set of subtrees, where the value of each subtree may depend on its root. We show that the complexity status of both problems is related, and that the subtree packing problem is polynomial if and only if each rooted subtree problem is polynomial. In addition we show that the convex hulls of the feasible solutions to both problems are related: the convex hull of solutions to the packing problem is given by "pasting together" the convex hulls of the rooted subtree problems. We examine in detail the case where the set of feasible subtrees rooted at node i consists of all subtrees with at most k nodes. For this case we derive valid inequalities, and specify the convex hull when k < 4.
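For the special case discussed at the end of the abstract, where the feasible subtrees rooted at a node are all subtrees with at most k nodes, the rooted subtree problem with node weights admits a simple tree-knapsack dynamic program. The sketch below is an editor's illustration of that special case, not the paper's polyhedral treatment:

```python
def max_weight_rooted_subtree(adj, weight, root, k):
    """Max total weight of a connected subtree containing `root`
    with at most k nodes, via post-order tree-knapsack DP."""
    def dp(v, parent):
        # best[j] = best weight of a subtree rooted at v using exactly j nodes
        best = [float("-inf")] * (k + 1)
        best[1] = weight[v]
        for c in adj[v]:
            if c == parent:
                continue
            child = dp(c, v)  # taking any child subtree is optional
            merged = best[:]
            for used in range(1, k + 1):
                if best[used] == float("-inf"):
                    continue
                for extra in range(1, k - used + 1):
                    if child[extra] == float("-inf"):
                        continue
                    cand = best[used] + child[extra]
                    if cand > merged[used + extra]:
                        merged[used + extra] = cand
            best = merged
        return best

    return max(dp(root, None)[1:])

# Path-like toy tree with node weights
adj = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2]}
weight = {0: 1, 1: 5, 2: -2, 3: 4}
print(max_weight_rooted_subtree(adj, weight, root=0, k=3))  # -> 6 (nodes 0, 1)
```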
24

Steenbakkers, Roel Johannes Henricus Marinus. "Optimizing target definition for radiotherapy." [S.l. : Amsterdam : s.n.] ; Universiteit van Amsterdam [Host], 2007. http://dare.uva.nl/document/40825.

25

Pearson, John L. "Optimizing Unmanned Aircraft System Scheduling." Thesis, Monterey, Calif. : Naval Postgraduate School, 2008. http://handle.dtic.mil/100.2/ADA483449.

Abstract:
Thesis (M.S. in Operations Research)--Naval Postgraduate School, June 2008.
Thesis Advisor(s): Carlyle, W. Matthew. "June 2008." Description based on title screen as viewed on August 26, 2008. Includes bibliographical references (p. 35). Also available in print.
26

Thoren, Ryan. "Optimizing resource recovery in Vancouver." Thesis, University of British Columbia, 2011. http://hdl.handle.net/2429/32405.

Abstract:
Vancouver's expanding population is putting pressure on the city's water and wastewater infrastructure; more efficient uses of this resource need to be explored, as the cost of upgrading the city's water and sewer network is daunting. Wastewater presents a significant source of water and heat and, if properly exploited, can reduce pressure on existing infrastructure while reducing stress on the receiving environment. This thesis presents a model with three scenarios that seek to quantify and optimize the amount of water that can be cascaded within the Vancouver Sewerage Area, and evaluates each reuse scheme for its economic, environmental, and social benefits. The first scenario identifies a number of potential sources and sinks for direct cascading of wastewater between industries; however, water quality represents a significant barrier to this form of water reuse. With the implementation of a satellite water reclamation facility (WRF) in scenario 2, water quality is no longer a barrier and water recycling potential is significantly increased. However, proximity becomes a problem, as many of the industries are too far from the WRF and the cost of pumping and infrastructure far outweighs the benefits of water reuse. When the model is modified in scenario 3 to include the rest of the industrial, commercial, and institutional (ICI) sector and multifamily housing, the potential for water reuse is much greater than in the first two scenarios due to the proximity of reclaimed water sinks. The scenario with the greatest water reuse potential, a satellite WRF supplying ICI and multifamily water users, was calculated to recycle upwards of 1,000,000 m³/year. Implementation of this scenario would require up to 50 years to allow for public acceptance, policy implementation, and buy-in from government and industry. The required infrastructure is extensive, but with proper planning over an appropriate time period, the added benefit of energy recovery from wastewater, and participation from industry and government, water reuse can be a viable option for Vancouver.
27

Shaw, Jennifer Elizabeth. "Visualization tools for optimizing compilers." Thesis, McGill University, 2005. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=98798.

Abstract:
Optimizing compilers have traditionally had little support for visual tools which display the vast amount of information generated and which could aid in the development of analyses and teaching and provide extra information to general programmers. This thesis presents a set of visualization tools which integrate visualization support for Soot, an optimizing compiler framework, into Eclipse, a popular, extensible IDE.
In particular, this work explores making the research compiler framework more accessible to new users and general programmers. Tools for displaying data flow analysis results in intermediate representations (IRs) and in the original source code are discussed, with consideration for the issue of mapping information between the low-level IRs and the source using flexible and extensible mechanisms. Also described are tools for interactive control flow graphs, which can be used for research and teaching, and tools for displaying large graphs, such as call graphs, in a manageable way.
Additionally, the area of communicating information generated by optimizing compilers to general programmers is explored with a small case study to determine if analyses could be useful to general programmers and how the information is displayed.
This work is shown to be useful for research to find imprecision or errors in analyses, both from visualizing the intermediate results with the interactive control flow graphs and the final results at the IR and source code levels, and for students learning about compiler optimizations and writing their first data flow analyses.
28

Swallow, Robert Chandler. "Optimizing Minefield Planning and Clearance." Thesis, Monterey, California. Naval Postgraduate School, 1993. http://hdl.handle.net/10945/44810.

Abstract:
With the collapse of the Soviet Union, the role of the United States Navy is changing from that of a blue water navy to one which must meet the challenges of coastal warfare. The mining of the amphibious carrier USS Tripoli (LPH-10) and the Aegis guided missile cruiser USS Princeton (CG-59) during the Persian Gulf War shows the impact of mine warfare in these littoral regions. Congress, recognizing these trends, has funded a modern mine countermeasures (MCM) fleet of ships and helicopters to deploy with the proposed Naval Expeditionary Force, increased mine warfare research and development, and restructured the Mine Warfare Command. Currently, the Navy has no specific method to measure the efficiency of these mine warfare assets; thus future procurement and present tactics most often result in plans which are feasible but not necessarily optimal. This thesis develops two optimization models to improve the efficiency of present and future mine warfare assets. The first model is a tactical decision aid: taking the known mine threat for various routes requiring clearance, the model determines the tasking for the available MCM assets to clear the minefields in the fewest number of days. The second model simulates many potential mine threats and determines the expected minefield clearance times for a given mix of MCM assets. By varying the MCM asset mix, the relative worth of each asset can be determined. The models can also be used for offensive mining by inputting the enemy's MCM capabilities and varying the types of mines laid.
29

Ford, Kevin S. "Optimizing aerobot exploration of Venus." Monterey, California. Naval Postgraduate School, 1997. http://hdl.handle.net/10945/8787.

Abstract:
Approved for public release; distribution is unlimited
Venus Flyer Robot (VFR) is an aerobot—an autonomous balloon probe—designed for remote exploration of Earth's sister planet in 2003. VFR's simple navigation and control system permits travel to virtually any location on Venus, but it can survive for only a limited duration in the harsh Venusian environment. To help address this limitation, we develop: (1) a global circulation model that captures the most important characteristics of the Venusian atmosphere; (2) a simple aerobot model that captures thermal restrictions faced by VFR at Venus; and (3) one exact and two heuristic algorithms that, using abstractions (1) and (2), construct routes making the best use of VFR's limited lifetime. We demonstrate this modeling by planning several small example missions and a prototypical mission that explores numerous interesting sites recently documented in the planetary geology literature.
30

盧寵猷 and Chung-yau Lo. "Optimizing parathyroid autotransplantation during thyroidectomy." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2000. http://hub.hku.hk/bib/B30257463.

31

Costica, Yinon. "Optimizing classification in intelligence processing." Thesis, Monterey, California. Naval Postgraduate School, 2010. http://hdl.handle.net/10945/4986.

Abstract:
Approved for public release; distribution is unlimited
The intelligence-making process, often described as the intelligence cycle, consists of phases. Congestion may be experienced in phases that require time-consuming tasks such as translation, processing and analysis. To improve the performance of those time-consuming phases, a preliminary classification of intelligence items regarding their relevance and value to an intelligence request is performed. This classification is subject to false positive and false negative errors, where an item is classified as positive if it is relevant and provides valuable information to an intelligence request, and negative otherwise. The tradeoff between the two types of errors, represented visually by the Receiver Operating Characteristic (ROC) curve, depends on the training and capabilities of the classifiers as well as on the classification test performed on each item and the decision rule that separates positives from negatives. An important question that arises is how to best tune the classification process such that both the accuracy of the classification and its timeliness are adequately addressed. An analytic answer is presented via a novel optimization model based on a tandem queue. This thesis provides decision makers in the intelligence community with measures of effectiveness and decision support tools for enhancing the effectiveness of the classification process in a given intelligence operations scenario. In addition to the analytic study, numerical results are presented to obtain quantitative insights via sensitivity analysis of input parameters.
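The accuracy-versus-timeliness tuning described above can be sketched by sweeping an ROC operating point and keeping only thresholds under which the downstream processing stage stays stable (a toy illustration; the rates and ROC shape are invented, and the thesis's tandem-queue model is far richer):

```python
# Pick an ROC operating point that keeps the downstream (translation/analysis)
# stage stable. All rates and the ROC curve below are placeholders.
arrival_rate = 100.0   # items/hour entering classification
relevant_frac = 0.1    # fraction of items that are truly relevant
service_rate = 25.0    # items/hour the downstream stage can process

def tpr(fpr):
    """A concave toy ROC curve; a real one comes from classifier data."""
    return fpr ** 0.3

best = None
for i in range(1, 100):
    fpr = i / 100.0
    # load forwarded downstream: true positives plus false positives
    load = arrival_rate * (relevant_frac * tpr(fpr)
                           + (1 - relevant_frac) * fpr)
    if load < service_rate:  # queue stability: utilization below 1
        missed = relevant_frac * (1 - tpr(fpr))  # false-negative mass
        if best is None or missed < best[1]:
            best = (fpr, missed)

print(best)  # highest feasible detection subject to downstream stability
```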
32

Enoka, Maro D. "Optimizing Marine Security Guard assignments." Thesis, Monterey, California. Naval Postgraduate School, 2011. http://hdl.handle.net/10945/5637.

Abstract:
Approved for public release; distribution is unlimited.
The Marine Corps Embassy Security Group (MCESG) assigns 1,500 Marine Security Guards (MSGs) to 149 embassy detachments annually. While attempting to fulfill several billet requirements, MCESG strives to balance MSG experience levels at each embassy detachment and to assign MSGs to their preferred posts. The current assignment process is accomplished manually by three Marines and takes more than 6,000 hours per year. This thesis presents the Marine Security Guard Assignment Tool (MSGAT), an Excel-based decision support tool that uses a system of workbooks to guide MCESG through streamlined data collection and provide optimal assignments. MSGAT assignments result in higher satisfaction than manual assignments. MSGAT has had an immediate and quantifiable impact on the assignment process: it has reduced person-hours by 80%, increased overall assignment quality and efficiency, and improved the operational readiness of MCESG by optimizing MSG assignments.
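The optimal-assignment core of such a tool can be sketched with SciPy's linear assignment solver (an illustration only; MSGAT itself is Excel-based and also balances experience levels and billet requirements, which this toy preference matrix ignores):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Rows: Marines, columns: detachments. Entry = preference rank (1 = best),
# so minimizing total cost maximizes overall assignment satisfaction.
prefs = np.array([
    [1, 2, 3],
    [3, 1, 2],
    [2, 3, 1],
])
rows, cols = linear_sum_assignment(prefs)
for marine, post in zip(rows, cols):
    print(f"Marine {marine} -> detachment {post} (rank {prefs[marine, post]})")
```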
33

Chester, Alden P. "Optimizing superplastic response in NAVALITE." Thesis, Monterey, California. Naval Postgraduate School, 1989. http://hdl.handle.net/10945/27193.

34

Nguyen, Quang (Quang Duc) 1972. "Optimizing engineering analysis resource allocation." Thesis, Massachusetts Institute of Technology, 2001. http://hdl.handle.net/1721.1/84521.

Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2001.
Includes bibliographical references (p. 72-73).
by Quang Nguyen.
S.M.
M.B.A.
35

Thorén, Gustav. "Optimizing numerical Aneurism growth predications." Thesis, KTH, Hållfasthetslära (Inst.), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-176005.

Abstract:
An abdominal aortic aneurysm (AAA) is a localized dilation of the abdominal aorta exceeding the normal diameter by more than 50%. A common complication is rupture of the vessel wall, which in most cases is life-threatening. To prevent loss of life, efforts have been made to create computational models that predict failure of the vessel wall. Current non-linear finite element models for biomechanical rupture risk assessment (BRRA) are computationally demanding and require very long computation times. A BRRA using an automatic time-stepping scheme is presented in order to speed up the very demanding Finite Element Analysis (FEA) models available. The numerical scheme is governed by the collagen dynamics, and two numerical approaches are presented: one in which the time step depends only on the maximum incremental change of collagen in any direction, and a second in which the time step is governed by the total incremental change of collagen in all directions and the incremental change of the undulation stretches. Simulations were carried out both with and without contact conditions with the spine. Early results show a 50-75% speed-up in computation time at the cost of a small relative error. The results showed failure stresses similar to previous studies, which implies that the constitutive model is sound and that the time-stepping approach has potential. Further research should include in vitro validation and optimization of the geometry and constitutive model of the aorta, as well as validation of the second time-stepping approach. Furthermore, the growth of the intra-luminal thrombus (ILT) should be linked to the blood flow in the vessel, since the ILT adds structural integrity to the AAA.
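The first time-stepping rule described above, shrinking or growing the step so that the largest incremental change stays below a tolerance, can be sketched generically (the growth update, tolerance, and bounds below are placeholders, not the thesis's finite element model):

```python
def adaptive_march(state, grow, t_end, dt=1.0, tol=0.01,
                   dt_min=1e-3, dt_max=30.0):
    """March a growth model, capping the max incremental change at `tol`.

    grow(state, dt) -> (new_state, max_increment) is the model update;
    here it stands in for one step of a finite element growth solve.
    """
    t = 0.0
    while t < t_end:
        new_state, max_inc = grow(state, dt)
        if max_inc > tol and dt > dt_min:
            dt = max(dt / 2, dt_min)      # reject the step, retry smaller
            continue
        state, t = new_state, t + dt       # accept the step
        if max_inc < 0.5 * tol:
            dt = min(dt * 1.5, dt_max)    # increments are small: speed up
    return state

# Toy exponential "collagen density" growth toward 2.0 (placeholder model)
step = lambda s, dt: (s + 0.05 * (2.0 - s) * dt, abs(0.05 * (2.0 - s) * dt))
print(adaptive_march(1.0, step, t_end=100.0))
```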
36

Barnett, Helen Yvette. "Optimizing pharmacokinetic studies utilizing microsampling." Thesis, Lancaster University, 2017. http://eprints.lancs.ac.uk/89163/.

Abstract:
In pharmacokinetic (PK) studies, inference is made on the absorption, distribution, metabolism and excretion (ADME) of an externally administered compound within the body. This is done by measuring the concentration of the compound in some form of bodily tissue (such as whole blood or plasma) at a number of time points after administration. There are two approaches to PK analysis: modelling and non-compartmental analysis (NCA). The modelling approach uses assumptions about the behaviour of the compound in the body to fit models to the data in order to approximate the concentration-versus-time curve, whereas in NCA no such assumptions are made and numerical methods are used to approximate this curve. The PK behaviour is summarised by PK parameters derived from this approximation, such as the area under the curve (AUC), the maximum concentration (Cmax) and the time at which this maximum occurs (tmax). In this thesis, three separate topics in the area of PK studies are explored. The first two are motivated by the new blood sampling method of microsampling, which requires a smaller sample volume than traditional sampling. First, a methodology is introduced for comparing microsampling to traditional sampling using the PK parameters derived from PK modelling, to find evidence of equivalence of the two sampling methods. The second topic establishes an algorithm for choosing an optimal sparse sampling scheme for PK studies that use microsampling with NCA, developing a two-stage procedure that minimizes the bias and variance of the PK parameter estimates. The final topic concerns how PK analysis can be conducted when some measurements are too low to be reliably detected, again using NCA. Seven methods are explored, with the introduced method, which uses kernel density estimation to iteratively impute values onto censored responses, showing promise.
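The NCA parameters mentioned in the abstract are computed directly from the sampled concentration-time curve; a minimal sketch using the standard linear trapezoidal rule (the sampling times and concentrations are hypothetical):

```python
def nca_summary(times, conc):
    """AUC (linear trapezoid), Cmax and tmax from sampled concentrations."""
    auc = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    cmax = max(conc)
    tmax = times[conc.index(cmax)]
    return auc, cmax, tmax

# Hypothetical sparse sampling scheme (h, ng/mL)
t = [0, 0.5, 1, 2, 4, 8, 24]
c = [0.0, 3.2, 5.1, 4.4, 2.9, 1.2, 0.1]
print(nca_summary(t, c))  # (AUC in ng*h/mL, Cmax = 5.1, tmax = 1)
```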
37

Brăileanu, Patricia-Isabela. "Research on optimizing customized prostheses." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI062.

Full text
Abstract:
The doctoral thesis entitled "Research on optimizing customized prostheses" has as its final objective the development of software that modifies the geometry of a femoral stem according to predetermined parameters after analysing the patient's tomographic images. To obtain these results, the following studies were carried out: tomographic images were obtained from patients with a healthy hip, patients with an arthritic hip, and patients with a total hip prosthesis; virtual planning of the total hip replacement operation was performed in order to build a customized prosthesis and identify the parameters that can be optimized; FEA studies were carried out on standard prosthetic stems and on the customized prosthetic stem to observe the mechanical behaviour of the prosthesis under different external loads; after interpreting the results, development of the software continued, its objective being the printing of the customized femoral stem by additive manufacturing.
This thesis aims to develop a virtual surgery planning methodology, starting from traditional Total Hip Replacement (THR) preoperative planning, with the final goal of producing a template prosthesis that can be customized according to each patient's femoral landmarks. Starting from traditional THR preoperative planning, which is performed on the patient's X-ray, and applying the same principles for obtaining femoral landmarks, the CT scans of a patient with hip joint disease requiring THR surgery were segmented using specific algorithms to extract the patient's femur. The femur model was then imported into dedicated CAD software in which, with the help of evaluation instruments, all of the patient's femoral landmarks were identified. These landmarks were used to develop a custom prosthesis starting from a standard anatomical femoral stem, which was validated using FEA simulations. Based on the information obtained, software coded in Python was developed as a tool for analysing patients' CT scans in MPR view as well as in 3D view. It performs bone segmentation of the affected area to obtain a CAD model file, supports virtual preoperative planning in dedicated CAD software, and finally uses some of the resulting dimensions to personalize a custom hip stem based on a pre-existing stem model that serves as the basis for the desired geometrical transformations. The work is completed by printing the stem with FDM technology using a biocompatible material, demonstrating the potential of this study, its versatility, and the possibility of orienting the femoral stems used in THR towards personalization and AM. This avoids the use of standard prostheses that can lead to postoperative complications, and could ultimately eliminate prosthesis "banks", as they would no longer be necessary.
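As a minimal sketch of the kind of threshold-based bone segmentation step described above, assuming a CT volume already loaded as a NumPy array in Hounsfield units, the snippet below binarizes the volume and extracts a surface mesh with scikit-image's marching cubes. The threshold value and the synthetic volume are illustrative assumptions, not the thesis's pipeline.

```python
import numpy as np
from skimage import measure

def segment_bone_surface(ct_volume, hu_threshold=300.0):
    """Binarize a CT volume at a bone-like Hounsfield threshold and
    extract a triangulated surface suitable for export to CAD tools."""
    mask = ct_volume > hu_threshold                     # crude bone mask
    verts, faces, normals, values = measure.marching_cubes(
        mask.astype(np.float32), level=0.5)             # iso-surface of the mask
    return verts, faces

# Synthetic stand-in for a CT scan: a bright "bone" cylinder in soft tissue.
vol = np.full((64, 64, 64), 40.0)
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] = 800.0      # roughly bone-level HU
verts, faces = segment_bone_surface(vol)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```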
APA, Harvard, Vancouver, ISO, and other styles
38

McIntyre, Colin Alex. "Optimizing inbound freight mode decisions." Thesis, Massachusetts Institute of Technology, 2020. https://hdl.handle.net/1721.1/126907.

Full text
Abstract:
Thesis: M.B.A., Massachusetts Institute of Technology, Sloan School of Management, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Thesis: S.M., Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, in conjunction with the Leaders for Global Operations Program at MIT, May, 2020
Cataloged from the official PDF of thesis.
Includes bibliographical references (pages 73-74).
Retail manufacturers often expedite inbound freight shipments from contract manufacturing bases to their distribution centers in destination markets at high cost to improve service levels to their wholesale partners and retail arm. The current process around these decisions has yielded lower than anticipated improvements to service level. This thesis (1) reframes the goal of expediting inbound freight in quantitative, measurable terms that more directly impact the business outcomes, (2) develops an optimization model to select a set of freight shipments to expedite and best improve service, and (3) uses the optimization model to estimate potential improvement magnitudes with strategic changes.
by Colin Alex McIntyre.
M.B.A.
S.M.
M.B.A. Massachusetts Institute of Technology, Sloan School of Management
S.M. Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center
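The thesis's optimization model is not reproduced in the abstract, but a toy stand-in conveys the idea of selecting shipments to expedite: maximize an assumed service-level gain subject to an expedite budget, a small 0/1 knapsack. All shipment names and numbers below are hypothetical.

```python
# Toy expedite-selection model: each inbound shipment has an expedite cost
# and an assumed service-level gain; choose a subset that maximizes total
# gain within a budget (0/1 knapsack over invented data).

shipments = [  # (shipment id, expedite cost in $k, service gain)
    ("PO-101", 8, 12.0), ("PO-102", 5, 7.5), ("PO-103", 11, 14.0),
    ("PO-104", 3, 5.0), ("PO-105", 6, 9.0),
]
budget = 15

best = {0: (0.0, frozenset())}          # spend -> (best gain, chosen ids)
for sid, cost, gain in shipments:
    for spend, (g, chosen) in list(best.items()):
        s = spend + cost
        if s <= budget and g + gain > best.get(s, (-1.0, None))[0]:
            best[s] = (g + gain, chosen | {sid})

gain, chosen = max(best.values(), key=lambda v: v[0])
print(f"Expedite {sorted(chosen)} for an estimated +{gain} service points")
```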
APA, Harvard, Vancouver, ISO, and other styles
39

Kim, Jinsung. "Optimizing Tensor Contractions on GPUs." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1563237825735994.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ali, Aleena. "Optimizing Urbanization in South Asia." Scholarship @ Claremont, 2017. http://scholarship.claremont.edu/cmc_theses/1571.

Full text
Abstract:
Over the next few decades, urban populations in Pakistan and India are projected to increase by 350 million. Considered a critical driver of economic modernization and sociopolitical progress, urbanization can catalyze numerous benefits. However, the extent to which it proves beneficial is contingent on how national and sub-national leaders respond to the multitude of challenges associated with urban spatial expansion and population growth. This thesis outlines key policy priorities for Indian and Pakistani leaders and puts forth recommendations that aim to optimize urban expansion for greater prosperity and livability. It employs a comprehensive set of methodologies to examine the true extent and characteristics of urbanization in India and Pakistan. Based on existing and projected urbanization dynamics, and on the key factors that currently impede its benefits, it offers a range of policy proposals aimed at harnessing urban growth through better urban planning processes and governance, urban mobility, and the spatial distribution of urban populations.
APA, Harvard, Vancouver, ISO, and other styles
41

Chandarana, Upasna Piyush. "Optimizing Geotechnical Risk Management Analysis." Diss., The University of Arizona, 2017. http://hdl.handle.net/10150/625550.

Full text
Abstract:
Mines have an inherent risk of geotechnical failure in both rock excavations and tailings storage facilities. Geotechnical failure occurs when a combination of exceptionally large forces acting on a structure and/or low material strength leaves the structure unable to withstand its designed service load. The excavation of rock can cause unintended rock mass movements. If the movement is monitored promptly, accidents, loss of ore reserves and equipment, loss of lives, and closure of the mine can be prevented. Mining companies routinely use deformation monitoring to manage the geotechnical risk associated with the mining process. The aim of this dissertation is to review the geotechnical risk management process in order to optimize geotechnical risk management analysis. To perform a proper analysis of slope instability, understanding the importance as well as the limitations of any monitoring system is crucial. Because of the potential threat associated with slope instability, predicting the time of slope failure has become a top priority in risk management programs. Datasets from monitoring systems are used to perform slope failure analysis. Innovations in slope monitoring equipment in recent years have made it possible to scan a broad rock face in a short period with sub-millimetric accuracy. Instruments like Slope Stability Radars (SSR) provide the quantitative data commonly used to perform risk management analysis. However, it is challenging to find a method that can provide accurate time-of-failure predictions. Many recent studies have attempted to predict the time of slope failure using the Inverse Velocity (IV) method and to analyze the probability of failure with fuzzy neural networks. The methods investigated in this dissertation include: Minimum Inverse Velocity (MIV), Maximum Velocity (MV), Log Velocity (LV), Log Inverse Velocity (LIV), Spline Regression (SR) and Machine Learning (ML). Based on the results of these studies, the ML method has the highest rate of success in predicting the time of slope failure. The predictions provided by ML showed ~86% improvement over the traditional IV method and ~72% improvement over the MIV method. The MIV method also performed well, with ~75% improvement over the traditional IV method. Overall, both newly proposed methods, ML and MIV, outperformed the traditional inverse velocity technique for predicting slope failure.
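As a baseline reference for the methods compared above, here is a minimal sketch of the traditional inverse-velocity extrapolation on synthetic monitoring data: fit a line to 1/velocity over time and take its zero-crossing as the predicted failure time. This is the classical IV method only, not the MIV or ML variants developed in the dissertation.

```python
import numpy as np

# Synthetic slope-monitoring record: displacement (mm) accelerating toward
# an assumed failure at day 25 (invented data for illustration).
t = np.linspace(0.0, 20.0, 40)
disp = -np.log((25.0 - t) / 25.0)

vel = np.gradient(disp, t)       # deformation velocity from displacement
inv_v = 1.0 / vel                # inverse velocity trends linearly to zero

# Classical IV method: linear fit, failure predicted at the zero-crossing.
slope, intercept = np.polyfit(t, inv_v, 1)
t_fail = -intercept / slope
print(f"Predicted failure around day {t_fail:.1f} (true asymptote: day 25)")
```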
APA, Harvard, Vancouver, ISO, and other styles
42

Harwood, Cary P. "Optimizing secondary tailgate support selection." Thesis, This resource online, 1996. http://scholar.lib.vt.edu/theses/available/etd-09182008-063231/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
43

Corder, Stuartt Allan. "Optimizing image fidelity with arrays." Diss., Pasadena, Calif. : California Institute of Technology, 2009. http://resolver.caltech.edu/CaltechETD:etd-10172008-054754.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lo, Chung-yau. "Optimizing parathyroid autotransplantation during thyroidectomy." Hong Kong : University of Hong Kong, 2000. http://sunzi.lib.hku.hk/hkuto/record.jsp?B22190132.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Wendt, Alan Lee. "An optimizing code generator generator." Diss., The University of Arizona, 1989. http://hdl.handle.net/10150/184771.

Full text
Abstract:
This dissertation describes a system that constructs efficient, retargetable code generators and optimizers. chop reads nonprocedural descriptions of a computer's instruction set and of a naive code generator for the computer, and it writes an integrated code generator and peephole optimizer for it. The resulting code generators are very efficient because they interpret no tables; they are completely hard-coded. Nor do they build complex data structures to communicate between code generation and optimization phases. Interphase communication is reduced to the point that the code generator's output is often encoded in the program counter and conveyed to the optimizer by jumping to the right label. chop's code generator and optimizer are based on a very simple formalism, namely rewriting rules. An instrumented version of the compiler infers the optimization rules as it compiles a training suite, and it records them for translation into hard code and inclusion into the production version. I have replaced the Portable C Compiler's code generator with one generated by chop. Despite a costly interface, the resulting compiler runs 30% to 50% faster than the original Portable C Compiler (pcc) and generates comparable code. This figure is diluted by common lexical analysis, parsing, and semantic analysis and by comparable code emission. Allowing for these, the new code generator appears to run approximately seven times faster than that of the original pcc.
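chop itself compiles its rules to hard code rather than interpreting tables, but the rewriting-rule formalism is easy to illustrate. Below is a minimal, hypothetical peephole pass in Python over a toy three-address IR; the rules and instruction format are invented for illustration.

```python
# Minimal peephole optimizer over a toy 3-address IR, for illustration only.
# Each rule rewrites a short window of instructions to a cheaper sequence.

def rule_redundant_move(w):
    # mov a,b ; mov b,a  ->  mov a,b
    if len(w) == 2 and w[0][0] == w[1][0] == "mov" and \
       w[0][1:] == (w[1][2], w[1][1]):
        return [w[0]]

def rule_add_zero(w):
    # add r,0  ->  (nothing)
    if len(w) == 1 and w[0][0] == "add" and w[0][2] == 0:
        return []

RULES = [(2, rule_redundant_move), (1, rule_add_zero)]

def peephole(code):
    changed = True
    while changed:                       # iterate to a fixed point
        changed = False
        for size, rule in RULES:
            i = 0
            while i + size <= len(code):
                new = rule(tuple(code[i:i + size]))
                if new is not None:      # rule fired: splice in replacement
                    code[i:i + size] = new
                    changed = True
                else:
                    i += 1
    return code

prog = [("mov", "r1", "r2"), ("mov", "r2", "r1"), ("add", "r3", 0)]
print(peephole(prog))   # -> [('mov', 'r1', 'r2')]
```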
APA, Harvard, Vancouver, ISO, and other styles
46

Lash, Michael Timothy. "Optimizing outcomes via inverse classification." Diss., University of Iowa, 2018. https://ir.uiowa.edu/etd/6602.

Full text
Abstract:
In many circumstances, predictions elicited from induced classification models are useful to a certain extent, as such predictions provide insight into what the future may hold. Such models, in and of themselves, hold little value beyond making such predictions, as they are unable to inform their user as to how to change a predicted outcome. Consider, for example, a health care domain where a classification model has been induced to learn the mapping from patient characteristics to disease outcome. A patient may want to know how to lessen their probability of developing such a disease. In this document, four different approaches to inverse classification, the process of turning predictions into prescriptions by working backwards through an induced classification model to optimize for a particular outcome of interest, are explored. The first study develops an inverse classification framework designed to produce instance-specific, real-world feasible recommendations that optimally improve the probability of a good outcome, while being as classifier-permissive as possible. Real-world feasible recommendations are obtained by imposing constraints that specify which features can be optimized over and by accounting for user-specific preferences. Assumptions are made as to the differentiability of the classification function, permitting the use of classifiers with exploitable gradient information, such as support vector machines (SVMs) and logistic regression. Our results show that the framework produces real-world recommendations that successfully reduce the probability of a negative outcome. In the second study, we further relax our assumptions as to the differentiability of the classifier, allowing virtually any classification function to be used, and adjust our optimization methodology correspondingly. To this end, three heuristic-based optimization methods are devised. Furthermore, non-linear (quadratic) relationships between feature changes and so-called cost, which accounts for user preferences, are explored. The results suggest that non-differentiable classifiers, such as random forests, can be successfully navigated using the specified framework and the updated, heuristic-based optimization methodology. The findings further suggest that regularizers encouraging sparse solutions should be used when quadratic/non-linear cost-change relationships are specified. The third study takes a longitudinal approach to the problem, exploring the effects of applying the inverse classification process to instances across time. We also explore the use of added temporal linkages, in the form of features representing past predicted outcome probability (i.e., risk), on the inverse classification results, and we propose a solution to a missing-data subproblem that frequently arises in longitudinal data settings. In the fourth and final study, a causal formulation of the inverse classification framework is provided and explored. The formulation encompasses a Gaussian Process-based method of inducing causal classifiers, which is subsequently leveraged when the inverse classification process is applied. Furthermore, the addition of certain dependencies is explored. The results suggest the importance of including such dependencies and the benefits of taking a causal approach to the problem.
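A stripped-down sketch of the gradient-based idea in the first study, under stated assumptions: a logistic-regression risk model with known weights, gradient descent applied only to the features designated as mutable, and a simple box constraint standing in for the framework's richer feasibility and cost constraints.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic model: risk = sigmoid(w . x + b).
w = np.array([0.8, -0.5, 1.2, 0.3])                 # weights over 4 features
b = -0.2
mutable = np.array([False, True, True, False])      # only features 1, 2 changeable
lo, hi = np.array([-2.0] * 4), np.array([2.0] * 4)  # feasible box

def inverse_classify(x, steps=200, lr=0.1):
    """Nudge the mutable features downhill on predicted risk."""
    x = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x + b)
        grad = p * (1.0 - p) * w         # d(risk)/dx for the logistic model
        x -= lr * grad * mutable         # only move permitted features
        x = np.clip(x, lo, hi)           # stay real-world feasible
    return x

x0 = np.array([1.0, 0.5, 0.8, -0.3])
print("risk before:", round(float(sigmoid(w @ x0 + b)), 3))
x1 = inverse_classify(x0)
print("risk after: ", round(float(sigmoid(w @ x1 + b)), 3), "features:", x1.round(2))
```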
APA, Harvard, Vancouver, ISO, and other styles
47

Cesur, Fatih. "Optimizing formation movement over heterogeneous terrain." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FCesur.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Cox, Criston W. "Optimizing bandwidth of tactical communications systems." Thesis, Monterey, Calif. : Springfield, Va. : Naval Postgraduate School ; Available from National Technical Information Service, 2005. http://library.nps.navy.mil/uhtbin/hyperion/05Jun%5FCox.pdf.

Full text
Abstract:
Thesis (M.S. in Systems Technology (Joint Command, Control, Communications, Computers and Intelligence (JC4I)))--Naval Postgraduate School, June 2005.
Thesis Advisor(s): William Kemple, John Osmundson. Includes bibliographical references (p. 59-60). Also available online.
APA, Harvard, Vancouver, ISO, and other styles
49

Salamí, San Juan Esther. "Optimizing VLIW architectures for multimedia applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2007. http://hdl.handle.net/10803/6002.

Full text
Abstract:
The growing interest that multimedia processing has experienced during the last decade is motivating processor designers to reconsider which execution paradigms are the most appropriate for general-purpose processors. At the same time, as transistor sizes decrease, power dissipation has become a relevant limitation to increases in operating frequency. Thus, the efficient exploitation of the different sources of parallelism is a key point to investigate in order to sustain the performance improvement rate of processors and face the growing requirements of future multimedia applications. We believe that a promising option arises from the combination of the Very Long Instruction Word (VLIW) and vector processing paradigms together with other ways of exploiting coarser-grain parallelism, such as Chip MultiProcessing (CMP).

As part of this thesis, we analyze the problem of memory disambiguation in multimedia applications, as it represents a serious restriction for exploiting Instruction Level Parallelism (ILP) in VLIW architectures. We argue that the real handicap for memory disambiguation in multimedia is the extensive use of pointers and indirect references usually found in such codes, together with the limited static information available to the compiler on certain occasions. Based on the observation that the input and output multimedia streams are commonly disjoint memory regions, we propose and implement a memory disambiguation technique that dynamically analyzes the region domain of every load and store before entering a loop, evaluates whether or not the full loop is disambiguated, and executes the corresponding loop version. This mechanism requires no additional hardware or instructions and has negligible effects on compilation time and code size. The performance achieved is comparable to that of advanced interprocedural pointer analysis techniques, with considerably less software complexity. We also demonstrate that the two techniques can be combined to improve performance.
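The region-based disambiguation described above is a compile-time transformation inside the compiler, but its runtime effect can be mimicked in a few lines: before entering the loop, test whether the input and output streams occupy disjoint memory regions, then dispatch to the disambiguated or the conservative loop version. The sketch below uses NumPy's np.shares_memory as the overlap test and is an analogy under stated assumptions, not the thesis's implementation.

```python
import numpy as np

def saxpy(dst, src, a):
    """Dynamically disambiguated loop: pick the fast path when the
    input and output streams occupy disjoint memory regions."""
    if not np.shares_memory(dst, src):   # region domains are disjoint
        dst[:] = a * src + dst           # vectorized, freely reorderable version
    else:
        for i in range(len(dst)):        # conservative, in-order version
            dst[i] = a * src[i] + dst[i]

x = np.arange(8.0)
y = np.ones(8)
saxpy(y, x, 2.0)                         # disjoint arrays: fast path
saxpy(x[1:], x[:-1], 1.0)                # overlapping views: safe path
print(y, x, sep="\n")
```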

In order to deal with the inherent Data Level Parallelism (DLP) of multimedia kernels without disrupting the existing core designs, major processor manufacturers have chosen to include MMX-like µSIMD extensions. By analyzing the scalability of the DLP and non-DLP regions of code separately in VLIW processors with µSIMD extensions, we observe that the performance of the overall application is dominated by the performance of the non-DLP regions, which in fact exhibit only modest amounts of ILP. As a result, the performance achieved by very wide issue configurations does not compensate for the related cost. To exploit the DLP of the vector regions in a more efficient way, we propose enhancing the µSIMD -VLIW core with conventional vector processing capabilities. The combination of conventional and sub-word level vector processing results in a 2-dimensional extension that combines the best of each one, including a reduction in the number of operations, lower fetch bandwidth requirements, simplicity of the control unit, power efficiency, scalability, and support for multimedia specific features such as saturation or reduction. This enhancement has a minimal impact on the VLIW core and reaches more parallelism than wider issue µSIMD implementations at a lower cost. Similar proposals have been successfully evaluated for superscalar cores. In this thesis, we demonstrate that 2-dimensional Vector-µSIMD extensions are also effective with static scheduling, allowing for high-performance cost-effective implementations.
APA, Harvard, Vancouver, ISO, and other styles
50

Brandis, Marc Michael. "Optimizing compilers for structured programming languages." [S.l.] : [s.n.], 1995. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=11024.

Full text
APA, Harvard, Vancouver, ISO, and other styles