Dissertations / Theses on the topic 'Multi-unit'

Consult the top 50 dissertations / theses for your research on the topic 'Multi-unit.'


1

Murillo Espinar, Javier. "Egalitarian behaviour in multi unit combinatorial auctions." Doctoral thesis, Universitat de Girona, 2010. http://hdl.handle.net/10803/7752.

Full text
Abstract:
In environments where resources are perishable and the allocation of resources is repeated over time with the same, or a very similar, set of agents, recurrent auctions arise. A recurrent auction is a sequence of auctions in which the result of one auction can influence the following ones. These kinds of auctions have particular problems, however, when the wealth of the agents is unevenly distributed and resources are perishable. In this thesis some fair mechanisms are proposed to minimize the effects of these problems. In a recurrent auction, a fair solution means that, in the long term, all participants accomplish their goals to the most equal degree possible, independently of their wealth. We have shown experimentally that incorporating fairness gives bidders an incentive to stay in the auction, minimizing the problems of recurrent auctions.
APA, Harvard, Vancouver, ISO, and other styles
2

Tutam, Mahmut. "Configuring Traditional Multi-Dock, Unit-Load Warehouses." Thesis, University of Arkansas, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10839559.

Full text
Abstract:

The development of expected-distance formulas for multi-dock-door, unit-load warehouse configurations is the focus of the dissertation. From the formulas derived, the width-to-depth ratios minimizing expected distances are obtained for rectangle-shaped, unit-load warehouse configurations. Partitioning the storage region of the warehouse into three classes, the performance of a multi-dock-door, unit-load warehouse is studied when storage regions can be either rectangle-shaped or contour-line-shaped.

Our first contribution is the development of formulas for expected distance traveled in storing and retrieving unit loads in a rectangle-shaped warehouse having multiple dock doors along one warehouse wall and storage racks aligned perpendicular to that wall. Two formulations of the optimization problem of minimizing expected distance are considered: a discrete formulation and a continuous formulation with decision variables being the width and depth of the warehouse for single- and dual-command travel. Based on dock door configurations treated in the literature and used in practice, three scenarios are considered for the locations of dock doors: 1) uniformly distributed over the entire width of a wall; 2) centrally located on a wall with a fixed distance between adjacent dock doors; and 3) not centrally located on a wall, but with a specified distance between adjacent dock doors.

Our second contribution is the investigation of the effect on the optimal width-to-depth ratio (shape factor) of the number and locations of dock doors located along one wall or two adjacent walls of the warehouse. Inserting a middle-cross-aisle in the storage area, storage racks are aligned either perpendicular or parallel to warehouse walls containing dock doors. As with the warehouse having storage racks aligned perpendicular to the warehouse wall, discrete and continuous formulations of the optimization problem are developed for both single- and dual-command travel and three scenarios for dock-door locations are investigated.

Our final contribution is the analysis of the performance of a unit-load warehouse when a storage region or storage regions can be either rectangle-shaped or contour-line-shaped. Particularly, we consider two cases for the locations of dock doors: equally spaced over an entire wall of the warehouse and centrally located on a wall, but with a specified distance between adjacent dock doors. Minimizing expected distance, the best rectangle-shaped configuration is determined and its expected distance is compared with the expected distance in its counterpart contour-line-shaped configuration.

3

Ghosh, Gagan Pratap. "Multi-unit auctions with budget-constrained bidders." Diss., University of Iowa, 2012. https://ir.uiowa.edu/etd/3298.

Full text
Abstract:
In my dissertation, I investigate the effects of budget constraints in multi-unit auctions. This is done in three parts. First, I analyze a case where all bidders have a common budget constraint. Precisely, I analyze an auction where two units of an object are sold at two simultaneous, sealed-bid, first-price auctions, to bidders who have demand for both units. Bidders differ with respect to their valuations for the units. All bidders have an identical budget constraint which binds their ability to spend in the auction. I show that if the valuation distribution is atomless, then there does not exist any symmetric equilibrium in this auction game. In the second and third parts of my thesis, I analyze the sale of licenses for the right to drill for oil and natural gas in the Outer Continental Shelf (OCS) of the United States. These sales are conducted using simultaneous sealed-bid first-price auctions for multiple licenses, each representing a specific area (called a tract). Using aspects of observed bidding behavior, I first make a prima facie case that bidders are budget-constrained in these auctions. In order to formalize this argument, I develop a simple extension of the standard model (where bidders differ in their valuations for the objects) by incorporating (random) budgets for the bidders. The auction game then has a two-dimensional set of types for each player. I study the theoretical properties of this auction, assuming for simplicity that two units are being sold. I show that this game has an equilibrium in pure strategies that is symmetric with respect to the players and with respect to the units. The strategies are essentially pure in the sense that each bidder-type has a unique split (up to a permutation) of his budget between the two auctions. I then characterize the equilibrium in terms of the bid distribution and iso-bid curves in the value-budget space.
I derive various qualitative features of this equilibrium, among which are: (1) under mild assumptions, there always exist bidder-types who submit unequal bids in equilibrium; (2) the equilibrium is monotonic in the sense that bidders with higher valuations prefer more unequal splits of their budgets than bidders with lower valuations and the same budget level. With a formal theory in place, I carry out a quantitative exercise using data from the 1970 OCS auction. I show that the model is able to match many aspects of the data. (1) In the data, the number of tracts bidders submit bids on is positively correlated with budgets (an R² of 0.84), even though this relationship is non-monotonic; my model is able to capture this non-monotonicity, while producing an R² of 0.89. (2) In the data, the average number of bids per tract is 8.21; for the model, this number is 10.09. (3) Auction revenue in the data was $1.927 billion; the model produced a mean revenue of $1.944 billion.
4

Vlachos, Georgios. "Multi-unit auction revenue with possibilistic beliefs." Thesis, Massachusetts Institute of Technology, 2017. https://hdl.handle.net/1721.1/122392.

Full text
Abstract:
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Thesis: M. Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2017
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (page 25).
The revenue of traditional auction mechanisms is benchmarked solely against the players' own valuations, despite the fact that they may also have valuable beliefs about each other's valuations. Not much is known about generating revenue in auctions of multiple identical copies of the same good. (In particular, the celebrated Vickrey mechanism has no revenue guarantees.) For such auctions, we (1) put forward an attractive revenue benchmark, based on the players' possibilistic beliefs about each other, and (2) construct a mechanism that achieves this benchmark, assuming that the players are two-level rational (where the rationality is in the sense of Aumann).
5

Ahlberg, Joakim. "Multi-unit common value auctions : theory and experiments." Doctoral thesis, Örebro universitet, Handelshögskolan vid Örebro Universitet, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:oru:diva-26015.

Full text
Abstract:
Research on auctions that involve more than one identical item for sale was almost non-existent in the 1990s, but has since been receiving increasing attention. External incentives for this research have come from the US spectrum sales, the European 3G mobile-phone auctions, and Internet auctions. The policy relevance and the huge amounts of money involved in many of them have helped the theory and experimental research advance. But in common value auctions, where the value is equal across bidders because it depends on some outside parameter common to all of them, the research is still embryonic. This thesis contributes to the topic with three studies. The first uses a Bayesian game to model a simple multi-unit common value auction, the task being to compare equilibrium strategies and the seller's revenue in three auction formats: the discriminatory, the uniform, and the Vickrey auction. The second study conducts an economic laboratory experiment on the basis of the first study. The third study comprises an experiment on the multi-unit common value uniform auction and compares the dynamic and static environments of this format. The most salient result in both experiments is that subjects overbid. They are victims of the winner's curse and bid above the expected value, thus earning a negative profit. There is some learning, but most bidders continue to earn a negative profit in later rounds as well. The competitive effect of participating in an auction seems to be stronger than rationality concerns. In the first experiment, subjects in the Vickrey auction do somewhat better in small groups than subjects in the other auction types and, in the second experiment, subjects in the dynamic auction format perform much better than subjects in the static auction format; but still, they overbid.
Due to this overbidding, the theoretical (but not the behavioral) prediction that the dynamic auction should yield more revenue than the static one fails in the second experiment. Nonetheless, the higher revenue of the static auction comes at a cost; half of the auctions yield negative profits to the bidders, and the winner's curse is more severely widespread in this format. Besides, only a minority of the bidders use the equilibrium bidding strategy. The bottom line is that the choice between the open and sealed-bid formats may be more important than the choice of price mechanism, especially in common value settings.
6

Shi, Tongjia. "Stochastically Equivalent Sequential Auctions with Multi-Unit Demands." Scholarship @ Claremont, 2015. http://scholarship.claremont.edu/cmc_theses/1170.

Full text
Abstract:
Past empirical analyses show that, in contrast to theoretical predictions, prices tend to decline in some sequential auctions, a puzzle known as the declining price anomaly. Several theoretical explanations have been proposed demonstrating the possibility of a declining price pattern under certain assumptions. In this paper, we demonstrate that when bidders have private values and multi-unit demand, the expected selling price can be increasing, constant, decreasing, or even non-monotonic. In our model, the price pattern depends on the distributions from which bidder valuations are drawn (including the size of the bidders' demand reduction) and on the number of bidders.
7

Bae, Jinsoo. "Essays on Multi-unit Auctions: Theory and Experiment." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587029403168965.

Full text
8

Sung, Ho-Joon. "Optimal maintenance of a multi-unit system under dependencies." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2008. http://hdl.handle.net/1853/26511.

Full text
Abstract:
Thesis (Ph.D)--Aerospace Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Schrage, Daniel; Committee Member: Loewy, Robert; Committee Member: O'Neill, Gary; Committee Member: Saleh, Joseph; Committee Member: Volovoi, Vitali. Part of the SMARTech Electronic Thesis and Dissertation Collection.
9

Dahlmann, Irina. "Towards a multi-word unit inventory of spoken discourse." Thesis, University of Nottingham, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.582842.

Full text
10

Kanarat, Amnart. "Modeling and Simulation of a Multi-Unit Tracked Vehicle." Thesis, Virginia Tech, 1999. http://hdl.handle.net/10919/9755.

Full text
Abstract:
A multi-unit tracked vehicle such as a continuous haulage system is widely used in underground mining applications due to its high mobility and payload capacity on rugged and soft terrain. To automate such a system, a high fidelity model of a tracked vehicle is essential in designing a controller for each tracked vehicle in the system, and a system model is required to simulate its response to input commands. This thesis presents the 2-D mathematical models of a tracked vehicle and a multi-unit tracked vehicle. All existing track-terrain interaction models are investigated and modified. By employing the modified track-terrain interaction model and applying Newton's second law of motion, the equations of motion of both single and multi-unit tracked vehicles can be derived. Computer programs for simulating the motions of these tracked vehicles on level ground have been implemented on a digital computer based on the derived system of differential equations. The fourth-order Runge-Kutta and Heun's methods are adopted to numerically integrate these differential equations. The simulation results clearly show that the programs can accurately predict the motion of a tracked vehicle maneuvered on a horizontal plane, and closely predict the response of a multi-unit tracked vehicle operated on level ground to its command inputs.
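The fourth-order Runge-Kutta scheme named in this abstract can be sketched generically; the oscillator below is a toy stand-in for the thesis's track-terrain equations of motion, which are not reproduced here.

```python
import math

# Generic fourth-order Runge-Kutta step for a first-order system
# y' = f(t, y). The dynamics function used here is illustrative only.
def rk4_step(f, t, y, h):
    """Advance state y at time t by one step of size h."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def oscillator(t, y):
    # Toy dynamics: x'' = -x rewritten as [x', v'] = [v, -x]
    x, v = y
    return [v, -x]

t, y, h = 0.0, [1.0, 0.0], 0.01
for _ in range(100):  # integrate from t = 0 to t = 1
    y = rk4_step(oscillator, t, y, h)
    t += h
error = abs(y[0] - math.cos(1.0))  # exact solution is x(t) = cos(t)
```

With a step of 0.01, the fourth-order method's global error for this problem is far below 1e-8, which is why such schemes are a standard choice for vehicle-dynamics simulation.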
Master of Science
11

Lentz, Kathryn J. "An Exploratory Study of Restaurant Multi-unit Managers’ Development." Thesis, University of North Texas, 2014. https://digital.library.unt.edu/ark:/67531/metadc500197/.

Full text
Abstract:
Development is important to the initial phase of a new restaurant multi-unit manager (MUM), and appropriate training should be conducted in concert with acceptance of the position. The purpose of this study is to explore the need for individual training of restaurant MUMs in order to facilitate a smoother transition between executive-level management positions. The exhaustive literature review aided in the creation of three research questions to be answered through the interpretation of collected interview data. Restaurant MUMs were invited to participate via LinkedIn, a social media network for professionals. Semi-structured interviews were conducted with 17 restaurant MUMs over a two-week period, then transcribed into Word documents and uploaded into ATLAS.ti for analysis. The use of tools within ATLAS.ti, such as network mapping and semantic layouts, allowed the researcher to interpret the correlation between the codes and themes created and, therefore, answer the research questions. Conventionally, managers have to leave their restaurants or area for many days in order to obtain the necessary training to be more effective in their positions. This study has concluded that while MUMs are aware of their tasks and responsibilities, they are not aware of training available to gain the skill set necessary to complete the tasks. Blanket training programs will not work for MUMs; they need training customized to such areas as new openings, widespread markets, and the changing workforce. More courses in developing others need to be implemented so MUMs can learn the skills needed to properly develop their managers into leaders.
12

Gamalath, Isuru Madhushan. "Energy performance assessment for existing multi unit residential buildings." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/61942.

Full text
Abstract:
Climate change is a major challenge in today’s world. Energy use is directly correlated to greenhouse gas emissions, resulting in climate change. As the residential sector is a major energy consumer, improving the energy performance of the residential building stock is imperative in mitigating this issue. Evaluation of building energy performance, life cycle impacts, and economic burdens of building energy use can facilitate improved decision making in operations of existing building stock. Hence, as the primary objective of this study, a life cycle thinking-based energy assessment tool was developed for multi-unit residential buildings (MURBs). A comprehensive review of popular building energy rating systems revealed the need to incorporate life cycle thinking in evaluating building energy performance. Further, based on a comprehensive review it was identified that current rating systems do not consider the uncertainty and vagueness associated with data used for performance assessments. Most of the existing energy rating systems focus only on energy consumption when assigning the rating. Energy rating systems rarely consider the factors affecting energy use and the impacts of energy use in assigning their score/rating for the building. An assessment tool with indicators representing the impacts of energy use and factors affecting operational energy use of buildings was developed to address the identified issues. A questionnaire survey was conducted to obtain expert views on the proposed assessment tool from professionals associated with MURBs. MURB owners, managers, designers, engineers, researchers, and government and other external stakeholders were the target audience of this survey. Feedback from this survey was used to refine the proposed tool and determine weights for indicators. 
In the proposed method, fuzzy set theory was used to consider the uncertainties and vagueness associated with qualitative and quantitative assessments of the identified indicator data. Fuzzy synthetic evaluation was used to aggregate the indicator values. The proposed approach extends the current body of knowledge on building energy ratings by integrating asset performance and operational performance through life cycle thinking. A case study was conducted to demonstrate the application of the energy assessment tool. A Java-based web tool was developed to assist the proposed assessment process.
Applied Science, Faculty of
Engineering, School of (Okanagan)
Graduate
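The fuzzy synthetic evaluation named in this abstract combines indicator-level fuzzy memberships with expert weights. The sketch below uses the common weighted-average operator; the indicators, membership values, and weights are invented for illustration and are not taken from the thesis.

```python
# Minimal fuzzy synthetic evaluation: aggregate per-indicator membership
# degrees (rows) over performance grades (columns) using expert weights.
# All numbers are hypothetical.
def fuzzy_synthetic_eval(weights, memberships):
    """weights: n indicator weights summing to 1.
    memberships: n x m matrix; row i holds indicator i's membership
    degrees over m grades (e.g. poor / fair / good)."""
    m = len(memberships[0])
    return [sum(w * row[g] for w, row in zip(weights, memberships))
            for g in range(m)]

# Three hypothetical MURB indicators, graded (poor, fair, good).
weights = [0.5, 0.3, 0.2]
memberships = [
    [0.1, 0.6, 0.3],  # energy use intensity
    [0.0, 0.4, 0.6],  # envelope condition
    [0.2, 0.5, 0.3],  # occupant behaviour factors
]
grade_vector = fuzzy_synthetic_eval(weights, memberships)
# grade_vector gives the building's overall membership in each grade;
# the grade with the largest membership is the overall rating.
```

Because each membership row sums to 1 and the weights sum to 1, the aggregated vector also sums to 1, so it can be read directly as a rating profile.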
13

Shao, Minjie. "TWO ESSAYS ON BIDDING IN MULTI-UNIT COMMON VALUE AUCTIONS." Doctoral diss., University of Central Florida, 2010. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2262.

Full text
Abstract:
This dissertation consists of two essays on the topic of bidding in multi-unit common value auction. Essay one examines the role of capacity constraint on the auction results and bidding behavior. We consider a general case where bidders are unconstrained, and a second setting where bidders are capacity constrained. We document downward sloping demand curves for individual bidders. Bidders shade their bids by submitting quantity-price pairs and spreading their bids. The winner's curse is strong in the unconstrained treatment, but we find no evidence of the winner's curse when bidding constraints are imposed. Unconstrained bidders shade bids significantly more and their quantity-weighted prices are much lower than those in the constrained treatment. Interacting with the information structure, the capacity constraint has a significant impact on the auction results including the market clearing price, market efficiency, and the degree of market concentration. We provide evidence that efficient price discovery in multi-unit auctions with diverse information is possible, but careful attention to auction design will make this outcome more likely. Essay two examines how the introduction of a noncompetitive bidding option affects outcomes in a multi-unit uniform-price auction. The experimental design incorporates many of the characteristics of the markets that pertain to the issuance of new equity securities. Important features of the bidding environment include endogenous bidder entry, costly information acquisition, bidders that differ by capacity constraint, and substantial uncertainty with respect to the intrinsic value. We use a standard uniform-price auction as our baseline setting where only competitive bids are accepted. Our results show that introducing the noncompetitive bidding option improves auction performance by increasing revenue and reducing price error. 
Underpricing is found in both treatments, but is less severe in the presence of the noncompetitive bidding option. The incorporation of this option significantly increases both the small bidder participation rate and allocation, and reduces the incentive for small bidders to free ride by submitting extremely high bids. Under both treatments, information acquisition increases large bidders' profits but proves unprofitable for small bidders, and pricing accuracy is increasing in the rate of information acquisition.
Ph.D.
Department of Finance
Business Administration
Business Administration PhD
14

Jain, Sulabh. "Opportunistic maintenance policy of a multi-unit system under transient state." [Tampa, Fla.] : University of South Florida, 2005. http://purl.fcla.edu/fcla/etd/SFE0001260.

Full text
15

Fagoonee, Lina. "A multi-functional turbo receiver based on partial unit memory codes." Thesis, Lancaster University, 2003. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.418684.

Full text
16

Ganyaza, Thulisile Zioner. "Multi-disciplinary teamwork in an admission unit of a psychiatric institution." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51837.

Full text
17

Zama, Anri. "A Relevance Rule Organizing Responsive Behavior During Projectably Multi-Unit Tellings." PDXScholar, 2016. http://pdxscholar.library.pdx.edu/open_access_etds/2750.

Full text
Abstract:
Research on projectably multi-unit tellings (e.g., stories) has largely focused on their contexts of emergence, beginnings, endings, and uptakes (or lack thereof), rather than on their ‘middles.’ The relatively small literature on such ‘middles’ has focused on different types of responsive behaviors when they do occur (e.g., continuers). However, there is virtually no research on relevance rules that might systematically organize these ‘middles,’ including the production of responsive behaviors (or lack thereof) and the management of intersubjectivity. This thesis describes and defends one such relevance rule: Advisors are strongly accountable for responding – either vocally and/or nonvocally – at each and every complex possible-completion place. This relevance rule provides an inferential framework with which to monitor and manage advisors’ understanding of ‘middle’ units. The method used is conversation analysis – including the analysis of deviant cases – complemented by the coding of data and resultant distributional patterns. Data are dual-camera-videotaped, drop-in, advising sessions conducted in English between 20 non-native-English-speaking international students and native-English-speaking advisors working for a university's Office of International Affairs. Specifically, data involve students’ projectably multi-unit problem presentations (e.g., related to Visa status, course scheduling, international travel, housing, etc.).
18

Liu, Tong. "Protection control unit for the multi-view on-the-fly memory model." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/tape16/PQDD_0002/MQ31614.pdf.

Full text
19

Ang, Boon Seong 1966. "Design and implementation of a multi-purpose cluster system network interface unit." Thesis, Massachusetts Institute of Technology, 1999. http://hdl.handle.net/1721.1/80041.

Full text
20

Freitas, Eduardo Noronha de Andrade. "SCOUT: a multi-objective method to select components in designing unit testing." Universidade Federal de Goiás, 2016. http://repositorio.bc.ufg.br/tede/handle/tede/5674.

Full text
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES
Fundação de Amparo à Pesquisa do Estado de Goiás - FAPEG
The creation of a suite of unit testing is preceded by the selection of which components (code units) should be tested. This selection is a significant challenge, usually made based on the team member’s experience or guided by defect prediction or fault localization models. We modeled the selection of components for unit testing with limited resources as a multi-objective problem, addressing two different objectives: maximizing benefits and minimizing cost. To measure the benefit of a component, we made use of important metrics from static analysis (cost of future maintenance), dynamic analysis (risk of fault, and frequency of calls), and business value. We tackled gaps and challenges in the literature to formulate an effective method, the Selector of Software Components for Unit testing (SCOUT). SCOUT was structured in two stages: an automated extraction of all necessary data and a multi-objective optimization process. The Android platform was chosen to perform our experiments, and nine leading open-source applications were used as our subjects. SCOUT was compared with two of the most frequently used strategies in terms of efficacy. We also compared the effectiveness and efficiency of seven algorithms in solving a multi-objective component selection problem: random technique; constructivist heuristic; Gurobi, a commercial tool; genetic algorithm; SPEA_II; NSGA_II; and NSGA_III. The results indicate the benefits of using multi-objective evolutionary approaches such as NSGA_II and demonstrate that SCOUT has a significant potential to reduce market vulnerability. To the best of our knowledge, SCOUT is the first method to assist software testing managers in selecting components at the method level for the development of unit testing in an automated way based on a multi-objective approach, exploring static and dynamic metrics and business value.
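The benefit-versus-cost trade-off described in this abstract can be sketched with a plain Pareto filter: keep only the non-dominated candidates. The component names and scores below are hypothetical, and SCOUT itself uses richer metrics and evolutionary search (e.g. NSGA-II); this is only a minimal illustration of non-dominance.

```python
# Pareto filter for component selection: maximize benefit, minimize cost.
# A component is dominated if another is at least as good on both
# objectives and strictly better on at least one. Data are illustrative.
def pareto_front(components):
    """components: list of (name, benefit, cost) tuples."""
    front = []
    for name, b, c in components:
        dominated = any(
            (b2 >= b and c2 <= c) and (b2 > b or c2 < c)
            for _, b2, c2 in components)
        if not dominated:
            front.append((name, b, c))
    return front

candidates = [
    ("PaymentService.charge", 9.0, 4.0),
    ("Cache.evict",           5.0, 1.0),
    ("Report.render",         4.0, 3.0),  # dominated by Cache.evict
    ("AuthFilter.check",      9.0, 6.0),  # dominated by PaymentService.charge
]
front = pareto_front(candidates)
# front keeps only the non-dominated (name, benefit, cost) tuples
```

A testing manager would then pick from the front according to the available budget, which is where the evolutionary algorithms compared in the thesis come in for larger, many-objective instances.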
21

Ledolter, Johannes. "Multi-Unit Longitudinal Models with Random Coefficients and Patterned Correlation Structure: Modelling Issues." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/432/1/document.pdf.

Full text
Abstract:
The class of models which is studied in this paper, multi-unit longitudinal models, combines both the cross-sectional and the longitudinal aspects of observations. Many empirical investigations involve the analysis of data structures that are both cross-sectional (observations are taken on several units at a specific time period or at a specific location) and longitudinal (observations on the same unit are taken over time or space). Multi-unit longitudinal data structures arise in economics and business where panels of subjects are studied over time, biostatistics where groups of patients on different treatments are observed over time, and in situations where data are taken over time and space. Modelling issues in multi-unit longitudinal models with random coefficients and patterned correlation structure are illustrated in the context of two data sets. The first data set deals with short time series data on annual death rates and alcohol consumption for twenty-five European countries. The second data set deals with glaciologic time series data on snow temperature at 14 different locations within a small glacier in the Austrian Alps. A practical model building approach, consisting of model specification, estimation, and diagnostic checking, is outlined. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
22

Schmelzer, Claire Dobson. "A case study investigation of strategy implementation in three multi-unit restaurant firms /." This resource online, 1992. http://scholar.lib.vt.edu/theses/available/etd-02022007-133632/.

Full text
23

Gulcu, Altan. "Designing pricing mechanisms in the presence of rational customers with multi-unit demands." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/28176.

Full text
Abstract:
Thesis (M. S.)--Industrial and Systems Engineering, Georgia Institute of Technology, 2009.
Committee Chair: Pinar Keskinocak; Committee Co-Chair: Wedad Elmaghraby; Committee Member: Julie Swann; Committee Member: Mark Ferguson; Committee Member: Paul Griffin.
24

Schmelzer, Claire D. "A case study investigation of strategy implementation in three multi-unit restaurant firms." Diss., Virginia Tech, 1992. http://hdl.handle.net/10919/37269.

Full text
Abstract:
The primary objective of this study was to conduct an exploratory investigation of the process of strategy implementation in multi-unit restaurant firms. A model comprised of five context variables and five process variables was developed on the basis of a review of the theoretical literature about the restaurant industry, strategy implementation, and organization theory. Qualitative research methods, specifically case study design, concept mapping, and matrix analysis, were used to collect and analyze the data from three firms. The findings from this investigation included 14 propositions that explain the associations between the variables and other factors found to affect implementation in the three companies investigated. A new framework was developed from the propositions that further delineates the strategy implementation process. The framework introduces four additional variables found to be involved in the implementation process: life cycle stage of the firm, size and geographic dispersion of the firm, manager demographics, and training. Three primary context variables (organizational culture, organizational structure, and perceived environmental uncertainty) and three primary process variables (information processing, planning and control, and resource allocation) were found to have a major effect on strategy implementation. The results obtained provide a basis for further study of the implementation process.
Ph. D.
25

Ogitani, Catherine Louise. "Participant perceived satisfaction with the Jobs and Employment Services Department multi-service unit." CSUSB ScholarWorks, 2001. https://scholarworks.lib.csusb.edu/etd-project/2021.

Full text
26

Twardowski, Michael D. "Deriving Motor Unit-based Control Signals for Multi-Degree-of-Freedom Neural Interfaces." Digital WPI, 2020. https://digitalcommons.wpi.edu/etd-dissertations/601.

Full text
Abstract:
Beginning with the introduction of electrically powered prostheses more than 65 years ago, surface electromyographic (sEMG) signals recorded from residual muscles in amputated limbs have served as the primary source of upper-limb myoelectric prosthetic control. The majority of these devices use one or more neural interfaces to translate the sEMG signal amplitude into voltage control signals that drive the mechanical components of a prosthesis. In so doing, users are able to directly control the speed and direction of prosthetic actuation by varying the level of muscle activation and the associated sEMG signal amplitude. Consequently, in spite of decades of development, myoelectric prostheses are prone to highly variable functional control, leading to a relatively high incidence of prosthetic abandonment among 23-35% of upper-limb amputees. Efforts to improve prosthetic control in recent years have led to the development and commercialization of neural interfaces that employ pattern recognition of sEMG signals recorded from multiple locations on a residual limb to map different intended movements. But while these advanced algorithms have made significant gains, there still exists a substantial need for further improvement to increase the reliability of pattern recognition control solutions amid variable muscle co-activation intensities. In efforts to enrich the control signals that form the basis for myoelectric control, I have been developing advanced algorithms as part of a next-generation neural interface research and development effort, referred to as Motor Unit Drive (MU Drive), that is able to non-invasively extract the firings of individual motor units (MUs) from sEMG signals in real-time and translate the firings into smooth, biomechanically informed control signals.
These measurements of motor unit firing rates and recruitment naturally provide high levels of motor control information from the peripheral nervous system for intact limbs and therefore hold great promise for restoring function for amputees. The goal of my doctoral work was to develop advanced algorithms for the MU Drive neural interface system that leverage MU features to provide intuitive control of multiple degrees-of-freedom. To achieve this goal, I targeted three research aims: 1) derive real-time MU-based control signals from motor unit firings; 2) evaluate the feasibility of motor unit action potential (MUAP) based discrimination of muscle intent; and 3) design and evaluate MUAP-based classification of motions of the arm and hand.
27

Rozman, Peter Andrew. "Multi-Unit Activity in the Human Cortex as a Predictor of Seizure Onset." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:15821597.

Full text
Abstract:
Epilepsy is a neurological disorder affecting 50 million people worldwide. It consists of a large number of syndromes, all of which are characterized by a predisposition to recurrent, unprovoked seizures, while differing by degree of focality, clinical manifestation and many other factors. Despite the prevalence of this disorder, relatively little is known about the basic physiological mechanisms that underlie the seizures themselves. Additionally, roughly 25% of patients are refractory to existing therapies. The need for more highly targeted therapies for focal epilepsies has driven decades of research on seizure prediction. While most of these studies have relied on scalp or intracranial EEG, more recent studies have taken advantage of electrodes that capture single- or multi-unit activity. We utilized a linear microelectrode array to capture multi-unit activity in humans with refractory epilepsy with the expectation that such microscale activity may provide a signal in advance of changes on electroencephalography. Twelve patients underwent long-term monitoring with both clinical electrocorticography (ECoG) and the laminar microelectrode array, which consists of linearly arranged contacts that sample all layers of the human cortex. Multi-unit (300-5000 Hz) power was compared between thirty-minute preictal and interictal time windows. Several parameters characterizing the multi-unit power were compared between preictal and interictal time windows. Parameters included proximity to seizure focus, depth of recording, and directionality of changes in multi-unit power. Optimization of these parameters resulted in a best-performing classifier with sensitivity and specificity of 0.70 and 0.80, respectively. These results demonstrate reproducible increases and decreases in multi-unit activity prior to seizure onset and suggest that multi-unit information may be useful in the development of future seizure prediction systems.
28

Athanassopoulos, Antreas D. "Decision support systems for target setting and resource allocation in multi-unit and multi-level organisations using data envelopment analysis." Thesis, University of Warwick, 1995. http://wrap.warwick.ac.uk/36133/.

Full text
Abstract:
This thesis is concerned with the development of decision support systems for determining performance targets and allocating resources in multi-unit organisations. These organisations are organised into networks of decision making units (DMUs) that seek to satisfy demand for services in the public sector or attract demand for services in the for-profit sector. Mathematical programming methods in general, and data envelopment analysis in particular, are the methods chiefly used throughout the thesis. The decision support systems sought to address two distinct problems faced by multi-unit organisations. The first is concerned with the allocation of recurrent budgets to decision making units which use their resources without interference from their headquarters. This type of problem is called a-posteriori decision support, and it is addressed by developing a framework of effective target setting. Data envelopment analysis models are developed for setting targets at the DMU and the global organisational levels. Two target-based resource allocation models are then developed, seeking to encapsulate alternative organisational structures and objectives of resource allocation, namely equity, effectiveness and efficiency. The second case concerns problems where the allocation of resources is made directly to prespecified DMUs. This problem is called a-priori decision support, which includes a phase of managerial diagnosis and planning. In the diagnostic phase, performance targets for different management tiers are assessed, and systematic procedures of micro-level benchmarking are developed. In the planning phase, targets for improving the scale size of individual units are assessed, the long- and short-run viability of the network of outlets is examined and, finally, the marginal impacts of past investments on the performance of DMUs are investigated.
The two phases of the decision support system would aid management in making decisions regarding the future of individual DMUs (e.g. investment, expansion, divestment). Application of the method to a network of 154 public houses is incorporated throughout the relevant chapters of the thesis.
29

Holmes, William B. "Exploring Environmental Service Auctions." Digital Archive @ GSU, 2010. http://digitalarchive.gsu.edu/econ_diss/63.

Full text
Abstract:
The chapters of this dissertation explore related aspects of the procurement of conservation services from private landowners. In the first chapter, heuristic laboratory experiments reveal the impact of potential government regulation on strategic forces and efficiency properties in conservation procurement auctions. In the second chapter, data from past procurement auctions are analyzed to discover the existence and magnitude of premiums received by auction participants. The first chapter, “Procurement Auctions Under Regulatory Threat,” examines how strategic forces and efficiency properties are impacted in auctions for the procurement of environmental services when a threat of regulation is levied. Laboratory experiments examining different regulatory environments demonstrate that a threat of regulation will reduce the amount of public funds necessary to purchase a given level of environmental services. However, adverse selection costs and equity are negatively impacted by threat implementation. The second chapter, “Estimating Bid Inflation in Procurement of Environmental Services,” studies the size of premiums received by program participants in conservation programs. Predictions informed by economic literature and theory elicit the underlying value distribution for a unique dataset of procurement auctions. Average premiums for auction participants range from almost 50 percent to less than 1 percent across auction periods and institutions. The results demonstrate that both repetition and rule variation may improve the efficiency of procurement auctions. The auctions studied here are shown to yield efficiency improvements of more than 32 percent over standard fixed-payment schemes for service procurement.
30

Baharoon, Walid A. M. "Architect-user communication process through the use of computers in multi-unit housing design." Thesis, McGill University, 1990. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=59621.

Full text
Abstract:
It is commonly believed that by involving the user in the design process of his dwelling unit, a higher level of satisfaction could be achieved. Attempts have been made in several countries to include users in the design process of their units using different communication media, including computers. To date, however, computers have been implemented primarily for the use of architects. This study aims at narrowing the architect-user communication gap by using computers in multi-unit housing design. The author reviews past work on user involvement in the design process through the use of computers and the possibility of introducing computers into the North American homebuilding industry. Using an algorithm, the author demonstrates how the communication process can take place. Two simulations were conducted in order to test the proposed system in a realistic situation. The results of the study suggest that the user is able to make his own decisions, control his budget and satisfy his needs independently within a reasonable amount of time. These results could have a further positive impact on the architect, the user, the building industry and the built environment.
31

Gomez, Connie, Wei Sun, and Ali Shokoufandeh. "A unit cell based multi-scale modeling and design approach for tissue engineered scaffolds." Philadelphia, Pa. : Drexel University, 2007. http://hdl.handle.net/1860/1766.

Full text
32

Siegel, Jessica Lynn. "A Multi-level Model Examining the Effects of Unit-level Culture on Abusive Supervision." Diss., The University of Arizona, 2011. http://hdl.handle.net/10150/203029.

Full text
Abstract:
This study examines the effects of unit-level culture on abusive supervision. Utilizing Baumeister and colleagues' (2000) self-regulatory resource depletion model as an explanatory framework, I argue that an aggressive unit-level culture will increase the incidence of abusive supervision, whereas people- and team-oriented unit-level cultures will decrease the incidence of abusive supervision. In line with these arguments, I then examine the degree to which those effects are mediated by ego depletion. In sum, I argue that an aggressive unit-level culture will increase, while people- and team-oriented cultures will reduce, the amount of supervisor ego depletion, which then increases the incidence of abusive supervision. Using Hobfoll et al.'s (1990) Social Support Resource Theory, I further argue that the relationship between unit-level culture and ego depletion is moderated by supervisor home social support. I tested my model using a sample of 340 nurses and 52 nursing directors working in a large hospital system in the Southwestern United States. I was unable to demonstrate support for my model as hypothesized. However, I am able to contribute to the literature concerning antecedents to abusive supervision by showing that alternative conceptualizations of culture impact abusive supervision. Further, I show that aggressive norms mediate the relationship between aggressive culture and abusive supervision. I also contribute to the literature examining resource depletion in the workplace by demonstrating the buffering role of supervisor home social support on ego depletion. Implications and future directions are discussed.
33

Eluripati, Rajeev. "A multi-fiber unit cell for prediction of transverse properties in metal matrix composites." Morgantown, W. Va. : [West Virginia University Libraries], 2003. http://etd.wvu.edu/templates/showETD.cfm?recnum=3018.

Full text
Abstract:
Thesis (M.S.)--West Virginia University, 2003.
Title from document title page. Document formatted into pages; contains xiii, 85 p. : ill. (some col.). Includes abstract. Includes bibliographical references (p. 83-85).
34

Abu-Khalil, Ramy 1978. "Developing a unified manufacturing and sourcing strategy in a multi-business unit engineering firm." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/34829.

Full text
Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and, (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2005.
Includes bibliographical references (p. 137-139).
Competitive pressures in manufacturing industries have led to an increased utilization of outsourcing as a strategic alternative to vertical integration. This thesis develops a methodology to aid multi-business unit firms in formulating outsourcing strategies on the corporate or business group level. It offers frameworks for identifying non-core manufacturing capabilities and make versus buy decision making. In addition, it identifies critical organizational and communication linkages between levels of management and functional groups that are necessary precursors to developing a successful outsourcing strategy. Finally, it presents an analysis of the growing importance of the strategic sourcing function within the engineering firm, the informational inputs needed for the sourcing organization to adequately support activities across all business units, and investigates issues of measurement and performance within a cross-business unit support function. The research leading to the development of the described outsourcing methodology was conducted jointly between the MIT Leaders for Manufacturing Program and Honeywell International within the Honeywell Automation and Control Solutions Business Group.
by Ramy Abu-Khalil.
S.M.
M.B.A.
35

Potgieter, Paul Stephanus. "South African unit standards for sight-singing, realised in a multi-media study package." Pretoria : [s.n.], 2003. http://upetd.up.ac.za/thesis/available/etd-09292004-070324.

Full text
36

Yalamanchili, Pranavi. "A MULTI-LEVEL IMPLEMENTATION OF IMAGE AMPLIFICATION ON THE GENERAL PURPOSE GRAPHICAL PROCESSING UNIT." Scholarly Commons, 2017. https://scholarlycommons.pacific.edu/uop_etds/3659.

Full text
Abstract:
Image amplification is an important image enhancement technique for applications such as medicine, satellite imaging, forensic sciences, remote sensing, among others. The existing techniques are highly computationally intensive and take a lot of time to execute on conventional processors. Their highly computationally intensive nature makes them a good fit for massively parallel architectures such as the general-purpose graphical processing unit (GPGPU) devices. In this research, we accelerate a state-of-the-art image amplification technique on Nvidia’s GPGPU device, Kepler GK110 using the Compute Unified Device Architecture (CUDA) programming model. The technique comprises four computationally intensive stages namely Canny edge detection, vertical edge preservation, horizontal edge preservation and mean preserving interpolation. Using efficacious CUDA optimization techniques, we successively map the four stages of the algorithm to the GPGPU device, creating a hierarchy of five implementations. The final implementation of the hierarchy completely maps all of the algorithm stages to the GPGPU device, eliminating any costly intermediate host-device computations and focusing more on useful computations. We provide a detailed analysis of the kernel time and end-to-end application time obtained for each implementation in the hierarchy. We also compare the GPGPU execution time for each algorithm stage with the equivalent serial implementation. We discuss an empirical method for identifying optimal GPGPU execution configuration to maximize the device utilization. All of the GPGPU kernels executed on the Kepler GPGPU device achieve high speedup, as high as 90x, versus the optimized serial implementation. In addition, for the largest image size of 10240x10240, the most optimal GPGPU implementation achieves an end-to-end application speedup of 11.75x versus the serial counterpart. 
The research also presents an analysis of the implementation of the application on Amazon Web Services instances. This analysis provides a further opportunity to study the scalability of the application.
37

Costa, Jorge. "A study of strategic planning and environmental scanning in the multi-unit Portuguese hotel sector." Thesis, University of Surrey, 1997. http://epubs.surrey.ac.uk/836/.

Full text
Abstract:
This study addresses the strategic planning and environmental scanning activities of the hotel chains operating in Portugal, and compares attitudes towards planning and scanning activities by companies where strategy is formalised through a formal written strategic plan (intenders) and those companies where strategy is informally developed through a 'vision' or 'informal plan' (realisers). The main challenges facing Portuguese hoteliers as identified by the representatives of the Portuguese government, hotel associations and hotel chains inform the development of the study. The aims of the research derive from these challenges faced by the Portuguese hoteliers and their need for a more proactive attitude towards strategic planning, as well as from the literature on strategic planning and environmental scanning. The study is exploratory and descriptive, based on a qualitative and inductive approach. This methodology is used to elicit and represent the existing practices as well as managers' perceptions towards strategic planning and environmental scanning. The findings reveal a lack of formal continuous environmental scanning by both formal and informal planning chains and a significant number of similarities in terms of the scanning methods and sources used by these two types of organisations. A grounded theory methodology is used to identify the core themes emerging and to develop theory on the planning and scanning activities of hotel chains. The use of this methodology also allows a better understanding of the relationship between strategic planning and environmental scanning by hotel chains where strategy is intended (existence of a formal written strategic plan) and by those where strategy is realised (no formal written strategic plan). A cognitive mapping technique is used for the analysis of respondents' perceptions towards the development of a formal environmental scanning process.
This technique is also applied in the identification of the relevance and structure of a formal environmental scanning process, in the assessment of the barriers to the development of this process, as well as possible actions to overcome them. The study also finds that the existing differences amongst intenders and realisers rely essentially on aspects of content rather than on aspects of process, and that keeping a high degree of flexibility in the decision-making process is considered of prime importance. The preponderance of similarities as opposed to differences led to the development of a series of output propositions common to all four comparison groups. These propositions, together with other recommendations suggested in the literature, are used to identify the necessary conditions for the development and implementation of a continuous environmental scanning process by formal and informal planning hotel chains and to develop a theoretical model of environmental scanning.
38

Jarno, Aurélien, Bruno Allard, and Roland Bacon. "Développement d'un modèle numérique de l'instrument MUSE / VLT (Multi Unit Spectroscopic Explorer / Very Large Telescope)." Villeurbanne : Doc'INSA, 2009. http://docinsa.insa-lyon.fr/these/pont.php?id=jarno.

Full text
39

Jarno, Aurélien. "Développement d'un modèle numérique de l'instrument MUSE / VLT (Multi Unit Spectroscopic Explorer / Very Large Telescope)." Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0047/these.pdf.

Full text
Abstract:
During the last ten years, the race to large and very large telescopes has been accompanied by an increase in the size and complexity of instrumental projects in astrophysics, and in their performance requirements. This combination of complexity and increased requirements encourages the development of sophisticated models for these instruments. This dissertation presents the numerical model developed at the Centre de Recherche Astronomique de Lyon (CRAL) to simulate the MUSE instrument. This instrument, currently being built at CRAL, is an integral field spectrograph which will be installed on one of the large European telescopes of the VLT. The model integrates the whole acquisition chain, from the atmosphere and the telescope through to the detectors. It takes into account both optical aberrations and diffraction effects by propagating a wavefront through the instrument, according to the Fourier optics concept. The description of the software is accompanied by a discussion of the problems that were encountered and the solutions that were implemented. It is followed by examples of simulations and use cases of the software to study the expected performance of the instrument. In parallel to the modeling work described above, the product data management software used in the MUSE project was adapted to the UNIX environment. This work is also described in this dissertation.
40

Samuel, Reuven Meyer. "Design and optimization of a reconfigurable shared floating point unit in a multi-processor environment." The Ohio State University, 1998. http://rave.ohiolink.edu/etdc/view?acc_num=osu1235242668.

Full text
41

Lopez, Behar Diana. "Installation of charging infrastructure for electric vehicles in multi-unit residential buildings in British Columbia." Thesis, University of British Columbia, 2017. http://hdl.handle.net/2429/63813.

Full text
Abstract:
Electric Vehicles (EVs) contribute to the mitigation of climate change through reduced greenhouse gas emissions when powered with sustainable sources of electricity. The province of British Columbia (BC) is an attractive location for EV deployment since most of its electricity is sourced from clean renewable energy sources. Due to their driving range and potential to reduce local emissions, EVs work well in urban contexts, where most residential buildings are located. As a result, residents of Multi-Unit Residential Buildings (MURBs) are among those interested in becoming EV owners, thus requiring access to charging infrastructure, especially overnight home charging, which is the preferred charging alternative. However, most residential buildings are not equipped with charging infrastructure, and its installation presents numerous challenges that can turn into barriers. This thesis explores the implications, challenges and decision-making processes of EV charging infrastructure installation in MURBs to identify present and future barriers to infrastructure provision, as well as potential policy-driven interventions to address them. The methods used to conduct the research study include the use of conceptual frameworks and the application of systems thinking principles to map the interrelations and causalities of the problem domains as causal loop diagrams. A review of the literature identified the key problem domains. Policy recommendations were then classified based on each problem domain. First, financial or fiscal policy measures include creating incentives for EV owners and extending them to building owners, as well as programs to incentivize and provide financial aid for building owners to develop building retrofit plans. Second, regulatory policy measures include revising the regulations and addressing the rights and obligations of the stakeholders, as well as making the installation of charging stations in new MURBs mandatory.
Third, information and awareness policy measures include expanding the existing guidelines and informing the development of a long-term EV charging infrastructure plan. These policy recommendations are relevant to different stakeholders as they have the potential to inform the decisions and policy programs of the municipal and provincial government of BC, as well as other governmental and non-governmental agencies and associations.
Faculty of Applied Science
Department of Civil Engineering
Graduate
42

Badillo, Almaraz Hiram. "Numerial modelling based on the multiscale homogenization theory. Application in composite materials and structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/83924.

Full text
Abstract:
A multi-domain homogenization method is proposed and developed in this thesis based on a two-scale technique. The method is capable of analyzing composite structures with several periodic distributions by partitioning the entire domain of the composite into substructures, making use of the classical homogenization theory following a first-order standard continuum mechanics formulation. The need to develop the multi-domain homogenization method arose because current homogenization methods are based on the assumption that the entire domain of the composite is represented by one periodic or quasi-periodic distribution. However, in some cases the structure or composite may be formed by more than one type of periodic domain distribution, making the existing homogenization techniques unsuitable for analyzing cases in which more than one recurrent configuration appears. The theoretical principles used in the multi-domain homogenization method were applied to assemble a computational tool based on two nested boundary value problems represented by a finite element code in two scales: a) one global scale, which treats the composite as a homogeneous material and deals with the boundary conditions, the loads applied and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) one local scale, which obtains the homogenized response of the representative volume element or unit cell, and deals with the geometry distribution and with the material properties of the constituents. The method is based on the local periodicity hypothesis arising from the periodicity of the internal structure of the composite. The numerical implementation of the restrictions on the displacements and forces corresponding to the degrees of freedom of the domain's boundary derived from the periodicity was performed by means of the Lagrange multipliers method.
The formulation included a method to compute the homogenized non-linear tangent constitutive tensor once the threshold of nonlinearity of any of the unit cells has been surpassed. The procedure is based on performing a numerical derivation applying a perturbation technique. The tangent constitutive tensor is computed for each load increment and for each iteration of the analysis once the structure has entered the non-linear range. The perturbation method was applied at the global and local scales in order to analyze the performance of the method at both scales. A simple averaging of the constitutive tensors of the elements of the cell was also explored for comparison purposes. A parallelization process was implemented in the multi-domain homogenization method in order to speed up the computation, given the huge computational cost that the nested incremental-iterative solution entails. The effect of softening in two-scale homogenization was investigated following a smeared crack approach. Mesh objectivity was discussed first within the classical one-scale FE formulation, and the concepts presented were then extrapolated to the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis was highlighted in the computation of the specific dissipated energy when strain softening occurs. Various examples were presented to evaluate and explore the capabilities of the computational approach developed in this research. Several aspects were studied, such as analyzing different composite arrangements that include different types of materials, composites that present softening after the yield point is reached (e.g. damage and plasticity) and composites with zones that present high strain gradients. The examples were carried out on composites with one and with several periodic domains using different unit cell configurations.
The examples are compared to benchmark solutions obtained with the classical one-scale FE method.
En esta tesis se propone y desarrolla un método de homogeneización multi-dominio basado en una técnica en dos escalas. El método es capaz de analizar estructuras de materiales compuestos con varias distribuciones periódicas dentro de un mismo continuo mediante la partición de todo el dominio del material compuesto en subestructuras utilizando la teoría clásica de homogeneización a través de una formulación estándar de mecánica de medios continuos de primer orden. La necesidad de desarrollar este método multi-dominio surgió porque los métodos actuales de homogeneización se basan en el supuesto de que todo el dominio del material está representado por solo una distribución periódica o cuasi-periódica. Sin embargo, en algunos casos, la estructura puede estar formada por más de un tipo de distribución de dominio periódico. Los principios teóricos desarrollados en el método de homogeneización multi-dominio se aplicaron para ensamblar una herramienta computacional basada en dos problemas de valores de contorno anidados, los cuales son representados por un código de elementos finitos (FE) en dos escalas: a) una escala global, que trata el material compuesto como un material homogéneo. Esta escala se ocupa de las condiciones de contorno, las cargas aplicadas y los diferentes subdominios periódicos (o cuasi-periódicos) que puedan existir en el material compuesto; y b) una escala local, que obtiene la respuesta homogenizada de un volumen representativo o celda unitaria. Esta escala se ocupa de la geometría, y de la distribución espacial de los constituyentes del compuesto así como de sus propiedades constitutivas. El método se basa en la hipótesis de periodicidad local derivada de la periodicidad de la estructura interna del material. La implementación numérica de las restricciones de los desplazamientos y las fuerzas derivadas de la periodicidad se realizaron por medio del método de multiplicadores de Lagrange. 
La formulación incluye un método para calcular el tensor constitutivo tangente no-lineal homogeneizado una vez que el umbral de la no-linealidad de cualquiera de las celdas unitarias ha sido superado. El procedimiento se basa en llevar a cabo una derivación numérica aplicando una técnica de perturbación. El tensor constitutivo tangente se calcula para cada incremento de carga y para cada iteración del análisis una vez que la estructura ha entrado en el rango no-lineal. El método de perturbación se aplicó tanto en la escala global como en la local con el fin de analizar la efectividad del método en ambas escalas. Se lleva a cabo un proceso de paralelización en el método con el fin de acelerar el proceso de cómputo debido al enorme coste computacional que requiere la solución iterativa incremental anidada. Se investiga el efecto de ablandamiento por deformación en el material usando el método de homogeneización en dos escalas a través de un enfoque de fractura discreta. Se estudió la objetividad en el mallado dentro de la formulación clásica de FE en una escala y luego los conceptos expuestos se extrapolaron en el marco de la homogeneización de dos escalas. Se enfatiza la importancia de la longitud característica del elemento en un análisis multi-escala en el cálculo de la energía específica disipada cuando se produce el efecto de ablandamiento. Se presentan varios ejemplos para evaluar la propuesta computacional desarrollada en esta investigación. Se estudiaron diferentes configuraciones de compuestos que incluyen diferentes tipos de materiales, así como compuestos que presentan ablandamiento después de que el punto de fluencia del material se alcanza (usando daño y plasticidad) y compuestos con zonas que presentan altos gradientes de deformación. Los ejemplos se llevaron a cabo en materiales compuestos con uno y con varios dominios periódicos utilizando diferentes configuraciones de células unitarias. 
Los ejemplos se comparan con soluciones de referencia obtenidas con el método clásico de elementos finitos en una escala.
APA, Harvard, Vancouver, ISO, and other styles
43

Badillo, Almaraz Hiram. "Numerical modelling based on the multiscale homogenization theory. Application in composite materials and structures." Doctoral thesis, Universitat Politècnica de Catalunya, 2012. http://hdl.handle.net/10803/83924.

Full text
Abstract:
A multi-domain homogenization method is proposed and developed in this thesis based on a two-scale technique. The method can analyze composite structures with several periodic distributions by partitioning the entire domain of the composite into substructures, making use of classical homogenization theory in a first-order standard continuum mechanics formulation. The need for a multi-domain homogenization method arose because current homogenization methods assume that the entire domain of the composite is represented by one periodic or quasi-periodic distribution. In some cases, however, the structure or composite may be formed by more than one type of periodic domain distribution, making existing homogenization techniques unsuitable for cases in which more than one recurrent configuration appears. The theoretical principles of the multi-domain homogenization method were used to assemble a computational tool based on two nested boundary value problems represented by a finite element code in two scales: a) a global scale, which treats the composite as a homogeneous material and deals with the boundary conditions, the applied loads, and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) a local scale, which obtains the homogenized response of the representative volume element or unit cell and deals with the geometric distribution and the material properties of the constituents. The method rests on the local periodicity hypothesis arising from the periodicity of the composite's internal structure. The restrictions on the displacements and forces corresponding to the degrees of freedom of the domain's boundary, derived from the periodicity, were implemented numerically by means of the Lagrange multipliers method.
The formulation includes a method to compute the homogenized non-linear tangent constitutive tensor once the nonlinearity threshold of any of the unit cells has been surpassed. The procedure is based on performing a numerical differentiation using a perturbation technique. The tangent constitutive tensor is computed for each load increment and each iteration of the analysis once the structure has entered the non-linear range. The perturbation method was applied at the global and local scales in order to analyze its performance at both scales. A simple average of the constitutive tensors of the elements of the cell was also explored for comparison purposes. A parallelization process was implemented in the multi-domain homogenization method to speed up computation, given the huge computational cost that the nested incremental-iterative solution entails. The effect of softening in two-scale homogenization was investigated following a smeared crack approach. Mesh objectivity was discussed first within the classical one-scale FE formulation, and the concepts presented were then extrapolated to the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis was highlighted in the computation of the specific dissipated energy when strain softening occurs. Various examples were presented to evaluate and explore the capabilities of the computational approach developed in this research. Several aspects were studied, such as composite arrangements that include different types of materials, composites that soften after the yield point is reached (e.g. via damage and plasticity), and composites with zones that present high strain gradients. The examples were carried out on composites with one and with several periodic domains, using different unit cell configurations.
The examples are compared to benchmark solutions obtained with the classical one-scale FE method.
This thesis proposes and develops a multi-domain homogenization method based on a two-scale technique. The method can analyze composite-material structures with several periodic distributions within the same continuum by partitioning the entire domain of the composite into substructures, using classical homogenization theory through a first-order standard continuum mechanics formulation. The need to develop this multi-domain method arose because current homogenization methods assume that the entire domain of the material is represented by a single periodic or quasi-periodic distribution. In some cases, however, the structure may consist of more than one type of periodic domain distribution. The theoretical principles developed in the multi-domain homogenization method were applied to assemble a computational tool based on two nested boundary value problems, represented by a finite element (FE) code in two scales: a) a global scale, which treats the composite as a homogeneous material and handles the boundary conditions, the applied loads, and the different periodic (or quasi-periodic) subdomains that may exist in the composite; and b) a local scale, which obtains the homogenized response of a representative volume or unit cell and handles the geometry and spatial distribution of the composite's constituents as well as their constitutive properties. The method is based on the local periodicity hypothesis derived from the periodicity of the internal structure of the material. The displacement and force restrictions derived from the periodicity were implemented numerically by means of the Lagrange multipliers method.
The formulation includes a method to compute the homogenized non-linear tangent constitutive tensor once the nonlinearity threshold of any of the unit cells has been exceeded. The procedure consists of carrying out a numerical differentiation applying a perturbation technique. The tangent constitutive tensor is computed for each load increment and each iteration of the analysis once the structure has entered the non-linear range. The perturbation method was applied at both the global and local scales in order to analyze its effectiveness at both scales. A parallelization process was implemented in the method to speed up computation, given the enormous computational cost that the nested incremental-iterative solution requires. The effect of strain softening in the material was investigated using the two-scale homogenization method through a discrete fracture approach. Mesh objectivity was studied within the classical one-scale FE formulation, and the concepts presented were then extrapolated to the two-scale homogenization framework. The importance of the element characteristic length in a multi-scale analysis is emphasized in the computation of the specific dissipated energy when softening occurs. Several examples are presented to evaluate the computational proposal developed in this research. Different composite configurations were studied, including different types of materials, composites that soften after the material's yield point is reached (using damage and plasticity), and composites with zones exhibiting high strain gradients. The examples were carried out on composites with one and with several periodic domains using different unit cell configurations.
The examples are compared against reference solutions obtained with the classical one-scale finite element method.
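The perturbation technique the abstract describes can be illustrated with a minimal sketch: the homogenized tangent tensor is approximated column by column by perturbing each strain component and differencing the resulting stresses. The stress function, material constants, and step size below are hypothetical illustrations, not values taken from the thesis.

```python
import numpy as np

def numerical_tangent(stress_fn, strain, h=1e-6):
    """Approximate the tangent constitutive tensor C[i][j] = d(sigma_i)/d(eps_j)
    by a forward-difference perturbation of each strain component."""
    strain = np.asarray(strain, dtype=float)
    sigma0 = np.asarray(stress_fn(strain), dtype=float)
    C = np.zeros((sigma0.size, strain.size))
    for j in range(strain.size):
        eps = strain.copy()
        eps[j] += h                      # perturb one strain component
        C[:, j] = (np.asarray(stress_fn(eps)) - sigma0) / h
    return C

# Check against a linear-elastic law (2D plane stress in Voigt notation,
# hypothetical constants), where the tangent must equal the elastic matrix D.
E, nu = 210e3, 0.3
D = E / (1 - nu**2) * np.array([[1, nu, 0],
                                [nu, 1, 0],
                                [0, 0, (1 - nu) / 2]])
C = numerical_tangent(lambda e: D @ e, np.array([1e-3, 2e-3, 0.0]))
```

In the nonlinear range, `stress_fn` would be the unit cell's homogenized stress response, re-evaluated at each load increment and iteration as the abstract describes.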
APA, Harvard, Vancouver, ISO, and other styles
44

Buys, Gerhardus Martinus. "Formulation of a chitosan multi-unit dosage form for drug delivery to the colon / G.M. Buys." Thesis, North-West University, 2006. http://hdl.handle.net/10394/1687.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

McAdams, David. "Monotone Equilibrium in Multi-Unit Auctions." 2002. http://hdl.handle.net/1721.1/1598.

Full text
Abstract:
In a large class of multi-unit auctions of identical objects that includes the uniform-price, as-bid (or discriminatory), and Vickrey auctions, a Bayesian Nash equilibrium exists in monotone pure strategies whenever there is a finite price / quantity grid and each bidder's interim expected payoff function satisfies single-crossing in own bid and type. A stronger condition, non-decreasing differences in own bid and type, is satisfied in this class of auctions given (a) independent types and (b) risk-neutral bidders with marginal values that are (c) non-decreasing in own type and have (d) non-increasing differences in own type and others' quantities. A key observation behind this analysis is that each bidder's valuation for what he wins is always modular in own bid in any multi-unit auction in which the allocation is determined by market-clearing. This paper also provides the first proof of pure strategy equilibrium existence in the uniform-price auction when bidders have multi-unit demand and values that are not private.
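The market-clearing allocation the abstract refers to can be sketched for the uniform-price case: identical units go to the highest bids, and every winner pays one common price. The bids, tie handling, and the "lowest accepted bid" pricing convention below are illustrative assumptions, not the paper's model.

```python
def uniform_price_clearing(bids, supply):
    """Allocate `supply` identical units to the highest bids.
    `bids` is a list of (bidder, price, quantity) tuples; returns
    (allocation dict, clearing price), where the clearing price is
    set to the lowest accepted bid."""
    ranked = sorted(bids, key=lambda b: -b[1])  # highest price first
    allocation, remaining, clearing = {}, supply, None
    for bidder, price, qty in ranked:
        if remaining == 0:
            break
        won = min(qty, remaining)
        allocation[bidder] = allocation.get(bidder, 0) + won
        remaining -= won
        clearing = price                         # last (lowest) accepted bid
    return allocation, clearing

alloc, p = uniform_price_clearing(
    [("a", 10, 2), ("b", 8, 3), ("c", 5, 4)], supply=4)
# alloc == {"a": 2, "b": 2}, p == 8
```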
APA, Harvard, Vancouver, ISO, and other styles
46

Li, Kuen-Wei, and 李堃瑋. "Advanced Multi-Function Texture Unit Design." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/59531850061947054292.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
99
With the growing demand for embedded graphics applications, how to provide efficient graphics hardware acceleration has drawn much attention. Computer graphics contains two major domains, two-dimensional (2D) and three-dimensional (3D) graphics, each with a large number of applications, so general embedded platforms require acceleration support for both. This thesis proposes an advanced texture unit architecture that provides various 3D texture filtering functions, including trilinear and anisotropic filtering, as well as 2D coloring, painting, and texturing functions. The proposed design consists of a core computation unit and a set of data registers. The equations for the supported functions are decomposed into series of basic arithmetic operations, such as multiply-add-accumulate and multiply, executed by the core computation unit. Evaluating these equations for each pixel may require some pre-computed parameters, which are computed in advance outside the unit by the system's micro-controller. The equations are evaluated by the texture unit according to finite-state machine sequences stored in an on-chip control table; updating these sequences changes the functionality provided by the chip. The overall cost of the proposed unit is about 28.36k gates. In addition to the various texturing functions, this thesis also proposes an implementation of texturing for high-dynamic-range (HDR) textures. HDR textures can provide varied color detail according to the frame's global illumination environment, so the 3D rendering system has to incorporate a tone-mapping mechanism to map the HDR image into the normal color range of the output display.
To reduce the overall tone-mapping implementation cost, this thesis inserts an extra accumulator between the standard per-fragment rendering pipeline stages to accumulate the global illumination intensity based on the depth comparison result of each incoming pixel. After all pixels have passed through the pipeline, each pixel of the stored rendering result is fed into a mapping unit that generates its mapped color in the normal dynamic range. The overall cost of the additional hardware for realizing HDR textures is about 6.98k gates.
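The tone-mapping step described above can be sketched in software with a common global operator (the Reinhard operator; the thesis's hardware mapping may differ). The idea matches the abstract: estimate the frame's global illumination, then compress HDR luminances into the normal display range. The input frame and the `key` parameter below are hypothetical.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18):
    """Map HDR luminance values into [0, 1) with the global Reinhard
    operator: scale by the frame's log-average luminance (a global
    illumination estimate), then apply L / (1 + L)."""
    hdr = np.asarray(hdr, dtype=float)
    log_avg = np.exp(np.mean(np.log(hdr + 1e-8)))  # log-average luminance
    scaled = key * hdr / log_avg
    return scaled / (1.0 + scaled)

frame = np.array([[0.05, 0.2], [1.5, 40.0]])  # hypothetical HDR luminances
ldr = reinhard_tonemap(frame)                  # every value lands in [0, 1)
```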
APA, Harvard, Vancouver, ISO, and other styles
47

Chang, Ming-Fong, and 張銘峰. "A Multi-functional Multi-precision 4D Dot Product Unit with SIMD Architecture." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/p48a44.

Full text
Abstract:
Master's thesis
National Sun Yat-sen University
Department of Computer Science and Engineering
103
In modern graphics processing units, the vertex shader is an important component, mainly responsible for coordinate transformations and geometric lighting operations. When a graphics processing unit executes the 3D graphics pipeline to generate a 3D image, the vertex shader usually performs a large number of floating-point arithmetic operations. This thesis therefore proposes a multi-functional, multi-precision 4D dot product unit with a single-instruction-multiple-data (SIMD) architecture, compliant with the IEEE 754 standard for single-precision floating-point arithmetic. The proposed 4D dot product unit can execute several instructions: floating-point addition, floating-point multiplication, floating-point multiply-add, floating-point 3D dot product, and floating-point 4D dot product. Multi-functional means that a single 4D dot product unit can perform any one of the five floating-point instructions, which is equivalent to fusing five independent arithmetic units into one piece of hardware to reduce area. In addition to being multi-functional, it offers users four floating-point precision modes: 23-bit, 18-bit, 13-bit, and 7-bit. In a low-precision mode the graphics processing unit generates a distorted image, but human eyes cannot clearly identify a slight distortion of the rendered image. As a result, power and energy savings can be achieved by turning off part of the circuit and reducing switching activity when a little image distortion is allowable.
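The reduced-precision modes described above can be modeled in software by masking off low-order bits of a float32's 23-bit mantissa, keeping only the top `bits`. This is a minimal sketch of the idea, not the thesis's hardware scheme, which may round rather than truncate.

```python
import struct

def truncate_mantissa(x, bits):
    """Keep only the top `bits` of a float32's 23-bit mantissa and
    zero the rest -- a software model of a reduced-precision mode."""
    (u,) = struct.unpack("<I", struct.pack("<f", x))   # raw float32 bits
    mask = 0xFFFFFFFF << (23 - bits) & 0xFFFFFFFF      # clear low mantissa bits
    (y,) = struct.unpack("<f", struct.pack("<I", u & mask))
    return y

print(truncate_mantissa(3.14159, 7))   # coarse 7-bit mode → 3.140625
print(truncate_mantissa(3.14159, 23))  # full float32 precision
```

The coarser the mode, the larger the rounding error, mirroring the trade-off the abstract makes between image distortion and switching activity.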
APA, Harvard, Vancouver, ISO, and other styles
48

Chen, Hou-Jen, and 陳厚任. "Multi-unit Analysis:From Single Cell to Neuronal Ensemble." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/20028386338745287530.

Full text
Abstract:
Master's thesis
National Taiwan University
Institute of Biomedical Engineering
95
The advance of ensemble recording makes it possible to observe the neuronal activities of a specific brain area simultaneously, and it has become a trend in neuroscience. The data from ensemble recording are multi-unit action potentials (APs), which must be analyzed by computer software before they yield useful information. Our laboratory focuses on studying evoked responses. This thesis is mainly about developing software for the analysis of multi-unit APs, i.e. spike sorting; a data acquisition (DAQ) interface is also developed. We found that analyzing evoked multi-unit activity is very different from analyzing high-SNR spontaneous neural signals. First, we implemented standard sorting methods and applied them to our data: principal component analysis (PCA) and k-means were used to identify units from the detected spikes. We then improved spike detection, the first step of spike sorting. By using a modified cumulative energy difference (CED), the detection of multi-unit action potentials was refined for better discrimination. To see whether this detection differs significantly from the original CED, we compared the numbers and the flips of the spikes collected by the two detectors. The DAQ interfaces we developed provide counters for firing-rate counting, tape playback, and multi-channel acquisition of action potentials and/or field potentials; all of these functions support triggers from the stimulators for synchronized responses. We compared our program to two commercial products that record and analyze spontaneous APs; they are less suitable for our experiments because they do not support triggered recording. Our spike sorting was developed for evoked APs, which often overlap with each other. We are still improving our spike sorting program and trying to incorporate appropriate new techniques.
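Spike detection, the first step of spike sorting mentioned above, can be sketched as a simple upward threshold crossing with a refractory window (the thesis uses a cumulative-energy-difference criterion; the synthetic trace and parameters below are illustrative assumptions).

```python
import numpy as np

def detect_spikes(signal, threshold, refractory=30):
    """Return sample indices where `signal` crosses `threshold` upward,
    skipping `refractory` samples after each detection so a single
    spike is not counted twice."""
    idx, i, n = [], 1, len(signal)
    while i < n:
        if signal[i] >= threshold and signal[i - 1] < threshold:
            idx.append(i)
            i += refractory
        else:
            i += 1
    return idx

# Synthetic trace: Gaussian noise plus two spikes at samples 100 and 300.
rng = np.random.default_rng(0)
trace = rng.normal(0, 0.1, 400)
trace[100] += 2.0
trace[300] += 2.0
print(detect_spikes(trace, threshold=1.0))  # → [100, 300]
```

In a full sorting pipeline, windows around these indices would then be projected with PCA and clustered with k-means, as the abstract describes.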
APA, Harvard, Vancouver, ISO, and other styles
49

Chen, Hou-Jen. "Multi-unit Analysis:From Single Cell to Neuronal Ensemble." 2007. http://www.cetd.com.tw/ec/thesisdetail.aspx?etdun=U0001-1607200716120500.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

LIAO, HSUAN-YUAN, and 廖軒磒. "Multi-unit Auctions Based on Bidder's Demanding Amounts." Thesis, 2016. http://ndltd.ncl.edu.tw/handle/3e6xe7.

Full text
Abstract:
Master's thesis
National Chung Cheng University
Institute of Computer Science and Information Engineering
104
Many theses on multi-unit auctions discuss how auctioneers sell items to bidders. In this thesis we reverse the setting and consider how many items are to be bought from different sellers, where each seller has a private value (a minimum price per unit) and a limit on the number of units for sale in the auction. We propose a mechanism for buying all the units, in which two rules are applied: a decision rule, based on the sellers' units and values, decides whether a seller is assigned any units; and a pricing rule, based on the sellers' units, decides the price paid to each participating seller for the units sold. The designed mechanism satisfies two fairness properties, truthfulness and near Pareto optimality, and our algorithm is compared with a greedy algorithm. In our experimental evaluations, our algorithm prevents a seller from making a profit by lying about his value or units. Keywords: Multi-Unit Auctions, Truthfulness, Near Pareto Optimality
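The greedy algorithm the abstract compares against can be sketched as lowest-price-first procurement. This is an illustrative reading only: it pays each seller their asked price, which is exactly the kind of rule that is not truthful, motivating the thesis's separate decision and pricing rules. Seller data below is hypothetical.

```python
def greedy_procurement(sellers, demand):
    """Buy `demand` units at lowest total cost: visit sellers in order
    of ascending unit price and buy up to each seller's limit.
    `sellers` maps a name to (unit_price, unit_limit);
    returns (purchases dict, total cost)."""
    bought, cost, remaining = {}, 0, demand
    for name, (price, limit) in sorted(sellers.items(), key=lambda s: s[1][0]):
        if remaining == 0:
            break
        q = min(limit, remaining)
        bought[name] = q
        cost += q * price
        remaining -= q
    if remaining:
        raise ValueError("sellers cannot cover the demand")
    return bought, cost

buys, cost = greedy_procurement(
    {"s1": (3, 5), "s2": (2, 4), "s3": (6, 10)}, demand=8)
# buys == {"s2": 4, "s1": 4}, cost == 4*2 + 4*3 == 20
```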
APA, Harvard, Vancouver, ISO, and other styles