
Dissertations / Theses on the topic 'And zero order release'


Consult the top 50 dissertations / theses for your research on the topic 'And zero order release.'


1

Sinha, Piyush M. "Nanoengineered implantable devices for controlled drug delivery." The Ohio State University, 2005. http://rave.ohiolink.edu/etdc/view?acc_num=osu1115138930.

2

Zhi, Kaining. "Formulation and Fabrication of a Novel Subcutaneous Implant for the Zero-Order Release of Selected Protein and Small Molecule Drugs." Diss., Temple University Libraries, 2017. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/482373.

Abstract:
Pharmaceutical Sciences. Ph.D.

Diabetes is a leading cause of death and disability in the United States and requires lifelong medical treatment. Some diabetes drugs can be taken orally, while others require daily injection or inhalation to maximize bioavailability and minimize toxicity. Parenteral delivery is a group of delivery routes that bypass the human gastrointestinal tract. Among the parenteral methods, we chose the subcutaneous implant for its fast onset of action and high patient compliance. When a subcutaneous implant is used, drug release must be strictly controlled. There are three major groups of controlled-release methods: solvent-controlled systems, already used in osmotic implants; matrix-controlled systems, used in the Zoladex® implant to treat cancer; and membrane-controlled systems, widely used in tablet coating but not yet popular in implants. Building on previously reported research, we decided to build a hybrid system using both matrix and membrane control to deliver human insulin and other small-molecule drugs. The subcutaneous environment differs from the human GI tract: it has less tolerance for foreign materials, so many polymers cannot be used. From the FDA safe-excipient database, we selected albumin as our primary polymer and gelatin as a secondary choice. In a preliminary insulin diffusion study, we found that insulin mixed with albumin diffused more slowly than the control. In addition, we added zinc chloride, a metal salt that precipitates albumin, which reduced the insulin diffusion rate further. The preliminary study showed that matrix control using albumin is feasible and that zinc chloride can serve as an additional controlling factor. To fabricate an implant of appropriate size, we used lyophilization to produce a uniformly mixed matrix. Apart from albumin and human insulin, we added sucrose as a protectant and plasticizer. The fine powder obtained after freeze-drying was pressed into tablets, which were sealed in Falcon® cell culture inserts; the insert provides a cylindrical shape and a 0.3 cm² surface area for drug release. Insulin release studies showed zero-order kinetics for prototypes with zinc chloride or a 0.4 micron pore-size membrane. Caffeine was used as a model drug to investigate the release mechanism. Three membrane pore sizes (0.4, 3, and 8 micron) were tested with the same formulation. While the 0.4 micron prototypes released most slowly, the 3 micron prototypes surprisingly released caffeine faster than the 8 micron implants. Calculating porosity from pore size, we concluded that the percentage of open area on a membrane is the key factor controlling caffeine release; 0.4 micron membranes were used in subsequent work. Increasing the percentage of albumin in the excipient slowed caffeine release further, but the zero-order release lasted only 3 days. After sucrose was replaced with gelatin, a 5-day zero-order release of caffeine was achieved. From these results we proposed a "Three Phase" drug-release mechanism controlled by both membrane and matrix. Seven other small-molecule drugs were tested with the prototype; cloudy suspensions were observed with slightly soluble drugs, and the "Three Phase" mechanism was updated to include the influence of drug solubility. The data show that, for the same formulation and membrane, the release rate follows drug solubility at pH 7.4, indicating that the prototype might be used for different drugs according to their solubility.
Finally, with this information we designed a "smart insulin implant" with dose adjustment: an electrically controlled implant with membranes of different porosity, in which a solenoid serves as the mechanical arm that adjusts membrane porosity. 3-D printing was used to produce the first physical prototype of the implant, and an insulin implant with a clinically effective insulin release rate was achieved. Temple University--Theses
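A note for readers comparing these abstracts: the "zero-order" label used throughout this list means release at a constant rate. A compact sketch of the standard textbook relations behind membrane-controlled reservoir systems of the kind described above (generic symbols, not notation or values from the thesis):

```latex
% Zero-order release: the cumulative amount Q released grows linearly in time.
Q(t) = Q_0 + k_0\,t
% For a membrane of area A, open-area fraction (porosity) \varepsilon and
% thickness h, with drug diffusivity D and saturation solubility C_s, Fick's
% first law gives a constant flux while the implant core remains saturated:
\frac{dQ}{dt} = \frac{D\,A\,\varepsilon\,C_s}{h} = k_0 ,
% consistent with the observed dependence of the release rate on membrane
% open area and on drug solubility at pH 7.4.
```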
3

Liu, Quan. "Development of a novel gastro-retentive delivery system using alfuzosin HCl as a model drug." Diss., Temple University Libraries, 2010. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/80170.

Abstract:
Pharmaceutics. Ph.D.

The objectives of this project encompass the design and development of a drug delivery system to continuously deliver therapeutic agents from the stomach to the proximal region of the intestine. The delivery system was designed to have sufficient gastric residence time together with near zero-order release kinetics. The physicochemical properties of the model drug (alfuzosin HCl) relevant to formulation development were evaluated, and excipients were selected based on studies of their physicochemical properties and compatibility with the active ingredient. Gastro-retentive dosage forms have attracted interest in recent years as a practical approach to drug delivery to the upper GI tract and to release prolongation and absorption. These dosage forms are particularly suitable for drugs with local effects on the gastric mucosa. Other candidates include drugs that are absorbed mainly in the upper small intestine, drugs that are unstable in the basic environment of the distal intestine and colon, and drugs with low solubility at elevated pH (i.e., weak bases). To develop the gastro-retentive delivery system, the following steps were taken. First, possible incompatibilities between the model drug and the intended excipients were investigated: the stability and physicochemical properties of the active agent and its mixtures with excipients were studied using analytical techniques such as Raman spectroscopy and differential scanning calorimetry (DSC), and no incompatibilities were detected. Second, Kollidon SR, a relatively new release-rate-controlling polymer, was incorporated into the final formulation. For a solid dosage form, the flowability of the final powder mix during manufacturing and its intrinsic compressibility are critical, so an in-depth compaction study of Kollidon SR was carried out with a compaction simulator; the flowability, swelling and erosion behavior, and release-rate-retarding properties of Kollidon SR were also assessed. The final oral delivery system was a non-effervescent monolithic matrix based on Kollidon SR and polyethylene oxide (PEO) 303, made by direct compression. In vitro evaluation showed that the designed system released the active content in a near zero-order manner. The dosage form was buoyant in pH 2.0 acidic buffer with no flotation lag time, which minimizes the possibility of early gastric emptying.
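For readers who want to reproduce the "near zero-order" characterization used in dissolution work like this, a minimal curve-fitting sketch; the data array is invented for illustration, and availability of `numpy`/`scipy` is assumed:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative-release data: fraction released at each time point (h).
t = np.array([0.5, 1, 2, 4, 6, 8, 12])
q = np.array([0.06, 0.11, 0.22, 0.41, 0.60, 0.78, 0.97])  # illustrative only

def zero_order(t, k0):
    # Q(t) = k0 * t : constant release rate.
    return k0 * t

def first_order(t, k1):
    # Q(t) = 1 - exp(-k1 * t) : rate proportional to drug remaining.
    return 1.0 - np.exp(-k1 * t)

for name, model in [("zero-order", zero_order), ("first-order", first_order)]:
    popt, _ = curve_fit(model, t, q)
    resid = q - model(t, *popt)
    r2 = 1 - np.sum(resid**2) / np.sum((q - q.mean())**2)
    print(f"{name}: k = {popt[0]:.3f}, R^2 = {r2:.4f}")
```

The model with the higher R² (here, the zero-order fit for a well-behaved matrix system) is the conventional basis for claims like "near zero-order kinetics".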
4

Guzman Cardozo, Gustavo A. "Bimodal Amphiphilic Polymer Conetworks: Structure-Property Characterization, Processing and Applications." University of Akron / OhioLINK, 2016. http://rave.ohiolink.edu/etdc/view?acc_num=akron1471428782.

5

Dekyndt, Bérengère. "La libération modifiée de principes actifs, développement de deux approches." Thesis, Lille 2, 2015. http://www.theses.fr/2015LIL2S005.

Abstract:
Individualized and targeted therapies are currently being developed, and dosage forms are evolving in parallel to control drug release and bring the drug as close as possible to its site of interest. Solid oral dosage forms are the most common pharmaceutical formulations: easy to use, painless, and carrying a reduced infection risk. In designing them, it is also possible to adjust the drug release. Two approaches are discussed in this manuscript: the first targets drug release to the therapeutic site of action, the colon; the second consists of controlling the drug release to maintain a constant concentration, minimizing side effects and periods of sub-therapeutic concentrations at the site of action.
The first approach: the treatment of colonic diseases such as Inflammatory Bowel Disease (IBD) can be significantly improved via local drug delivery. One approach is to use polysaccharide coatings that are degraded by enzymes secreted by the colonic microflora. However, the lack of a reliable in vitro test simulating conditions in a living colon, and the potential impact of associated antibiotic treatments on the quantity and quality of the bacteria present and the enzymes they secrete, are obstacles to its development. The aim of the study was to screen polysaccharides suitable for the development of new colonic-release formulations. After this selection, the drug release of the selected formulations was evaluated by a method using the stools of IBD patients treated or not with antibiotics. Finally, the use of bacterial mixtures as a possible substitute for fresh fecal samples was evaluated.
The second approach: coated pellets offer great potential for controlled drug delivery. However, constant release rates are difficult to achieve with this type of dosage form if the drug is freely water-soluble. This is because diffusional mass transport generally plays the dominant role: with time, the drug concentration within the system decreases, reducing the concentration gradient that is the driving force for release. Such release kinetics can be inappropriate for an efficient and safe drug treatment. Despite the great practical importance of this formulation challenge, surprisingly few effective strategies are known. In this study, a novel approach is presented based on sequential layers of drug and polymer (the latter initially free of drug), providing a non-homogeneous initial drug distribution combined with lag-time effects and partial initial drug diffusion towards the pellet's core. The type, number, thickness, and sequence of the drug and polymer layers were varied; a rather simple four-layer system (two drug layers and two polymer layers) achieved an approximately constant drug release over 8 h.
6

Vainio, Tanja 1974. "Intelligent order scheduling and release in a build to order environment." Thesis, Massachusetts Institute of Technology, 2004. http://hdl.handle.net/1721.1/34780.

Abstract:
Thesis (M.B.A.)--Massachusetts Institute of Technology, Sloan School of Management; and (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering; in conjunction with the Leaders for Manufacturing Program at MIT, 2004. Includes bibliographical references (p. 75-76).
Dell's computer manufacturing process involves a complex system of material flow and assembly. This includes intelligent replenishment of sub-components from local warehouses according to the manufacturing schedule; just-in-time manufacturing of custom-configured computer systems, including hard-drive image and custom software download; packaging the unit for delivery; order accumulation; and, finally, distribution and shipping to the customer. This thesis examines Dell's current order fulfillment process and suggests methods that can help Dell meet or exceed customers' delivery-time expectations at minimum logistics cost in the just-in-time environment. By manufacturing and shipping products based on certain times of the day, air shipments to certain destinations could be converted to less expensive ground shipments. However, this is only possible when the entire fulfillment process is integrated in such a way that eligible ground shipments meet their appropriate shipping windows. This analysis shows that optimizing these windows requires an examination not only of the average cycle time in each phase but also of the impact that cycle-time variations have on the success of this air-to-ground conversion strategy. Through the use of simulation models, I found that the key factors in reducing logistics cost are setting appropriate scheduling rules for each order size and reducing cycle-time variation.
by Tanja Vainio. S.M. M.B.A.
7

Azoza, M. A. "Disaggregation and order release in manufacturing systems." Thesis, University of Nottingham, 1987. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.378757.

8

Aktug, Onur. "An Agent-based Order Review And Release System In Make-to-order Production." Master's thesis, METU, 2004. http://etd.lib.metu.edu.tr/upload/2/12605611/index.pdf.

Abstract:
Workload control (WLC) systems constitute a framework mainly for input-output control, regulating both the queue of jobs entering the workshop and the flow of finished goods leaving it. This study is concerned with the job entry and release level of WLC, which maintains a pool of unreleased jobs for the controlled release of work. While most studies of WLC deal with centralized workload control, our study decentralizes job entry and release control and gives workstations more power in scheduling decisions. Job information is sent to the workstations by a mediator, which acts as the supervisor of the workstations. Both the mediator and the workstations are represented by agents in a distributed system. Jobs' routing information is assumed to be known in advance. The developed system is verified and validated by means of test runs, and the results are analyzed.
9

Alsuhibany, Suliman Abdullah. "Quantitative analysis of the release order of defensive mechanisms." Thesis, University of Newcastle upon Tyne, 2014. http://hdl.handle.net/10443/2549.

Abstract:
Dependency on information technology (IT) and computer and information security (CIS) has become a critical concern for many organizations. This concern centres on protecting the secrecy, confidentiality, integrity and availability of information. To address it, defensive mechanisms encompassing a variety of services and protections have been proposed to protect system resources from misuse. Most of these defensive mechanisms, such as CAPTCHAs and spam filters, rely in the first instance on a single algorithm as the defensive mechanism. Attackers eventually break each such mechanism, so each algorithm ultimately becomes useless and the system is no longer protected. Although a broken algorithm is replaced by a new one, little attention has been paid to treating a set of algorithms as the defensive mechanism. This thesis looks at a set of algorithms as a holistic defensive mechanism. Our hypothesis is that the order in which a set of defensive algorithms is released has a significant impact on the time taken by attackers to break the combined set. The rationale behind this hypothesis is that attackers learn from their attempts, and that the release schedule of defensive mechanisms can be adjusted so as to impair the learning process. To test the hypothesis, an experimental study involving forty participants was conducted to evaluate the effect of algorithm release order on the time taken to break the algorithms; the experiment also explored how the attackers' learning process could be observed. The results showed that the order in which algorithms are released has a statistically significant impact on the time attackers take to break all of them. Based on these results, a model was constructed using Stochastic Petri Nets, facilitating theoretical analysis of the release-order approach. Moreover, a tailored optimization algorithm is proposed, using a Markov Decision Process model, to obtain efficiently the optimal release strategy for any given model by maximizing the time taken to break the set of algorithms. As the hypothesis rests on attackers' learning while interacting with the system, the Attacker Learning Curve (ALC) concept is developed. Based on empirical ALC results, an attack-strategy detection approach is introduced and evaluated, achieving a detection success rate higher than 70%. The empirical findings of this detection approach provide a new understanding not only of how to detect the attack strategy used, but also of how to track it through classification probabilities, which may provide an advantage in optimising the release order of defensive mechanisms.
10

Ratsibi, Humbelani Edzani. "Laser drilling of metals and glass using zero-order bessel beams." University of the Western Cape, 2013. http://hdl.handle.net/11394/5428.

Abstract:
Magister Scientiae - MSc.

This dissertation consists of two main sections. The first focuses on generating zero-order Bessel beams using axicons. An axicon with an opening angle γ = 5° was illuminated with a Gaussian beam of width ω₀ = 1.67 mm from a cw fiber laser with central wavelength λ = 1064 nm to generate zero-order Bessel beams with a central spot radius r₀ = 8.3 ± 0.3 μm and propagation distance ½z_max = 20.1 ± 0.5 mm. The central spot size of a Bessel beam changes slightly along the propagation distance; the central spot radius r₀ can be varied by changing the opening angle γ of the axicon and the wavelength of the beam. The second section focuses on applications of the generated Bessel beams in laser micro-drilling. A Ti:Sapphire pulsed femtosecond laser (λ = 775 nm, ω₀ = 2.5 mm, repetition rate kHz, pulse energy mJ, and pulse duration fs) was used to generate the Bessel beams for drilling stainless steel thin sheets of thickness 50 μm and 100 μm and microscope glass slides 1 mm thick. The central spot radius was r₀ = 15.9 ± 0.3 μm and ½z_max = 65.0 ± 0.5 mm. The effect of the Bessel beam shape on the quality of the holes was analysed and the results discussed. It was observed that Bessel beams drill holes of better quality in transparent microscope glass slides than in stainless steel sheet. The holes drilled in stainless steel sheets deviated from circularity on both the top and bottom surfaces for both thicknesses; however, the holes maintained the same shape on both sides of each sample, indicating that the walls are close to parallel. The holes drilled in the glass slides were circular and their diameters could be measured. The measured hole diameter (15.4 ± 0.3 μm) is smaller than the diameter of the central spot (28.2 ± 0.1 μm) of the Bessel beam. Increasing the pulse energy increased the diameter of the drilled hole to a value close to the measured diameter of the central spot.
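The beam parameters quoted above follow from the standard axicon relations for zero-order Bessel beams (textbook formulas, not taken from the dissertation); with refractive index n, axicon opening angle γ, and input beam radius ω₀:

```latex
% Ray deflection behind a thin axicon of opening angle \gamma and index n:
\beta \approx (n-1)\,\gamma
% Radial intensity ~ J_0^2(k r \sin\beta); the central core radius is set by
% the first zero of J_0 (at 2.405), with k = 2\pi/\lambda:
r_0 = \frac{2.405}{k\,\sin\beta}
% Range over which the Bessel profile persists for a Gaussian input of radius \omega_0:
z_{\max} \approx \frac{\omega_0}{\tan\beta}
```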
11

Olinde, Lindsay. "Sediment Oxygen Demand Kinetics." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/42437.

Abstract:
Hypolimnetic oxygen diffusers increase sediment oxygen demand (SOD) and, if not accounted for in design, can further exacerbate anoxic conditions. A study using extracted sediment cores, including both field and laboratory experiments, was performed to investigate SOD kinetics in Carvin's Cove Reservoir, a eutrophic water-supply reservoir for Roanoke, Virginia. A bubble-plume diffuser is used in Carvin's Cove to replenish oxygen consumed while the reservoir is thermally stratified. The applicability of zero-order, first-order, and Monod kinetics to describing transient and steady-state SOD was modeled using analytical and numerical techniques. Field and laboratory experiments suggested that first-order kinetics characterize Carvin's Cove SOD. SOD calculated from field experiments reflected diffuser flow changes. Laboratory experiments using mini-diffusers to vary dissolved-oxygen concentration and turbulence were conducted at 4°C and 20°C; similar to the field observations, the laboratory results followed changes in mini-diffuser flow, and kinetic-temperature relationships were also observed. A definitive conclusion could not be drawn on the broad applicability of first-order kinetics to Carvin's Cove SOD due to variability within the field experiments. However, in situ experiments are underway that should assist in the overall understanding of the reservoir's SOD kinetics. Master of Science.
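The three kinetic models compared in this study take the standard forms below, with C the dissolved-oxygen concentration at the sediment interface and the rate constants generic (not fitted values from the thesis):

```latex
% Zero-order: oxygen uptake independent of concentration C.
\frac{dC}{dt} = -k_0
% First-order: uptake proportional to C (the form favored by this study).
\frac{dC}{dt} = -k_1\,C
% Monod: saturating kinetics with half-saturation constant K_s.
\frac{dC}{dt} = -\frac{\mu_{\max}\,C}{K_s + C}
```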
12

Mhanna, Elissa. "Beyond gradients : zero-order approaches to optimization and learning in multi-agent environments." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG123.

Abstract:
The rise of connected devices and the data they produce has driven the development of large-scale applications. These devices form distributed networks with decentralized data processing. As the number of devices grows, challenges like communication overhead and computational costs increase, requiring optimization methods that work under strict resource constraints, especially where derivatives are unavailable or costly. This thesis focuses on zero-order (ZO) optimization methods, which are ideal for scenarios where explicit function derivatives are inaccessible. ZO methods estimate gradients based only on function evaluations, making them highly suitable for distributed and federated learning environments where devices collaborate to solve global optimization tasks with limited information and noisy data.
In the first chapter, we address distributed ZO optimization for strongly convex functions across multiple agents in a network. We propose a distributed zero-order projected gradient descent algorithm that uses one-point gradient estimates, where the function is queried only once per stochastic realization and noisy function evaluations estimate the gradient. The chapter establishes the almost sure convergence of the algorithm and derives theoretical upper bounds on the convergence rate. With constant step sizes, the algorithm achieves a linear convergence rate; this is the first time this rate has been established for one-point (and even two-point) gradient estimates of stochastic functions. We also analyze the effects of diminishing step sizes, establishing a convergence rate that matches the lower bounds of centralized ZO methods.
The second chapter addresses a central challenge of federated learning (FL): the communication bottleneck imposed by transmitting large amounts of data over limited-bandwidth networks. We propose a novel zero-order federated learning (ZOFL) algorithm that reduces communication overhead using one-point gradient estimates: devices transmit scalar values instead of large gradient vectors, lowering the amount of data sent over the network. Moreover, the algorithm incorporates wireless communication disturbances directly into the optimization process, eliminating the need for explicit knowledge of the channel state. This approach is the first to integrate wireless channel properties into a learning algorithm, making it resilient to real-world communication issues. We prove the almost sure convergence of this method in nonconvex settings, establish its convergence rate, and validate its effectiveness through experiments.
The final chapter extends the ZOFL algorithm to two-point gradient estimates. Unlike one-point estimates, which rely on a single function evaluation, two-point estimates query the function twice, providing a more accurate gradient approximation and enhancing the convergence rate, while maintaining the communication efficiency of one-point estimates (only scalar values are transmitted) and relaxing the assumption that the objective function must be bounded. The proposed two-point ZO method achieves linear convergence rates for strongly convex and smooth objective functions. For nonconvex problems, the method shows improved convergence speed, particularly when the objective function is smooth and K-gradient-dominated, where a linear rate is also achieved. We also analyze the impact of constant versus diminishing step sizes and provide numerical results showing the method's communication efficiency compared to other federated learning techniques.
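The one-point and two-point gradient estimates at the heart of this thesis have simple generic forms. A minimal sketch of the standard randomized estimators (not the thesis's exact algorithm; the step size, smoothing radius, and toy objective are arbitrary choices):

```python
import numpy as np

def one_point_grad(f, x, delta, rng):
    """One-point estimate: a single (possibly noisy) function query per step.
    In expectation it approximates the gradient of a smoothed version of f."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)                      # random unit direction
    return (x.size / delta) * f(x + delta * u) * u

def two_point_grad(f, x, delta, rng):
    """Two-point estimate: two queries, lower variance, better rates."""
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)
    return (x.size / (2 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Usage: zero-order gradient descent on a toy strongly convex quadratic.
rng = np.random.default_rng(0)
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(2000):
    x -= 0.01 * two_point_grad(f, x, delta=1e-3, rng=rng)
print(np.round(x, 3))  # approaches the minimizer at all-ones
```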
13

Venteris, Erik Ray. "Spatial sampling, landscape modeling, and interpretation of soil organic carbon on zero-order watersheds /." The Ohio State University, 2002. http://rave.ohiolink.edu/etdc/view?acc_num=osu1486459267522259.

14

MIERZWA, JOSE C. "Estudo sobre tratamento integrado de efluentes quimicos e radioativos, introduzindo-se o conceito de descarga zero." Repositório Institucional do IPEN, 1996. http://repositorio.ipen.br:8080/xmlui/handle/123456789/10448.

15

Martin, Christopher Reed. "Reduced-Order Models for the Prediction of Unsteady Heat Release in Acoustically Forced Combustion." Diss., Virginia Tech, 2009. http://hdl.handle.net/10919/30238.

Abstract:
This work presents novel formulations for models describing acoustically forced combustion in three disjoint regimes: highly turbulent, laminar, and the moderately turbulent flamelet regime. Particular emphasis is placed on simplification of the models to facilitate analytical solutions while still reflecting real phenomenology. Each derivation begins with general reacting flow equations, identifies a small subset of physics thought to be dominant in the corresponding regime, and makes appropriate simplifications. Each model is non-dimensionalized, and both naturally occurring and popular dimensionless parameters are investigated. The well-stirred reactor (WSR) is used to characterize the highly turbulent regime. It is confirmed that, consistent with the regime to which it is ascribed for static predictions, the WSR is most appropriate for predicting the dynamics of chemical kinetics. Both convection-time and chemical-time dynamics are derived as explicit closed-form functions of dimensionless quantities such as the Damköhler number and several newly defined parameters. The plug-flow reactor (PFR) is applied to a laminar, burner-stabilized flame using a number of established approaches, but with new attention to developing simple yet accurate expressions governing the flame's frequency response. The system is studied experimentally using a ceramic honeycomb burner combusting a methane-air mixture, numerically using a nonlinear FEA solver, and analytically by exact solution of the simplified governing equations. Accurately capturing non-unity Lewis-number effects is essential to capturing both the static and the dynamic response of the flame, and it is shown that the flame dynamics can be expressed solely in terms of static quantities. Finally, a Reynolds-averaged flamelet model is applied to a hypothetical burner-stabilized flame with homogeneous, isotropic turbulence. Exact solution with a simplified turbulent reaction model closely parallels that of the plug-flow reactor, demonstrating a relation between static quantities and the flame frequency response. Comparison with published experiments using considerably more complex flame geometries yields unexpected similarities in frequency scale and phase behavior; the observed differences are attributed to specific physical phenomena that were deliberately omitted to simplify the model's derivation. Ph.D.
16

Buschi, Daniele. "Zero-intelligence Models e crisi di liquidità endogene nei mercati finanziari." Bachelor's thesis, Alma Mater Studiorum - Università di Bologna, 2020. http://amslaurea.unibo.it/21933/.

Abstract:
The aim of this thesis is to describe the dynamics of markets, and in particular of prices, from a physical point of view. Models and approximations are presented that draw on the typical approach of the physics of complex systems, outlining a fundamental collaboration (between physics and economics) that is undergoing significant development, above all thanks to the large amount of data available today. The thesis is structured in three main chapters. The first chapter describes the elementary dynamics of prices, illustrating the basic principles of the random walk; it outlines the concepts needed to understand the zero-intelligence model and discusses the choice of a model that describes price dynamics by emphasizing the consequences of the institutional structure of reference rather than the rationality of agents. The second chapter presents the so-called zero-intelligence model, which can describe the dynamics of one of the most common price-formation microstructures: the limit order book. The development of this model is treated in a section that uses dimensional analysis to make predictions about various quantities of economic interest, confirmed by numerically simulated data, and a final section in which, through the mean-field approximation, two theoretical approaches are outlined that explain the results of the model more formally. A third chapter discusses a modification of the model analyzed in the second chapter, obtained by adding feedback. This factor strongly affects the probability of the appearance of liquidity crises and, although a phase transition has not been rigorously demonstrated, sharp changes in this probability are observed depending on the intensity of the feedback itself.
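As an illustration of the class of model discussed in this abstract, a deliberately minimal zero-intelligence limit order book simulation; the event probabilities, price grid, and seeding of the book are arbitrary choices, not parameters from the thesis:

```python
import random

def simulate_zi_book(steps=10_000, tick_range=100, seed=1):
    """Minimal zero-intelligence order book: random limit orders, market orders
    and cancellations; returns the mid-price series. Illustrative only."""
    random.seed(seed)
    bids, asks = {50: 1}, {51: 1}            # price level -> resting depth
    mids = []
    for _ in range(steps):
        u = random.random()
        if u < 0.5:                           # limit order on a random side
            if random.random() < 0.5:         # buy limit strictly below best ask
                p = random.randrange(1, min(asks))
                bids[p] = bids.get(p, 0) + 1
            else:                             # sell limit strictly above best bid
                p = random.randrange(max(bids) + 1, tick_range)
                asks[p] = asks.get(p, 0) + 1
        elif u < 0.75:                        # market order consumes opposite best
            book, best = (asks, min(asks)) if random.random() < 0.5 \
                         else (bids, max(bids))
            if book[best] > 0:
                book[best] -= 1
            if book[best] == 0 and len(book) > 1:
                del book[best]
        else:                                 # cancel one randomly chosen level
            book = bids if random.random() < 0.5 else asks
            if len(book) > 1:
                del book[random.choice(list(book))]
        mids.append((max(bids) + min(asks)) / 2)
    return mids

print(simulate_zi_book()[-5:])  # mid-price diffuses despite "zero intelligence"
```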
17

CAMPANELLA, Lucia. "NET ZERO ENERGY BUILDINGS: AN ITALIAN CASE STUDY. ANALYSIS OF THE ENERGY BALANCE AND RETROFIT HYPOTHESIS IN ORDER TO REACH THE NET ZERO ENERGY TARGET." Doctoral thesis, Università degli Studi di Palermo, 2014. http://hdl.handle.net/10447/90763.

Abstract:
In recent years the concept of the Net Zero Energy Building (NZEB) has been developing and spreading in the scientific community. The work presented in this thesis was largely developed in the context of the International Energy Agency (IEA) joint programme Solar Heating and Cooling (SHC) Task 40 / Energy Conservation in Buildings and Community Systems (ECBCS) Annex 52: Towards Net Zero Energy Solar Buildings. Energy consumption in Europe for residential and commercial buildings is around 40% of total production, so it is extremely important to optimize both the implementation of energy-efficiency measures and the use of renewable resources that can be harvested on site. When energy-efficiency measures are successfully combined with on-site renewable energy sources and energy consumption is equal (or nearly equal) to energy production, the result can be referred to as a "near net zero energy" or "net zero energy" building. Chapter 2 describes the main typologies of NZEB, the most important being the site ZEB and the source ZEB, depending on where the energy balance is calculated. After a brief description of the most common NZEB definitions and classifications, many examples are examined and their features analysed in relation to their climates, in order to show different solutions and approaches to the problem of reaching net zero energy balances (Chapter 3). This thesis examines an Italian case study: the Leaf House (LH) in Ancona, Italy. The Leaf House is one of the best case studies of the IEA/SHC/ECBCS Task 40 programme in terms of the thermo-physical characteristics of the building envelope, the thermal plant, the building automation system, and energy monitoring. Chapter 5 describes the Leaf House case study in detail, and Chapter 6 the model implemented in the TRNSYS software, which reproduces the energy production system and the thermal features of the building and compares simulated with monitored data. Particular attention is paid to the Leaf House monitoring system, which allows the assessment of the building's energy balance. Careful analysis of the monitored data motivates the search for strategies to reach the zero-energy target. After simulation of the real building systems in TRNSYS (Chapter 6), several scenarios were investigated to improve the energy performance of the building, and the implemented model was properly calibrated. The study proposes a detailed analysis of the case study to show the energy savings an NZEB can achieve in comparison with a non-net-zero-energy building; the re-design options are then proposed, and the results evaluated with TRNSYS are described in detail. The monitored situation shows an energy consumption of 37 MWh for the year 2009; around 6 MWh of this is consumed by the monitoring equipment, and the energy production is lower than the consumption. A simple solution for reaching NZEB status is to increase production, e.g. by substituting the PV panels with more efficient ones, so that the energy balance reaches zero over the year. Nevertheless, the problem can also be addressed by reducing the energy needs.
In this direction, the geothermal heat pump and its energy needs were analyzed in detail. It was verified that the COP of the machine is well below the declared 4.6, and that an effective COP of 4.6 could lead to significant energy savings. The idea of reaching higher efficiencies led to the proposal of a different plant scheme that excludes a heat exchanger so as to reduce energy losses as much as possible. While it is possible to reach NZEB status simply by substituting the PV panels, the investigation of further energy savings was continued. The Italian case study thus allows the identification of strategies to improve the energy performance of a near net zero energy building so as to reach the NZEB target, and it represents an Italian reference for others who wish to build NZEBs in the Italian context. Two annexes complete this work: the first presents the objectives and activities of the Task 40/ECBCS programme, while the second presents the building description file created in the TRNBUILD environment to describe the Leaf House building envelope.
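The "net zero" condition evaluated for the Leaf House is commonly written, in the IEA Task 40 literature this thesis builds on, as a weighted annual import/export balance; the generic form below is a sketch, since weights and balance boundaries vary across definitions:

```latex
% Annual net zero energy balance (Task 40-style weighted balance):
\sum_i w_{e,i}\,E_{\mathrm{exp},i} \;-\; \sum_i w_{d,i}\,E_{\mathrm{del},i} \;\ge\; 0
% i indexes energy carriers; E_del is delivered (imported) energy, E_exp is
% exported on-site generation, and w are carrier weighting factors (site,
% primary-energy, cost or emission weights, depending on the definition used).
```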
18

Suer, Bekir Ilker. "Order-driven Flexibility Management In Make-to-order Companies With Flexible Shops." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611045/index.pdf.

Full text
Abstract:
In this study, an operational (short term) flexibility management approach is proposed for make-to-order companies with flexible shops. Order Review and Release (ORR) techniques and typical Flexible Manufacturing System (FMS) decisions are combined in this method. The proposed method prepares a shop environment by allocating process and routing flexibility types at different levels to the shop in each production cycle. Variety, volume, and criticality of the part types in the pool and the anticipated orders constitute the main inputs for flexibility allocation. A flexibility management policy is introduced and determination of the proper policy is realized with the integrated utilization of mathematical programming and simulation modeling. An experimental study is performed to investigate the effects of proposed method on a hypothetical flexible shop. Results show that with an appropriate policy, periodical and online flexibility management can be an effective tool to cope with uncertainty in demand if combined with ORR techniques.
19

Jarrah, Bilal. "Fractional Order and Inverse Problem Solutions for Plate Temperature Control." Thesis, Université d'Ottawa / University of Ottawa, 2020. http://hdl.handle.net/10393/40551.

Full text
Abstract:
Surface temperature control of a thin plate is investigated: temperature is controlled on one side of the plate using temperature measurements from the other side. This is a decades-old problem, reactivated more recently by the awareness that it is a fractional-order problem, which justifies investigating the use of fractional-order calculus. The approach is based on a transfer function obtained from the solution of the one-dimensional heat conduction equation, which results in a fractional-order s-domain representation. Both the inverse-problem approach and the fractional-controller approach are studied here to control the surface temperature: the first uses the inverse problem plus a proportional-only controller, and the second uses only the fractional controller. The direct problem is defined as the ratio of the output to the input, while the inverse problem is defined as the ratio of the input to the output. Both transfer functions were obtained, and the resulting fractional-order transfer functions were approximated using Taylor expansion and zero-pole expansion. The finite-term transfer functions were used to form an open-loop control scheme and a closed-loop control scheme. Simulation studies were done for both control schemes, and experiments were carried out for the closed-loop schemes. For the fractional-controller approach, the fractional controller was designed and used in a closed-loop scheme; simulations were done for fractional-order-integral, fractional-order-derivative, and fractional-integral-derivative controller designs, while the experimental study focused on the fractional-order-integral-derivative design. The fractional-order controller results are compared to integer-order controller results, and the advantages of using fractional-order controllers are evaluated. Both zero-pole and Taylor expansions are used to approximate the plant transfer functions, and the results of the two expansions are compared. The results show that the fractional-order controller performs better, in particular with respect to overshoot.
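The half-order character mentioned above comes directly from the one-dimensional heat equation: taking the Laplace transform turns the spatial operator into functions of √s. The sketch below is generic (the exact transfer function depends on the boundary conditions, so the cosh form is indicative only):

```latex
% 1-D heat conduction in the Laplace domain (\alpha: thermal diffusivity):
\alpha\,\frac{\partial^{2}\bar{T}(x,s)}{\partial x^{2}} = s\,\bar{T}(x,s)
\;\;\Rightarrow\;\;
\bar{T}(x,s) = A\,e^{-x\sqrt{s/\alpha}} + B\,e^{+x\sqrt{s/\alpha}} .
% Relating the two faces of a plate of thickness L therefore yields
% transcendental transfer functions in \sqrt{s}, e.g. of the form
G(s) \sim \frac{1}{\cosh\!\big(L\sqrt{s/\alpha}\big)} ,
% which motivates fractional controllers such as
C(s) = K_p + K_i\,s^{-\lambda} + K_d\,s^{\mu} .
```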
20

Haimene, Rachel N. "Presentation of the Namibia Zero Order Stations and Information Site for Directorate of Survey and Mapping." Thesis, University of Gävle, Department of Technology and Built Environment, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hig:diva-239.

Full text
Abstract:
This project focuses on the presentation of the Namibia Zero Order Stations, including descriptions of the 21 stations across the country, and on creating an information site for the Directorate of Survey and Mapping in Namibia. The main reason for implementing the web site is the distribution of information and data to domestic and international clients. Most of the materials and information used in this project were available in digital format; some information was collected from the Directorate of Survey and Mapping of Namibia, Swedesurvey of Sweden, and Asci of Sweden, as well as from the internet and library facilities. It was therefore important to analyse and display the geo-spatial data before creating the web site. The computer makes it possible to link field documents, maps, graphic documents, and other related information using hyperlinks, making it easier to communicate and to publish maps via the internet.
21

Poon, Wing-hong Stanley. "Re-integration of offenders and protection of public order: a case study on the Hong Kong release under supervision scheme." Thesis, The University of Hong Kong, 1995. http://sunzi.lib.hku.hk/HKUTO/record/B36194955.

22

Bulut, Aykut. "Order Driven Flexible Shop Management." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613395/index.pdf.

Abstract:
The difficulties of responding effectively to variation in product order mixes and load levels in make-to-order production are well known. Most existing approaches consider releasing jobs to the shop in a controlled way (input control), changing capacity levels (output control), order acceptance under different definitions of workload, and due-date assignment. Controlling the processes, the routing options, and the order-accepting capacity with tool combinations that reduce tool loading has not been considered properly. Yet the manufacturing flexibility provided by computer numerically controlled (CNC) machines offers both part variety and due-date achievement given a reasonable amount of extra capacity. A variety of experimental and field studies have demonstrated the positive effects of flexibility on due-date achievement in make-to-order production, leaving little doubt. However, it has been commonplace to treat flexibility only as a strategic issue rather than as a means of planning and management in short- or medium-term decisions. In this study, the focal issue is the benefit of providing three kinds of flexibility in a periodic setting, considering the order pool and the acceptance probability of new arrivals. If the required flexible environment is provided, the need for detailed job loading, route planning, and scheduling is reduced to a low level, while high shop congestion and due-date achievement are realized simultaneously. A typical realistic shop with a scaled part mix is assumed in the flexibility-management modeling, and simulation experiments are conducted applying a periodic flexibility-planning approach. These experiments support the ideas that anticipation is worth more than plain expectation and that flexibility improves robustness.
23

Sammons, Jonathan D. "Use Of Near-Zero Leachate Irrigation Systems For Container Production Of Woody Ornamental Plants." The Ohio State University, 2008. http://rave.ohiolink.edu/etdc/view?acc_num=osu1228241327.

24

Sanja, Lončar. "Negative Selection - An Absolute Measure of Arbitrary Algorithmic Order Execution." Phd thesis, Univerzitet u Novom Sadu, Prirodno-matematički fakultet u Novom Sadu, 2017. https://www.cris.uns.ac.rs/record.jsf?recordId=104861&source=NDLTD&language=en.

Abstract:
Algorithmic trading is an automated process of order execution on electronic stock markets. It can be applied to a broad range of financial instruments, and it is characterized by significant investor control over the execution of orders, with the principal goal of finding the right balance between costs and the risk of not (fully) executing an order. As the measurement of execution performance indicates whether best execution is achieved, a significant number of different benchmarks are used in practice. The most frequently used are price benchmarks, some determined before trading (pre-trade benchmarks), some during the trading day (intraday benchmarks), and some after the trade (post-trade benchmarks). The two most dominant are VWAP and Arrival Price, the latter of which, along with other pre-trade price benchmarks, is known as Implementation Shortfall (IS). We introduce Negative Selection as an a posteriori measure of execution-algorithm performance. It is based on the concept of Optimal Placement, which represents the ideal order that could be executed in a given time window, where "ideal" means an order with the best execution price considering market conditions during the window. Negative Selection is defined as the difference between the vectors of the optimal and the executed orders, with the vectors defined as quantities of shares at specified price positions in the order book. It is equal to zero when the order is optimally executed, negative if the order is not (completely) filled, and positive if the order is executed but at an unfavorable price. Negative Selection is based on the idea of offering a new, alternative performance measure that enables us to find optimal trajectories and construct the optimal execution of an order. The first chapter of the thesis includes a list of notation and an overview of the definitions and theorems used later in the thesis. Chapters 2 and 3 give a theoretical overview of concepts related to market microstructure, basic information regarding benchmarks, and the theoretical background of algorithmic trading. Original results are presented in chapters 4 and 5. Chapter 4 includes the construction of the optimal placement and the definition and properties of Negative Selection; the results regarding the properties of Negative Selection are given in [35]. Chapter 5 contains the theoretical background of stochastic optimization, a model of optimal execution formulated as a stochastic optimization problem with respect to Negative Selection, and original work on a nonmonotone line search method [31], while numerical results are in the final, sixth chapter.
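Formally, with the order-book price grid indexed by i, the measure described above can be written as follows (notation mine, transcribing the abstract's verbal definition):

```latex
% q^{opt}_i, q^{exec}_i: shares of the optimal and the executed order at price
% position i of the order book over the execution window.
\mathrm{NS} = \mathbf{q}^{\mathrm{opt}} - \mathbf{q}^{\mathrm{exec}},
\qquad
\mathrm{NS}_i = q^{\mathrm{opt}}_i - q^{\mathrm{exec}}_i
% NS = 0: optimal execution; "negative": the order was not (fully) filled;
% "positive": filled, but at an unfavorable price.
```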
25

Shkliar, Khrystyna. "Lean supply chain of service companies : application of order review and release systems to improve its performances." Thesis, KTH, Industriell Management, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-124575.

Abstract:
This research aims to contribute to the theoretical knowledge base on supply chains in service companies and on lean implementation in this area. The focus of the study is testing the feasibility of applying order review and release systems, which have proved effective in "leaning" the flow of manufacturing companies, to the service supply chain. The influence of one of the main characteristics of services, namely processing-time variability and the preciseness with which required processing times can be estimated, is studied. The research is purely theoretical and was conducted with the help of simulation modeling: a model of the service supply chain was developed based on the literature review, and statistical distributions were used as input data. Two kinds of order review and release systems are considered: an upper-bound limited-workload model and a lean-based balanced-workload model. Their impact on the performance of the service supply chain is described and compared to the results of a model with immediate release. The findings show that order review and release systems can perform well even when exact processing times are unknown and thus can be applied to services just as they are applied in manufacturing. The application of order review and release systems will help eliminate waste within the service supply chain, make it more flexible, and thus increase the value added for customers.
APA, Harvard, Vancouver, ISO, and other styles
26

Poon, Wing-hong Stanley. "Re-integration of offenders and protection of public order: a case study on the Hong Kong release under supervision scheme." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B36194955.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Li, Yan. "Digital holography and optical contouring." Thesis, Liverpool John Moores University, 2009. http://researchonline.ljmu.ac.uk/4539/.

Full text
Abstract:
Digital holography is a technique for the recording of holograms via CCD/CMOS devices and enables their subsequent numerical reconstruction within computers, thus avoiding the photographic processes that are used in optical holography. This thesis investigates the various techniques which have been developed for digital holography. It develops and successfully demonstrates a number of refinements and additions in order to enhance the performance of the method and extend its applicability. The thesis contributes to both the experimental and numerical analysis aspects of digital holography. Regarding experimental work, the thesis includes a comprehensive review and critique of the experimental arrangements used by other workers, and actually implements and investigates a number of these in order to compare performance. Enhancements to these existing methods are proposed, and new methods developed, aimed at addressing some of the perceived shortcomings of the method. Regarding the experimental aspects, the thesis specifically develops:
• Super-resolution methods, introduced in order to restore the spatial frequencies that are lost or degraded during the hologram recording process, a problem caused by the limited resolution of CCD/CMOS devices.
• Arrangements for combating problems in digital holography such as dominance of the zero-order term, the twin image problem and excessive speckle noise.
• Fibre-based systems linked to tunable lasers, including a comprehensive analysis of the effects of signal attenuation, noise and laser instability within such systems.
• Two-source arrangements for contouring, including investigation of the limitations on achievable accuracy with such systems.
Regarding the numerical processing, the thesis focuses on three main areas. Firstly, the numerical calculation of the Fresnel-Kirchhoff integral, which is of vital importance in performing the numerical reconstruction of digital holograms. The Fresnel approximation and the convolution approach are the two most common methods used to perform numerical reconstruction. The results produced by these two methods for both simulated holograms and real holograms, created using our experimental systems, are presented and discussed. Secondly, the problems of the zero-order term, twin image and speckle noise are tackled from a numerical processing point of view, complementing the experimental attack on these problems. A digital filtering method is proposed for use with reflective macroscopic objects, in order to suppress both the zero-order term and the twin image. Thirdly, for the two-source contouring technique, the following issues are discussed and thoroughly analysed: the effects of the linear factor, the use of noise reduction filters, different phase unwrapping algorithms, the application of the super-resolution method, and errors in the illumination angle. Practical 3D measurement of a real object, of known geometry, is used as a benchmark for the accuracy improvements achievable via the use of these digital signal processing techniques within the numerical reconstruction stage. The thesis closes by seeking to draw practical conclusions from both the experimental and numerical aspects of the investigation, which it is hoped will be of value to those aiming to use digital holography as a metrology tool.
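As a pointer to what the numerical reconstruction step involves, here is a minimal single-FFT Fresnel reconstruction sketch (illustrative assumptions throughout: square pixels, constant prefactors dropped since only intensity is displayed; this is not the thesis's code):

    # Minimal sketch of digital hologram reconstruction with the
    # single-FFT Fresnel approximation. Illustrative only.
    import numpy as np

    def fresnel_reconstruct(hologram, wavelength, z, pixel):
        """Reconstruct the intensity at distance z (all lengths in metres)."""
        ny, nx = hologram.shape
        k = 2 * np.pi / wavelength
        x = (np.arange(nx) - nx / 2) * pixel
        y = (np.arange(ny) - ny / 2) * pixel
        X, Y = np.meshgrid(x, y)
        chirp = np.exp(1j * k * (X**2 + Y**2) / (2 * z))  # quadratic phase factor
        field = np.fft.fftshift(np.fft.fft2(hologram * chirp))
        return np.abs(field) ** 2  # reconstructed intensity

    holo = np.random.rand(512, 512)  # stand-in for a recorded hologram
    img = fresnel_reconstruct(holo, wavelength=633e-9, z=0.25, pixel=6.8e-6)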
APA, Harvard, Vancouver, ISO, and other styles
28

Högberg, Sofia. "Zero-order manipulation task to obtain a food reward in Colombian black spider monkeys (Ateles fusciceps rufiventris) kept in a zoo." Thesis, Linköpings universitet, Institutionen för fysik, kemi och biologi, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-58155.

Full text
Abstract:
Spider monkeys (Ateles sp.) are common in zoological parks, but rare in scientific publications. Studies on tool use in primates have mostly focused on impressive tool users such as chimpanzees. Spider monkeys fulfill several criteria that are known to be associated with tool use. To be able to provide an appropriate environment and enrichment for spider monkeys in captivity, more knowledge is needed about their cognitive abilities. In this study we wanted to see if five male spider monkeys kept in a zoo could learn to use tools to reach a reward. Experiment 1 examined the subjects’ ability to learn to use a stick-tool to extract honey from a tube, and experiment 2 their ability to learn to use a rake-tool to reach a reward. Each experiment consisted of three parts: A – monkeys got tools and treat next to each other; B – monkeys were shown by a keeper how to use the tool to get the treat and then got tools and treats next to each other; C – monkeys got tools and treats arranged so they could simply pull out the tool and get the treat. In both experiments at least two different spider monkeys succeeded with the zero-order manipulation task of pulling out the tool and getting the treat in part C. Longer studies need to be conducted to be able to say whether spider monkeys can learn the more complex tool-using behavior needed in parts A and B.
APA, Harvard, Vancouver, ISO, and other styles
29

MEDEIROS, Rex Antonio da Costa. "Zero-Error capacity of quantum channels." Universidade Federal de Campina Grande, 2008. http://dspace.sti.ufcg.edu.br:8080/jspui/handle/riufcg/1320.

Full text
Abstract:
In this thesis, the zero-error capacity of discrete memoryless channels is generalized to quantum channels. A new capacity for the transmission of classical information through quantum channels is proposed. The zero-error capacity of quantum channels is defined as the maximum amount of information per channel use that can be sent through a noisy quantum channel with a probability of error equal to zero. The communication protocol restricts codewords to tensor products of input quantum states, while collective measurements across several channel outputs are allowed; the protocol is therefore similar to the Holevo-Schumacher-Westmoreland protocol. The problem of finding this capacity is reformulated using elements of graph theory. The equivalent definition is used to demonstrate properties of the families of quantum states and measurements that attain it. It is shown that the capacity of a quantum channel in a Hilbert space of dimension d can always be achieved using families composed of at most d pure states. Regarding measurements, collective von Neumann measurements are shown to be necessary and sufficient to achieve the capacity. It is discussed whether this quantity is a non-trivial generalization of the classical zero-error capacity, where "non-trivial" refers to the existence of quantum channels for which the capacity can only be reached through families of non-orthogonal quantum states and codes of length greater than or equal to two. The zero-error capacity of several quantum channels is investigated. It is shown that the problem of computing the zero-error capacity of classical-quantum channels is purely classical. In particular, a quantum channel is exhibited for which it is conjectured that the capacity can only be reached using a family of non-orthogonal quantum states. If the conjecture holds, it is possible to compute the exact value of the capacity and to construct a quantum block code attaining it. Finally, it is shown that this zero-error capacity is upper bounded by the Holevo-Schumacher-Westmoreland capacity.
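For orientation, the classical zero-error capacity being generalized here admits a standard graph-theoretic form (textbook statement, not quoted from the thesis): with G the confusability graph of the channel, α the independence number and ⊠ the strong graph product,

    \[
    C_0 \;=\; \sup_{n \ge 1} \frac{1}{n} \log \alpha\!\left(G^{\boxtimes n}\right).
    \]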
APA, Harvard, Vancouver, ISO, and other styles
30

Newbury, James. "Limit order books, diffusion approximations and reflected SPDEs : from microscopic to macroscopic models." Thesis, University of Oxford, 2016. https://ora.ox.ac.uk/objects/uuid:825d9465-842b-424b-99d0-ff4dfa9ebfc5.

Full text
Abstract:
Motivated by a zero-intelligence approach, the aim of this thesis is to unify the microscopic (discrete price and volume), mesoscopic (discrete price and continuous volume) and macroscopic (continuous price and volume) frameworks of limit order books, with a view to providing a novel yet analytically tractable description of their behaviour in a high to ultra-high-frequency setting. Starting with the canonical microscopic framework, the first part of the thesis examines the limiting behaviour of the order book process when order arrival and cancellation rates are sent to infinity and when volumes are considered to be of infinitesimal size. Mathematically speaking, this amounts to establishing the weak convergence of a discrete-space process to a mesoscopic diffusion limit. This step is initially carried out in a reduced-form context, in other words, by simply looking at the best bid and ask queues, before the procedure is extended to the whole book. This subsequently leads us to the second part of the thesis, which is devoted to the transition between mesoscopic and macroscopic models of limit order books, where the general idea is to send the tick size to zero, or equivalently, to consider infinitely many price levels. The macroscopic limit is then described in terms of reflected SPDEs, which typically arise in stochastic interface models. Numerical applications are finally presented, notably via the simulation of the mesoscopic and macroscopic limits, which can be used as market simulators for short-term price prediction or optimal execution strategies.
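In the zero-intelligence spirit mentioned above, the best-queue dynamics can be caricatured as a birth-death process (a toy sketch under our own assumptions; the thesis treats far more general order-book dynamics and their scaling limits):

    # Toy zero-intelligence simulation of a single best-ask queue:
    # limit orders arrive at rate lam, and each resting order is
    # cancelled or executed at rate mu per order. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_queue(lam=100.0, mu=1.0, q0=50, t_max=10.0):
        """Birth-death queue: total depletion rate grows with queue size."""
        t, q, path = 0.0, q0, [(0.0, q0)]
        while t < t_max and q > 0:
            rate = lam + mu * q                   # total event intensity
            t += rng.exponential(1.0 / rate)      # time to next event
            q += 1 if rng.random() < lam / rate else -1
            path.append((t, q))
        return path

    path = simulate_queue()
    print(path[-1])  # (time, queue size) at the end of the run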
APA, Harvard, Vancouver, ISO, and other styles
31

Sheehe, Suzanne Marie Lanier. "Heat Release Studies by pure Rotational Coherent Anti-Stokes Raman Scattering Spectroscopy in Plasma Assisted Combustion Systems excited by nanosecond Discharges." The Ohio State University, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=osu1401377491.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Jan, Naeem A. "Anomalous Nature Of Metamaterial Inclusion and Compact Metamaterial-Inspired Antennas Model For Wireless Communication Systems. A Study of Anomalous Comportment of Small Metamaterial Inclusions and their Effects when Placed in the Vicinity of Antennas, and Investigation of Different Aspects of Metamaterial-Inspired Small Antenna Models." Thesis, University of Bradford, 2017. http://hdl.handle.net/10454/16003.

Full text
Abstract:
Metamaterials are humanly engineered artificial electromagnetic materials which produce electromagnetic properties that are unusual and not readily observed in nature. These unconventional properties are not a result of the material composition but rather of the structure formed. The objective of this thesis is to investigate and design smaller and wideband metamaterial-inspired antennas for personal communication applications, especially for WiMAX and lower- and upper-band WLAN applications. These antennas have been simulated using the HFSS (High Frequency Structure Simulator) and CST Microwave Studio software packages. The first design to be analysed is a low-profile metamaterial-inspired CPW-fed monopole antenna for WLAN applications. The antenna is based on a simple strip loaded with a rectangular patch incorporating a zigzag E-shaped metamaterial-inspired unit cell to enable a miniaturization effect. Secondly, a physically compact, CSRR-loaded monopole antenna with DGS has been proposed for WiMAX/WLAN operation. The introduction of the CSRR induces a resonance at the lower WLAN 2.45 GHz band, while the DGS provides bandwidth enhancement in the WiMAX and upper WLAN frequency bands, keeping the radiation pattern stable. The next class of antenna is a compact cloud-shaped monopole antenna with a staircase-shaped DGS, proposed for UWB operation ranging from 3.1 GHz to 10.6 GHz. The novel antenna shape, along with the carefully designed DGS, results in positive gain throughout the operational bandwidth. Finally, a quad-band, CPW-fed metamaterial-inspired antenna with CRLH-TL and EBG is designed for multi-band Satellite, LTE, WiMAX and WLAN use.
APA, Harvard, Vancouver, ISO, and other styles
33

Sandin, Sara. "Elevers olika uppfattningar av tal i decimalform i en svensk kontext. : - En studie som bygger på kategorisering av elevers uppfattningar framtagen av tidigare forskning inom det matematikdidaktiska forskningsfältet." Thesis, Jönköping University, Högskolan för lärande och kommunikation, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-54358.

Full text
Abstract:
In this study, earlier international research on decimal numbers has been examined in a Swedish context. Within the field of mathematics education research, a theoretical framework for students' various perceptions of decimal numbers has been developed. Earlier studies have made several attempts to categorize students' different understandings of decimal numbers (Moloney & Stacey, 1997; Resnik et al., 1989; Sackur-Grisvard & Léonard, 1985; Stacey & Steinle, 1998). Sackur-Grisvard and Léonard's (1985) categorization is based on students' prior knowledge in other mathematical areas. Their theoretical framework involves the use of three different rules: the whole number rule, the fraction rule and the zero rule. Sackur-Grisvard and Léonard's (1985) theoretical framework has not been used to any great extent in the Swedish research field. In this study the framework has been used to investigate whether it can serve as a tool for categorizing the various perceptions of decimal numbers among students in grades 4-6. The method of triangulation was used, involving both a written test and semi-structured interviews. All students completed a written test with tasks in which they compared and ordered different decimal numbers. The test results were then used to select a few students from grades 4 and 5 for semi-structured interviews through a purposive selection. The results showed that the theoretical framework has certain limitations: several students could not be assigned to a single category, and many applied more than one rule on the written test. The students' results also showed a progression within the subject area, with students in grade 6 performing best, followed by grade 5, while students in grade 4 performed worst.
APA, Harvard, Vancouver, ISO, and other styles
34

Tongning, Robert-christopher. "Ralentir le déphasage des états de superposition atomiques dans un cristal de Tm3+ : YAG." Phd thesis, Université Paris Sud - Paris XI, 2014. http://tel.archives-ouvertes.fr/tel-01011160.

Full text
Abstract:
This work is set in the context of research on quantum memories for light. Quantum information is stored in an atomic superposition state, whose lifetime determines the maximum storage time. We are particularly interested in materials capable of capturing light through resonant excitation of an absorption line and then preserving the quantum information in a superposition state of the electronic ground level. In Tm3+:YAG, the information is recorded in a nuclear spin state. However, the magnetic field that lifts the nuclear degeneracy drives the different spins at different precession rates, which tends to destroy the initial magnetization carrying the information. A quantum-mechanical study of the crystal is carried out in the first chapter of this manuscript. The following three chapters deal with the various mechanisms leading to the dephasing of the nuclear spins; they contain several theoretical analyses, confirmed by a set of experimental results, as well as a detailed description of the experimental setup. Finally, the last, prospective chapter exploits the tools developed during the thesis to preserve optical coherences, and presents some promising experimental results on extending the lifetime of these optical coherences.
APA, Harvard, Vancouver, ISO, and other styles
35

Mengesha, Abi Taddesse. "Characterizing phosphate desorption kinetics from soil : an approach to predicting plant available phosphorus." Thesis, University of Pretoria, 2008. http://hdl.handle.net/2263/24346.

Full text
Abstract:
Many agricultural fields that have received long-term applications of P often contain levels of P exceeding those required for optimal crop production. Knowledge of the effect of the P remaining in the soil (the residual effect) is of great importance for fertilization management. A wide variety of methods have been proposed to characterize P forms in soils. The use of dialysis membrane tubes filled with hydrous ferric oxide (DMT-HFO) has recently been reported as an effective way to characterize P desorption over the long term in laboratory studies. However, there is little information on the relationship between the kinetics of P release measured by this new method and plant P uptake. The method involves shaking a sample for a long period of time, thereby exploiting the whole volume of the soil, which is in contrast to the actual mode of plant uptake. The method also has practical limitations as a routine soil analysis, as it is expensive and time consuming. The objectives of this study were (i) to study the changes in labile, non-labile and residual P using successive P desorption by DMT-HFO followed by a subsequent fractionation method (the combined method), (ii) to assess how the information gained from P desorption kinetic data relates to plant growth in greenhouse and field trials, (iii) to investigate the effect of varying shaking time on DMT-HFO extractable P, and (iv) to propose a short-cut approach to the combined method. The release kinetics of the plots from long-term fertilizer trials at the University of Pretoria and Ermelo were studied. P desorption kinetics were described relatively well by a two-component first-order model (R² = 0.947, 0.918 and 0.993 for the NPK, MNK and MNPK treatments, respectively). The relative contributions of both the labile pool (SPA) and the less labile pool (SPB) to the total P extracted increased with increased P supply levels. Significant correlations were observed between the rate coefficients and maize grain yield for both soil types. The correlation between the cumulative P extracted and maize yield (r = 0.997**), however, was highly significant for the Ermelo soils. The method was also used to determine the changes in the different P pools and to relate these P fractions to maize yield. Highly significant correlations were observed between maize grain yield and the different P fractions, including total P. In both soil types the contribution of both the labile and non-labile inorganic P fractions in replenishing the solution Pi was significant, whereas the contributions from the organic fractions were limited. The C/HCl-Pi was also the fraction that decreased most in both cases. An investigation was carried out to evaluate the effect of varying shaking periods on the extractable DMT-HFO-Pi for UP soils of varying P levels. Four shaking options were applied. A significant difference was observed for the treatment with high P application. Shaking option 2 seemed relatively better than the others, since it showed the strongest correlation. Thus, for soils with fast release kinetics and high total P content, provided that the P release from the soil is the rate-limiting step, reducing the length of shaking time could shorten the duration needed to complete the experiment without influencing the predictive capacity of the methodology. A further objective of this thesis was to present a short-cut alternative to the combined fractionation method.
Comparison of the sum of DMT-HFO-Pi, NaHCO3-Pi, NaOH-Pi, D/HCl-Pi and C/HCl-Pi extracted by a conventional step-by-step method with the sum of DMT-HFO-Pi and a single C/HCl-Pi extraction as a short-cut approach resulted in strong and significant correlations for all extraction periods. The C/HCl-Pi fraction extracted by both methods was correlated with maize grain yield, and this correlation was found to be highly significant. This study revealed that the short-cut approach could be a simplified and economically viable option for studying the P dynamics of soils, especially where the P pool acting as a source in replenishing the labile portion of P has already been identified. The method employed here could therefore act as an analytical tool to approximate successive cropping experiments carried out under greenhouse or field conditions. However, data from a wider range of soils are needed to evaluate the universality of this method. More work is also required in relating the desorption indices of this method to yield parameters, especially at field level. Thesis (PhD)--University of Pretoria, 2009. Plant Production and Soil Science.
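The two-component first-order model referred to above is commonly written as (standard form; the symbols are assumptions for illustration, not copied from the thesis):

    \[
    P(t) \;=\; P_A\left(1 - e^{-k_A t}\right) + P_B\left(1 - e^{-k_B t}\right),
    \]

where \(P_A\) and \(P_B\) are the sizes of the labile and less labile pools and \(k_A\), \(k_B\) their desorption rate coefficients.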
APA, Harvard, Vancouver, ISO, and other styles
36

Ronquillo, David Carlos. "Magnetic-Field-Driven Quantum Phase Transitions of the Kitaev Honeycomb Model." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587035230123328.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Akhavanfoomani, Aria. "Derivative-free stochastic optimization, online learning and fairness." Electronic Thesis or Diss., Institut polytechnique de Paris, 2023. http://www.theses.fr/2023IPPAG001.

Full text
Abstract:
In this thesis, we first study the problem of zero-order optimization in the active setting for smooth functions in three different classes: i) functions that satisfy the Polyak-Łojasiewicz condition, ii) strongly convex functions, and iii) the larger class of highly smooth non-convex functions. Furthermore, we propose a novel algorithm based on l1-type randomization, and we study its properties for Lipschitz convex functions in an online optimization setting. Our analysis relies on the derivation of a new Poincaré-type inequality for the uniform measure on the l1-sphere with explicit constants. We then study the zero-order optimization problem in passive schemes. We propose a new method for estimating the minimizer and the minimum value of a smooth and strongly convex regression function f. We derive upper bounds for this algorithm and prove minimax lower bounds for such a setting. Finally, we study the linear contextual bandit problem under fairness constraints, where an agent has to select one candidate from a pool and each candidate belongs to a sensitive group. We propose a novel notion of fairness which is practical in the aforementioned example. We design a greedy policy that computes an estimate of the relative rank of each candidate using the empirical cumulative distribution function, and we prove its optimality.
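A generic two-point zero-order gradient step of the kind studied in this literature looks as follows (a sketch under our own assumptions; the thesis's l1-randomized scheme uses a different sampling distribution and constants):

    # Sketch of a two-point zero-order (derivative-free) gradient step:
    # query f at x + h*u and x - h*u for a random direction u, then move
    # against the estimated slope. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)

    def zo_step(f, x, h=1e-3, lr=1e-2):
        u = rng.standard_normal(x.size)
        u /= np.linalg.norm(u)                     # random unit direction
        g = x.size * (f(x + h * u) - f(x - h * u)) / (2 * h) * u
        return x - lr * g                          # gradient-descent step

    f = lambda x: float(np.sum(x**2))              # toy strongly convex target
    x = np.ones(10)
    for _ in range(2000):
        x = zo_step(f, x)
    print(round(f(x), 6))                          # should be near 0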
APA, Harvard, Vancouver, ISO, and other styles
38

Tucker, Ida. "Chiffrement fonctionnel et signatures distribuées fondés sur des fonctions de hachage à projection, l'apport des groupes de classe." Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEN054.

Full text
Abstract:
One of the current challenges in cryptographic research is the development of advanced cryptographic primitives ensuring a high level of confidence. In this thesis, we focus on their design, while proving their security under well-studied algorithmic assumptions. My work builds on the linearity of homomorphic encryption, which allows linear operations to be performed on encrypted data. Precisely, I built upon the linearly homomorphic encryption scheme introduced by Castagnos and Laguillaumie at CT-RSA'15. Their scheme possesses the unusual property of having a prime-order plaintext space, whose size can essentially be tailored to one's needs. Aiming at a modular approach, I designed from their work technical tools (projective hash functions, zero-knowledge proofs of knowledge) which provide a rich framework lending itself to many applications. This framework first allowed me to build functional encryption schemes; this highly expressive primitive allows fine-grained access to the information contained in, e.g., an encrypted database. Then, in a different vein, but from these same tools, I designed threshold digital signatures, allowing a secret key to be shared among multiple users, so that the latter must collaborate in order to produce valid signatures. Such signatures can be used, among other applications, to secure cryptocurrency wallets. Significant efficiency gains, namely in terms of bandwidth, result from the instantiation of these constructions from class groups. This work is at the forefront of the revival these mathematical objects have seen in cryptography over the last few years.
APA, Harvard, Vancouver, ISO, and other styles
39

Wang, Zhibo. "Estimations non-asymptotiques et robustes basées sur des fonctions modulatrices pour les systèmes d'ordre fractionnaire." Electronic Thesis or Diss., Bourges, INSA Centre Val de Loire, 2023. http://www.theses.fr/2023ISAB0003.

Full text
Abstract:
This thesis develops the modulating functions method for non-asymptotic and robust estimation for fractional-order nonlinear systems, fractional-order linear systems with accelerations as output, and fractional-order time-delay systems. The designed estimators are provided in terms of algebraic integral formulas, which ensure non-asymptotic convergence. As an essential feature of the designed estimation algorithms, noisy output measurements are only involved in integral terms, which endows the estimators with robustness against corrupting noises.
First, for fractional-order nonlinear systems which are partially unknown, fractional derivative estimation of the pseudo-state is addressed via the modulating functions method. Thanks to the additive index law of fractional derivatives, the estimation is decomposed into the estimation of fractional derivatives of the output and the estimation of fractional initial values. Meanwhile, the unknown part is fitted via an innovative sliding-window strategy. Second, for fractional-order linear systems with accelerations as output, fractional integral estimation of the acceleration is first considered for fractional-order mechanical vibration systems, where only noisy acceleration measurements are available. Based on existing numerical approaches addressing the proper fractional integrals of accelerations, our attention is primarily restricted to estimating the unknown initial values using the modulating functions method. On this basis, the result is further generalized to more general fractional-order linear systems. In particular, the behaviour of fractional derivatives at zero is studied for absolutely continuous functions, which is quite different from the integer-order case. Third, for fractional-order time-delay systems, pseudo-state estimation is studied by designing a fractional-order auxiliary modulating dynamical system, which provides a more general framework for generating the required modulating functions. With the introduction of the delay operator and the bicausal generalized change of coordinates, the pseudo-state estimation of the considered system can be reduced to that of the corresponding observer normal form. In contrast to previous work, the presented scheme enables direct estimation of the pseudo-state rather than estimating fractional derivatives of the output and a set of fractional initial values. In addition, the efficiency and robustness of the proposed estimators are verified by numerical simulations in this thesis. Finally, a summary of this work and an outlook on future work are given.
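The mechanism that makes such estimators algebraic can be illustrated with the classical integer-order identity (a textbook special case, not the fractional-order construction developed in the thesis): for a modulating function \(\varphi\) whose first \(n-1\) derivatives vanish at both endpoints of \([0,T]\), repeated integration by parts gives

    \[
    \int_0^T \varphi(t)\, y^{(n)}(t)\, dt \;=\; (-1)^n \int_0^T \varphi^{(n)}(t)\, y(t)\, dt ,
    \]

so derivatives of the measured output \(y\) are replaced by integrals of \(y\) against known functions, which both filters noise and eliminates unknown initial conditions.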
APA, Harvard, Vancouver, ISO, and other styles
40

Dakkoune, Amine. "Méthodes pour l'analyse et la prévention des risques d'emballement thermique Zero-order versus intrinsic kinetics for the determination of the time to maximum rate under adiabatic conditions (TMR_ad): application to the decomposition of hydrogen peroxide Risk analysis of French chemical industry Fault detection in the green chemical process : application to an exothermic reaction Analysis of thermal runaway events in French chemical industry Early detection and diagnosis of thermal runaway reactions using model-based approaches in batch reactors." Thesis, Normandie, 2019. http://www.theses.fr/2019NORMIR30.

Full text
Abstract:
The history of accidental events in the chemical industries shows that their human, environmental and economic consequences are often serious. This thesis aims to propose an approach for fault detection and diagnosis in chemical processes in order to prevent such accidental events. A preliminary study serves to identify the major causes of chemical industrial events based on experience feedback. In France, according to the ARIA database, 25% of the events are due to thermal runaway caused by human error. It is therefore appropriate to develop a method for early detection and diagnosis of faults due to thermal runaway. For that purpose, we develop an approach that uses dynamic thresholds for detection and collected measurements for diagnosis. The localization of faults is based on a classification of the statistical characteristics of the temperature according to several defective modes. A multiset of linear classifiers and binary decision diagrams indexed with respect to time is used for that purpose. Finally, the synthesis of peroxyformic acid in a batch and semi-batch reactor is considered to validate the proposed method, first by numerical simulations and then by experiments. Fault detection performance proved satisfactory and the classifiers achieved a high fault isolability rate.
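The zero-order estimate of the time to maximum rate under adiabatic conditions (TMRad) referenced in the title is classically approximated as (standard safety-engineering relation, stated here for context):

    \[
    TMR_{ad} \;\approx\; \frac{c_p\, R\, T_0^{2}}{q'(T_0)\, E_a},
    \]

where \(c_p\) is the specific heat capacity, \(T_0\) the initial temperature, \(q'(T_0)\) the specific heat-release rate at \(T_0\), \(E_a\) the activation energy and \(R\) the gas constant.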
APA, Harvard, Vancouver, ISO, and other styles
41

Hernández, Cubero Óscar Rubén. "Méthodes optiques innovantes pour le contrôle rapide et tridimensionnel de l’activité neuronale." Thesis, Sorbonne Paris Cité, 2016. http://www.theses.fr/2016USPCB005.

Full text
Abstract:
The ongoing revolution of optogenetic tools – genetically encoded light-sensitive proteins that can activate, silence and monitor neural activity – has opened a new pathway to bridge the gap between neuronal activity and cognition. However, to take full advantage of these tools we need optical methods that can deliver complex light patterns in the brain. During my doctorate, I worked on two novel and complementary optical systems for spatiotemporally complex stimulation of neural activity. The first system combined acousto-optic deflectors and low numerical aperture Gaussian beam illumination for fast photoactivation of optogenetic tools. The random-access capabilities of the system made it possible to deliver complex spatiotemporal illumination sequences that successfully emulated physiological patterns of cerebellar mossy fiber activity in acute slices. These results demonstrate that patterned optogenetic stimulation can be used to recreate ongoing activity and study brain microcircuits in a physiological activity context. Alternatively, Computer Generated Holography (CGH) can powerfully enhance optogenetic stimulation by efficiently shaping light onto multiple cellular targets simultaneously. Nonetheless, the axial confinement degrades for laterally extended illumination patterns. To address this issue, CGH can be combined with temporal focusing, which axially confines fluorescence regardless of lateral extent. However, previous configurations restricted nonlinear excitation to a single spatiotemporal focal plane.
In this thesis, I describe two alternative methods to overcome this limitation and enable three-dimensional spatiotemporally focused pattern generation.
APA, Harvard, Vancouver, ISO, and other styles
42

Danckwerts, Michael Paul. "Development of zero-order release tablets." Thesis, 1996. https://hdl.handle.net/10539/26011.

Full text
Abstract:
A thesis submitted to the Faculty of Health Sciences, University of the Witwatersrand, in fulfilment of the requirements for the degree of Doctor of Philosophy, Johannesburg, 1996. A new core-in-cup tablet, manufactured with a novel adjustable punch, has been formulated and evaluated as to its ability to release various model drugs at a zero-order rate. The new punch, with an inner adjustable rod that can be set to produce cup-shaped tablets of various thicknesses, was used to manufacture the core-in-cup tablets. These core-in-cup tablets were then evaluated as to their suitability for manufacture on a tabletting press and their ability to release model drugs both in vitro and in vivo. After evaluating the effect of various formulation factors on the compressibility and flow of various directly compressible powder combinations via factorial design, a directly compressible combination of 10% w/w carnauba wax in ethylcellulose was found to produce the best cup tablets for the core-in-cup system.
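Zero-order release, the design target of this and several of the following entries, simply means a release rate that is constant in time (standard definition, given for orientation):

    \[
    \frac{dQ}{dt} = k_0 \quad\Longrightarrow\quad Q(t) = Q_0 + k_0\, t ,
    \]

where \(Q(t)\) is the cumulative amount of drug released and \(k_0\) the zero-order rate constant.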
APA, Harvard, Vancouver, ISO, and other styles
43

Sundy, Erica. "A novel tablet design for zero-order sustained-release." Thesis, 2002. https://hdl.handle.net/10539/26107.

Full text
Abstract:
Dissertation submitted to the Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Medicine in the branch of Pharmacy. A coated doughnut-shaped tablet is evaluated as to its ability to release model drugs at a zero-order rate for 8 to 12 hours. The doughnut-shaped tablets were compressed using specially designed punches, so automated production is feasible for this system. The coating material, 10% w/w gelatin in HPMC K15M, was directly compressed and adhered to the tablet core.
APA, Harvard, Vancouver, ISO, and other styles
44

DING, JIE, and 丁傑. "Design of zero-order controlled release devices: theoretical analysis and experiment." Thesis, 1991. http://ndltd.ncl.edu.tw/handle/96430548078267986146.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Moodley, Kovanya. "A polymeric triple-layered tablet for stratified zero-order drug release." Master's thesis, 2013. http://hdl.handle.net/10539/12304.

Full text
Abstract:
Patient compliance is a major factor in achieving optimal therapeutic outcomes. Pill burden, due to multiple drug therapies, has a great detrimental impact on the compliance of the patient. Dose-dependent side-effects, associated with peak-trough plasma fluctuations of drugs, also have a negative impact on patient compliance with drug therapy. It is under these circumstances that zero-order drug release kinetics proves to be ideal, owing to the absence of the peak-trough fluctuations, and of the associated side-effects, that accompany conventional release. Furthermore, a drug delivery system that can deliver more than one drug at a time may help alleviate the pill burden associated with chronic diseases or specific health conditions. Novel drug delivery systems have been developed that offer zero-order or linear drug release; amongst such systems are multilayered tablets. However, these systems generally offer the delivery of just one drug. The development of a delivery system able to deliver up to three drugs in a zero-order manner may prove significantly beneficial, greatly increasing patient compliance and in turn therapeutic efficacy. The purpose of this study was to design a novel triple-layered tablet (TLT) matrix targeted at achieving stratified zero-order drug release. The central factor in the establishment of the TLT was the selection of ideal and novel polymers capable of acting as superior drug release matrices. Modified polyamide 6,10 (PA6,10) and salted-out poly(lactic-co-glycolic acid) (PLGA) were employed as the outer drug-carrier matrices, whereas poly(ethylene oxide) (PEO) was used as the middle-layer drug matrix. Specialized granulation techniques and direct compression were employed to prepare the TLT matrices. Diphenhydramine HCl, ranitidine HCl and promethazine were chosen as model drugs for the study due to their similarly high aqueous solubilities (100 mg/mL). Matrix hardness, gel strength, swelling/erosion characteristics, Fourier Transform Infrared spectroscopy, Differential Scanning Calorimetry and in vitro drug release analysis employing High Performance Liquid Chromatography were performed on the TLT matrices in order to determine their physicomechanical and physicochemical nature. Computational molecular modeling (CMM) was employed to characterize the formation and dissolution of the TLT matrices. A Box-Behnken experimental design was employed, resulting in the generation of 17 design formulations for ultimate optimization. In vivo animal studies were performed in the Large White Pig model to assess the drug release behavior of the TLT, with Ultra Performance Liquid Chromatography employed for plasma sample analysis. The PA6,10 layer provided relatively linear and controlled drug release patterns with an undesirable burst release greater than 15%, which was greatly reduced upon addition of sodium sulphate. The addition of PEO to the salted-out PLGA layer greatly reduced the initial burst release that occurred when the salted-out PLGA matrix was used alone. Desirable results were obtained from FTIR, hydration and swelling/erosion analyses. CMM elucidated the possible mechanism of zero-order release from the respective layers. Upon completion of the Box-Behnken design analysis, an optimized TLT formulation was established according to the formulation responses selected, namely the rate constants and correlation coefficients.
The TLT displayed desirable near-linear release of all three drugs simultaneously over 24 hours, with approximately 10%, 50% and 90% of the drugs released at 1, 10 and 24 hours, respectively. An in vitro drug release comparison between the optimized TLT and the commercial tablets currently used showed the unequivocal superiority of the TLT in terms of linear drug release. A cardiovascular drug regimen (Adco-simvastatin®, DISPRIN CV® and Tenormin 50®) was applied to the TLT to assess the flexibility of incorporating a range of drugs. The TLT again provided near-linear to linear release of the therapeutic regimen over 24 hours and maintained its superiority over the commercial tablets. Benchtop Magnetic Resonance Imaging, porosity analysis and Scanning Electron Microscopy were utilized for further characterization of the TLT. In vivo analysis demonstrated a definite control of drug release from the TLT as compared to commercial tablets, which further confirmed the advantage of the TLT.
APA, Harvard, Vancouver, ISO, and other styles
46

Hobbs, Kim Melissa. "Development of a novel rate-modulated fixed dose analgesic combination for the treatment of mild to moderate pain." Thesis, 2010. http://hdl.handle.net/10539/8733.

Full text
Abstract:
MSc (Med), Dept of Pharmacy and Pharmacology, Faculty of Health Sciences, University of the Witwatersrand. Pain is the net effect of multidimensional mechanisms that engage most parts of the central nervous system (CNS), and the treatment of pain is one of the key challenges in clinical medicine (Le Bars et al., 2001; Miranda et al., 2008). Polypharmacy is seen as a barrier to analgesic treatment compliance, signifying the need for fixed dose combinations (FDCs), which allow the number of tablets administered to be reduced with no associated loss in efficacy or increase in the prevalence of side effects (Torres Morera, 2004). FDCs of analgesic drugs with differing mechanisms of nociceptive modulation offer benefits including synergistic analgesic effects, where the individual agents act in a greater-than-additive manner, and a reduced occurrence of side-effects (Raffa, 2001; Camu, 2002). This study aimed at producing a novel, rate-modulated, fixed-dose analgesic formulation for the treatment of mild to moderate pain. The fixed-dose combination (FDC) rationale of paracetamol (PC), tramadol hydrochloride (TM) and diclofenac potassium (DC) takes advantage of the previously reported analgesic synergy of PC and TM, and extends the analgesic paradigm with the addition of the anti-inflammatory component, DC. The study involved the development of a triple-layered tablet delivery system with the desired release characteristics of approximately 60% of the PC and TM being made available within 2 hours to provide an initial pain relief effect, followed by sustained zero-order release of DC over a period of 24 hours to combat the ongoing effects of any underlying inflammatory conditions. The triple-layered tablet delivery system would thus provide both rapid onset of pain relief and potentially address an underlying inflammatory cause. The design of a novel triple-layered tablet allowed the desired release characteristics to be attained. During initial development work on the polymeric matrix it was discovered that 24-hour zero-order release of DC could only be attained when the optimized ratio of the release-retarding polymer polyethylene oxide (PEO) was combined with electrolytic crosslinking activity, provided by the biopolymer sodium alginate and zinc gluconate. It was also necessary for this polymeric matrix to be bordered on both sides by the cellulosic polymers containing PC and TM. The application of multi-layered tableting technology in the form of a triple-layered tablet was thus capable of attaining the rate-modulated release objectives set out in the study. The induced barriers provided by the three layers also served to physically separate TM and DC, reducing the likelihood of the bioavailability-diminishing interaction noted in United States Patent 6,558,701 and detected in the DSC analysis performed as part of this study. The designed system provided significant flexibility in the modulation of release kinetics for drugs of varying solubility. The suitability of the designed triple-layered tablet delivery system was confirmed by a Design of Experiments (DoE) statistical evaluation, which revealed that Formulation F4 related closest to the desired more immediate release of PC and TM and the zero-order kinetics for DC. The results were confirmed by comparing Formulation F4 to typical release kinetic mechanisms described by Noyes-Whitney, Higuchi, the Power Law, Peppas-Sahlin and Hopfenberg.
Using f1 and f2 fit factors, Formulation F4 compared favourably to each of the criteria defined for these kinetic models. The Ultra Performance Liquid Chromatographic (UPLC) assay method developed displayed superior resolution of the active pharmaceutical ingredient (API) combinations, and the linearity plots produced indicated that the method was sufficiently sensitive to detect the concentrations of each API over the concentration ranges studied. The method was successfully validated and hence appropriate to simultaneously detect the three APIs as well as 4-aminophenol, the degradation product related to PC. Textural profile analysis, in the form of swelling as well as matrix hardness analysis, revealed that an increase in the penetration distance was associated with an increase in the hydration time of the tablet and also an increase in gel layer thickness. The swelling complexities observed in the delivery system, in terms of the PEO, the crosslinking sodium alginate and both cellulose polymers, as well as the fact that all three layers of the tablet swell simultaneously, suggest further intricacies in the release kinetics of the three drugs from this tablet configuration. Modified release dosage forms, such as the one developed in this study, have gained widespread importance in recent years and offer many advantages including flexible release kinetics and improved therapy and patient compliance.
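The f1 (difference) and f2 (similarity) fit factors used above are standard dissolution-profile comparison metrics (definitions given for reference):

    \[
    f_1 = \frac{\sum_{t=1}^{n} |R_t - T_t|}{\sum_{t=1}^{n} R_t} \times 100, \qquad
    f_2 = 50 \cdot \log_{10}\!\left(\left[1 + \frac{1}{n}\sum_{t=1}^{n}(R_t - T_t)^2\right]^{-1/2} \times 100\right),
    \]

where \(R_t\) and \(T_t\) are the reference and test percentages dissolved at time point \(t\).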
APA, Harvard, Vancouver, ISO, and other styles
47

Rastogi, Ashish. "Design, development, and evaluation of a scalable micro perforated drug delivery device capable of long-term zero order release." 2009. http://hdl.handle.net/2152/7542.

Full text
Abstract:
Chronic diseases can often be managed by constantly delivering therapeutic amounts of drug for prolonged periods. A controlled release for extended duration would replace the need for multiple and frequent dosing. Local drug release would provide added benefit as a lower dose of drug at the target site will be needed as opposed to higher doses required by whole body administration. This would provide maximum efficacy with minimum side effects. Nonetheless, a problem with the known implantable drug delivery devices is that the delivery rate cannot be controlled, which leads to drug being released in an unpredictable pattern resulting in poor therapeutic management of patients. This dissertation is the result of development of an implantable drug delivery system that is capable of long-term zero order local release of drugs. The device can be optimized to deliver any pharmaceutical agent for any time period up to several years maintaining a controlled and desired rate. Initially significant efforts were dedicated to the characterization, biocompatibility, and loading capacity of nanoporous metal surfaces for controlled release of drugs. The physical characterization of the nanoporous wafers using Scanning electron microscropy (SEM) and atomic force microscopy techniques (AFM) yielded 3.55 x 10⁴ nm³ of pore volume / μm² of wafer surface. In vitro drug release study using 2 - octyl cyanoacrylate and methyl orange as the polymer-drug matrix was conducted and after 7 days, 88.1 ± 5.0 % drug was released. However, the initial goal to achieve zero order drug release rates for long periods of time was not achieved. The search for a better delivery system led to the design of a perforated microtube. The delivery system was designed and appropriate dimensions for the device size and hole size were estimated. Polyimide microtubes in different sizes (125-1000 μm) were used. Micro holes with dimensions ranging from 20-600 μm were fabricated on these tubes using photolithography, laser drilling, or manual drilling procedures. Small molecules such as crystal violet, prednisolone, and ethinyl estradiol were successfully loaded inside the tubes in powder or solution using manual filling or capillary filling methods. A drug loading of 0.05 – 5.40 mg was achieved depending on the tube size and the drug filling method used. The delivery system in different dimensions was characterized by performing in vitro release studies in phosphate buffered saline (pH 7.1-7.4) and in vitreous humor from the rabbit’s eye at 37.0 ± 1.0°C for up to four weeks. The number of holes was varied between 1 and 3. The tubes were loaded with crystal violet (CV) and ethinyl estradiol (EE). Linear release rates with R²>0.9900 were obtained for all groups with CV and EE. Release rates of 7.8±2.5, 16.2±5.5, and 22.5±6.0 ng/day for CV and 30.1±5.8 ng/day for EE were obtained for small tubes (30 μm hole diameter; 125 μm tube diameter). For large tubes (362-542 μm hole diameter; 1000 μm tube diameter), a release rate of 10.8±4.1, 15.8±4.8 and 22.1±6.7 μg/day was observed in vitro in PBS and a release rate of 5.8±1.8 μg/day was observed ex vivo in vitreous humor. The delivery system was also evaluated for its ability to produce a biologically significant amounts in cells stably transfected with an estrogen receptor/luciferase construct (T47D-KBluc cells). These cells are engineered to produce a constant luminescent signal in proportion to drug exposure. 
Average luminescence values of 1144.8 ± 153.8 and 1219.9 ± 127.7 RLU/day (RLU = relative luminescence units) were measured, again indicating the capability of the device for long-term zero-order release. The polyimide device was characterized for biocompatibility. An automated goniometer was used to determine the contact angle of the device, which was found to be 63.7 ± 3.7 degrees, indicating that the surface is hydrophilic and favors cell attachment. In addition, after 72 h of incubation with mammalian cells (RAW 264.7), a high cell distribution was observed on the device's surface. The polyimide tubes were also investigated for signs of inflammation using the inflammatory markers TNF-α and IL-1β; no significant levels of either were detected with the polyimide device. The results indicated that the polyimide tubes were biocompatible and did not produce an inflammatory response.
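The linear fits reported above (R² > 0.99) are the standard check for zero-order kinetics: cumulative release follows Q(t) = k₀·t, so the slope of cumulative release versus time is the release rate. Below is a minimal Python sketch of that fit using synthetic data with assumed values, not the dissertation's measurements:

```python
import numpy as np

# Synthetic cumulative-release data (ng) over four weeks, loosely
# mimicking the small-tube crystal violet experiments above; k0_true
# and the noise level are assumptions, not the dissertation's data.
rng = np.random.default_rng(0)
days = np.arange(0, 29)                          # sampling times (days)
k0_true = 22.5                                   # assumed rate, ng/day
cumulative = k0_true * days + rng.normal(0, 5, days.size)

# Zero-order kinetics: Q(t) = k0 * t, so the slope of a straight-line
# fit of cumulative release vs. time estimates the release rate k0.
slope, intercept = np.polyfit(days, cumulative, 1)
predicted = slope * days + intercept
ss_res = np.sum((cumulative - predicted) ** 2)
ss_tot = np.sum((cumulative - cumulative.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"estimated k0 = {slope:.1f} ng/day, R^2 = {r_squared:.4f}")
```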
APA, Harvard, Vancouver, ISO, and other styles
48

Rakkanka, Vipaporn. "A novel self-sealing chewable sustained release tablet of acetaminophen; Development and evaluation of novel itraconazole oral formulations; A novel zero order release matrix tablet." Thesis, 2003. http://hdl.handle.net/1957/30894.

Full text
APA, Harvard, Vancouver, ISO, and other styles
49

Tan, Wilson Hor Keong, Timothy Lee, and Chi-Hwa Wang. "Delivery of Etanidazole to Brain Tumor from PLGA Wafers." 2003. http://hdl.handle.net/1721.1/3954.

Full text
Abstract:
This paper presents computer simulation results on the delivery of Etanidazole (a radiosensitiser) to brain tumors and examines several factors affecting the delivery. The simulation consists of a 3D model of the tumor with poly(lactide-co-glycolide) (PLGA) wafers of 1% Etanidazole loading implanted in the resected cavity. A zero-order release device produces a concentration profile in the tumor that increases with time until the drug in the carrier is depleted, which causes toxicity complications during the later stages of treatment. However, for wafers of similar loading, such release results in a greater drug penetration depth and therapeutic index compared with a double drug-burst profile. The numerical accuracy of the model was verified by the agreement between the two-dimensional and three-dimensional models.
Singapore-MIT Alliance (SMA)
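To make the simulated behaviour concrete, the sketch below solves a one-dimensional diffusion model with a constant-flux (zero-order) source that switches off once the carrier is depleted; the near-source concentration rises until depletion and decays afterwards, as the abstract describes. The 1-D geometry and all parameter values are illustrative assumptions, far simpler than the paper's 3D PLGA-wafer model:

```python
import numpy as np

# Minimal 1-D sketch (explicit finite differences) of drug transport in
# tissue from a zero-order source that shuts off when the carrier is
# depleted. All parameter values and the 1-D geometry are illustrative
# assumptions; the paper itself uses a 3-D tumor/wafer model.
D = 1e-6                      # tissue diffusivity, cm^2/s (assumed)
L, nx = 0.5, 101              # domain length (cm) and grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D          # within the explicit stability limit
flux = 1e-6                   # constant source flux (arbitrary units)
t_deplete = 3 * 24 * 3600.0   # source exhausted after three days

C = np.zeros(nx)              # concentration profile
t, t_end = 0.0, 7 * 24 * 3600.0
while t < t_end:              # simulate one week
    Cn = C.copy()
    # Interior update: dC/dt = D * d2C/dx2
    C[1:-1] = Cn[1:-1] + D * dt / dx**2 * (Cn[2:] - 2 * Cn[1:-1] + Cn[:-2])
    # Constant-flux boundary while drug remains, zero-flux afterwards
    C[0] = C[1] + (flux * dx / D if t < t_deplete else 0.0)
    C[-1] = 0.0               # perfect sink far from the implant
    t += dt

print(f"peak tissue concentration after one week: {C.max():.3e}")
```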
APA, Harvard, Vancouver, ISO, and other styles
50

Dekyndt, Bérangère. "La libération modifiée de principes actifs, développement de deux approches." Thesis, 2015. http://www.theses.fr/2015LIL2S005/document.

Full text
Abstract:
Individualized and targeted therapies are currently being developed, and dosage forms are evolving in parallel to control drug release and deliver the active ingredient as close as possible to its site of interest. Solid oral dosage forms are the most common pharmaceutical formulations: they are easy to use, painless, and reduce the risk of infection, and their design also makes it possible to adjust drug release. Two approaches are discussed in this manuscript: the first targets drug release to the therapeutic site of action, the colon; the second controls drug release to maintain a constant concentration, minimizing side effects and periods of sub-therapeutic concentrations at the site of action.
The first approach: The treatment of colonic diseases such as Inflammatory Bowel Disease (IBD) can be significantly improved via local drug delivery. One approach is to use polysaccharide coatings that are degraded by enzymes secreted by the colonic microflora. However, the lack of a reliable in vitro test simulating the conditions of a living colon, together with the potential impact of associated antibiotic treatments on the quantity and quality of the bacteria present and the enzymes they secrete, is an obstacle to its development. The aim of the study was to screen polysaccharides suitable for the development of new colon-targeted formulations. After this selection, drug release from the selected formulations was evaluated using a method based on stools from IBD patients treated or not with antibiotics. Finally, the use of bacterial mixtures as a possible substitute for fresh fecal samples was evaluated.
The second approach: Coated pellets offer great potential for controlled drug delivery. However, constant release rates are difficult to achieve with this type of dosage form if the drug is freely water-soluble, because diffusional mass transport generally plays a major role: with time, the drug concentration within the system decreases, reducing the concentration gradients that are the driving force for drug release. This type of release kinetics may be inappropriate for safe and efficient drug treatment. Despite the great practical importance of this formulation challenge, surprisingly few effective strategies are known. In this study, a novel approach is presented, based on sequential layers of drug and polymer (the polymer layers initially free of drug) that provide a non-homogeneous initial drug distribution, combined with lag-time effects and partial initial drug diffusion towards the pellet core. The type, number, thickness, and sequence of the drug and polymer layers were varied; a rather simple four-layer system (two drug layers and two polymer layers) provided approximately constant drug release over 8 h.
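The contrast motivating the second approach can be stated compactly: a diffusion-controlled (Higuchi-type) coating releases Q(t) = kH·√t, so its instantaneous rate decays as 1/√t, whereas the layered pellet targets a constant zero-order rate, Q(t) = k₀·t. A small Python sketch with assumed constants (not values from the thesis) makes the difference in rates explicit:

```python
import numpy as np

# Instantaneous release rates under the two kinetics discussed above.
# Higuchi (diffusion-controlled): Q(t) = kH * sqrt(t) -> dQ/dt falls as
# 1/sqrt(t). Zero order (layered pellet target): Q(t) = k0 * t -> dQ/dt
# is constant. kH and k0 are assumed constants, not thesis values.
t = np.linspace(0.5, 8.0, 16)           # hours
kH, k0 = 35.0, 12.5                     # % released per sqrt(h), %/h
higuchi_rate = 0.5 * kH / np.sqrt(t)    # %/h, decays with time
zero_order_rate = np.full_like(t, k0)   # %/h, constant by design

for ti, hr, zr in zip(t, higuchi_rate, zero_order_rate):
    print(f"t = {ti:4.1f} h   Higuchi: {hr:5.1f} %/h   zero-order: {zr:4.1f} %/h")
```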
APA, Harvard, Vancouver, ISO, and other styles
