
Dissertations / Theses on the topic 'Codes and Code Enforcement'

Consult the top 50 dissertations / theses for your research on the topic 'Codes and Code Enforcement.'

1

Al-Fahad, Jasem Y. "Reform of building codes, regulations, administration and enforcement in Kuwait : within the legal, administrative, technical & social framework." Thesis, Loughborough University, 2012. https://dspace.lboro.ac.uk/2134/9883.

Full text
Abstract:
Building code development and implementation practices normally evolve with the construction community's changing awareness, needs and perspectives, with advances in construction technology, and with new levels of knowledge. Unproven development and implementation practices, insufficient or outdated codes, the adoption of unproven advanced codes from other countries, or the infringement of existing codes can all lead to serious shortfalls in the minimum requirements for public health, safety and general welfare, and to poor building quality. Every aspect of building code development and implementation can be affected by insufficiencies and infringements in building codes/regulations, which in turn can cause building failures. In general, the success of a building code development and implementation practice is directly connected with the insufficiencies and infringements present in the building code framework (legal, administrative, technical and social); faults in development and implementation must be resolved if a building project is to end by assuring the code's objectives of public health, safety and general welfare. One of the earliest studies of the problems of building code development and implementation was conducted by the Productivity Commission (2004), which organised and categorised the causes of building code shortcomings according to the code's four main functions: legal, administrative, technical and social. That research was the starting point for studying the problems of building codes in Kuwait. Over the past 20 years, researchers have produced numerous categories, components and rankings to explain the different types of insufficiencies and infringements in building codes/regulations.
However, these categories and rankings produce inconsistent and overlapping cause and impact factors. In addition, researchers and practitioners tend to focus on the technical and administrative sides of building code development and implementation while neglecting the legal and social sides. Legal issues, such as a law empowering the preparation and enforcement of building codes, insurance cover, the building materials testing system, weak regulations, building specifications, and the clarity of regulatory texts, and social issues, such as community awareness, the issuing and enforcement of court rulings, deterrent punishments for violators, and violations or cheating, were deemed less critical by most reviewers. This research is specifically concerned with the insufficiencies and infringements in building codes/regulations that cause shortfalls in the minimum requirements for public health, safety and general welfare, and with how the related cause and impact factors are selected and organised. Existing work highlights the need for further research on how current building regulations relate to research, and there is evidence that construction industries around the world have little experience in this area (CIB TG37, 2001). This research addresses that aspect of the debate by seeking to clarify the role of the four functions of the building code (legal, administrative, technical and social) as a frame of reference that stakeholders (building officials, design and construction professionals) might agree with, and which should act as the basis for selecting and organising occurrences of cause factors and their impact on public health, safety and general welfare.
Treating the four functions of the building code as a fault (cause) frame of reference leads to a common, practical view of the multi-dimensional setting within which cause factors may be identified, one that we believe can be grounded across a wide range of practices in building code development and implementation. The research surveyed the opinions of building officials and design and construction professionals to assess which fault (cause) factors are most likely to occur in building and construction projects, and to evaluate their impact by assessing which factors these respondents think are likely to produce shortfalls in the minimum requirements for public health, safety and general welfare. The data were processed, analysed and ranked; using Excel and SPSS for factor analysis, the fault (cause) factors were reduced and grouped into clusters and components for further correlation analysis. The analysis established opinions on fault (cause) likelihood and on the impact of faults (causes) on the objectives of the building code. It indicates that groupings of insufficiencies and infringements in building codes/regulations correspond to the different parts of the building code framework (legal, administrative, technical and social): three groups emerge when causes are viewed by likelihood of occurrence, and four groups by impact for each building code objective. The evidence on the impact on building code objectives provides a stronger view of which components of cause are important than the evidence on likelihood alone.
The research accounts for this difference by suggesting that viewing cause within the context of the building code framework, and impact within the context of the code's objectives (public health, safety and general welfare), gives those involved in building code development and implementation a clearer view of the relationships among cause factors and between cause and impact factors. It also allows the various cause components and the associated emergent clusters to be identified more readily. The contribution of the research lies in assessing cause within a construct defined by a broadly accepted view of the building code framework (legal, administrative, technical and social). The fault (cause) likelihood construct is based on the building code framework proposed in this research; it facilitates a focus on roles and responsibilities and allows activities to be coordinated and integrated with the goals of regular code development and implementation. This would better enable building officials and code writers to identify and manage faults (causes) as they emerge within aspects of the building code, reflect building and construction activities and processes more closely, and facilitate the administration of faults (causes).
APA, Harvard, Vancouver, ISO, and other styles
2

Caldwell, Anita M. "Should Enforcement Provisions be Included in Newsroom Codes of Ethics." Thesis, The University of Arizona, 1994. http://hdl.handle.net/10150/292172.

Full text
3

Brown, Sara E. "Code enforcement, tax delinquency, and strategic management of problematic properties." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/90090.

Full text
Abstract:
Thesis: M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2014.
Thesis: S.M. in Real Estate Development, Massachusetts Institute of Technology, Program in Real Estate Development in conjunction with the Center for Real Estate, 2014.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 129-137).
This thesis considers two interrelated sources of blight in cities: so-called "problem properties" (PP), properties in poor physical condition whose owners have stopped performing basic maintenance, and tax-delinquent properties (TDP), whose owners have stopped paying their property taxes. It focuses on how cities can be more effective in addressing PP and TDP both "before" (through proactive prevention) and "after" (through correction and collection) they emerge. Fundamentally, it argues that cities should recognize the relationship between PP and TDP: they often constitute the same properties and, more importantly, both can be "liened up" and taken through foreclosure if their owners are truly unresponsive. These liens can be established through unpaid code-violation fines and receivership liens, as well as unpaid taxes. This approach is based on the premise that cities want to receive as few properties as possible through foreclosure, because of the costs associated with holding excess land and buildings, but also want to prevent the "worst of the worst" PP and TDP from becoming a blight on neighborhoods. Thus, if owners refuse to correct severe code violations and/or delinquency, cities want to transfer their properties to responsible owners as quickly as possible. Recognizing the links between PP and TDP enables cities to switch from a reactive to a proactive approach in treating blight. This thesis also discusses barriers to addressing PP and TDP. It suggests that cities treat them through comprehensive "enforcement pathways" targeted to specific property and owner types. In particular, owners are divided into three groups: cooperative, non-cooperative, and "missing in action." This segmentation recognizes that different properties present different enforcement challenges and require different strategies to return them to productive use.
In addition, this thesis examines the three collection methods available to cities: public collection, contracted third-party servicing, and privatized collection (tax lien sales), and addresses a major limiting factor on tax lien sales: their dependence on private market demand. Finally, it examines how cities can be more effective in managing and disposing of their property inventories. To guide property usage and disposition timing, it suggests that cities establish a central property inventory that includes critical land and building characteristics, a scoring system for potential property reuse, and a market model that segments neighborhoods and identifies spatial and temporal "inflection points." It also recommends that cities not take a "one size fits all" approach to their entire inventory, but rather select the disposition method (sheriff's sale/auction, RFP, third-party transfer, or land banking) that is most appropriate for the property type, (sub)market condition, and desired outcome(s). Finally, it outlines strategies to overcome the under-management of public assets, weak markets, and financing challenges. To support the discussion of how best to manage delinquency and disposition, it includes detailed case studies of Philadelphia, New York City, and Boston.
by Sara E. Brown.
M.C.P.
S.M. in Real Estate Development
4

Morrison, Alexandra. "The Influence of Ethical Code of Conduct Enforcement on Unethical Behavior." Thesis, The University of Arizona, 2014. http://hdl.handle.net/10150/321884.

Full text
5

Beets, S. Douglas. "Effectiveness of the complaint-based enforcement system of the AICPA Code of Professional Ethics." Diss., Virginia Polytechnic Institute and State University, 1987. http://hdl.handle.net/10919/82900.

Full text
Abstract:
The American Institute of Certified Public Accountants (AICPA) is presently considering a proposal to revise the enforcement system of the Code of Professional Ethics from the current complaint-based mechanism to a system based on reviews of practitioners and their work. Inherent within the proposal is the conclusion that the existing enforcement provisions, based on complaints about violations, are not adequate. Complaints about ethics violations can originate from practically anyone although two of the primary initiators of violation complaints are Certified Public Accountants (CPAs) and their clients. CPAs, however, may have limited opportunities to observe violations committed by colleagues. Clients, on the other hand, may be in a prime position to detect departures from the ethics code but may have no incentive to report violations committed by their CPAs; e.g., a violation may benefit the client. A survey of these two groups (CPAs and clients) indicated that while both groups are familiar with the code and believe that the rules of conduct are appropriate, clients do not tend to report violations and CPAs, on average, indicated that they would report observed violations slightly more than one-half the time. These findings suggest that an enforcement system based solely on the complaints of CPAs and clients cannot be effective.
Ph. D.
6

Wegmann, Jacob Anthony George. ""We Just Built It": Code Enforcement, Local Politics, and the Informal Housing Market in Southeast Los Angeles County." Thesis, University of California, Berkeley, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3708337.

Full text
Abstract:

This dissertation is an exploration of the role of informality in the housing market in southeast Los Angeles County. While informality has long been the subject of scholarship in cases from the Global South, and increasingly in the United States, examinations of housing informality in the US thus far have largely been situated in rural and peri-urban areas. This work seeks to interrogate informality in housing processes unfolding within the very heart of North America's leading industrial metropolis.

After a brief preface, the dissertation's second chapter reviews literature on various aspects of informality and on Accessory Dwelling Units, i.e. additions or conversions of living quarters on residential properties. Chapter 3 introduces the work's methodological pillars and describes the four major, mixed methods relied upon: a survey of code enforcement officers; interviews and direct observation; an analysis of the rental market; and an analysis of the property sales market. Two other, minor methods employed are an analysis of building footprints and the analysis of secondary data.

Chapter 4 introduces the single case used in the dissertation. This is a group of 14 communities, with a total population of 700,000, that are collectively referred to via the neologism City of Gateway. Next follows a historical overview of the area. Following a discussion of the 1965 Watts riots as a historical watershed, trends in the City of Gateway's economy and population that have driven a dramatic informalization of the housing stock since that time are examined.

Chapter 5 describes the physical expression of the informal housing market in the City of Gateway, in seven extralegal modes that involve either the conversion of existing space or the addition of new space, and the tactics used to effect them. Chapter 5 closes with a quantification and discussion of the consequences of the characteristic urban form produced by the informal housing market, horizontal density, which is the addition of density by more intensively covering lots with buildings rather than building upwards.

Chapter 6 describes the "nuts and bolts" of the informal housing market. It presents evidence that extralegal rentals are, on balance, generally (though not always) cheaper for their occupants than formal market alternatives. It examines presale ordinances that some cities have passed to try to disrupt the informal housing market by intervening in the sale of residential property. It discusses the important role of appraisers in providing or denying mortgage credit to current or would-be homeowners with extralegal space. An analysis of property sales transactions provides evidence that extralegal space does not appear to be capitalized in property values. Finally, the chapter discusses barriers imposed by the current US mortgage system to financing the construction of rentable space on residential properties.

Chapter 7 is an examination of the role played by code enforcement in shaping the informal housing market in the City of Gateway. Specifically, it examines how code enforcement departments allocate their time and effort given that there are far more potential enforcement actions than their capacity allows. The chapter presents arguments that code enforcement reshapes the informal housing market while failing to suppress it; that it is applied unevenly; and that it paradoxically helps maintain the informal order of the informal housing market.

Chapter 8 begins by arguing that issues related to informal housing, when they are discussed at all in the local political sphere, tend to be filtered through the reductive frame of law and order. The chapter presents reasons for this state of affairs, both ones specific to the City of Gateway and others that are more general and potentially applicable to other places in the US. Chapter 8 closes with a summary of high-profile local debates in which informal housing's influence is considerable but rarely acknowledged: fair share housing, water and sewer utility capacity, parking, and school crowding.

The conclusion, Chapter 9, begins by assessing the positive and negative attributes of the informal housing system. A normative judgment is made that the former outweigh the latter, although the drawbacks are considerable and in need of urgent attention. A multiscalar palette of policy interventions intended to usefully and justly intervene in the informal housing system is put forth. Many of these are within the ambit of local government, but action in other spheres—in state and even federal government, and within the housing NGO sector—is needed. Next, lessons for advocates, policymakers, and researchers drawn from the broader implications of this dissertation are presented. After that follows a speculative discussion about the role of culture in comparison with economic necessity in driving the informal housing market in the City of Gateway. Next, informed speculation about the future of the City of Gateway's housing market is presented. The dissertation closes with a discussion of these trends' implications for the City of Gateway's continued existence as that increasingly rare of type of place, a working class enclave in the heart of a vast global metropolis.

7

Pamborides, George Pan. "The impact of public international law on private shipping law : the effect of the modern international legislative and enforcement practices on certain principles of maritime law." Thesis, University of Southampton, 1997. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.264650.

Full text
8

Martori, Adrian Jordi. "Probabilistic Models of Partial Order Enforcement in Distributed Systems." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0040/document.

Full text
Abstract:
Distributed systems have extended information technology to a broader audience, in terms of both location and numbers, but these geo-replicated systems need to be scalable in order to meet ever-growing demands. Moreover, a system has to be able to process messages in an order equivalent to that in which they were created, to avoid unwanted side effects. Partial order enforcement provides an ordering of events that all nodes follow, so that messages are processed in an adequate order. A system that enforces a partial order simplifies the development of distributed applications and ensures that the end user will not observe causality-defying behaviors. In this thesis we present models for different partial order enforcements, using different latency distributions. Given a latency model, which yields the time it takes for a message to travel from one node to another, our model builds on it to give the additional time needed to enforce a given partial order. We propose the following models. First, in a one-to-many communication, the probability that the message is delivered at all nodes before a given time. Second, from a receiver's point of view, the probability that all the other nodes have delivered the message within a given time of its own delivery. Third, in a one-to-many communication, the probability that the message has arrived at at least a given subset of the nodes before a given time. Fourth, under FIFO or causal ordering, whether a message is ready to be delivered, at one node or many. All of this furthers our understanding of how distributed systems behave in the presence of partial orders. Using this knowledge, we built an algorithm that uses these models of network behavior to provide a reliable causal delivery system. To validate our models, we developed a simulation tool that runs scenarios tailored to our needs: we can define the parameters of the latency model, the number of clients, and the client workloads, and compare the randomly generated values for each configuration with the outcomes predicted by our model. One application that can take advantage of our model is a reliable causal delivery algorithm. It uses causal information to detect missing elements and reduces the need for message acknowledgements by contacting other replicas only when a message is presumed missing; this information is provided by our model, which sets waiting timers according to network statistics and resource consumption. Finally, this application was tested in the same simulator as the models, with promising results, and then evaluated in a real-life experiment using Amazon EC2 as the platform.
9

Hadravová, Andrea. "Pojetí exekutorských služeb v České republice a jejich porovnání s vybranými státy EU." Master's thesis, Vysoká škola ekonomická v Praze, 2017. http://www.nusl.cz/ntk/nusl-359903.

Full text
Abstract:
The legislation on enforcement changed fundamentally in 2001, when Act No. 120/2001 Coll., on Executors and Enforcement Activities (the Enforcement Code), entered into force. Creditors were thus given a choice of how to recover their claims: until then, the only possibility was recovery through the courts, but from that date a creditor could also assert his rights through the services of a distrainer. In 2012 this duality was abolished, and enforcement is now carried out in most cases by private distrainers. A distrainer carries out his activity for remuneration, which gives him the status of an entrepreneur, and that remuneration has been a thorny issue ever since the profession was introduced into the Czech system in 2001. The contentiousness of the topic is also evidenced by a parliamentary bill that seeks to reduce the distrainers' tariff. The thesis compares the current situation in the Czech Republic with selected states: Germany, because debts there are recovered by a state employee; France, because its system is one of the oldest and served as a model for many states; and Slovakia, because of our common history and an amendment that came into force in April of this year. The aim of the thesis is to map the situation in the selected states and to find possible deviations and sources of inspiration for the system of enforcement services in the Czech Republic.
10

Reece, Kristie M. "Fighting Urban Blight through Community Engagement and GIS." University of Toledo / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=toledo1544810680015951.

Full text
11

Uhl, Philip J. "A Spatio-Temporal Data Model for Zoning." BYU ScholarsArchive, 2002. https://scholarsarchive.byu.edu/etd/1.

Full text
Abstract:
Planning departments are besieged with temporal/historical information. While for many institutions historical information can be relegated to archives, planning departments have a constant need to access and query their historical information, particularly their historical spatial information such as zoning. This can be a cumbersome process fraught with inaccuracies due to the changing organizational methods and the extended historical legacies of most municipalities. Geographic Information Systems can be a tool to provide a solution to the difficulties in querying spatio-temporal planning data. Using a data model designed specifically to facilitate the querying of historical zoning information, queries can be performed to answer basic zoning questions such as "what is the zoning history for a specific parcel of land?" This work outlines this zoning data model, its implementation, and its testing using queries basic to the needs of planning departments.
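A valid-time zoning table of the kind the abstract describes can be sketched in a few lines. The schema, parcel IDs, and zoning districts below are hypothetical, not taken from the thesis's data model; the point is only how the two canonical queries (full history, and zoning as of a date) fall out of the temporal design.

```python
import sqlite3

# Hypothetical valid-time table: one row per parcel per zoning period;
# a NULL valid_to marks the currently effective zoning.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE zoning (
    parcel_id TEXT, zone TEXT, valid_from TEXT, valid_to TEXT)""")
rows = [
    ("P-001", "AG",  "1950-01-01", "1978-06-01"),  # agricultural
    ("P-001", "R-1", "1978-06-01", "2001-03-15"),  # single-family residential
    ("P-001", "C-2", "2001-03-15", None),          # commercial, current
]
conn.executemany("INSERT INTO zoning VALUES (?,?,?,?)", rows)

# "What is the zoning history for a specific parcel of land?"
history = conn.execute(
    "SELECT zone, valid_from, valid_to FROM zoning "
    "WHERE parcel_id=? ORDER BY valid_from", ("P-001",)).fetchall()

# "What was the zoning of this parcel on a given date?"
as_of = conn.execute(
    "SELECT zone FROM zoning WHERE parcel_id=? AND valid_from<=? "
    "AND (valid_to IS NULL OR valid_to>?)",
    ("P-001", "1990-07-04", "1990-07-04")).fetchone()
print(history)
print(as_of)
```

Storing explicit validity intervals, rather than overwriting the zoning attribute in place, is what turns the historical query into a simple range predicate instead of an archive dig.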
12

Guesnet, Yannick. "Codes et interprétations." Rouen, 2001. http://www.theses.fr/2001ROUES019.

Full text
Abstract:
The work presented in this thesis lies within the theory of variable-length codes. We introduce two new classes of codes: codes with finite interpretation delay and adjacent codes. We show that these codes satisfy an extension of the defect theorem. We also show that, for cutting ("coupants") codes with finite interpretation delay, maximality within the class of codes with finite interpretation delay is equivalent to maximality within the class of codes in general. We further propose two completion methods: one completes any cutting code with finite interpretation delay into a maximal code with finite interpretation delay, and the other completes any synchronizing code into a maximal synchronizing code; both methods preserve the rationality of the sets. Finally, we turn to dense codes: we show that maximality within the class of circular codes is not equivalent to maximality within the class of codes in the case of dense circular codes, and we give a method for completing any cutting, non-maximal bifix code into a code that is dense, bifix, maximal within the class of bifix codes, and non-maximal within the class of codes.
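As an elementary illustration of the setting the thesis works in, the standard Sardinas-Patterson algorithm (textbook material, not a method from the thesis) decides whether a finite set of words is a code, i.e. whether every message admits at most one factorization into codewords:

```python
def quotient(A, B):
    """A^{-1}B: all suffixes s such that a + s = b for some a in A, b in B."""
    return {b[len(a):] for a in A for b in B if b.startswith(a)}

def is_code(words):
    """Sardinas-Patterson test: True iff `words` is uniquely decodable."""
    C = set(words)
    S = quotient(C, C) - {""}   # dangling suffixes from pairs of codewords
    seen = set()
    while S:
        if "" in S:             # empty suffix: two factorizations exist
            return False
        if S <= seen:           # no new suffixes: the search has stabilized
            return True
        seen |= S
        S = quotient(C, S) | quotient(S, C)
    return True

print(is_code(["0", "01", "011"]))  # a suffix code, hence uniquely decodable
print(is_code(["0", "01", "10"]))   # "010" factors as 0.10 and as 01.0
```

The algorithm terminates because every set of dangling suffixes is drawn from the finite set of suffixes of codewords, so cycle detection via `seen` suffices.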
13

Leslie, Martin P. "Hypermap-Homology Quantum Codes." Diss., The University of Arizona, 2013. http://hdl.handle.net/10150/297012.

Full text
Abstract:
We introduce a new type of sparse CSS quantum error-correcting code based on the homology of hypermaps. Sparse quantum error-correcting codes are of interest in the building of quantum computers due to their ease of implementation and the possibility of developing fast decoders for them. Codes based on the homology of embeddings of graphs, such as Kitaev's toric code, have been discussed widely in the literature, and our class of codes generalizes these. We use embedded hypergraphs, which are a generalization of graphs whose edges can connect more than two vertices. We develop theorems and examples of our hypermap-homology codes, especially in the case where we choose a special type of basis for our homology chain complex. In particular, the most straightforward generalization of the m × m toric code to hypermap-homology codes gives us a [(3/2)m², 2, m] code, as compared to the toric code, which is a [2m², 2, m] code. Thus we can protect the same amount of quantum information, with the same error-correcting capability, using fewer physical qubits.
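The qubit saving in that comparison is easy to check numerically. This is a back-of-the-envelope sketch using only the code parameters quoted in the abstract; m is taken even so that (3/2)m² is an integer.

```python
def toric_qubits(m):
    """Physical qubits in the m x m toric code, a [[2m^2, 2, m]] code."""
    return 2 * m * m

def hypermap_qubits(m):
    """Physical qubits in the corresponding hypermap-homology [[(3/2)m^2, 2, m]] code."""
    return 3 * m * m // 2

for m in (4, 8, 16):
    t, h = toric_qubits(m), hypermap_qubits(m)
    print(f"m={m}: toric {t} qubits, hypermap {h} qubits, "
          f"{100 * (t - h) / t:.0f}% fewer for the same distance and logical qubits")
```

The ratio is a constant 3/4, i.e. a 25% reduction in physical qubits at every even distance m.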
APA, Harvard, Vancouver, ISO, and other styles
14

Tixier, Audrey. "Reconnaissance de codes correcteurs." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066554/document.

Full text
Abstract:
In this PhD thesis, we focus on the code reconstruction problem. This problem mainly arises in a non-cooperative context, when a communication consisting of noisy codewords stemming from an unknown code is observed and its content has to be retrieved by recovering the code used for communicating and then decoding the noisy codewords with it. We consider three scenarios and suggest an original method for each. In the first, we assume that the code used is a turbo-code and propose a method for reconstructing the associated interleaver (the other components of the turbo-code can easily be recovered by existing methods). The interleaver is reconstructed step by step, searching for the most probable index at each step and computing the relevant probabilities with the help of the BCJR decoding algorithm. In the second, we tackle the problem of reconstructing LDPC codes by suggesting a new method for finding a list of parity-check equations of small weight that generalizes and improves upon all existing methods. Finally, in the last scenario we reconstruct an unknown interleaved convolutional code. This method uses the previous one to find a list of parity-check equations satisfied by the interleaved code. Then, by introducing a graph representing how these parity-check equations intersect, we recover the interleaver and the convolutional code at the same time.
APA, Harvard, Vancouver, ISO, and other styles
15

Szabo, Steve. "Convolutional Codes with Additional Structure and Block Codes over Galois Rings." Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1257792383.

Full text
APA, Harvard, Vancouver, ISO, and other styles
16

Trinh, Thuy Duong. "Ochrana autorských práv v on-line médiích." Master's thesis, Vysoká škola ekonomická v Praze, 2013. http://www.nusl.cz/ntk/nusl-198040.

Full text
Abstract:
This diploma thesis deals with the protection of copyright in online media, with a focus on personal blogs. The author defines the basic terms of copyright, in particular the use of a work and the ways a work may be used, and deals with liability for copyright infringement and the identification of the persons who bear that responsibility. The thesis is also dedicated to copyright itself and its instruments of enforcement. Increased focus is put on the available case law, especially that of the Court of Justice of the European Union. Finally, the author conducted a survey among authors who publish their works online to determine their experience with the violation of their rights.
APA, Harvard, Vancouver, ISO, and other styles
17

Ketkar, Avanti Ulhas. "Code constructions and code families for nonbinary quantum stabilizer code." Thesis, Texas A&M University, 2004. http://hdl.handle.net/1969.1/2743.

Full text
Abstract:
Stabilizer codes form a special class of quantum error-correcting codes. This thesis studies nonbinary quantum stabilizer codes. A great deal of work has been done on binary stabilizer codes, while nonbinary stabilizer codes have received much less attention. Various results on binary stabilizer codes, such as code families and general code constructions, are generalized here to the nonbinary case. The lower bound on the minimum distance of a code is simply the minimum distance of the currently best known code, and the focus of this research is to improve these lower bounds. To achieve this goal, various existing quantum codes with good minimum distance are studied. Some new families of nonbinary stabilizer codes, such as quantum BCH codes, are constructed, and different ways of constructing new codes from existing ones are given. Together, these constructions improve the lower bounds.
APA, Harvard, Vancouver, ISO, and other styles
18

Schofield, Mark. "The equivocation of codes." Thesis, University of Plymouth, 2018. http://hdl.handle.net/10026.1/11297.

Full text
Abstract:
Equivocation was introduced by Shannon in the late 1940s in the seminal papers that kick-started the whole field of information theory. Much ground has been covered on equivocation's counterpart, channel capacity, and in particular on its bounds. However, less work has been carried out on evaluating the equivocation of a code transmitted across a channel. The aim of the work covered in this thesis was to use a probabilistic approach to investigate and compare the equivocation of various codes across a range of channels. The probability and entropy of each output, given each input, can be used to calculate the equivocation. This gives a measure of the ambiguity and secrecy of a code when transmitted across a channel. The calculations grow exponentially in magnitude as both the message length and the code length increase. In addition, factors such as erasures and deletions significantly complicate the process. In order to improve on the calculation times offered by a conventional, linearly-programmed approach, an alternative strategy involving parallel processing with a CUDA-enabled (Compute Unified Device Architecture) graphical processor was employed. This enabled results to be obtained for codes of greater length than was possible with linear programming. However, the practical implementation of a CUDA-driven, parallel-processed solution gave rise to significant issues with both the software implementation and subsequent platform stability. By normalising equivocation results, it was possible to compare different codes under different conditions, making it possible to identify and select codes that gave a marked difference in the equivocation encountered by a legitimate receiver and an eavesdropper. The introduction of code expansion provided a novel method for enhancing equivocation differences still further.
The work on parallel processing to calculate equivocation and the use of code expansion was published in the following conference: Schofield, M., Ahmed, M. & Tomlinson, M. (2015), Using parallel processing to calculate and improve equivocation, in 'IEEE Conference Publications - IEEE 16th International Conference on Communication Technology'. In addition to the novel use of a CUDA-enabled graphics processor to calculate equivocation, equivocation calculations were also performed for expanded versions of the codes. Code expansion was shown to yield a dramatic increase in the achievable equivocation levels. Once the methods had been developed for the Binary Symmetric Channel (BSC), they were extended to cover intentional erasures on the BSC, intentional deletions on the BSC, and the Binary Erasure Channel (BEC). The work on equivocation on the BSC with intentional erasures was published in: Schofield, M. et al. (2016), Intentional erasures and equivocation on the binary symmetric channel, in 'IEEE Conference Publications - International Computer Symposium', IEEE, pp. 233-235. The work on the BEC produced a novel outcome due to the erasure-correction process employed: as the probability of an erasure increases, the set of likely decoded outcomes diminishes. This directly decreases the output entropy of the system, thereby also affecting its equivocation value, an aspect that had not been encountered previously. The work also extended to intentional deletions on the BSC and to the Binary Deletion Channel (BDC) itself. Although the methods used struggled to cope with the additional complexity brought by deletions, the use of Varshamov-Tenengolts codes on the BSC with intentional deletions showed that family of codes to be well suited to the channel arrangement, as well as capable of being extended to correct multiple deletions.
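The probabilistic approach described in the abstract can be illustrated on a toy scale: the equivocation H(X|Y) of a short code over a BSC, computed by exhaustive enumeration. This is a minimal sketch under assumed conventions (equiprobable codewords), not the thesis's CUDA implementation:

```python
from itertools import product
from math import log2

def equivocation(code, p):
    """Equivocation H(X|Y) of a binary code sent over a BSC(p),
    codewords equiprobable, by exhaustive enumeration of outputs y."""
    n = len(code[0])
    px = 1.0 / len(code)
    H = 0.0
    for y in product('01', repeat=n):
        # joint probabilities P(x, y) for every codeword x
        joint = []
        for x in code:
            d = sum(a != b for a, b in zip(x, y))  # Hamming distance
            joint.append(px * p**d * (1 - p)**(n - d))
        py = sum(joint)
        # accumulate -sum_x P(x, y) log2 P(x | y)
        H -= sum(q * log2(q / py) for q in joint if q > 0)
    return H

code = ['000', '011', '101', '110']  # [3,2] single parity-check code
print(round(equivocation(code, 0.0), 6), round(equivocation(code, 0.5), 6))  # 0.0 2.0
```

At p = 0 the receiver has no ambiguity, while at p = 0.5 the equivocation equals the full 2 bits of H(X); the exponential blow-up with n and k is exactly what motivated the parallel-processing approach above.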
APA, Harvard, Vancouver, ISO, and other styles
19

Bienert, Rolf. "Über Automorphismengruppen von zyklischen Codes." Saarbrücken VDM Verlag Dr. Müller, 2007. http://d-nb.info/989226832/04.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Qiu-Cheng, Xie, and Lei Zhong-Kui. "NEW DEVELOPMENT OF OPTIMUM GROUP SYNCHRONIZATION CODES." International Foundation for Telemetering, 1990. http://hdl.handle.net/10150/613434.

Full text
Abstract:
International Telemetering Conference Proceedings / October 29-November 02, 1990 / Riviera Hotel and Convention Center, Las Vegas, Nevada
In this paper, twenty-four optimum group synchronization codes (N = 31 to 54) for PCM telemetry systems are presented. These codes are, to date, the newest development in the category of optimum group synchronization codes.
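One classical figure of merit for frame-synchronization words of this kind is the aperiodic autocorrelation sidelobe level: a good sync word has a sharp correlation peak and small off-peak values. The snippet below is a generic illustration using the 7-bit Barker word, which is an assumed example and not one of the paper's 24 codes:

```python
def sidelobes(word):
    """Aperiodic autocorrelation of a binary word mapped to +/-1.
    Good frame-sync words keep all off-peak magnitudes small."""
    s = [1 if b == '1' else -1 for b in word]
    n = len(s)
    return [sum(s[i] * s[i + k] for i in range(n - k)) for k in range(n)]

barker7 = '1110010'
print(sidelobes(barker7))  # [7, 0, -1, 0, -1, 0, -1]: peak 7, |sidelobes| <= 1
```

Optimum group synchronization codes are selected by a related criterion, minimizing the probability of false synchronization on noisy data rather than the raw sidelobe level alone.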
APA, Harvard, Vancouver, ISO, and other styles
21

Bärmann, Daniel. "Aufzählen von DNA-Codes." Master's thesis, Universität Potsdam, 2006. http://opus.kobv.de/ubp/volltexte/2006/1026/.

Full text
Abstract:
In this work a model for enumerating DNA codes is developed. By introducing an order on the set of all DNA codewords and extending it to the set of all codes, the model allows DNA codes to be found with specific properties, such as overlap-freeness, compliance, comma-freeness, sticky-freeness, overhang-freeness, subword compliance, and others, with respect to a given involution on the set of codewords. A tool built on this model can search for codes with arbitrary combinations of code properties with respect to the standard Watson-Crick DNA involution. The work also investigates the optimality of DNA codes with respect to their information rate, as well as the search for solid DNA codes.
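One of the properties named above, comma-freeness, can be checked directly from its definition: no codeword may appear at a proper offset inside the concatenation of two codewords. A minimal sketch on toy codes over {A, C, G, T} (the codewords are illustrative, not drawn from the thesis):

```python
from itertools import product

def is_comma_free(code):
    """A block code is comma-free if no codeword occurs at a proper
    offset inside the concatenation of any two codewords."""
    n = len(next(iter(code)))
    for u, v in product(code, repeat=2):
        w = u + v
        # windows at offsets 1 .. n-1 straddle the codeword boundary
        if any(w[i:i + n] in code for i in range(1, n)):
            return False
    return True

print(is_comma_free({'ACG', 'TAC'}))  # True
print(is_comma_free({'AAT', 'ATA'}))  # False: 'AAT' + 'AAT' contains 'ATA'
```

The enumeration model in the thesis goes further by ordering the codes, so that such property checks can prune the search instead of testing every candidate set.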
APA, Harvard, Vancouver, ISO, and other styles
22

Cuevas, Ordaz Francisco Javier. "Turbo décodage de code produit haut débit." Lorient, 2004. http://www.theses.fr/2004LORIS033.

Full text
Abstract:
This thesis continues research undertaken on new error-correcting code techniques, following work on block turbo codes (BTCs) introduced in 1994 by R. Pyndiah. It proposes an innovative architecture for the turbo decoding of product codes, using extended BCH codes as elementary codes. This architecture stores several data items at the same memory address and performs parallel decoding to increase the data rate. First, we propose a new high-rate turbo decoding architecture using a soft-input soft-output (SISO) BCH (32,26,4) code correcting 1 error (a Hamming code). The second group of results is dedicated to high-data-rate decoding of the BCH (128,106,8) code with strong error-correcting power, correcting 3 errors (minimum distance of the product code d = 64) at a high code rate (R close to 0.7). The first advantage of these designs is that they use a single memory plane (n² samples grouped into blocks of m²) at the input. The elementary decoder designs presented can process m data items simultaneously, with m = 1, 2, 4 and 8. The second result is that by running m such decoders in parallel in the turbo decoder architecture, we obtain a decoding speed m² times higher for an elementary-decoder area only m²/2 times larger. To compare the performance and complexity of the decoders, C was used for behavioural simulations, VHDL for functional simulations, and Synopsys Design Compiler for synthesis. The results obtained open up the possibility of integrating on silicon turbo decoders with strong error-correcting power (minimum distance 64, code rate 0.8) and very high data rates (6.4 Gbit/s in a 0.18 µm CMOS technology).
APA, Harvard, Vancouver, ISO, and other styles
23

Kühn, Stefan. "Organic codes and their identification : is the histone code a true organic code." Thesis, Stellenbosch : Stellenbosch University, 2014. http://hdl.handle.net/10019.1/86673.

Full text
Abstract:
Thesis (MSc)--Stellenbosch University, 2014.
ENGLISH ABSTRACT: Codes are ubiquitous in culture, and, by implication, in nature. Code biology is the study of these codes. However, the term 'code' has assumed a variety of meanings, sowing confusion and cynicism. The first aim of this study is therefore to define what an organic code is. Following from this, I establish a set of criteria that a putative code has to conform to in order to be recognised as a true code. I then offer an information-theoretical perspective on how organic codes present a viable method of dealing with biological information, as a logical extension thereof. Once this framework has been established, I proceed to review several of the current organic codes in an attempt to demonstrate how the definition of and criteria for identifying an organic code may be used to separate the wheat from the chaff. I then introduce the 'regulatory code' in an effort to demonstrate how the code-biological framework may be applied to novel codes to test their suitability as organic codes and whether they warrant further investigation. Despite the prevalence of codes in the biological world, only a few have been definitively established as organic codes. I therefore turn to the main aim of this study, which is to cement the status of the histone code as a true organic code in the sense of the genetic or signal transduction codes. I provide a full review and analysis of the major histone post-translational modifications, their biological effects, and the protein domains responsible for the translation between these two phenomena. Subsequently I show how these elements can be reliably mapped onto the theoretical framework of code biology. Lastly I discuss the validity of an algorithm-based approach to identifying organic codes developed by Görlich and Dittrich.
Unfortunately, the current state of this algorithm and the operationalised definition of an organic code is such that the process of identifying codes, without the necessary investigation by a scientist with a biochemical background, is currently not viable. This study therefore demonstrates the utility of code biology as a theoretical framework that provides a synthesis between molecular biology and information theory. It cements the status of the histone code as a true organic code, and criticises Görlich and Dittrich's method for finding codes by an algorithm based on reaction networks and contingency criteria.
APA, Harvard, Vancouver, ISO, and other styles
24

Chai, Huiqiong. "Parallel concatenation of regular LDGM codes." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 81 p, 2007. http://proquest.umi.com/pqdlink?did=1251904891&Fmt=7&clientId=79356&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kim, Han Jo. "Improving turbo codes through code design and hybrid ARQ." [Gainesville, Fla.] : University of Florida, 2005. http://purl.fcla.edu/fcla/etd/UFE0012169.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Abbara, Mamdouh. "Turbo-codes quantiques." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00842327.

Full text
Abstract:
The idea of turbo codes, a very powerful construction for encoding classical information, could until now not be transposed to the problem of encoding quantum information. Obstacles remained, both theoretical and related to implementation. For the known quantum version of these codes, there was neither a result establishing an infinite minimum distance, the property that allows an arbitrary number of errors to be corrected, nor an efficient iterative decoding, because so-called catastrophic quantum turbo-encoders propagate certain errors during such decoding and prevent it from working properly. This thesis meets both challenges, by establishing theoretical conditions for a quantum turbo-code to have an infinite minimum distance, and by exhibiting a construction that allows iterative decoding to work well. Simulations then show that the class of quantum turbo-codes designed is effective for transmitting quantum information over a depolarizing channel with a depolarizing strength of up to p = 0.145. These constant-rate quantum codes can be used directly to encode binary quantum information, or integrated as modules to improve the operation of other codes such as quantum LDPC codes.
APA, Harvard, Vancouver, ISO, and other styles
27

Delfosse, Nicolas. "Constructions et performances de codes LDPC quantiques." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14697/document.

Full text
Abstract:
This thesis is devoted to the study of quantum LDPC codes. The first part presents topological constructions of quantum LDPC codes. We introduce a family of color codes based on tilings of the hyperbolic plane, and we study the parameters of a family of codes based on Cayley graphs. In a second part, we analyze the performance of these codes. We obtain an upper bound on the performance of regular quantum LDPC codes over the quantum erasure channel, which implies that these codes do not achieve the capacity of the quantum erasure channel. For the depolarizing channel, we propose a new decoding algorithm for color codes based on three surface-code decodings. Our numerical results show good performance for toric color codes. Finally, we focus on percolation theory. The central question in percolation theory is the determination of the critical probability, which is usually quite difficult to compute exactly. We relate the probability of percolation in some regular tilings of the hyperbolic plane to the probability of a decoding error for hyperbolic codes on the quantum erasure channel. This leads to an upper bound on the critical probability of these hyperbolic tilings based on results from quantum information theory: an application of quantum information to a purely combinatorial problem.
APA, Harvard, Vancouver, ISO, and other styles
28

Hsu, Teh-Hsuan. "Robust concatenated codes for the slow Rayleigh fading channel." [College Station, Tex. : Texas A&M University, 2008. http://hdl.handle.net/1969.1/ETD-TAMU-2723.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Dingninou, Adodo. "Implémentation de Turbo code pour trames courtes." Brest, 2001. http://www.theses.fr/2001BRES2006.

Full text
Abstract:
The transmission of information has established itself as one of the fundamental activities of human societies. New needs, born of deep-space links with satellites and space probes, have revived research into error-correcting codes and high-performance decoding algorithms. From this research, different types of error-correcting codes emerged, two of which later became dominant: block codes and convolutional codes. The Electronics Department of the École Nationale Supérieure des Télécommunications de Bretagne took the initiative of developing innovative error-correction encoding/decoding circuits. These studies led to a new class of codes named "turbo codes", whose performance surpasses that of any other known code. The key idea is a simple concatenation of convolutional codes with small constraint length. Decoding uses an iterative process, which requires elementary codes with soft inputs and outputs. A first soft-output decoder was built in the Department from the Berrou-Adde algorithm (Soft Output Viterbi Algorithm). The very good theoretical performance of the Maximum A Posteriori (MAP) algorithm opens the way to a new generation of convolutional-code decoders which, for reasons of complexity and area, are implemented from a logarithmic approximation of the MAP algorithm (SUB-MAP). This thesis concerns the definition and validation of architectures using the SUB-MAP algorithm. Decoding structures minimizing memory have been developed; they make it possible to implement these new decoders while guaranteeing good speed-area-power trade-offs.
APA, Harvard, Vancouver, ISO, and other styles
30

Ozadam, Hakan. "Repeated-root Cyclic Codes And Matrix Product Codes." Phd thesis, METU, 2012. http://etd.lib.metu.edu.tr/upload/12615304/index.pdf.

Full text
Abstract:
We study the Hamming distance and the structure of repeated-root cyclic codes, and their generalizations to constacyclic and polycyclic codes, over finite fields and Galois rings. We develop a method to compute the Hamming distance of these codes. Our computation gives the Hamming distance of constacyclic codes of length np^s in many cases. In particular, we determine the Hamming distance of all constacyclic, and therefore cyclic and negacyclic, codes of lengths p^s and 2p^s over a finite field of characteristic p. It turns out that the generating sets for the ambient space obtained by torsional degrees and by a strong Groebner basis are essentially the same, and one can be obtained from the other. In the second part of the thesis, we study matrix product codes. We show that using nested constituent codes and a non-constant matrix is a crucial part of the construction of matrix product codes with polynomial units. We prove a lower bound on the Hamming distance of matrix product codes with polynomial units when the constituent codes are nested. This generalizes the technique used to construct the record-breaking examples of Hernando and Ruano. Contrary to a similar construction previously introduced, this bound is not sharp and need not hold when the constituent codes are not nested. We give a comparison of this construction with a previous one. We also construct new binary codes having the same parameters as the examples of Hernando and Ruano, but not equivalent to them.
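The nested lower bound mentioned above can be illustrated with the simplest matrix product code, the (u | u+v) construction given by the matrix A = [[1, 1], [0, 1]]. The brute-force sketch below uses small illustrative binary constituent codes (not the thesis's examples) with C2 nested in C1, and checks that the resulting minimum distance meets min(2*d1, d2):

```python
from itertools import product

def plotkin(C1, C2):
    """(u | u+v): the matrix product code [C1 C2]*A with A = [[1, 1], [0, 1]]."""
    return {tuple(u) + tuple(a ^ b for a, b in zip(u, v)) for u in C1 for v in C2}

def min_distance(C):
    # minimum nonzero weight; equals the minimum distance for linear codes
    return min(sum(c) for c in C if any(c))

C1 = {u for u in product((0, 1), repeat=4) if sum(u) % 2 == 0}  # [4,3,2] even-weight code
C2 = {(0, 0, 0, 0), (1, 1, 1, 1)}                               # [4,1,4] repetition, nested in C1
C = plotkin(C1, C2)
print(min_distance(C), min(2 * min_distance(C1), min_distance(C2)))  # 4 4
```

Here the bound is met with equality; as the abstract notes, for the more general polynomial-unit construction the bound need not be sharp, and it can fail entirely when the constituent codes are not nested.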
APA, Harvard, Vancouver, ISO, and other styles
31

Rodriguez, Fernandez Carlos Gustavo. "Machine learning quantum error correction codes : learning the toric code /." São Paulo, 2018. http://hdl.handle.net/11449/180319.

Full text
Abstract:
Advisor: Mario Leandro Aolita
Committee: Alexandre Reily Rocha
Committee: Juan Felipe Carrasquilla
Abstract: We use supervised learning methods to study error decoding in toric codes of different sizes. We study multiple error models, and obtain figures of the decoding efficacy as a function of the single-qubit error rate. We also comment on how the size of the decoding neural networks and their training time scale with the size of the toric code.
Master's thesis
APA, Harvard, Vancouver, ISO, and other styles
32

Muller, Wayne. "East City Precinct Design Code: Redevelopment through form-based codes." Master's thesis, University of Cape Town, 2014. http://hdl.handle.net/11427/12952.

Full text
Abstract:
Includes bibliographical references.
This thesis confines itself to a consideration of urban development opportunity in the East City Precinct, approached through an understanding of the precinct's former historical character and memory and implemented through form-based codes. It locates the design process in the sub-regional context and puts forward a notional spatial proposal for the physical area of the East City Precinct and its surrounds. The application of theory is tested at precinct level, with the emphasis remaining firmly on the public elements ordering the spatial structure. With all these considerations, this dissertation presents a piece of the history of District Six and the importance of memory in relation to the East City. This contested site of memory and heritage informs the area's contextual development amid the often-essentialising multiculturalism of the 'new South Africa'. In turn, an understanding of District Six's urban quality frames the intricacies of a restitution and redevelopment plan. It also illustrates the genuine uniqueness of its principles of urbanism, in contrast to market-oriented urban development, which reproduces spaces of social fragmentation, exclusion and inequality. Indeed, the vision for the East City concerns long-term urban sustainability: an investment in a city of fluid spaces, a city of difference and meaning. This dissertation contends that there is a real role for urban and social sustainability in the redevelopment potential of the study area, with its historical, social, cultural and symbolic significance. It therefore outlines the key elements and principles of a development framework prepared for the study area and discusses the prospects for urban and social sustainability. This will inform where and how to apply form-based codes within the East City context.
APA, Harvard, Vancouver, ISO, and other styles
33

Saeed, Mohamed Ahmed. "Approche algébrique sur l'équivalence de codes." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMR034/document.

Full text
Abstract:
Le problème d’équivalence de code joue un rôle important dans la théorie des codes et la cryptographie basée sur les codes. Cela est dû à son importance dans la classification des codes ainsi que dans la construction et la cryptanalyse des cryptosystèmes à base de codes. Il est également lié à un problème ouvert, l’isomorphisme de graphes, un problème bien connu dans le domaine de la théorie de la complexité. Nous prouvons, pour les codes ayant un hull trivial, qu’il existe une réduction polynomiale de l’équivalence par permutation de codes à l’isomorphisme de graphes. Cela montre que cette sous-classe d’équivalence par permutation n’est pas plus dure que l’isomorphisme de graphes. Nous introduisons une nouvelle méthode pour résoudre le problème d’équivalence de code. Nous développons des approches algébriques pour résoudre le problème dans ses deux versions : en permutation et en diagonale. Nous construisons un système algébrique en établissant des relations entre les matrices génératrices et les matrices de parité des codes équivalents. Nous nous retrouvons avec un système à plusieurs variables d’équations linéaires et quadratiques qui peut être résolu en utilisant des outils algébriques tels que les bases de Groebner et les techniques associées. Il est possible en théorie de résoudre l’équivalence de code avec des techniques utilisant des bases de Groebner. Cependant, le calcul en pratique devient complexe à mesure que la longueur du code augmente. Nous avons introduit plusieurs améliorations telles que la linéarisation par bloc et l’action de Frobenius. En utilisant ces techniques, nous identifions de nombreux cas où le problème d’équivalence par permutation peut être résolu efficacement. Notre méthode d’équivalence diagonale résout efficacement le problème dans les corps de petites tailles, à savoir F3 et F4. L’augmentation de la taille du corps entraîne une augmentation du nombre de variables dans notre système algébrique, ce qui le rend difficile à résoudre.
Nous nous intéressons enfin au problème d’isomorphisme de graphes en considérant un système algébrique quadratique pour l’isomorphisme de graphes. Pour des instances tirées aléatoirement, le système possède des propriétés intéressantes en termes de rang de la partie linéaire et du nombre de variables. Nous résolvons efficacement le problème d’isomorphisme de graphes pour des graphes aléatoires avec un grand nombre de sommets, et également pour certains graphes réguliers tels que ceux de Petersen, Cubical et Wagner.
The code equivalence problem plays an important role in coding theory and code-based cryptography. That is due to its significance in the classification of codes and also in the construction and cryptanalysis of code-based cryptosystems. It is also related to the long-standing problem of graph isomorphism, a well-known problem in the world of complexity theory. We introduce a new method for solving the code equivalence problem. We develop algebraic approaches to solve the problem in its permutation and diagonal versions. We build an algebraic system by establishing relations between generator matrices and parity-check matrices of the equivalent codes. We end up with a system of multivariate linear and quadratic equations which can be solved using algebraic tools such as Groebner bases and related techniques. By using Groebner basis techniques we can solve code equivalence, but the computation becomes complex as the length of the code increases. We introduce several improvements such as block linearization and the Frobenius action. Using these techniques we identify many cases where the permutation equivalence problem can be solved efficiently. Our method for diagonal equivalence solves the problem efficiently in small fields, namely F3 and F4. The increase in the field size results in an increase in the number of variables in our algebraic system, which makes it difficult to solve. We introduce a new reduction from permutation code equivalence, when the hull is trivial, to graph isomorphism. This shows that this subclass of permutation equivalence is not harder than graph isomorphism. Using this reduction we obtain an algebraic system for graph isomorphism with interesting properties in terms of the rank of the linear part and the number of variables. We solve the graph isomorphism problem efficiently for random graphs with a large number of vertices and also for some regular graphs such as the Petersen, Cubical and Wagner graphs.
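For intuition about what the algebraic machinery is solving, permutation equivalence of two small binary codes can be checked by brute force over all n! column permutations (helper names ours; this exhaustive approach is exactly what becomes infeasible as the code length grows, which motivates the algebraic systems described above):

```python
from itertools import permutations, product

def codewords(G):
    """Enumerate all codewords of a binary linear code from its generator matrix."""
    k, n = len(G), len(G[0])
    words = set()
    for msg in product([0, 1], repeat=k):
        words.add(tuple(sum(m * G[i][j] for i, m in enumerate(msg)) % 2
                        for j in range(n)))
    return words

def permutation_equivalent(G1, G2):
    """Exhaustively search for a column permutation mapping code(G1) onto code(G2)."""
    n = len(G1[0])
    C1, C2 = codewords(G1), codewords(G2)
    for perm in permutations(range(n)):
        if {tuple(w[p] for p in perm) for w in C1} == C2:
            return perm
    return None

G1 = [[1, 1, 0], [0, 0, 1]]
G2 = [[1, 0, 1], [0, 1, 0]]            # columns 1 and 2 of G1 swapped
print(permutation_equivalent(G1, G2))  # → (0, 2, 1)
```

The witness permutation returned maps the codeword set of the first code onto that of the second; returning None would mean the codes are inequivalent.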
APA, Harvard, Vancouver, ISO, and other styles
34

Wu, Yufei. "Implementation of Parallel and Serial Concatenated Convolutional Codes." Diss., Virginia Tech, 2000. http://hdl.handle.net/10919/27342.

Full text
Abstract:
Parallel concatenated convolutional codes (PCCCs), called "turbo codes" by their discoverers, have been shown to perform close to the Shannon bound at bit error rates (BERs) between 1e-4 and 1e-6. Serial concatenated convolutional codes (SCCCs), which perform better than PCCCs at BERs lower than 1e-6, were developed borrowing the same principles as PCCCs, including code concatenation, pseudorandom interleaving and iterative decoding. The first part of this dissertation introduces the fundamentals of concatenated convolutional codes. The theoretical and simulated BER performance of PCCC and SCCC are discussed. Encoding and decoding structures are explained, with emphasis on the Log-MAP decoding algorithm and the general soft-input soft-output (SISO) decoding module. Sliding window techniques, which can be employed to reduce memory requirements, are also briefly discussed. The second part of this dissertation presents four major contributions to the field of concatenated convolutional coding developed through this research. First, the effects of quantization and fixed point arithmetic on the decoding performance are studied. Analytic bounds and modular renormalization techniques are developed to improve the efficiency of SISO module implementation without compromising the performance. Second, a new stopping criterion, SDR, is discovered. It is found to perform well with lowest cost when evaluating its complexity and performance in comparison with existing criteria. Third, a new type-II code combining automatic repeat request (ARQ) technique is introduced which makes use of the related PCCC and SCCC. Fourth, a new code-assisted synchronization technique is presented, which uses a list approach to leverage the simplicity of the correlation technique and the soft information of the decoder. In particular, the variant that uses SDR criterion achieves superb performance with low complexity. 
Finally, the third part of this dissertation discusses the FPGA-based implementation of the turbo decoder, which is the fruit of cooperation with fellow researchers.
Ph. D.
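The Log-MAP decoding algorithm emphasized in the abstract is built around one recurring primitive, the Jacobian logarithm max*(a, b) = ln(e^a + e^b); the cheaper Max-Log-MAP variant simply drops its correction term. A minimal sketch (function names ours):

```python
import math

def max_star(a, b):
    """Jacobian logarithm: computes ln(e^a + e^b) exactly, the core Log-MAP op."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: the correction term is simply dropped."""
    return max(a, b)

# The correction is largest (ln 2 ~= 0.693) when the two metrics are equal,
# which is exactly where the Max-Log-MAP approximation loses the most.
print(max_star(1.0, 1.0) - max_log(1.0, 1.0))
```

Writing the primitive this way avoids overflow from computing e^a directly, which is also why fixed-point implementations tabulate the log1p correction term.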
APA, Harvard, Vancouver, ISO, and other styles
35

Biswas, Bhaskar. "Implementational aspects of code-based cryptography." Palaiseau, Ecole polytechnique, 2010. http://pastel.archives-ouvertes.fr/docs/00/52/30/07/PDF/thesis.pdf.

Full text
Abstract:
Sur la plateforme de thèses en ligne TEL on trouve le résumé suivant : Nous présentons les détails d’implémentation du schéma de chiffrement hybride McEliece (HyMES), développé avec Nicolas Sendrier, une version améliorée du cryptosystème de McEliece. Nous présentons une version modifiée du système d’origine (que nous appelons hybride). Il y a deux modifications : la première augmente le taux d’information, la seconde réduit la taille de la clé publique en faisant usage d’une matrice génératrice sous forme systématique. Nous allons montrer que la réduction de sécurité est la même que pour le système original. Nous décrivons ensuite les algorithmes de génération de clés, de chiffrement et de déchiffrement ainsi que leur mise en œuvre. Enfin nous donnerons quelques temps de calcul pour différents paramètres, nous les comparerons avec les attaques les plus connues, et nous discuterons du meilleur compromis. L’idée du schéma de McEliece est de masquer la structure du code au moyen d’une transformation de la matrice génératrice. La matrice génératrice transformée devient la clé publique alors que la clé secrète est la structure du code de Goppa ainsi que les paramètres de transformation. La sécurité repose sur le fait que le problème de décodage d’un code linéaire général est NP-complet. Le cryptosystème de McEliece n’a pas eu autant de succès que le RSA, en grande partie à cause de la taille de la clé publique, mais ce problème devient moins rédhibitoire avec les progrès du hardware. Notre objectif a été de mettre en œuvre un logiciel assez rapide qui pourra servir de référence. Nous présenterons également les détails algorithmiques de notre travail. L’ensemble du projet est disponible gratuitement à : http://www-roc.inria.fr/secret/CBCrypto/index.php?pg=hymes
On the online thesis platform TEL, the following abstract is given: We present the implementation details of the Hybrid McEliece Encryption Scheme (HyMES), an improved version of the original McEliece scheme, developed with Nicolas Sendrier. We present a modified version of the original scheme (which we call hybrid). It has two modifications: the first increases the information rate by putting some data in the error pattern; the second reduces the public key size by making use of a generator matrix in systematic form. We will show that the same security reduction as for the original system holds. We then describe the key generation, encryption and decryption algorithms and their implementation. Finally we give some computation times for various parameters, compare them with the best known attacks, and discuss the best trade-offs. The idea of the McEliece scheme is to hide the structure of the code by means of a transformation of the generator matrix. The transformed generator matrix becomes the public key and the secret key is the structure of the Goppa code together with the transformation parameters. The security relies on the fact that the decoding problem for a general linear code is NP-complete. While the RSA public-key cryptosystem has become the most widely used, the McEliece cryptosystem has not been quite as successful, partly because of its large public key, which poses less of a problem with the advances in hardware today. Our aim has been to produce a fairly fast and concise software implementation that may be used as a reference benchmark. We also present the algorithmic details of our implementation, that is, we specify the algorithms we use and the way we use them. The whole project is freely available at http://www-roc.inria.fr/secret/CBCrypto/index.php?pg=hymes
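The masking idea the abstract describes — publish a transformed generator matrix G' = SGP while keeping the decodable code, the scrambler S, and the permutation P secret — can be sketched end to end. The toy below substitutes a [7,4] Hamming code for the Goppa code and is utterly insecure at this size; all names and parameters are illustrative, not HyMES internals:

```python
import random

def vec_mat(v, M):
    """Row vector times matrix over GF(2)."""
    return [sum(x * m for x, m in zip(v, col)) % 2 for col in zip(*M)]

def mat_mul(A, B):
    return [vec_mat(row, B) for row in A]

# Secret key: a decodable code plus a scrambler S and a column permutation.
# Toy stand-in for the Goppa code: the [7,4] Hamming code (corrects 1 error).
G = [[1,0,0,0,1,1,0],
     [0,1,0,0,1,0,1],
     [0,0,1,0,0,1,1],
     [0,0,0,1,1,1,1]]
H = [[1,1,0,1,1,0,0],
     [1,0,1,1,0,1,0],
     [0,1,1,1,0,0,1]]
S = [[1,1,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,1]]  # invertible scrambler
S_inv = S                                      # this particular S is an involution
perm = [3, 0, 6, 1, 5, 2, 4]                   # secret column permutation

# Public key: G' = S G P -- the Hamming structure is hidden.
SG = mat_mul(S, G)
G_pub = [[row[perm[j]] for j in range(7)] for row in SG]

def encrypt(m):
    c = vec_mat(m, G_pub)
    c[random.randrange(7)] ^= 1        # add one error (weight t = 1)
    return c

def decrypt(c):
    y = [0] * 7
    for j in range(7):                 # undo the permutation
        y[perm[j]] = c[j]
    s = [sum(h * x for h, x in zip(row, y)) % 2 for row in H]
    if any(s):                         # syndrome equals the column of H
        y[list(zip(*H)).index(tuple(s))] ^= 1  # ... at the error position
    return vec_mat(y[:4], S_inv)       # G is systematic: y[:4] = m*S, undo S

m = [1, 0, 1, 1]
assert decrypt(encrypt(m)) == m
```

In the real scheme the code is a binary Goppa code with large parameters, and HyMES additionally encodes data into the error pattern and drops S by using a systematic public matrix, which this sketch does not attempt.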
APA, Harvard, Vancouver, ISO, and other styles
36

Benoit, Didier. "Les définitions dans le Code pénal." Cergy-Pontoise, 2001. http://www.theses.fr/2001CERG0129.

Full text
Abstract:
L'étude des définitions dans le Code pénal relève d'une analyse formelle, portant sur la forme des définitions, et d'une analyse sémantique permettant de dévoiler les enjeux de sens. L'analyse formelle envisage une étude syntaxique et stylistique qui met en évidence le système qui aboutit à la production de la langue. C'est la ou les manière(s) dont les définitions sont construites. L'analyse sémantique traite de la problématique de la signification. Les définitions présentent un sens figé constitué par le corpus définitionnel légal. Mais cette apparente fixité cache en vérité une très grande souplesse des notions définitionnelles. Cette souplesse, révélée par le jeu de l'interprétation, confère aux définitions un caractère extensionnel très marqué. Dès lors, l'objet devient malléable, aux contours incertains. Élément d'un discours technique et normatif, la définition se construit à partir de réalités matérielles accessibles et analysables. Elle est une entité destinée à fonctionner dans un environnement linguistique (le discours de spécialité) et dans un environnement social. Cette démarche en fait une définition de termes rapportés à un domaine (le droit pénal). Le sens des définitions se confond alors avec la conceptualisation de ce qu'elles désignent. Relevant tant d'une démarche sémasiologique qu'onomasiologique de construction du sens, les définitions s'orientent vers la reconnaissance d'un sens fonctionnel de l'objet. L'indétermination du contenu définitionnel comme l'approche prototypique de l'objet font des définitions dans le Code pénal des outils fonctionnels de saisie de l'objet. Ainsi, la définition en droit pénal relève d'une démarche pragmatique de désignation du sens. Cette démarche, tablant sur l'élasticité du sens, permet aux définitions d'être un outil ductile propre à saisir la complexité des faits. Les définitions dans le Code pénal s'inscrivent parfaitement dans la tendance contemporaine de la pragmatique du langage.
The study of Penal Code definitions entails a formal analysis of the definitions and a semantic analysis which shows the importance of meaning. The formal analysis encompasses a syntactic and stylistic study which highlights the system that leads to language production. Definitions are built through either one of these studies. Semantic analysis deals with the problem of meaning. Definitions present a set meaning formed by the legal body of definitions. However, this apparent rigidity truly conceals a very wide flexibility of definitional notions. This flexibility, revealed through the process of interpretation, gives the definitions a very sharp extensional characteristic. Hence, the object becomes malleable with uncertain contours. As an element of a technical and normative speech, the definition is built from concrete realities which can be easily grasped and analysed. The definition is an entity meant to be used in a linguistic environment (the speech of specialty) and in a social environment. This approach turns it into a definition of terms related to one field: penal law. The meaning of the definitions then blends with the conceptualisation of what they refer to. As they come under a semasiological approach as well as an onomasiological approach to meaning, the definitions tend towards the recognition of a functional meaning of the object. The vagueness of the definitional content as well as the prototypical approach of the object turn definitions in the penal code into functional tools for understanding the object. Thus, the definition in penal law comes under a pragmatic approach to the description of meaning. As this approach banks on the flexibility of meaning, it permits the definitions to be a malleable tool capable of grasping the complexity of facts. The definitions in the penal code are perfectly in keeping with the contemporary tendency of language pragmatics.
APA, Harvard, Vancouver, ISO, and other styles
37

Zhong, Wei. "Low density generator matrix codes for source and channel coding." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 144 p, 2006. http://proquest.umi.com/pqdweb?did=1172111921&sid=3&Fmt=2&clientId=8331&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Zhao, Ying. "Turbo codes for data compression and joint source-channel coding." Access to citation, abstract and download form provided by ProQuest Information and Learning Company; downloadable PDF file, 112 p, 2007. http://proquest.umi.com/pqdlink?did=1251904871&Fmt=7&clientId=79356&RQT=309&VName=PQD.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Valvo, Daniel William. "Repairing Cartesian Codes with Linear Exact Repair Schemes." Thesis, Virginia Tech, 2020. http://hdl.handle.net/10919/98818.

Full text
Abstract:
In this paper, we develop a scheme to recover a single erasure when using a Cartesian code, in the context of a distributed storage system. Particularly, we develop a scheme with considerations to minimize the associated bandwidth and maximize the associated dimension. The problem of recovering a missing node's data exactly in a distributed storage system is known as the exact repair problem. Previous research has studied the exact repair problem for Reed-Solomon codes. We focus on Cartesian codes, and show we can enact the recovery using a linear exact repair scheme framework, similar to the one outlined by Guruswami and Wootters in 2017.
Master of Science
Distributed storage systems are systems which store a single data file over multiple storage nodes. Each storage node has a certain storage efficiency, the "space" required to store the information on that node. The value of these systems is their ability to safely store data for extended periods of time. We want to design distributed storage systems such that if one storage node fails, we can recover it from the data in the remaining nodes. Recovering a node from the data stored in the other nodes requires the nodes to communicate data with each other. Ideally, these systems are designed to minimize the bandwidth, the inter-nodal communication required to recover a lost node, as well as maximize the storage efficiency of each node. A great mathematical framework to build these distributed storage systems on is erasure codes. In this paper, we will specifically develop distributed storage systems that use Cartesian codes. We will show that in the right setting, these systems can have a very similar bandwidth to systems built from Reed-Solomon codes, without much loss in storage efficiency.
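As a baseline for the bandwidth question, "naive" repair in a toy Reed-Solomon-style system downloads k full symbols from survivors, interpolates, and re-evaluates at the lost point; the linear exact repair schemes the thesis studies (following Guruswami and Wootters) improve on precisely this download cost. A sketch with an illustrative field size and node points:

```python
# Naive single-erasure repair for a toy Reed-Solomon-style system over GF(p):
# download k surviving evaluations, interpolate, re-evaluate at the lost point.
p = 13                    # small prime field GF(13)
xs = [0, 1, 2, 3, 4]      # one evaluation point per storage node
k = 3                     # the message polynomial has degree < k

def store(coeffs):
    """Evaluate the message polynomial at every node's point."""
    return [sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p for x in xs]

def repair(shares, lost):
    """Recover node `lost` by Lagrange interpolation from any k survivors."""
    helpers = [i for i in range(len(xs)) if i != lost and shares[i] is not None][:k]
    total = 0
    for i in helpers:
        num = den = 1
        for j in helpers:
            if j != i:
                num = num * (xs[lost] - xs[j]) % p
                den = den * (xs[i] - xs[j]) % p
        total += shares[i] * num * pow(den, p - 2, p)   # Fermat inverse of den
    return total % p

shares = store([7, 2, 5])   # message polynomial 7 + 2x + 5x^2
lost = 1
expected = shares[lost]
shares[lost] = None         # simulate a node failure
assert repair(shares, lost) == expected
```

The bandwidth of this baseline is k full field symbols; the schemes in the thesis trade full-symbol downloads for smaller subfield symbols from more helpers.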
APA, Harvard, Vancouver, ISO, and other styles
40

Panagos, Adam G., and Kurt Kosbar. "A METHOD FOR FINDING BETTER SPACE-TIME CODES FOR MIMO CHANNELS." International Foundation for Telemetering, 2005. http://hdl.handle.net/10150/604782.

Full text
Abstract:
ITC/USA 2005 Conference Proceedings / The Forty-First Annual International Telemetering Conference and Technical Exhibition / October 24-27, 2005 / Riviera Hotel & Convention Center, Las Vegas, Nevada
Multiple-input, multiple-output (MIMO) communication systems can have dramatically higher throughput than single-input, single-output systems. Unfortunately, it can be difficult to find the space-time codes these systems need to achieve their potential. Previously published results located good codes by minimizing the maximum correlation between transmitted signals. This paper shows how this min-max method may produce sub-optimal codes. A new method which sorts codes based on the union bound of pairwise error probabilities is presented. This new technique can identify superior MIMO codes, providing higher system throughput without increasing the transmitted power or bandwidth requirements.
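The two ranking criteria the abstract contrasts can be sketched for a simplified single-antenna AWGN stand-in: worst-case pairwise correlation versus a union bound that sums pairwise error probabilities Q(d/2σ). The true MIMO pairwise error probability involves rank and determinant criteria, so the Q-function form and the candidate codebooks below are illustrative only:

```python
import math
from itertools import combinations

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def union_bound(codebook, sigma=1.0):
    """Sum of pairwise error probabilities Q(d / (2*sigma)) over codeword pairs."""
    total = 0.0
    for u, v in combinations(codebook, 2):
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
        total += q_func(d / (2 * sigma))
    return total

def max_correlation(codebook):
    """The min-max design criterion: largest normalized pairwise correlation."""
    worst = -1.0
    for u, v in combinations(codebook, 2):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        worst = max(worst, dot / norm)
    return worst

# Rank candidate codebooks by the union bound rather than by max correlation.
candidates = [
    [(1, 1), (1, -1), (-1, 1), (-1, -1)],   # QPSK-like corners
    [(1, 0), (0, 1), (-1, 0), (0, -1)],     # axis points, lower energy
]
best = min(candidates, key=union_bound)
```

Note the two codebooks here are not equal-energy, so the comparison only demonstrates the ranking machinery, not a fair design contest.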
APA, Harvard, Vancouver, ISO, and other styles
41

Zhongkui, Lei, Xie Qiucheng, and Cao Jie. "Research and Recommendation of Optimum Group Synchronization Codes for N = 7 -- 32." International Foundation for Telemetering, 1988. http://hdl.handle.net/10150/615069.

Full text
Abstract:
International Telemetering Conference Proceedings / October 17-20, 1988 / Riviera Hotel, Las Vegas, Nevada
In this paper, based on a series of research achievements [2,3,4,5,6], the "Optimum Frame Synchronization Codes" provided by J. L. Maury Jr. and F. J. Styles for the IRIG Telemetry Standards (USA) are examined, and furthermore, a set of Optimum Group Synchronization Codes is recommended for the China Telemetry Standards.
APA, Harvard, Vancouver, ISO, and other styles
42

Schmitt, Maxime. "Génération automatique de codes adaptatifs." Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD029.

Full text
Abstract:
Dans cette thèse nous proposons une interface de programmation pour aider les développeurs dans leur tâche d'optimisation de programme par calcul approché. Cette interface prend la forme d'extensions aux langages de programmation pour indiquer au compilateur quelles parties du programme peuvent utiliser ce type de calcul. Le compilateur se charge alors de transformer les parties du programme visées pour rendre l'application adaptative, allouant plus de ressources aux endroits où une précision importante est requise et utilisant des approximations où la précision peut être moindre. Nous avons automatisé la découverte des paramètres d'optimisation que devrait fournir l'utilisateur pour les codes à stencil, qui sont souvent rencontrés dans des applications de traitement du signal, traitement d'image ou simulation numérique. Nous avons exploré des techniques de compression automatique de données pour compléter la génération de code adaptatif. Nous utilisons la transformée en ondelettes pour compresser les données et obtenir d'autres informations qui peuvent être utilisées pour trouver les zones avec des besoins en précision plus importantes
In this thesis we introduce a new application programming interface to help developers optimize an application with approximate computing techniques. This interface is provided as a language extension to advise the compiler about the parts of the program that may be optimized with approximate computing and what can be done about them. The code transformations of the targeted regions are entirely handled by the compiler to produce adaptive software. The produced adaptive application allocates more computing power to the locations where more precision is required, and may use approximations where precision is secondary. We automate the discovery of the optimization parameters for the special class of stencil programs, which are common in signal/image processing and numerical simulations. Finally, we explore the possibility of compressing the application data using the wavelet transform, and we use information found in this basis to locate the areas where more precision may be needed.
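The wavelet idea at the end of the abstract — large detail coefficients flag the regions that need more precision — can be sketched with a one-level 1-D Haar transform (function names and threshold ours):

```python
def haar_step(signal):
    """One level of the 1-D Haar transform: pairwise averages + detail coefficients."""
    avgs = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return avgs, details

def high_detail_regions(signal, threshold):
    """Indices of coefficient pairs whose detail magnitude exceeds the threshold."""
    _, details = haar_step(signal)
    return [i for i, d in enumerate(details) if abs(d) > threshold]

# A smooth ramp with one sharp jump: only the jump needs full precision.
signal = [0.0, 0.1, 0.2, 4.0, 4.1, 4.2, 4.3, 4.4]
print(high_detail_regions(signal, 0.5))  # → [1]
```

Regions with near-zero detail coefficients can be stored or computed at reduced precision, which is the adaptive allocation the thesis automates.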
APA, Harvard, Vancouver, ISO, and other styles
43

Schmidt, Kai-Uwe. "On spectrally bounded codes for multicarrier communications." Dresden Vogt, 2007. http://deposit.d-nb.de/cgi-bin/dokserv?id=2973482&prov=M&dok_var=1&dok_ext=htm.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Lawson, John. "Duty specific code driven design methodology : a model for better codes." Thesis, University of Aberdeen, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.274818.

Full text
Abstract:
The thesis examines Engineering Design and its methodology in general before examining and comparing the principal differences between Inventive Design and Development Design. This latter branch of design is described in detail, recognising that it is executed in accordance with recognised specifications, design codes and standards. Design Codes and Standards are analysed in terms of the service they provide to the professional design engineer, who will normally work under the procedures and accepted standards of a Professional Design House. Professional design is an important part of all disciplines in the engineering profession. Such design work is executed by specialists, invariably guided in their work by recognised Specifications, Design Standards or Codes of Practice published by recognised reputable bodies who appoint working parties or independent committees to write and maintain these documents. Design Standards and Codes of Practice are at best unclear and at worst confusing, if not downright contradictory within themselves. Usually there is more than one such Standard or Code available to the professional design engineer, often based on geographical location: BSI in the UK, DIN or ISO in Europe, and perhaps ASME or ANSI in the USA. There are of course several others. The professional design process is analysed and described in order to demonstrate the commercial and project constraints associated with professional development design. The model usually adopted in the preparation and presentation of these codes and standards is critiqued and a better model proposed for standard adoption.
APA, Harvard, Vancouver, ISO, and other styles
45

Firmanto, Welly T. (Welly Teguh). "Code combining of Reed-Muller codes in an indoor wireless environment." Carleton University, Ottawa, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
46

Veluri, Subrahmanya Pavan Kumar. "Code Verification and Numerical Accuracy Assessment for Finite Volume CFD Codes." Diss., Virginia Tech, 2010. http://hdl.handle.net/10919/28715.

Full text
Abstract:
A detailed code verification study of an unstructured finite volume Computational Fluid Dynamics (CFD) code is performed. The Method of Manufactured Solutions is used to generate exact solutions for the Euler and Navier-Stokes equations to verify the correctness of the code through order of accuracy testing. The verification testing is performed on different mesh types which include triangular and quadrilateral elements in 2D and tetrahedral, prismatic, and hexahedral elements in 3D. The requirements of systematic mesh refinement are discussed, particularly in regards to unstructured meshes. Different code options verified include the baseline steady state governing equations, transport models, turbulence models, boundary conditions and unsteady flows. Coding mistakes, algorithm inconsistencies, and mesh quality sensitivities uncovered during the code verification are presented. In recent years, there has been significant work on the development of algorithms for the compressible Navier-Stokes equations on unstructured grids. One of the challenging tasks during the development of these algorithms is the formulation of consistent and accurate diffusion operators. The robustness and accuracy of diffusion operators depends on mesh quality. A survey of diffusion operators for compressible CFD solvers is conducted to understand different formulation procedures for diffusion fluxes. A patch-wise version of the Method of Manufactured Solutions is used to test the accuracy of selected diffusion operators. This testing of diffusion operators is limited to cell-centered finite volume methods which are formally second order accurate. These diffusion operators are tested and compared on different 2D mesh topologies to study the effect of mesh quality (stretching, aspect ratio, skewness, and curvature) on their numerical accuracy. Quantities examined include the numerical approximation errors and order of accuracy associated with face gradient reconstruction. 
From the analysis, defects in some of the numerical formulations are identified along with some robust and accurate diffusion operators.
Ph. D.
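The order-of-accuracy testing described above reduces to a ratio of discretization errors on two systematically refined meshes, p_obs = ln(E_coarse/E_fine)/ln(r). A sketch using a second-order central difference in place of a full CFD solve:

```python
import math

def observed_order(err_coarse, err_fine, r):
    """Observed order of accuracy from errors on two meshes refined by factor r."""
    return math.log(err_coarse / err_fine) / math.log(r)

def central_diff_error(h):
    """Error of a formally second-order central difference of sin at x = 1."""
    x = 1.0
    return abs((math.sin(x + h) - math.sin(x - h)) / (2 * h) - math.cos(x))

e_coarse, e_fine = central_diff_error(0.1), central_diff_error(0.05)
p = observed_order(e_coarse, e_fine, 2.0)
print(round(p, 2))  # → 2.0
```

A code passes the verification test when p approaches the formal order as the mesh is refined; with the Method of Manufactured Solutions the exact solution needed for the error is constructed rather than derived.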
APA, Harvard, Vancouver, ISO, and other styles
47

Huang, Weizheng. "Investigation on Digital Fountain Codes over Erasure Channels and Additive White Gaussian Noise Channels." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1336067205.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

El, Shafey Ezzat. "Codes linguistiques et alternance de codes chez les immigrants égyptiens en France." Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUL071.

Full text
Abstract:
Cette thèse étudie pour la première fois les pratiques langagières de la communauté grandissante des immigrés égyptiens en France. L’analyse morphosyntaxique a comme cadre la théorie insertionnelle Matrix Language Frame de Myers-Scotton (1993, 1997 et 2000). Nous constatons que les Égyptiens de la première génération recourent à l’alternance codique sans s’en rendre compte ou sans le reconnaître tandis que ceux de la deuxième génération sont conscients des caractéristiques linguistiques nées du contact des langues. De plus, la femme égyptienne, avec son rôle de maintien de l’arabe auprès de ses enfants, trouve sa féminité dans l’apprentissage du français et par conséquent elle joue un rôle important dans la pratique de l’alternance codique au sein de la famille. Les raisons de l’alternance codique chez les Égyptiens sont variées, par exemple : la citation ou le discours rapporté ; la désignation d’un interlocuteur en faisant intégrer à un groupe un interlocuteur tenu à l’écart ; l’humour qui caractérise notamment les Égyptiens de la première génération ; la spontanéité et la précision en optant directement pour le lexème le plus immédiatement disponible, même s’il est dans une langue différente par rapport au reste de la communication. Nos informateurs de la première génération ont recours aux procédés morphologiques pour simplifier l’usage des mots français ayant des sons qui n’existent pas en arabe égyptien ou ceux qui se composent de plus de trois syllabes. Nous exploitons les caractéristiques de ce contact des langues pour aider les élèves égyptiens nouvellement arrivés à réussir leur scolarité et s’intégrer dans la société française via la maîtrise du français
This thesis analyses for the first time the language practices of the growing community of Egyptian immigrants in France. The morphosyntactic analysis is made in the framework of the insertional Matrix Language Frame theory of Myers-Scotton (1993, 1997 and 2000). We find that Egyptians of the first generation resort to code switching without realizing or recognizing it, while those of the second generation are aware of the linguistic characteristics of the language contact. Furthermore, the Egyptian woman, with her role of maintaining Arabic with her children, finds her femininity in learning French, and therefore she plays an important role in the practice of code switching within the family. The reasons for code switching in the Egyptian community are varied, for example: quotation or reported speech; the designation of an interlocutor by integrating into a group an interlocutor kept apart; the humor that particularly characterizes Egyptians of the first generation; and spontaneity and precision, by opting directly for the most immediately available lexeme even if it is in a different language than the rest of the communication. Our informants of the first generation use morphological procedures to simplify the use of French words having sounds that do not exist in Egyptian Arabic or consisting of more than three syllables. We use the characteristics of this language contact to help newly arrived Egyptian students succeed in their schooling and integrate into French society through the mastery of French.
APA, Harvard, Vancouver, ISO, and other styles
49

Parra, Avila Benigno Rafael. "On Rational and Periodic Power Series and on Sequential and Polycyclic Error-Correcting Codes." Ohio University / OhioLINK, 2009. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1257886601.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Sabir, Tanveer. "Classification of Perfect codes in Hamming Metric." Thesis, Linnéuniversitetet, Institutionen för datavetenskap, fysik och matematik, DFM, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-13833.

Full text
Abstract:
The study of coding theory aims to detect and correct errors during the transmission of data. It enhances the quality of data transmission and provides better control over noisy channels. Perfect codes are collected and analyzed in the setting of the Hamming metric. This classification yields that there exist only a few perfect codes. Perfect codes do not guarantee perfection by all means but merely satisfy a certain bound and certain properties. The detection and correction of errors is always very important for better data transmission.
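The bound referred to in the abstract is the sphere-packing (Hamming) bound, and a code is perfect precisely when it meets it with equality; the classification shows the nontrivial binary and ternary examples are only the Hamming and Golay codes. A small counting check (function name ours; meeting the bound is the counting condition and does not by itself prove a code with those parameters exists):

```python
from math import comb

def is_perfect_parameters(n, M, d, q=2):
    """True iff Hamming spheres of radius t = (d-1)//2 around M codewords
    exactly tile the space of q**n words (sphere-packing bound with equality)."""
    t = (d - 1) // 2
    sphere_size = sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))
    return M * sphere_size == q ** n

assert is_perfect_parameters(7, 16, 3)            # binary [7,4,3] Hamming code
assert is_perfect_parameters(23, 2 ** 12, 7)      # binary [23,12,7] Golay code
assert is_perfect_parameters(11, 3 ** 6, 5, q=3)  # ternary [11,6,5] Golay code
assert not is_perfect_parameters(8, 16, 3)        # lengthening by one breaks it
```

The whole Hamming family passes the same check, e.g. every binary [2^m - 1, 2^m - 1 - m, 3] code, since its radius-1 spheres have size 2^m.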
APA, Harvard, Vancouver, ISO, and other styles