
Dissertations / Theses on the topic 'Transit code'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Transit code.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Yeung, Sze-hang Jess, and 楊思恆. "Adaptive social underground linkages urban interface for Mass Transit Railway." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2004. http://hub.hku.hk/bib/B31987412.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wang, Xiangyang, and 王向陽. "Transmit diversity in CDMA for wireless communications." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31246072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Granelli, Tommaso <1971>. "Negoziare confini: dagli stati di cose ai transiti." Doctoral thesis, Alma Mater Studiorum - Università di Bologna, 2009. http://amsdottorato.unibo.it/2246/.

Full text
Abstract:
Negotiating boundaries: from states of affairs to matters of transit. The research deals with the everyday management of spatial uncertainty, starting from the wider historical question of terrains vagues (a French term for wastelands, dismantled areas and peripheral city voids, or interstitial spaces) and focusing later on a particular case study. The choice was made to privilege a small, ordinary place (the mouth of a lagoon that crosses a beach) rather than the aestheticized "vague terrains" often witnessed through artistic media or architectural reflections. This place offered the chance to explore a particular dimension of indeterminacy, mostly related to a certain phenomenal instability of its limits, the hybrid character of its cultural status (neither natural nor artificial) and its crossover position as a transitional space between different tendencies and activities. The first, theoretical part of the research develops a semiotics of vagueness, examining the structuralist idea of relation in order to approach an interpretive notion of continuity and indeterminacy. This exploration highlights the key feature of actantial network distribution, which provides a bridge to the second, methodological part, dedicated to a "tuning" of the tools for the analysis. This section establishes a dialogue with current social sciences (such as Actor-Network Theory, situated action and distributed cognition) in order to define observational methods for the documentation of social practices that could be comprised within a semiotic ethnography framework. The last part, finally, focuses on the mediation and negotiation by which human actors interact with the varying conditions of the chosen environment, looking at people's movements through space, their embodied dealings with the boundaries and the use of spatial artefacts as the framing infrastructure of the site.
APA, Harvard, Vancouver, ISO, and other styles
4

Jaswal, Kavita. "Handoff issues in a transmit diversity system." Thesis, Texas A&M University, 2003. http://hdl.handle.net/1969.1/1586.

Full text
Abstract:
This thesis addresses handoff issues in a WCDMA system with space-time block-coded transmit antenna diversity. Soft handoff has traditionally been used in CDMA systems because of its ability to provide improved link performance through the inherent macro diversity. Next-generation systems will incorporate transmit diversity schemes employing several transmit antennas at the base station. These schemes have been shown to improve downlink transmission performance, especially capacity and quality. This research investigates the possibility that the diversity obtained through soft handoff can be compensated for by the diversity obtained in a transmit diversity system with hard handoff. We analyze the system for two performance measures, namely the probability of bit error and the outage probability, in order to determine whether the improvement in link performance resulting from transmit diversity in a system with hard handoffs obviates the need for soft handoffs.
APA, Harvard, Vancouver, ISO, and other styles
5

Ikai, Youhei, Masaaki Katayama, Takaya Yamazato, and Akira Ogawa. "Code Acquisition of a DS/SS Signal with Transmit and Receive Antenna Diversity." IEICE, 1999. http://hdl.handle.net/2237/7219.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Mårtensson, Johanna. "Form-based codes och design codes i en svensk planeringskontext : En komparativ studie mellan länder." Thesis, Luleå tekniska universitet, Institutionen för samhällsbyggnad och naturresurser, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-85400.

Full text
Abstract:
Planning and development control systems must deal with many challenges. The difficulties and the impact these systems have on the physical environment make the subject constantly relevant to study and try to develop. The thesis does this by comparing the system in Sweden with that in other countries, more specifically form-based codes advocated by New Urbanism in the US and design codes in England. In a Swedish context, these codes can be compared with the building control regulations in detailed plans and guidelines in quality and design programs. With the adoption of the Government bill "Politik för gestaltad livsmiljö" in May 2018, municipalities are encouraged to develop an architectural policy at the local level. These documents are also an interesting tool in this context. The thesis’ question therefore reads: How can concepts and tools from form-based codes in the US and design codes in England develop Swedish municipalities’ architecture policies, quality programs and detailed plans? To a lesser extent, in addition to the systems in the US and England, France and the typo-morphological approach to zoning have also been included. The degree project started with a literature study and was accompanied by a few case studies that included content analysis of architectural policies, quality programs and detailed plans. The policies selected for analysis were judged to have the most in common with approaches to regulations within form-based codes and design codes. These were Örebro's and Linköping's policies and Avesta, Fagersta and Norberg’s joint policy. Detailed plans were also analysed from the first two municipalities. In addition to this, the detailed plans and quality programs for Henriksdalshamnen and Kolkajen in Stockholm were included. The content analysis of plans and programs was based on a few categories. The results from the case studies were then compared with the literature study. 
In parallel, an interview was also conducted with a practicing planning architect who, before the interview, read a limited amount of material about form-based codes. Results and analysis from the comparison and the interview then formed the basis for the formulation of recommendations. The literature study dealt with the use of codes throughout history, the development and definitions of form-based codes and design codes, and how these are organized. Furthermore, the literature study included criticism directed at these codes and the connection to urban morphology in relation to the French typo-morphological approach to zoning. Finally, design issues linked to building regulation in the Swedish planning process were also addressed. The comparison between the content analysis of the selected cases and the literature study showed differences and similarities between what is treated, and how, within form-based and design codes versus plans and programs in Sweden. The results from the analysis of the architecture policies showed similarities with form-based codes and design codes that could be strengthened. In this respect, Örebro's architecture strategy stood out in particular: like the practice in form-based codes and design codes, it divided the city into different area types. The two quality programs that were analysed differed in nature, which was linked to the process in which they were designed. In comparison, the program for Kolkajen turned out to have more in common with form-based codes and design codes than the program for Henriksdalshamnen. The interview shed light on perspectives on the content of plans and programs, the context in which plans and programs are produced and operate, as well as views on form-based codes. The recommendations developed consisted of 19 aspects. These can be dealt with at an overall level for different area types in an architecture policy like Örebro's architecture strategy.
In addition, a table was developed with recommendations for what can be dealt with in detailed plans and quality programs depending on the location of the area in the city. The idea is that these should have a direct connection to the area types in the policy, as in form-based codes and design codes, but a more flexible one, since the area type would be specified more precisely in the detailed plan. This could offer a way for municipalities to act proactively instead of reactively to individual development proposals. Finally, the literature study also showed the advantages of applying simple, principled illustrations, which can be done to a greater extent in policies, programs, and detailed plans in Sweden.
APA, Harvard, Vancouver, ISO, and other styles
7

Karim, Md Anisul. "Weighted layered space-time code with iterative detection and decoding." School of Electrical & Information Engineering, 2006. http://hdl.handle.net/2123/1095.

Full text
Abstract:
Master of Engineering (Research)
Multiple antenna systems are an appealing candidate for emerging fourth-generation wireless networks due to their potential to exploit space diversity to increase throughput without consuming extra bandwidth and power. In particular, the layered space-time (LST) architecture proposed by Foschini is a technique to achieve a significant fraction of the theoretical capacity with reasonable implementation complexity. The detection of space-time signals poses great challenges, especially the design of a low-complexity detector that can efficiently remove multi-layer interference and approach the interference-free bound. The application of the iterative principle to joint detection and decoding has been a promising approach. It has been shown that an iterative receiver with a parallel interference canceller (PIC) has low linear complexity and near interference-free performance. Furthermore, it is widely accepted that the performance of digital communication systems can be considerably improved once channel state information (CSI) is used to optimize the transmit signal. In this thesis, the problem of designing a power allocation strategy in the LST architecture to simultaneously optimize coding, diversity and weighting gains is addressed. A more practical scenario is also considered by assuming imperfect CSI at the receiver. The effect of channel estimation errors in the LST architecture with an iterative PIC receiver is investigated. It is shown that imperfect channel estimation at an LST receiver results in erroneous decision statistics at the very first iteration, and this error propagates to subsequent iterations, ultimately leading to severe degradation of the overall performance. We design a transmit power allocation policy that takes into account the imperfection in the channel estimation process.
The transmit power of the various layers is optimized by minimizing the average bit error rate (BER) of the LST architecture with a low-complexity iterative PIC detector. At the receiver, the PIC detector performs both interference regeneration and cancellation simultaneously for all layers. A convolutional code is used as the constituent code. The iterative decoding principle is applied to pass a posteriori probability estimates between the detector and the decoders. The decoder is based on the maximum a posteriori (MAP) algorithm. A closed-form optimal solution for power allocation in terms of the minimum BER is obtained. Substantial simulation results are provided to validate the effectiveness of the proposed schemes.
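The parallel interference cancellation loop described in this abstract can be sketched as a toy detector. This is a hypothetical, noise-free illustration only (two BPSK layers, a real-valued 2x2 channel, matched-filter initial estimates); none of the names or values come from the thesis.

```python
# Toy parallel interference cancellation (PIC) detector sketch.
# Hypothetical illustration: 2 layers, BPSK symbols, real 2x2
# channel, no noise. Not code from the thesis.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sign(x):
    return 1 if x >= 0 else -1

def pic_detect(y, H_cols, iterations=3):
    """H_cols[k] is the channel column (spatial signature) of layer k."""
    # Initial per-layer estimates from a simple matched filter.
    est = [sign(dot(h, y)) for h in H_cols]
    for _ in range(iterations):
        new_est = []
        for k, h_k in enumerate(H_cols):
            # Regenerate and cancel the interference of all other layers.
            cleaned = [
                y_i - sum(H_cols[j][i] * est[j]
                          for j in range(len(H_cols)) if j != k)
                for i, y_i in enumerate(y)
            ]
            new_est.append(sign(dot(h_k, cleaned)))
        est = new_est
    return est

# Two layers with correlated spatial signatures.
H_cols = [(1.0, 0.6), (0.6, 1.0)]
x = (1, -1)                                  # transmitted BPSK symbols
y = tuple(H_cols[0][i] * x[0] + H_cols[1][i] * x[1] for i in range(2))
print(pic_detect(y, H_cols))                 # recovers [1, -1]
```

In the thesis's setting each layer's decision statistic would come from a MAP decoder rather than a hard sign decision, but the regenerate-and-cancel structure per iteration is the same.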
APA, Harvard, Vancouver, ISO, and other styles
8

Ali, Saajed. "Concatenation of Space-Time Block Codes with Convolutional Codes." Thesis, Virginia Tech, 2004. http://hdl.handle.net/10919/9724.

Full text
Abstract:
Multiple antennas help in combating the destructive effects of fading as well as improving the spectral efficiency of a communication system. Receive diversity techniques such as maximal ratio receive combining have been a popular means of introducing multiple antennas into communication systems. Space-time block codes present a way of introducing transmit diversity into the communication system with complexity and performance similar to maximal ratio receive combining. In this thesis we study the performance of space-time block codes in a Rayleigh fading channel. In particular, the quasi-static assumption on the fading channel is removed to study how the space-time block coded system behaves in fast fading. In this context, the complexity-versus-performance trade-off for a space-time block coded receiver is studied. As a means of improving the performance of space-time block coded systems, concatenation with convolutional codes is introduced. The improvement in diversity order from introducing convolutional codes into the space-time block coded system is discussed. A general analytic expression for the error performance of a space-time block coded system is derived. This expression is used to obtain general expressions for the error performance of convolutionally concatenated space-time block coded systems using both hard- and soft-decision decoding. Simulation results are presented and compared with the analytical results.
Master of Science
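The transmit-diversity building block this abstract refers to can be illustrated with a minimal Alamouti space-time block encoder and combiner. This is a sketch under simplifying assumptions (two transmit antennas, one receive antenna, quasi-static flat fading, perfect CSI, no noise); the names and values are hypothetical, not taken from the thesis.

```python
# Minimal Alamouti space-time block code sketch (illustrative only).

def alamouti_encode(s1, s2):
    """Two time slots; each slot is (antenna 1 symbol, antenna 2 symbol)."""
    slot1 = (s1, s2)
    slot2 = (-s2.conjugate(), s1.conjugate())
    return slot1, slot2

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining that decouples s1 and s2 (perfect CSI assumed)."""
    gain = abs(h1) ** 2 + abs(h2) ** 2           # diversity gain
    s1_hat = (h1.conjugate() * r1 + h2 * r2.conjugate()) / gain
    s2_hat = (h2.conjugate() * r1 - h1 * r2.conjugate()) / gain
    return s1_hat, s2_hat

# Example with arbitrary complex channel coefficients.
s1, s2 = 1 + 1j, 1 - 1j                          # two QPSK symbols
h1, h2 = 0.8 - 0.3j, 0.2 + 0.9j                  # channel per antenna
slot1, slot2 = alamouti_encode(s1, s2)
r1 = h1 * slot1[0] + h2 * slot1[1]               # received, time slot 1
r2 = h1 * slot2[0] + h2 * slot2[1]               # received, time slot 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
# Without noise the symbols are recovered exactly (up to rounding),
# with the same |h1|^2 + |h2|^2 diversity gain as two-branch MRRC.
```

The quasi-static assumption matters here: the combining step uses the same h1, h2 for both slots, which is exactly what breaks down in the fast-fading case the thesis studies.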
APA, Harvard, Vancouver, ISO, and other styles
9

Jackson, Alice. "Municipal codes and race-based transit practices and policies in Atlanta, Georgia and Montgomery, Alabama." DigitalCommons@Robert W. Woodruff Library, Atlanta University Center, 2005. http://digitalcommons.auctr.edu/dissertations/3389.

Full text
Abstract:
The study sought to examine whether the Plessy v. Ferguson decision by the Supreme Court in 1896, which controlled Black access to public transit facilities, remains, in fact, the primary vehicle used to purchase land, set schedules, and deny Blacks equal access to transit systems. In 1954, the Brown v. Board of Education of Topeka case reversed the Plessy v. Ferguson decision and declared separate-but-equal facilities unconstitutional. Even though the municipal codes, laws, and ordinances regulating Blacks' equal access to facilities were reversed, the writer has studied and documented how the outcome of transportation policy-makers' decisions for the low-income population remains similar to results during the Plessy v. Ferguson era. The writer administered surveys to a sample of transit riders in Montgomery, Alabama and Atlanta, Georgia to determine to what extent transportation decision makers utilized race, income, and class to determine available bus and rail transportation services for the community. Transportation is a key component in addressing poverty, unemployment, and equal-opportunity goals: equal access to education, employment, and other public services. Access to transportation is a social justice issue, and adequate accessibility to transportation is an economic issue. The majority of respondents surveyed indicated that they have equal access to the transportation systems available in the Atlanta and Montgomery communities. However, the frequency of scheduled routes and the accessibility of the transportation services varied depending on the location. Respondents with incomes of less than $10,000 indicated that the transportation services did not provide adequate accessibility and availability. 
A majority of the low-income respondents who resided in the central city areas of Atlanta, Georgia and Montgomery, Alabama indicated that they needed the local transit authority to provide more frequent and additional routes, as well as schedules that would take them to jobs in the suburbs. The argument is that the central cities are not without job opportunities, but rather that the educational background and skills of low-income central-city residents do not qualify them for the jobs they live near. Managerial and service-oriented jobs tend to remain in the downtown area, while the majority of entry-level, low-skill jobs are located in the suburban area. The majority of the low-income residents surveyed did not have a vehicle and relied on public transportation. The residents surveyed with an income of less than $10,000 had limited or no access to personal transportation.
APA, Harvard, Vancouver, ISO, and other styles
10

Kussaba, Jaqueline Yoko. "Instrumentos processuais para efetivar o acesso à justiça dos direitos transin-dividuais veiculados em ações repetitivas." Universidade Estadual de Londrina. Centro de Estudos Sociais Aplicados. Programa de Pós-Graduação em Direito Negocial, 2014. http://www.bibliotecadigital.uel.br/document/?code=vtls000198015.

Full text
Abstract:
The paper identifies the existing procedural tools in the Brazilian legal system that serve the protection of group rights asserted in repeated lawsuits. It aims to clarify the concepts of group rights and of the categories of diffuse, collective stricto sensu and homogeneous individual rights, the latter being perceived as essentially collective rights. It approaches the definition of repeated lawsuits, understanding them as disputes originating in the same factual situation of mass injury and presenting similar causes of action and claims. It clarifies, from the concepts of group rights and homogeneous individual rights, that repeated lawsuits assert group rights. It explains that repeated lawsuits stem from the character of current society, whose legal relations take place on a mass scale, compounded by the underuse of collective lawsuits for the defense of homogeneous individual rights. It highlights that repeated lawsuits overburden the Judiciary and enable divergent decisions on identical factual situations, making effective access to justice more difficult. It exposes the foreign procedural mechanisms that deal with collective rights and that inspired the Brazilian legislator to create its own means for dealing with repeated lawsuits, namely the German model procedure, the English group litigation order and the class actions of the United States. Within this comparative-law context, it also exposes the doctrine of precedents which, though not aimed at the protection of collective rights lato sensu, is relevant for standardizing legal understandings. It highlights the jurisdictional means for the treatment of group rights asserted in repeated lawsuits in Brazil, dividing them into four groups: a) collective lawsuits for the defense of homogeneous individual rights; b) mechanisms for the standardization of case law; c) trial by sampling; and d) procedures for inhibiting repeated lawsuits. 
It deals with the new incident of resolution of repeated demands (test claims), provided for in the Bill for a new Code of Civil Procedure. It concludes that the legal system offers collective means of resolving individual lawsuits that assert group rights, as a way of effecting access to justice, in order to avoid divergent decisions and to help reduce the overload of the Judiciary.
APA, Harvard, Vancouver, ISO, and other styles
11

Chen, Yejian [Verfasser]. "Data Transmission in Wideband Code Division Multiple Access (WCDMA/FDD) Systems with Multiple Transmit Antennas / Yejian Chen." Aachen : Shaker, 2007. http://d-nb.info/1166510980/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
12

Godoy, Wagner Fontes. "Aplicação da ferramenta TRANSYT na otimização e implantação de corredores exclusivos para ônibus na cidade de Londrina." Universidade Estadual de Londrina. Centro de Tecnologia e Urbanismo. Programa de Pós-Graduação em Engenharia Elétrica, 2010. http://www.bibliotecadigital.uel.br/document/?code=vtls000158773.

Full text
Abstract:
This paper presents an application of the TRANSYT/10 program to assess the operational performance of traffic signal control in three of the main arterial corridors of the central area of the city of Londrina. The motivation for this study is the recurring congestion in some corridors of this region, together with the transition period in the creation of exclusive bus lanes; this change increases road capacity by removing parking lanes. The TRANSYT/10 software is a tool widely used in various countries for managing urban traffic. In the context of planning and managing this network, optimized traffic signal control, currently nonexistent, enables the minimization of both delay and number of stops. Evaluation of different scenarios, with and without exclusive bus lanes, different cycle times, and offset and green-time optimization, confirms that it is possible to improve traffic conditions for both cars and public transport.
APA, Harvard, Vancouver, ISO, and other styles
13

Ghasabyan, Levon. "Use of Serpent Monte-Carlo code for development of 3D full-core models of Gen-IV fast-spectrum reactors and preparation of group constants for transient analyses with PARCS/TRACE coupled system." Thesis, KTH, Fysik, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-118072.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

Anderson, Adam L. "Unitary space-time transmit diversity for multiple antenna self-interference suppression /." Diss., CLICK HERE for online access, 2004. http://contentdm.lib.byu.edu/ETD/image/etd500.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
15

Jensen, Michael A., Michael D. Rice, and Adam L. Anderson. "COMPARISON OF ALAMOUTI AND DIFFERENTIAL SPACE-TIME CODES FOR AERONAUTICAL TELEMETRY DUAL-ANTENNA TRANSMIT DIVERSITY." International Foundation for Telemetering, 2004. http://hdl.handle.net/10150/605313.

Full text
Abstract:
International Telemetering Conference Proceedings / October 18-21, 2004 / Town & Country Resort, San Diego, California
The placement of two antennas on an air vehicle is one possible practice for overcoming signal obstruction created by vehicle maneuvering during air-to-ground transmission. Unfortunately, for vehicle attitudes where both antennas have a clear path to the receiving station, this practice also leads to self-interference nulls, resulting in dramatic degradation in the average signal integrity. This paper discusses application of unitary space-time codes such as the Alamouti transmit diversity scheme and unitary differential space-time codes to overcome the self-interference effect observed in such systems.
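The self-interference effect this paper addresses can be seen with a toy calculation (hypothetical numbers of my own, not data from the paper): when both antennas simply radiate the same uncoded signal, a 180-degree relative path phase cancels the sum at the receiving station, which is the null that Alamouti and differential space-time coding are designed to avoid.

```python
# Toy two-path model of dual-antenna self-interference
# (hypothetical values; not code or data from the paper).

def received_power(h1, h2, s=1 + 0j):
    """Power at the receiver when both antennas transmit the same symbol s."""
    r = h1 * s + h2 * s   # the two paths superpose coherently
    return abs(r) ** 2

# In-phase paths add constructively (4x single-antenna power)...
print(received_power(1 + 0j, 1 + 0j))    # 4.0
# ...but a 180-degree relative path phase nulls the signal completely.
print(received_power(1 + 0j, -1 + 0j))   # 0.0
```

Space-time coding breaks this dependence on a single phase relationship by sending different, orthogonally arranged symbols from the two antennas across time slots.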
APA, Harvard, Vancouver, ISO, and other styles
16

Jootar, Jittra. "Effect of noisy channel estimates on the performance of convolutionally coded systems with transmit diversity." Connect to a 24 p. preview or request complete full text in PDF format. Access restricted to UC campuses, 2006. http://wwwlib.umi.com/cr/ucsd/fullcit?p3221442.

Full text
Abstract:
Thesis (Ph. D.)--University of California, San Diego, 2006.
Title from first page of PDF file (viewed September 18, 2006). Available via ProQuest Digital Dissertations. Vita. Includes bibliographical references (p. 96-100).
APA, Harvard, Vancouver, ISO, and other styles
17

Menon, Rekha. "Impact of Channel Estimation Errors on Space Time Trellis Codes." Thesis, Virginia Tech, 2003. http://hdl.handle.net/10919/36490.

Full text
Abstract:

Space Time Trellis Coding (STTC) is a unique technique that combines the use of multiple transmit antennas with channel coding. This scheme provides capacity benefits in fading channels, and helps in improving the data rate and reliability of wireless communication. STTC schemes have been primarily designed assuming perfect channel estimates to be available at the receiver. However, in practical wireless systems, this is never the case. The noisy wireless channel precludes an exact characterization of channel coefficients. Even near-perfect channel estimates can necessitate huge overhead in terms of processing or spectral efficiency. This practical concern motivates the study of the impact of channel estimation errors on the design and performance of STTC.

The design criteria for STTC are validated in the absence of perfect channel estimates at the receiver. Analytical results are presented that model the performance of STTC systems in the presence of channel estimation errors. Training based channel estimation schemes are the most popular choice for STTC systems. The amount of training however, increases with the number of transmit antennas used, the number of multi-path components in the channel and a decrease in the channel coherence time. This dependence is shown to decrease the performance gain obtained when increasing the number of transmit antennas in STTC systems, especially in channels with a large Doppler spread (low channel coherence time). In frequency selective channels, the training overhead associated with increasing the number of antennas can be so large that no benefit is shown to be obtained by using STTC.

The amount of performance degradation due to channel estimation errors is shown to be influenced by system parameters such as the specific STTC code employed and the number of transmit and receive antennas in the system in addition to the magnitude of the estimation error. Hence inappropriate choice of system parameters is shown to significantly alter the performance pattern of STTC.

The viability of STTC in practical wireless systems is thus addressed, and it is shown that channel estimation errors could offset the benefits derived from this scheme.


Master of Science
APA, Harvard, Vancouver, ISO, and other styles
18

Yan, Yueran. "CdTe, CdTe/CdS Core/Shell, and CdTe/CdS/ZnS Core/Shell/Shell Quantum Dots Study." Ohio University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=ohiou1327614907.

Full text
APA, Harvard, Vancouver, ISO, and other styles
19

Halsema, John Anthony. "A high resolution wide-band sonar using coded noise-like waveforms and a parametric transmit array." Thesis, Massachusetts Institute of Technology, 1992. http://hdl.handle.net/1721.1/13133.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Anderson, Adam Lane. "Unitary Space-Time Transmit Diversity for Multiple Antenna Self-Interference Suppression." BYU ScholarsArchive, 2004. https://scholarsarchive.byu.edu/etd/154.

Full text
Abstract:
A common practice for government defense agencies and commercial aeronautical companies is to use dual antennas on test-flight air vehicles in order to overcome occlusion issues during high-speed telemetric maneuvers. The dual antennas, though never being masked at the same time, unfortunately lead to a drastic increase in nulls in the signal pattern. The result of this interference pattern can be compared to the effect of fading in a multiple-input multiple-output (MIMO) multi-path scattering environment. Confidence in this comparison leads to the use of unitary space-time MIMO codes to overcome the signal self-interference. The possibility and performance of several of these codes will be examined. Criteria such as training for channel estimation, use of shaped offset quadrature phase shift keying (SOQPSK), hardware feasibility, and data throughput will be compared for each code. A realistic telemetry channel will be derived to increase the accuracy of simulated results and conclusions.
APA, Harvard, Vancouver, ISO, and other styles
21

Xing, Mian. "Validation of TRACE Code against ROSA/LSTF Test for SBLOCA of Pressure Vessel Upper-Head Small Break." Thesis, KTH, Kärnkraftsäkerhet, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-95745.

Full text
Abstract:
OECD/NEA ROSA/LSTF project tests are performed on the Large Scale Test Facility (LSTF). LSTF is a full-height, full-pressure and 1/48 volumetrically-scaled two-loop system which aims to simulate the Japanese Tsuruga-2 Westinghouse-type 4-loop PWR. ROSA-V Test 6-1 simulates a pressure vessel (PV) upper-head small break loss-of-coolant accident (SBLOCA) with a break size equivalent to 1.9% of the volumetrically scaled cross-sectional area of the reference PWR cold leg. The main objective of the present thesis is to build a TRACE calculation model for simulating thermal-hydraulic behavior in the LSTF during a PV upper-head SBLOCA, so as to assess different modeling options and parameters of the TRACE code. The results show that the TRACE code reproduces well the complex physical phenomena involved in this type of SBLOCA scenario. Almost all the events in the experiment are well predicted by the model based on the TRACE code. In addition, the sensitivity to different models and parameters is investigated. For example, the code slightly overestimates the break mass flow from the upper head, which affects the accuracy of the results significantly. The rise in core exit temperature (CET) is significantly influenced by the flow area of the leakage between the downcomer and the hot leg. Besides, the effects of the break location, low pressure injection system (LPIS) and accumulator setup are also studied.
APA, Harvard, Vancouver, ISO, and other styles
22

Koken, Erman. "A Comparison Of Time-switched Transmit Diversity And Space-time Coded Systems Over Time-varying Miso Channels." Master's thesis, METU, 2011. http://etd.lib.metu.edu.tr/upload/12613557/index.pdf.

Full text
Abstract:
This thesis presents a comparison between two transmit diversity schemes, namely space-time coding and time-switched transmit diversity (TSTD), over block-fading and time-varying multi-input single-output (MISO) channels with different channel parameters. The schemes are concatenated with outer channel codes in order to achieve spatio-temporal diversity. The analytical results are derived for the error performances of the systems, and the simulation results as well as outage probabilities are provided. Besides, the details of the pilot-symbol-aided modulation (PSAM) technique are investigated and the error performances of the systems are analyzed when the channel state information is estimated with PSAM. It is demonstrated using the analytical and simulation results that TSTD has an error performance comparable with the space-time coding techniques, and it even outperforms the space-time codes for some channel parameters. Our results indicate that TSTD can be suggested as an alternative to space-time codes in some time-varying channels, especially due to its implementation simplicity.
APA, Harvard, Vancouver, ISO, and other styles
23

Fehr, Brandon M. "Detailed study of the transient rod pneumatic system on the annular core research reactor." Thesis, Georgia Institute of Technology, 2016. http://hdl.handle.net/1853/55032.

Full text
Abstract:
Throughout the history of the Annular Core Research Reactor (ACRR), Transient Rod (TR) A has experienced an increased rate of failure versus the other two TRs (B and C). Either by pneumatic force or electric motor, the transient rods remove the poison rods from the ACRR core allowing for the irradiation of experiments. In order to develop causes for why TR A is failing (rod break) more often, a better understanding of the whole TR system and its components is needed. This study aims to provide a foundational understanding of how the TR pneumatic system affects the motion of the TRs and the resulting effects that the TR motion has on the neutronics of the ACRR. Transient rod motion profiles have been generated using both experimentally-obtained pressure data and by thermodynamic theory, and input into Razorback, a SNL-developed point kinetics and thermal hydraulics code, to determine the effects that TR timing and pneumatic pressure have on reactivity addition and reactivity feedback. From this study, accurate and precise TR motion profiles have been developed, along with an increased understanding of the pulse timing sequence. With this information, a safety limit within the ACRR was verified for different TR travel lengths and pneumatic system pressures. In addition, longer reactivity addition times have been correlated to cause larger amounts of reactivity feedback. The added clarity on TR motion and timing from this study will pave the way for further study to determine the cause for the increased failure rate of TR A.
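The reactivity-addition analysis described above rests on point kinetics. As a generic illustration (this is not Razorback, and the kinetics constants below are invented rather than ACRR data), a one-delayed-group point-kinetics integrator shows how a step reactivity insertion below prompt critical produces the familiar prompt jump followed by a slow delayed-neutron rise:

```python
# One-delayed-group point-kinetics sketch. All constants are illustrative,
# not Razorback models or actual ACRR kinetics parameters.
BETA = 0.0076      # effective delayed-neutron fraction (assumed)
GEN_TIME = 1.0e-4  # prompt-neutron generation time, s (assumed)
DECAY = 0.08       # one-group precursor decay constant, 1/s (assumed)

def step(n, c, rho, dt):
    """Advance neutron level n and precursor level c by dt (explicit Euler)."""
    dn = ((rho - BETA) / GEN_TIME) * n + DECAY * c
    dc = (BETA / GEN_TIME) * n - DECAY * c
    return n + dn * dt, c + dc * dt

def power_after(rho, t_end, dt=1.0e-5):
    """Relative power after a step reactivity insertion rho at t = 0."""
    n, c = 1.0, BETA / (GEN_TIME * DECAY)  # precursors in equilibrium at rho = 0
    t = 0.0
    while t < t_end:
        n, c = step(n, c, rho, dt)
        t += dt
    return n

# A 0.5$ step (rho = 0.5 * BETA) gives the classic prompt jump to about
# 1 / (1 - 0.5) = 2 times the initial power, then a slow delayed-neutron rise.
n_final = power_after(0.5 * BETA, t_end=0.5)
```

Feeding a time-dependent rho(t) derived from a rod-motion profile, instead of a step, is the kind of coupling the thesis performs with its TR motion data.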
APA, Harvard, Vancouver, ISO, and other styles
24

Peltonen, Joanna. "Effective Spatial Mapping for Coupled Code Analysis of Thermal–Hydraulics/Neutron–Kinetics of Boiling Water Reactors." Doctoral thesis, KTH, Kärnkraftsäkerhet, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122088.

Full text
Abstract:
Analyses of nuclear reactor safety increasingly require coupling of full three-dimensional neutron kinetics (NK) core models with system transient thermal-hydraulics (TH) codes.  In order to produce results within a reasonable computing time, the coupled codes use two different spatial descriptions of the reactor core.  The TH code uses a few channels, typically 5 to 20, which represent the core.  The NK code uses explicitly one node for each fuel assembly.  Therefore, a spatial mapping between the coarse-grid TH and fine-grid NK domains is necessary.  However, improper mappings may result in loss of valuable information, thus causing inaccurate prediction of safety parameters.  The purpose of this thesis is to study the effectiveness of spatial coupling (channel refinement and spatial mapping) and to develop recommendations for NK/TH mapping in the simulation of safety transients.  Additionally, the sensitivity of stability (measured by decay ratio and frequency) to the different types of mapping schemes is analyzed against the OECD/NEA Ringhals-1 Stability Benchmark data.  The research methodology consists of a spatial coupling convergence study, increasing the number of TH channels and varying the mapping approaches, up to and including the reference case.  The reference case consists of one-to-one mapping: one TH channel per fuel assembly.  The comparisons of the results are done for steady-state and transient conditions.  In this thesis a definition of mapping (spatial coupling) is formulated, and all the existing mapping approaches are gathered, analyzed and presented.  Additionally, to increase the efficiency and applicability of spatial mapping convergence studies, a new mapping methodology is proposed.  The new mapping approach is based on the hierarchical clustering method, a method of unsupervised learning that has been adopted by researchers in many different scientific fields thanks to its flexibility and robustness.
The proposed new mapping method turns out to be very successful for the spatial coupling problem and can be fully automated, allowing a significant time reduction in the mapping convergence study.  The steady-state results obtained from three different plant models for all the investigated cases are presented.  All models achieved a well-converged steady state; local parameters were compared, and it was concluded that a solid basis for further transient analysis had been established.  Analyzing the mapping performance, the best predictions for steady-state conditions come from mappings that include the power peaking factor feature, alone or in combination with other features.  Additionally, it is of value to keep the core symmetry (symmetry feature).  A large part of this research is devoted to transient analysis.  The transients were selected to cover a wide range, so that the gathered knowledge may be used for other types of transients.  As a representative of a local perturbation, a control rod drop accident was chosen.  A specially prepared feedwater transient was investigated as a regional perturbation, and a turbine trip serves as an example of a global one.  In the case of the local perturbation, it was found that the number of TH channels is less important than the type of mapping, so a high number of TH channels does not guarantee improved results.  To avoid unnecessary averaging and to obtain the best prediction, the hot channel and the core zone where the accident happens should always be separated from the rest.  The best performance is achieved with mapping according to power peaking factors, and therefore this one is recommended for this type of perturbation.  The regional perturbation was found to be more challenging than the others.  This kind of perturbation is strongly dependent on the mapping type, which affects the power increase rate, SCRAM time, onset of instability, development of the limit cycle, etc.
It was also concluded that a special effort is needed for input model preparation.  In contrast to the regional perturbation, the global perturbation was found to be the least demanding transient.  Here, the number of TH channels and the type of mapping do not have a significant impact on average plant behaviour: the general plant response is always well recreated.  A special effort was also devoted to investigating the core stability performance, in both global and regional mode.  It was found that in the case of unstable cores, a low number of TH channels significantly suppresses the instability.  For these cases the number of TH channels is very important, and therefore at least half of the core has to be modeled to have confidence in the predicted decay ratio and frequency.  In the case of regional instability, in order to capture correctly the out-of-phase oscillations, it is recommended to use a full-scale model.  If this is not possible, a mapping which is a mixture of the first power mode and power peaking factors should be used.  The general conclusions and recommendations are summarized at the end of this thesis.  Development of these recommendations was one of the purposes of this investigation, and they should be taken into consideration when designing new coupled TH/NK models and choosing a mapping strategy for a new transient analysis.
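The hierarchical-clustering mapping idea can be illustrated with a small sketch: group fuel assemblies into a few TH channels by similarity of per-assembly features. The features and values below are invented for illustration, and this is a generic agglomerative routine, not the thesis code:

```python
import numpy as np

def agglomerate(points, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    with the closest centroids until n_clusters remain (O(n^3) sketch)."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best, pair = None, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(points[clusters[a]].mean(axis=0)
                                   - points[clusters[b]].mean(axis=0))
                if best is None or d < best:
                    best, pair = d, (a, b)
        a, b = pair
        clusters[a].extend(clusters[b])
        del clusters[b]
    labels = np.empty(len(points), dtype=int)
    for k, members in enumerate(clusters):
        labels[members] = k
    return labels

# Toy per-assembly features: [power peaking factor, normalized radial position].
# Values are invented for illustration, not taken from the benchmark.
rng = np.random.default_rng(1)
features = np.column_stack([rng.uniform(0.6, 1.4, 24),
                            rng.uniform(0.0, 1.0, 24)])
mapping = agglomerate(features, n_clusters=5)  # TH channel index per assembly
```

Assemblies with similar features end up in the same TH channel, which is exactly the automation of mapping selection the thesis advocates.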


APA, Harvard, Vancouver, ISO, and other styles
25

Chi, Zhanjiang. "Performance Analysis of Maximal-Ratio Combining and Space-Time Block Codes with Transmit Antenna Selection over Nakagami-m Fading Channels." School of Electrical and Information Engineering, 2007. http://hdl.handle.net/2123/2012.

Full text
Abstract:
Master of Engineering (Research)
The latest wireless communication techniques such as high-speed wireless internet applications demand higher data rates and better quality of service (QoS). However, transmission reliability is still degraded by harsh propagation channels. Multiple-input multiple-output (MIMO) systems can increase the system capacity and improve transmission reliability. By transmitting multiple copies of data, a MIMO system can effectively combat the effects of fading. Due to the high hardware cost of a MIMO system, antenna selection techniques have been applied in MIMO system design to reduce the system complexity and cost. The Nakagami-m distribution has been considered for MIMO channel modeling since a wide range of fading channels, from severe to moderate, can be modeled by using the Nakagami-m distribution. The Rayleigh distribution is a special case of the Nakagami-m distribution. In this thesis, we analyze the error performance of two MIMO schemes: maximal-ratio combining with transmit antenna selection (the TAS/MRC scheme) and space-time block codes with transmit antenna selection (the TAS/STBC scheme) over Nakagami-m fading channels. In the TAS/MRC scheme, one of multiple transmit antennas, which maximizes the total received signal-to-noise ratio (SNR), is selected for uncoded data transmission. First we use a moment generating function based (MGF-based) approach to derive the bit error rate (BER) expressions for binary phase shift keying (BPSK), and the symbol error rate (SER) expressions for M-ary phase shift keying (MPSK) and M-ary quadrature amplitude modulation (MQAM), of the TAS/MRC scheme over Nakagami-m fading channels with arbitrary and integer fading parameters m. The asymptotic performance is also investigated. It is revealed that the asymptotic diversity order is equal to the product of the Nakagami fading parameter m, the number of transmit antennas Lt and the number of receive antennas Lr, as if all transmit antennas were used.
Then a Gaussian Q-function approach is used to investigate the error performance of the TAS/STBC scheme over Nakagami-m fading channels. In the TAS/STBC scheme, two transmit antennas, which maximize the output SNR, are selected for transmission. The exact and asymptotic BER expressions for BPSK are obtained for the TAS/STBC schemes with three and four transmit antennas. It is shown that the TAS/STBC scheme can provide a full diversity order of mLtLr.
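The TAS/MRC selection rule described above can be sketched by Monte Carlo: under Nakagami-m fading the channel power gains |h|^2 follow a Gamma(m, Omega/m) distribution, the receiver MRC-combines the Lr branches, and the transmit antenna with the largest combined gain is selected. This is a generic illustration of the selection rule, not the thesis' analytical derivation:

```python
import numpy as np

def nakagami_power_gains(m, omega, shape, rng):
    """Squared envelopes |h|^2 under Nakagami-m fading are Gamma(m, omega/m)."""
    return rng.gamma(shape=m, scale=omega / m, size=shape)

def tas_mrc_output_snr(m, Lt, Lr, snr, rng, trials=100_000):
    """TAS/MRC: MRC-combine the Lr receive branches for each transmit
    antenna, then select the transmit antenna with the largest combined gain."""
    g = nakagami_power_gains(m, 1.0, (trials, Lt, Lr), rng)
    mrc = g.sum(axis=2)           # MRC adds the receive-branch power gains
    return snr * mrc.max(axis=1)  # selection over the Lt transmit antennas

rng = np.random.default_rng(2)
sel = tas_mrc_output_snr(m=2, Lt=2, Lr=2, snr=1.0, rng=rng)
ref = tas_mrc_output_snr(m=2, Lt=1, Lr=2, snr=1.0, rng=rng)
# Antenna selection can only raise the average post-combining SNR.
assert sel.mean() > ref.mean()
```

Plotting the empirical error rate of `sel` against SNR on a log scale would exhibit the asymptotic slope m*Lt*Lr that the thesis derives analytically.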
APA, Harvard, Vancouver, ISO, and other styles
26

Pretorius, Louisa. "Comparative study between a two–group and a multi–group energy dynamics code / Louisa Pretorius." Thesis, North-West University, 2010. http://hdl.handle.net/10394/4947.

Full text
Abstract:
The purpose of this study is to evaluate the effects and importance of different cross-section representations and energy group structures for steady-state and transient analysis. More energy groups may be more accurate, but the calculation becomes much more expensive; hence a balance between accuracy and calculation effort must be found. This study is aimed at comparing a multi-group energy dynamics code, MGT (Multi-group TINTE), with TINTE (TIme Dependent Neutronics and TEmperatures). TINTE's original version (version 204d) only distinguishes between two energy groups, namely the thermal and fast regions, with a polynomial reconstruction of cross-sections pre-calculated as a function of different conditions and temperatures. MGT is a TINTE derivative that has been developed to allow a variable number of broad energy groups. The MGT code is benchmarked against the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark: the PBMR-400 core design. This comparative study reveals the variations in the results when using two different methods for cross-section generation and different multi-group energy structures. Inputs and results received from PBMR (Pty) Ltd. were used for the comparison. A comparison was done between two-group TINTE and the equivalent two energy groups in MGT, as well as between 4, 6 and 8 energy groups in MGT with the different cross-section generation methods, namely the inline spectrum and tabulated cross-section methods. The characteristics that are compared are reactor power, moderator and maximum fuel temperatures, and k-effective (steady-state case only). This study revealed that a balance between accuracy and calculation effort can be met by using a 4-group energy structure. A large part of the available increase in accuracy can be obtained with 4 groups, at the cost of only a small increase in CPU time.
Changing the group structure in the steady-state case from 2 to 8 groups has a greater influence on the variation in the results than the cross-section generation method used to obtain them. In the case of a transient calculation, the cross-section generation method has a greater influence on the variation in the results than in the steady-state case, with an effect similar to that of the number of energy groups.
Thesis (M.Ing. (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2011.
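The few-group versus multi-group trade-off discussed above rests on flux-weighted condensation of cross sections: sigma_G = sum_i(phi_i * sigma_i) / sum_i(phi_i) over the fine groups i in broad group G. A minimal sketch with invented numbers, showing the standard collapse formula rather than code from TINTE or MGT:

```python
import numpy as np

def condense(sigma_fine, flux_fine, group_edges):
    """Collapse fine-group cross sections to broad groups by flux weighting."""
    broad = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        phi = flux_fine[lo:hi]
        broad.append(np.dot(phi, sigma_fine[lo:hi]) / phi.sum())
    return np.array(broad)

# Illustrative 8-fine-group data collapsed to 2 broad (fast/thermal) groups.
# All numbers are made up for the example.
sigma = np.array([1.2, 1.1, 1.0, 0.9, 2.0, 3.0, 4.5, 6.0])  # cross sections
flux  = np.array([0.5, 1.0, 1.5, 1.0, 0.8, 0.6, 0.4, 0.2])  # weighting spectrum
two_group = condense(sigma, flux, group_edges=[0, 4, 8])
# two_group -> [1.025, 3.2]
```

The collapse is exact only if the assumed weighting spectrum matches the actual one, which is why the choice between inline spectrum and tabulated cross sections matters in transients.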
APA, Harvard, Vancouver, ISO, and other styles
27

Baudin, Michaël. "Méthodes de relaxation pour la simulation des écoulements polyphasiques dans les conduites pétrolières." Paris 6, 2003. https://tel.archives-ouvertes.fr/tel-01583001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Afifi, Mohammed Ahmed Melegy Mohammed. "TCP FTAT (Fast Transmit Adaptive Transmission): A New End-To- End Congestion Control Algorithm." Cleveland State University / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=csu1414689425.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Jasiulevicius, Audrius. "Analysis methodology for RBMK-1500 core safety and investigations on corium coolability during a LWR severe accident." Doctoral thesis, KTH, Energy Technology, 2004. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-3703.

Full text
Abstract:

This thesis presents the work involving two broad aspects within the field of nuclear reactor analysis and safety. These are: development of a fully independent reactor dynamics and safety analysis methodology for RBMK-1500 core transient accidents, and experiments on the enhancement of coolability of a particulate bed or a melt pool due to heat removal through the control rod guide tubes.

The first part of the thesis focuses on the development of the RBMK-1500 analysis methodology based on the CORETRAN code package. The second part investigates the issue of coolability during severe accidents in LWR type reactors: the coolability of debris bed and melt pool for in-vessel and ex-vessel conditions.

The safety of the RBMK type reactors became an important area of research after the Chernobyl accident. Since 1989, efforts to adopt Western codes for the RBMK analysis and safety assessment have been made. The first chapters of this thesis describe the development of an independent neutron dynamics and safety analysis methodology for the RBMK-1500 core transients and accidents. This methodology is based on the codes HELIOS and CORETRAN. The RBMK-1500 neutron cross section library was generated with the HELIOS code. The ARROTTA part of the CORETRAN code performs three-dimensional neutron dynamics analysis and the VIPRE-02 part of the CORETRAN package performs the rod bundle thermal hydraulics analysis. The VIPRE-02 code was supplemented with additional CHF correlations used in RBMK-type reactor calculations. The validation, verification and assessment of the CORETRAN code model for RBMK-1500 were performed and are described in the thesis.

The second part of the thesis describes the in-vessel particulate debris bed and melt pool coolability investigations. The role of the control rod guide tubes (CRGTs) in enhancing the coolability during a postulated severe accident in a BWR was investigated experimentally. This investigation is directed towards the accident management scheme of retaining the core melt within the BWR lower head.

The particulate debris bed coolability was also investigated for the ex-vessel severe accident situation, with a flow of non-condensable gases through the porous debris bed. Experimental investigations on the dependence of the quenching time on the non-condensable gas flow rate were carried out.

The first chapter briefly presents the status of developments in both the RBMK-1500 core analysis and the corium coolability areas.

The second chapter describes the generation of the RBMK-1500 neutron cross section data library with the HELIOS code. The cross section library was developed for the whole range of the reactor conditions (i.e. for both cold and hot reactor states). The results of the benchmarking with the WIMS-D4 code and validation against the RBMK Critical Facility experiments are also presented here. The HELIOS generated neutron cross section data library provides a close agreement with the WIMS-D4 code results. The validation against the data from the critical experiments shows that the HELIOS generated neutron cross section library provides excellent predictions for the criticality, axial and radial power distribution, control rod reactivity worths, coolant reactivity effects, etc. The reactivity effects of voiding for the system, fuel assembly and additional absorber channel are underpredicted in the calculations using the HELIOS code generated neutron cross sections. The underprediction, however, is much less than that obtained when the WIMS-D4 code generated cross sections are employed.

The third chapter describes the work performed towards the accurate prediction, assessment and validation of the CHF and post-CHF heat transfer for the RBMK-1500 reactor fuel assemblies employing the VIPRE-02 code. This chapter describes the experiments which were used for validating the CHF correlations appropriate for the RBMK-1500 type reactors. After validation, these correlations were added to the standard version of the VIPRE-02 code. The VIPRE-02 calculations were benchmarked against the RELAP5/MOD3.3 code. It was found that these user-coded additional CHF correlations developed for the RBMK type reactors (Osmachkin, RRC KI and Khabenski correlations) and implemented into the code by the author provide a good prediction of the CHF occurrence at the RBMK reactor nominal pressure range (at about 7 MPa). Transition and film boiling are also predicted well with the VIPRE-02 code for this pressure range. It was found that for RBMK-1500 reactor applications, the EPRI CHF correlation should be used for the CHF predictions for the lower fuel assemblies of the reactor in the subchannel model of the RBMK-1500 fuel assembly. The RRC KI and Bowring CHF correlations may be used for the upper fuel assemblies. For a single-channel model of the RBMK-1500 fuel channel, the Osmachkin, RRC KI and Bowring correlations provide the closest predictions and may be used for the CHF estimation. For low coolant mass fluxes in the fuel channel, the Khabenski correlation can be applied.

The fourth chapter presents the verification of the CORETRAN code for the RBMK-1500 core analysis (HELIOS generated neutron cross section data, coupled CORETRAN 3-D neutron kinetics calculations and the VIPRE-02 thermal hydraulic module). The model was verified against a number of RBMK-1500 plant data and transient calculations. The new RBMK-1500 core model was successfully applied in several safety assessment applications. A series of transient calculations, considered within the scope of the RBMK-type reactor Safety Analysis Report (SAR), were performed. Several cases of the transient calculations are presented in this chapter. The HELIOS/CORETRAN/VIPRE-02 core model for the RBMK-1500 is fully functional. The RBMK-1500 CPS logic, added into CORETRAN, provides an adequate response to changes in the reactor parameters.

Chapters 5 and 6 describe the experiments and the analysis performed on the coolability of particulate debris bed and melt pool during a postulated severe accident in the LWR. In Chapter 5, the coolability potential offered by the presence of a large number of Control Rod Guide Tubes (CRGTs) in the BWR lower head is presented. The experimental investigations of the enhancement of coolability possible with CRGTs were performed on two experimental facilities: POMECO (POrous MEdium COolability) and COMECO (COrium MElt COolability). The influence of the coolant supply through the CRGT on the debris bed dryout heat flux, debris bed and melt pool quenching time, crust growth rate, etc. was examined. The heat removal capacity offered by the presence of the CRGT was quantified with the experimental data obtained from the POMECO and COMECO facilities. It was found that the presence of the CRGTs in the lower head of a BWR offers a substantial potential for heat removal during a postulated severe accident. An additional 10-20 kW of heat were removed from the POMECO and COMECO test sections through the CRGT. This corresponds to an average heat flux on the CRGT wall of 100-300 kW/m2.

In Chapter 6 the ex-vessel particulate debris bed coolability is investigated, considering the non-condensable gases released from the concrete ablation process. The influence of the flow of the non-condensable gases on the process of quenching a hot porous debris bed was considered. The POMECO test facility was modified, adding an air supply at the bottom of the test section to simulate the non-condensable gas release. The process was investigated for both high and low porosity debris beds. It was found that for the low porosity bed composition the countercurrent flooding limit could be exceeded, which would degrade the quenching process for such bed compositions. The experimental results were analyzed with several CCFL models available in the literature.

Keywords: RBMK, light water reactor, core analysis, transient analysis, reactor dynamics, RIA, ATWS, critical heat flux, post-CHF, severe accidents, particulate debris beds, melt pool coolability, BWR, CRGT, dryout, quenching, CCFL, crust growth, solidification, water ingression, heat transfer.

APA, Harvard, Vancouver, ISO, and other styles
30

Sinnokrot, Mohanned Omar. "Space-time block codes with low maximum-likelihood decoding complexity." Diss., Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/31752.

Full text
Abstract:
Thesis (Ph.D)--Electrical and Computer Engineering, Georgia Institute of Technology, 2010.
Committee Chair: Barry, John; Committee Co-Chair: Madisetti, Vijay; Committee Member: Andrew, Alfred; Committee Member: Li, Ye; Committee Member: Ma, Xiaoli; Committee Member: Stuber, Gordon. Part of the SMARTech Electronic Thesis and Dissertation Collection.
APA, Harvard, Vancouver, ISO, and other styles
31

Vargas, Paredero David Eduardo. "Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems." Doctoral thesis, Universitat Politècnica de València, 2016. http://hdl.handle.net/10251/66081.

Full text
Abstract:
[EN] Multiple-Input Multiple-Output (MIMO) technology in Digital Terrestrial Television (DTT) networks has the potential to increase the spectral efficiency and improve network coverage to cope with the competition for limited spectrum use (e.g., assignment of the digital dividend and spectrum demands of mobile broadband), the appearance of new high data rate services (e.g., ultra-high definition TV - UHDTV), and the ubiquity of the content (e.g., fixed, portable, and mobile). It is widely recognised that MIMO can provide multiple benefits such as additional receive power due to array gain, higher resilience against signal outages due to spatial diversity, and higher data rates due to the spatial multiplexing gain of the MIMO channel. These benefits can be achieved without additional transmit power or additional bandwidth, but normally come at the expense of a higher system complexity at the transmitter and receiver ends. The final system performance gains due to the use of MIMO directly depend on physical characteristics of the propagation environment such as spatial correlation, antenna orientation, and/or power imbalances experienced at the transmit aerials. Additionally, due to complexity constraints and finite-precision arithmetic at the receivers, it is crucial for the overall system performance to carefully design specific signal processing algorithms. This dissertation focuses on transmit and receive signal processing for DTT systems using MIMO-BICM (Bit-Interleaved Coded Modulation) without a feedback channel from the receiver terminals to the transmitter. At the transmitter side, this thesis presents investigations on MIMO precoding in DTT systems to overcome system degradations due to different channel conditions. At the receiver side, the focus is on the design and evaluation of practical MIMO-BICM receivers based on quantized information and its impact on both the in-chip memory size and the system performance.
These investigations were carried out within the standardization processes of DVB-NGH (Digital Video Broadcasting - Next Generation Handheld), the handheld evolution of DVB-T2 (Terrestrial - Second Generation), and ATSC 3.0 (Advanced Television Systems Committee - Third Generation), which incorporate MIMO-BICM as a key technology to overcome the Shannon limit of single-antenna communications. Nonetheless, this dissertation employs a generic approach in the design, analysis and evaluations; hence, the results and ideas can be applied to other wireless broadcast communication systems using MIMO-BICM.
Vargas Paredero, DE. (2016). Transmit and Receive Signal Processing for MIMO Terrestrial Broadcast Systems [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/66081
TESIS
Premiado
APA, Harvard, Vancouver, ISO, and other styles
32

Gottfridsson, Filip. "Simulation of Reactor Transient and Design Criteria of Sodium-cooled Fast Reactors." Thesis, Uppsala universitet, Tillämpad kärnfysik, 2010. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-148572.

Full text
Abstract:
The need for energy is growing in the world, and the market for nuclear power is once more expanding. Some issues of the current light-water reactors can be solved by the next generation of nuclear power, Generation IV, where sodium-cooled reactors are one of the candidates. Phénix was a French prototype sodium-cooled reactor, which is seen as a success. It did, however, encounter a previously unexperienced phenomenon, A.U.R.N., in which a negative reactivity transient followed by an oscillating behaviour forced an automatic emergency shutdown of the reactor. This phenomenon led to considerable reactor downtime and is still unsolved. The most probable cause of the transients, however, is radial movement of the core, referred to as core-flowering. This study has investigated the available documentation of the A.U.R.N. events. A simplified model of core-flowering was also created in order to simulate how radial expansion affects the reactivity of a sodium-cooled core. Serpent, a Monte Carlo based simulation code, was chosen as the calculation tool. Furthermore, a model of the Phénix core was successfully created and partly validated. The model has k_eff = 1.00298 and a neutron flux of (8.43 ± 0.02) × 10^15 neutrons/cm^2 at the normal state. The simulations show that an expansion of the core radius decreases the reactivity. A linear approximation of the results gave the relation: change in k_eff per core extension = -60 pcm/mm. This value corresponds remarkably well to the roughly -60 pcm/mm obtained from the dedicated core-flowering experiments in Phénix made by the CEA. Core-flowering can recreate signals similar to those registered during the A.U.R.N. events, though the absence of any trace of core movement in Phénix speaks against this. However, if core-flowering is the sought answer, it can be avoided by design. The equipment that registered the A.U.R.N. events has proved to be insensitive to noise. 
The high amplitude of the transients and their rapidness have, however, led some researchers to believe that the events are a combination of interference in the equipment of Phénix and a mechanical phenomenon. Regardless, the origin of A.U.R.N. seems to be bound to some parameter specific to Phénix, since the transients have only occurred in this reactor. A safety analysis made by an expert committee appointed by the CEA showed that the A.U.R.N. events are not a threat to the safety of Phénix. However, the origin of these negative transients has to be found before construction of a commercial-size sodium-cooled fast reactor can begin; thus, further research is needed.
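The quoted figures admit a back-of-the-envelope check. The sketch below converts k_eff to reactivity in pcm and applies the linear -60 pcm/mm core-flowering relation from the abstract; the 5 mm expansion is an arbitrary illustrative value, not a result from the thesis.

```python
def reactivity_pcm(k_eff):
    """Reactivity in pcm (per cent mille): rho = (k - 1) / k * 1e5."""
    return (k_eff - 1.0) / k_eff * 1e5

rho0 = reactivity_pcm(1.00298)       # nominal state of the Phénix model

def reactivity_after_flowering(expansion_mm, coeff_pcm_per_mm=-60.0):
    """Linear core-flowering model from the abstract: about -60 pcm/mm."""
    return rho0 + coeff_pcm_per_mm * expansion_mm

print(round(rho0, 1))                   # ≈ 297.1 pcm above critical
print(reactivity_after_flowering(5.0))  # 5 mm of radial expansion: sub-critical
```

A few millimetres of uniform radial expansion are thus enough, in this toy linearisation, to drive the modelled core sub-critical, consistent with core-flowering producing a negative reactivity transient.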
APA, Harvard, Vancouver, ISO, and other styles
33

Zhang, Rui. "Transformer modelling and influential parameters identification for geomagnetic disturbances events." Thesis, University of Manchester, 2012. https://www.research.manchester.ac.uk/portal/en/theses/transformer-modelling-and-influential-parameters-identification-for-geomagnetic-disturbances-events(e7c8df5c-8fa9-491f-bc06-9cb90cbbf543).html.

Full text
Abstract:
Power transformers are a key element in the transmission and distribution of electrical energy and as such need to be highly reliable and efficient. In power system networks, transformer core saturation can cause system voltage disturbances, damage transformers, or accelerate insulation ageing. Low-frequency switching transients such as ferroresonance and inrush currents, and increasingly what are now known as geomagnetically induced currents (GIC), are the most common phenomena causing transformer core saturation. This thesis describes extensive simulation studies carried out on GIC and switching ferroresonant transient phenomena. Two types of transformer model were developed to study core saturation problems: a mathematical transformer magnetic circuit model and an ATPDraw transformer model. Using the magnetic circuit model, the influence of the transformer core structure on the magnetising current was successfully identified, as were the transformers' responses to GIC events. Using the ATPDraw model, the behaviour of the AC network under the DC bias caused by GIC events was successfully analysed in various simulation case studies. The effects of the winding connection, the core structure, and the network parameters, including system impedances and transformer loading conditions, on the magnetising currents of the transformers are summarised. Transient interaction among transformers and other system components during energisation and de-energisation operations is becoming increasingly important. One case study on switching ferroresonant transients was modelled using the available transformer test report data and the design data of the main components of the distribution network. The results closely matched field test results, which verified the simulation methodology. 
The simulation results helped establish a fundamental understanding of GIC and ferroresonance events in power networks; among all the influential parameters identified, the transformer core structure is the most important. In summary, a five-limb core is easier to saturate than a three-limb transformer under the same GIC events, and the smaller the side-yoke area of the five-limb core, the easier it is to saturate. More importantly, under GIC events a transformer core can become saturated irrespective of the loading condition of the transformer.
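The half-cycle saturation mechanism behind these GIC effects can be sketched with a toy piecewise-linear core model (the knee flux and inductance values are arbitrary placeholders, not the thesis's magnetic-circuit or ATPDraw models): a quasi-DC flux offset drives the core past the knee on one half-cycle only, producing large asymmetric magnetising-current peaks.

```python
import numpy as np

def magnetizing_current(phi, phi_knee=1.0, l_unsat=100.0, l_sat=1.0):
    """Piecewise-linear core: tiny current below the knee flux,
    steeply rising current once the core saturates."""
    excess = np.maximum(np.abs(phi) - phi_knee, 0.0)
    return np.sign(phi) * (np.minimum(np.abs(phi), phi_knee) / l_unsat
                           + excess / l_sat)

t = np.linspace(0.0, 1.0, 1000)          # one mains cycle (normalised time)
phi_ac = 0.9 * np.sin(2 * np.pi * t)     # rated flux just below the knee

i_normal = magnetizing_current(phi_ac)        # no GIC: small, symmetric
i_biased = magnetizing_current(phi_ac + 0.3)  # quasi-DC GIC offsets the flux

# The offset pushes only the positive half-cycles past the knee, giving the
# asymmetric magnetising-current peaks characteristic of GIC saturation.
print(np.max(np.abs(i_normal)), np.max(i_biased), np.min(i_biased))
```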
APA, Harvard, Vancouver, ISO, and other styles
34

Kliem, S., U. Grundmann, and U. Rohde. "Qualifizierung des Kernmodells DYN3D im Komplex mit dem Störfallcode ATHLET als fortgeschrittenes Werkzeug für die Störfallanalyse von WWER-Reaktoren - Teil 2." Forschungszentrum Dresden, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:d120-qucosa-29356.

Full text
Abstract:
Benchmark calculations for the validation of the coupled neutron kinetics/thermohydraulic code complex DYN3D-ATHLET are described. Two benchmark problems concerning hypothetical accident scenarios with leaks in the steam system for a VVER-440 type reactor and the TMI-1 PWR have been solved. The first benchmark task has been defined by FZR in the frame of the international association "Atomic Energy Research" (AER), the second exercise has been organised under the auspices of the OECD. While in the first benchmark the break of the main steam collector in the sub-critical hot zero power state of the reactor was considered, the break of one of the two main steam lines at full reactor power was assumed in the OECD benchmark. Therefore, in this exercise the mixing of the coolant from the intact and the defect loops had to be considered, while in the AER benchmark the steam collector break causes a homogeneous overcooling of the primary circuit. In the AER benchmark, each participant had to use its own macroscopic cross section libraries. In the OECD benchmark, the cross sections were given in the benchmark definition. The main task of both benchmark problems was to analyse the re-criticality of the scrammed reactor due to the overcooling. For both benchmark problems, a good agreement of the DYN3D-ATHLET solution with the results of other codes was achieved. Differences in the time of re-criticality and the height of the power peak between various solutions of the AER benchmark can be explained by the use of different cross section data. Significant differences in the thermohydraulic parameters (coolant temperature, pressure) occurred only at the late stage of the transient during the emergency injection of highly borated water. In the OECD benchmark, a broader scattering of the thermohydraulic results can be observed, while a good agreement between the various 3D reactor core calculations with given thermohydraulic boundary conditions was achieved. 
The differences in the thermohydraulics were attributed to the difficult modelling of the vertical once-through steam generator with steam superheating. Sensitivity analyses considering the influence of the nodalisation and the impact of the coolant mixing model were performed for the DYN3D-ATHLET solution of the OECD benchmark. The solution of the benchmarks contributed essentially to the qualification of the code complex DYN3D-ATHLET as an advanced tool for accident analysis of both VVER-type reactors and Western PWRs.
APA, Harvard, Vancouver, ISO, and other styles
35

Kliem, S., U. Grundmann, and U. Rohde. "Qualifizierung des Kernmodells DYN3D im Komplex mit dem Störfallcode ATHLET als fortgeschrittenes Werkzeug für die Störfallanalyse von WWER-Reaktoren - Teil 2." Forschungszentrum Rossendorf, 2002. https://hzdr.qucosa.de/id/qucosa%3A21762.

Full text
Abstract:
Benchmark calculations for the validation of the coupled neutron kinetics/thermohydraulic code complex DYN3D-ATHLET are described. Two benchmark problems concerning hypothetical accident scenarios with leaks in the steam system for a VVER-440 type reactor and the TMI-1 PWR have been solved. The first benchmark task has been defined by FZR in the frame of the international association "Atomic Energy Research" (AER), the second exercise has been organised under the auspices of the OECD. While in the first benchmark the break of the main steam collector in the sub-critical hot zero power state of the reactor was considered, the break of one of the two main steam lines at full reactor power was assumed in the OECD benchmark. Therefore, in this exercise the mixing of the coolant from the intact and the defect loops had to be considered, while in the AER benchmark the steam collector break causes a homogeneous overcooling of the primary circuit. In the AER benchmark, each participant had to use its own macroscopic cross section libraries. In the OECD benchmark, the cross sections were given in the benchmark definition. The main task of both benchmark problems was to analyse the re-criticality of the scrammed reactor due to the overcooling. For both benchmark problems, a good agreement of the DYN3D-ATHLET solution with the results of other codes was achieved. Differences in the time of re-criticality and the height of the power peak between various solutions of the AER benchmark can be explained by the use of different cross section data. Significant differences in the thermohydraulic parameters (coolant temperature, pressure) occurred only at the late stage of the transient during the emergency injection of highly borated water. In the OECD benchmark, a broader scattering of the thermohydraulic results can be observed, while a good agreement between the various 3D reactor core calculations with given thermohydraulic boundary conditions was achieved. 
The differences in the thermohydraulics were attributed to the difficult modelling of the vertical once-through steam generator with steam superheating. Sensitivity analyses considering the influence of the nodalisation and the impact of the coolant mixing model were performed for the DYN3D-ATHLET solution of the OECD benchmark. The solution of the benchmarks contributed essentially to the qualification of the code complex DYN3D-ATHLET as an advanced tool for accident analysis of both VVER-type reactors and Western PWRs.
APA, Harvard, Vancouver, ISO, and other styles
36

Yurdanur, Elif. "Theoretical Investigation Of Laser Produced Ni-like Sn Plasma." Master's thesis, METU, 2006. http://etd.lib.metu.edu.tr/upload/12607568/index.pdf.

Full text
Abstract:
In this thesis, a theoretical investigation of nickel-like tin plasma is presented. X-ray production in a plasma medium produced by a laser beam is reviewed, and applications, mostly lithography, are discussed. Two different schemes for x-ray lasing, namely quasi-steady-state and transient collisional excitation, are explained and compared. The computer codes used for plasma simulation, especially for laser-produced plasma and x-ray lasers, including hydrodynamic codes, ray-trace codes, and collisional-radiative codes, are discussed. The code used in this work, EHYBRID, is considered in more detail. An experimental setup which could allow x-ray lasing is designed, and different plasma and laser parameters are analyzed by means of the EHYBRID code. The results are briefly discussed, and the realization of the related experiment is mentioned as future work.
APA, Harvard, Vancouver, ISO, and other styles
37

Rink, Norman Alexander, and Jeronimo Castrillon. "Comprehensive Backend Support for Local Memory Fault Tolerance." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-215785.

Full text
Abstract:
Technological advances drive hardware to ever smaller feature sizes, causing devices to become more vulnerable to transient faults. Applications can be protected against faults by adding error detection and recovery measures in software. This is popularly achieved by applying automatic program transformations. However, transformations applied to program representations at abstraction levels higher than machine instructions are fundamentally incapable of protecting against vulnerabilities that are introduced during compilation. In particular, a large proportion of a program’s memory accesses are introduced by the compiler backend. This report presents a backend that protects these accesses against faults in the memory system. It is demonstrated that the presented backend can detect all single bit flips in memory that would be missed by an error detection scheme that operates on the LLVM intermediate representation of programs. The presented compiler backend is obtained by modifying the LLVM backend for the x86 architecture. On a subset of SPEC CINT2006 the runtime overhead incurred by the backend modifications amounts to 1.50x for the 32-bit processor architecture i386, and 1.13x for the 64-bit architecture x86_64. To achieve comprehensive detection of memory faults, the modified backend implements an adjusted calling convention that leaves library function calls transparent and intact.
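The report's instruction-level protection cannot be reproduced here, but the underlying idea of duplication with checking can be sketched at a much higher level. This is a toy Python model, not the LLVM x86 backend described above, where the duplication and comparison are emitted as machine instructions.

```python
class CheckedMemory:
    """Toy duplication-with-checking scheme: every word is stored twice and
    the copies are compared on each load, so any single bit flip in either
    copy is detected before the corrupted value can be used."""

    def __init__(self, size):
        self.primary = [0] * size
        self.shadow = [0] * size

    def store(self, addr, value):
        self.primary[addr] = value
        self.shadow[addr] = value

    def load(self, addr):
        if self.primary[addr] != self.shadow[addr]:
            raise RuntimeError(f"memory fault detected at address {addr}")
        return self.primary[addr]

mem = CheckedMemory(16)
mem.store(3, 0b1010)
mem.primary[3] ^= 0b0100          # inject a single bit flip
try:
    mem.load(3)
except RuntimeError as err:
    print(err)                    # the flip is detected on the next load
```

The cost profile mirrors the report's findings: the duplicated stores and comparison on every load are exactly the kind of overhead measured as 1.13x-1.50x runtime above.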
APA, Harvard, Vancouver, ISO, and other styles
38

Le, Roux Nicolas. "Etude par similitude de l'influence du vent sur les transferts de masse dans les bâtiments complexes." Phd thesis, Université de La Rochelle, 2011. http://tel.archives-ouvertes.fr/tel-00717838.

Full text
Abstract:
Residential and industrial buildings equipped with a ventilation network are complex installations in which a variety of mass and energy transfers can occur, depending on the operating situation. To study these mass transfers, a methodology for designing reduced-scale experiments on isothermal flows, in steady or transient state, was developed. This methodology was validated numerically and experimentally on simple configurations, then applied to two reference configurations representative of those encountered in the nuclear field. The influence of wind on mass transfers within these configurations, under normal, degraded (ventilation shutdown), or accidental (internal overpressure) operating conditions, was studied in the Jules Verne climatic wind tunnel of the CSTB. Wind effects, whether or not coupled with an internal overpressure, can lead to a partial or total loss of pollutant confinement within the installations. Moreover, wind turbulence can induce instantaneous reversals of the leakage flows that are not identified in steady state. Furthermore, the analysis of transient loads shows the weak influence of branch inertia on transient flows, for quantities characteristic of a real installation. Finally, gas tracing tests were carried out to study the dispersion of a pollutant within a reference configuration subjected to the coupled effects of wind, mechanical ventilation, and internal overpressure. The robustness of the SYLVIA zonal code, used notably to support safety assessments of nuclear facilities, was analysed on the basis of these experimental results. Its representation of the physical phenomena observed experimentally was validated in both steady and transient states. 
However, some limitations were identified for the study of passive scalar dispersion, owing to the assumptions used in the SYLVIA code, as in any zonal code (homogeneous concentration in the rooms, instantaneous propagation in the branches and rooms).
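The wind-induced flow reversals described above can be illustrated with a minimal quadratic branch law of the kind used in zonal ventilation codes. All pressures and the resistance value below are invented for illustration; SYLVIA's actual models are far more detailed.

```python
import math

def branch_flow(dp_pa, resistance):
    """Quadratic (turbulent) branch law dP = R * Q * |Q|, solved for Q."""
    return math.copysign(math.sqrt(abs(dp_pa) / resistance), dp_pa)

room_pressure = -20.0                 # Pa: depression held by the extract fan
for facade_pressure in (0.0, -50.0):  # calm vs. a suction-side wind gust
    q_leak = branch_flow(facade_pressure - room_pressure, 5e3)
    direction = "inward (confinement kept)" if q_leak > 0 else "outward (confinement lost)"
    print(f"wind {facade_pressure:+.0f} Pa: leak flow {q_leak:+.4f} m3/s, {direction}")
```

When the gust depression at the facade exceeds the depression maintained indoors, the leak flow changes sign and the confinement is momentarily lost, the phenomenon observed in the wind-tunnel tests.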
APA, Harvard, Vancouver, ISO, and other styles
39

Dagoneau, Nicolas. "Détection de sursauts gamma ultra-longs et traitement d'images embarqué pour le télescope spatial SVOM/ECLAIRs." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASP040.

Full text
Abstract:
Gamma-ray bursts (GRBs) are brief extragalactic phenomena, among the most energetic in the Universe, resulting from the formation of a stellar-mass black hole. They are characterised by a prompt emission of X- and gamma-ray photons, which can last from a fraction of a second to a few minutes, followed by an afterglow at other wavelengths. The French-Chinese SVOM mission, expected to begin operations after 2021, aims to detect their prompt emission and to observe their afterglow from space and from the ground. The ECLAIRs coded-mask telescope onboard the SVOM satellite will scan the sky in search of the prompt emission of GRBs in the hard X-ray and soft gamma-ray band, and will also observe other known sources emitting in this energy range, such as X-ray binaries hosting a black hole or a neutron star and a companion star. The ECLAIRs telescope is also sensitive to the Cosmic X-ray Background (CXB). During my thesis, I studied the influence of the CXB and of known X-ray sources on the onboard imaging capabilities of ECLAIRs. The CXB and known sources degrade the quality of the images produced by the onboard software and thus reduce its GRB detection capabilities. In order to enhance the detection of unknown sources, I studied two methods of correcting the detector-plane image prior to sky-image reconstruction: a predefined-model fitting method and a wavelet-based method. Known bright sources that may disturb the detection of GRBs will be corrected by one of these methods, while fainter ones will be excluded from the search region for new sources in the reconstructed sky. In the latter case, it will be possible to detect X-ray flares with the onboard software. The processing strategy for known sources and the management of flare detection are based on a catalogue which will be part of the ECLAIRs onboard software and which I built from data collected by the Swift/BAT and MAXI/GSC instruments. 
In addition, I also studied ultra-long-duration GRBs, whose X-ray emission can last more than 1000 seconds. The detection of these bursts could benefit from the long-exposure imaging of ECLAIRs, which reaches up to 20 minutes. I simulated the few events detected so far by Swift/BAT with a prototype of the ECLAIRs triggering software and showed that ECLAIRs could detect at least as many ultra-long bursts as Swift.
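The principle of coded-mask imaging used by ECLAIRs can be sketched in one dimension (a random toy mask and sizes, not the actual ECLAIRs pattern): the detector records a shifted copy of the mask for each source, and correlating the detector image with the mean-subtracted pattern recovers the source position.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
mask = rng.integers(0, 2, N)       # open (1) / closed (0) mask elements
sky = np.zeros(N)
sky[20] = 100.0                    # one point source at position 20

# Shadowgram: a source at position k projects the mask pattern shifted by k.
detector = sum(sky[k] * np.roll(mask, -k) for k in range(N))

# Decoding: correlate the detector image with the mean-subtracted mask;
# the correlation peaks at the true source position.
decoded = np.array([detector @ (np.roll(mask, -s) - mask.mean())
                    for s in range(N)])
print(int(np.argmax(decoded)))     # → 20
```

Bright known sources leave the same kind of shifted-mask shadow on the detector plane, which is why they must be fitted out or excluded before searching the decoded sky for new transients.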
APA, Harvard, Vancouver, ISO, and other styles
40

Fougeron, Denis. "Etude et mise en oeuvre de cellules résistantes aux radiations dans le cadre de l'évolution du détecteur à pixels d'Atlas technologie CMOS 65 nm." Electronic Thesis or Diss., Toulon, 2020. http://www.theses.fr/2020TOUL0005.

Full text
Abstract:
This study takes place within an international collaboration, RD53, whose goal is to provide the scientific community with a front-end readout ASIC for the future pixel detector in 2022. The 65 nm technology chosen by the collaboration will have to remain operational in a highly radioactive environment (10 MGray) for five years of operation without any possible maintenance. Two experimental approaches are described in this thesis: 1. Irradiation studies were carried out to estimate the dose tolerance (TID) of the 65 nm process and to fix the essential design rules for the digital and analog cells implanted in the final circuit. Test vehicles (PCM) were defined and irradiated using an X-ray source (10 keV, 3 kW) to estimate dose effects; the results obtained are summarised in the relevant chapters. 2. In order to optimise the tolerance of the memory cells to SEU effects, several prototype ASICs were designed, including different architectures to be irradiated. Several irradiation campaigns were carried out using a heavy-ion beam and a proton beam in order to compare their behaviour and extract a cross-section as accurate as possible.
APA, Harvard, Vancouver, ISO, and other styles
41

Chielle, Eduardo. "Selective software-implemented hardware fault tolerance tecnhiques to detect soft errors in processors with reduced overhead." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/142568.

Full text
Abstract:
Software-based fault tolerance techniques are a low-cost way to protect processors against soft errors. However, they introduce significant overheads in execution time and code size, which consequently increase the energy consumption. Systems operating under time or energy constraints may not be able to make use of these techniques. For this reason, this work proposes software-based fault tolerance techniques with lower overheads and fault coverage similar to state-of-the-art software techniques. Since detection is less costly than correction, the work focuses on software-based detection techniques. Firstly, a set of data-flow techniques called VAR is proposed. The techniques are based on general building rules to allow an exhaustive assessment, in terms of reliability and overheads, of different technique variations. The rules define how a technique duplicates the code and inserts checkers; each technique uses a different set of rules. Then, a control-flow technique called SETA (Software-only Error-detection Technique using Assertions) is introduced. Compared with a state-of-the-art technique, SETA is 11.0% faster and occupies 10.3% fewer memory positions. The most promising data-flow techniques are combined with the control-flow technique in order to protect both the data flow and the control flow of the target application. To reduce the overheads even further, methods to selectively apply the proposed software techniques have been developed. For the data-flow techniques, instead of protecting all registers, only a set of selected registers is protected; the set is selected based on a metric that analyzes the code and ranks the registers by their criticality. For the control-flow technique, two approaches are taken: (1) removing checkers from basic blocks: all basic blocks are protected by SETA, but only selected basic blocks have checkers inserted; and (2) selectively protecting basic blocks: only a set of basic blocks is protected.
The techniques and their selective versions are evaluated in terms of execution time, code size, fault coverage, and Mean Work To Failure (MWTF), a metric that measures the trade-off between fault coverage and execution time. Results show that it was possible to reduce the overheads without affecting the fault coverage, and that, for a small reduction in fault coverage, the overheads could be reduced significantly. Lastly, since evaluating all the possible combinations for selective hardening of every application takes too much time, this work uses a method to extrapolate the results obtained by simulation in order to find the parameters for the selective combination of data- and control-flow techniques that are probably the best candidates to improve the trade-off between reliability and overheads.
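The duplication-with-checkers idea behind the VAR rules can be illustrated with a small sketch (a hypothetical Python analogue, not the thesis's actual compiler transformation): every protected "register" gets a shadow copy, and a checker compares the two before the result leaves the protected region.

```python
import random

def run_protected(a, b, flip=None):
    """Toy analogue of data-flow duplication: each register has a shadow
    copy, and a checker compares original and shadow before the value is
    consumed. 'flip' optionally names a register to corrupt, emulating a
    single-event upset."""
    regs = {"a": a, "b": b}
    shadow = dict(regs)                    # duplicated registers

    if flip is not None:                   # inject a single bit flip
        regs[flip] ^= 1 << random.randrange(8)

    # checker inserted before the result is used
    for name in regs:
        if regs[name] != shadow[name]:
            return None, True              # fault detected
    return regs["a"] + regs["b"], False    # fault-free result
```

A selective version, in the spirit of the thesis, would keep shadow copies and checkers only for the registers ranked most critical, trading a little coverage for lower overhead.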
APA, Harvard, Vancouver, ISO, and other styles
42

Da Penha Coelho, Alexandre Augusto. "Tolérance aux fautes et fiabilité pour les réseaux sur puce 3D partiellement connectés." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT054.

Full text
Abstract:
Networks-on-Chip (NoC) have emerged as a viable solution to the communication challenges of highly complex Systems-on-Chip (SoC). The NoC architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication challenges such as wiring complexity, communication latency, and bandwidth. Furthermore, the combined benefits of 3D ICs and NoCs provide the possibility of designing a high-performance system in a limited chip area. The major advantages of Three-Dimensional Networks-on-Chip (3D-NoCs) are a considerable reduction in average wire length and wire delay, resulting in lower power consumption and higher performance. However, 3D-NoCs suffer from reliability issues such as the process variability of 3D-IC manufacturing. In particular, the low yield of vertical connections significantly impacts the design of three-dimensional die stacks with a large number of Through-Silicon Vias (TSVs). Equally concerning, advances in integrated circuit manufacturing technologies are resulting in a potential increase in their sensitivity to the effects of radiation present in the environment in which they will operate. In fact, the increasing number of transient faults has become, in recent years, a major concern in the design of critical SoCs. As a result, evaluating the sensitivity of circuits and applications to events caused by energetic particles present in the real environment is a major concern that needs to be addressed. This thesis therefore presents contributions in two important areas of reliability research: the design and implementation of deadlock-free fault-tolerant routing schemes for the emerging three-dimensional Networks-on-Chip, and the design of fault injection frameworks able to emulate single and multiple transient faults in HDL-based circuits.
The first part of this thesis addresses the issues of transient and permanent faults in the architecture of 3D-NoCs and introduces a new resilient routing computation unit as well as a new runtime fault-tolerant routing scheme. A novel resilient mechanism is introduced in order to tolerate transient faults occurring in the route computation unit (RCU), the most important logical element in NoC routers. Failures in the RCU can provoke misrouting, which may lead to severe effects such as deadlocks or packet loss, corrupting the operation of the entire chip. By combining a reliable fault detection circuit leveraging circuit-level double-sampling with a cost-effective rerouting mechanism, we develop a full fault-tolerance solution that can efficiently detect and correct such fatal errors before the affected packets leave the router. Also in the first part of this thesis, a novel fault-tolerant routing scheme for vertically-partially-connected 3D Networks-on-Chip called FL-RuNS is presented. Thanks to an asymmetric distribution of virtual channels, FL-RuNS can guarantee 100% packet delivery under an unconstrained set of runtime and permanent vertical link failures. With the aim of emulating radiation effects on new SoC designs, the second part of this thesis addresses fault injection methodologies by introducing two frameworks named NETFI-2 (Netlist Fault Injection) and NoCFI (Networks-on-Chip Fault Injection). NETFI-2 is a fault injection methodology able to emulate transient faults such as Single Event Upsets (SEU) and Single Event Transients (SET) in an HDL-based (Hardware Description Language) design. Extensive experiments performed on two appealing case studies are presented to demonstrate the features and advantages of NETFI-2. Finally, in the last part of this work, we present NoCFI as a novel methodology to inject multiple faults such as MBUs and SEMTs into a Networks-on-Chip architecture.
NoCFI combines the ASIC design flow, in order to extract layout information, with the FPGA design flow to emulate multiple transient faults.
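The kind of SEU emulation such injection frameworks perform can be sketched in miniature (an illustrative analogue, not NETFI-2 or NoCFI themselves): flip one bit of a design's input "register", rerun the computation, and compare against a golden run to see whether the fault propagates to an observable error or is logically masked.

```python
def golden(x):
    """Reference computation; the low four bits of the input are
    logically masked (never reach the output)."""
    return x >> 4

def run_with_seu(x, bit):
    """Emulate a Single Event Upset: flip one bit of the input word
    before the computation runs."""
    return golden(x ^ (1 << bit))

def campaign(x, width=8):
    """Exhaustive single-bit injection campaign: count the injections
    whose outcome differs from the golden run, i.e. faults that
    propagate rather than being masked."""
    ref = golden(x)
    return sum(run_with_seu(x, b) != ref for b in range(width))
```

For `campaign(0)`, flips in bits 0-3 are masked by the right-shift while flips in bits 4-7 propagate, so half of the injections become observable errors, the kind of sensitivity figure a real campaign reports per flip-flop or signal.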
APA, Harvard, Vancouver, ISO, and other styles
43

You, Hui Long, and 游輝隆. "TransII-logic compiled code logic simulator." Thesis, 1995. http://ndltd.ncl.edu.tw/handle/19503114603719113871.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Tsai, Chia-Hung, and 蔡嘉紘. "BCH Coded Multilevel Space-Time Block Codes with Three Time Slots and Two Transmit Antennas." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/y94yxw.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Electrical Engineering
105
In digital transmission, error control coding can improve the error rate. In recent years, multiple-input multiple-output (MIMO) schemes have been used to overcome the problem of multipath fading in wireless communication environments. In this thesis, we propose a multilevel space-time block code with two transmit antennas and three time slots which can be applied to the LTE-A frame structure, yielding a transmission system that can meet high transmission-rate requirements in MIMO. This new structure is designed by combining a multilevel space-time block code with a BCH code. The simulation results show that this new structure achieves full-rate transmission and outstanding error performance.
APA, Harvard, Vancouver, ISO, and other styles
45

Hsu, Sheng-hui, and 徐勝輝. "An Approach to Transmit Secret Messages via QR Code." Thesis, 2011. http://ndltd.ncl.edu.tw/handle/76004027997891417720.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Computer Science and Information Engineering
99
Encryption is the most frequently used method to secure messages transmitted over unsafe channels. However, ciphertext looks like a random string and can be easily detected by attackers. Information hiding technologies, on the other hand, utilize the characteristics of media or the representation of data to camouflage the existence of a hidden message in order to protect secrets.  QR code and other two-dimensional codes have gained wide application in recent years. These codes are able to transmit information efficiently via physical media such as paper. Since QR code is designed to be tolerant of noise and has a random-bits-like representation, it is naturally suitable for information hiding. This thesis improves an existing approach and demonstrates a complete implementation of sending confidential data with QR code by embedding the password or secret key as noise.
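The core trick, hiding data as "noise" that the barcode's error correction tolerates, can be illustrated with a toy analogue (hypothetical, and not the thesis's scheme: QR actually uses Reed-Solomon codes). With a Hamming(7,4)-style parity check, flipping at most one of seven cover bits forces the syndrome to equal a 3-bit secret; the receiver recovers the secret from the syndrome, while an ordinary decoder simply "corrects" the flip and sees a clean codeword.

```python
def syndrome(bits):
    """Hamming-style syndrome: bit at 1-based position i contributes i
    (XORed) when set; a single flip at position p changes it by p."""
    s = 0
    for i, b in enumerate(bits, start=1):
        if b:
            s ^= i
    return s

def embed(cover, secret):
    """Flip at most one of 7 cover bits so the syndrome equals the
    3-bit secret (0..7); the flip looks like ordinary channel noise."""
    stego = list(cover)
    pos = syndrome(stego) ^ secret   # position to flip; 0 means no flip
    if pos:
        stego[pos - 1] ^= 1
    return stego

def extract(stego):
    # the receiver's "secret" is just the syndrome of what arrived
    return syndrome(stego)
```

This is the classic matrix-embedding construction: 3 secret bits per 7 cover bits at the cost of at most one flip, which stays within what a single-error-correcting decoder tolerates.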
APA, Harvard, Vancouver, ISO, and other styles
46

Muharemovic, Tarik. "Information theory of transmit diversity and space-time code design." Thesis, 2000. http://hdl.handle.net/1911/17364.

Full text
Abstract:
We restate the achievable information rates for multiple-transmit multiple-receive antenna systems in fast fading channels. Then we consider the non-ergodic channel, where we evaluate an asymptotic expression for the outage probability and note its striking similarity with the error probability. After that, we consider a simple transmit diversity technique [7] and evaluate its achievable information rate. We note that in the case of a single receive antenna there is no loss in capacity if one resorts to this technique as a means of exploiting transmit diversity. However, in the case of multiple receivers there is a penalty in capacity. We also demonstrate that CDMA orthogonalisation between antennas results in capacity loss. Then we derive a performance criterion for codes concatenated with the simple diversity technique. After that, we propose some high-rate space-time codes which use multiple antennas to increase the rate rather than to improve communication reliability. Finally, we note how some distance-spectrum trimming can increase the performance of space-time codes without increasing computational complexity.
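Simple two-antenna transmit diversity of this kind is commonly the Alamouti scheme (whether reference [7] is exactly that is an assumption here); a minimal noiseless sketch of its two-slot encoding and linear combining with a single receive antenna:

```python
import numpy as np

def alamouti(s1, s2, h1, h2):
    """Alamouti scheme, one receive antenna, channel constant over two
    slots. Slot 1 transmits (s1, s2); slot 2 transmits (-s2*, s1*)."""
    r1 = h1 * s1 + h2 * s2                       # received, slot 1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)    # received, slot 2
    # linear combining; each estimate enjoys diversity gain |h1|^2+|h2|^2
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat
```

The combining decouples the two symbols exactly, which is why, with one receive antenna, this orthogonal structure exploits transmit diversity without a capacity penalty.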
APA, Harvard, Vancouver, ISO, and other styles
47

Churms, Duane. "Comparison of code rate and transmit diversity in MIMO systems." Thesis, 2016. http://hdl.handle.net/10539/21155.

Full text
Abstract:
A thesis submitted in fulfilment of the requirements for the degree of Master of Science in the Centre of Excellence in Telecommunications and Software, School of Electrical and Information Engineering, March 2016
In order to compare low-rate error-correcting codes to MIMO schemes with transmit diversity, two systems with the same throughput are compared. A VBLAST MIMO system with (15, 5) Reed-Solomon coding is compared to an Alamouti MIMO system with (15, 10) Reed-Solomon coding. The latter is found to perform significantly better, indicating that transmit diversity is a more effective technique for minimising errors than reducing the code rate. The Guruswami-Sudan/Koetter-Vardy soft-decision decoding algorithm was implemented to allow decoding beyond the conventional error-correcting bound of RS codes, and VBLAST was adapted to provide reliability information. Analysis is also performed to find the optimal code rate when using various MIMO systems.
MT2016
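The comparison is fair because it holds throughput fixed: VBLAST carries two spatial streams at RS rate 5/15, while Alamouti carries one stream at RS rate 10/15. A quick illustrative check:

```python
from fractions import Fraction

def throughput(spatial_streams, n, k):
    # information symbols delivered per channel use:
    # number of parallel streams times the (n, k) code rate k/n
    return spatial_streams * Fraction(k, n)

vblast = throughput(2, 15, 5)     # VBLAST + RS(15,5): 2 * 1/3
alamouti = throughput(1, 15, 10)  # Alamouti (rate-1 STBC) + RS(15,10): 1 * 2/3
assert vblast == alamouti == Fraction(2, 3)
```

With throughput equalised, any performance gap is attributable to how the redundancy is spent, on spatial diversity versus on a lower code rate.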
APA, Harvard, Vancouver, ISO, and other styles
48

Shi, Tai-Yu, and 徐泰裕. "Golay Code in Third Transmit Phasing for Harmonic Contrast Detection." Thesis, 2010. http://ndltd.ncl.edu.tw/handle/50702985523224072314.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Department of Electrical Engineering
98
Ultrasonic harmonic imaging is limited by low signal-to-noise ratio (SNR) and insufficient contrast-to-tissue ratio (CTR). The method of 3f0 transmit phasing utilizes an additional 3f0 transmit signal to provide mutual cancellation between the frequency-sum and frequency-difference components of the tissue harmonic signal to improve image quality. This thesis presents a technique that uses Golay code in third-harmonic (3f0) transmit phasing for harmonic imaging with ultrasound. In linear imaging, Golay-coded transmission is achieved by transmitting two coded sequences comprising +1 and -1 pulses. The echoes from the two coded transmissions are processed with a matched filter and summed to increase mainlobe SNR. The complementary range sidelobes are also cancelled in the sum. To produce the -1 pulse of the Golay code for the harmonic signal in 3f0 transmit phasing, a phase shift of 90 degrees is added to the fundamental transmit phase and subtracted from the 3f0 transmit phase, respectively. Both simulations and experiments are performed to validate the Golay-encoded transmit waveform for 3f0 transmit phasing. Our results show that, depending on the code length, the Golay code in combination with 3f0 transmit phasing can enhance SNR by 8~14 dB together with a CTR improvement of 14~16 dB. Nevertheless, due to the unique nonlinear oscillation of the microbubbles, residual range sidelobes remain in the contrast region and thus lead to image degradation. Keywords: 3f0 transmit phasing, Golay code, contrast harmonic image, SNR, contrast-to-tissue ratio (CTR)
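The sidelobe cancellation that motivates Golay coding rests on the complementary-pair property: the aperiodic autocorrelations of the two sequences sum to zero at every nonzero lag, leaving only the mainlobe peak. A minimal numerical check (illustrative only, not the thesis's imaging code):

```python
import numpy as np

def acf(x):
    # full aperiodic autocorrelation of a +/-1 sequence
    return np.correlate(x, x, mode="full")

# the shortest Golay complementary pair; longer pairs (4, 8, 16, ...)
# follow recursively from (A, B) -> (A|B, A|-B)
a = np.array([1, 1])
b = np.array([1, -1])

sidelobe_sum = acf(a) + acf(b)
# sidelobes cancel exactly: zero at every nonzero lag, 2N at zero lag
```

In the imaging chain, matched filtering each echo and summing the two firings realizes exactly this sum, which is why the range sidelobes vanish for linear (tissue) scatterers; microbubbles break the linearity assumption, leaving the residual sidelobes the abstract reports.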
APA, Harvard, Vancouver, ISO, and other styles
49

"Tempos de transito multiparametricos : estimação e inversão." Tese, Biblioteca Digital da Unicamp, 2001. http://libdigi.unicamp.br/document/?code=vtls000235894.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Wu, Tai-Cheng, and 吳泰徵. "Differentially Transmit-Diversity Block Coded 2IMO OFDM System." Thesis, 2004. http://ndltd.ncl.edu.tw/handle/60436366801444246324.

Full text
Abstract:
Master's thesis
National Taipei University of Technology
Graduate Institute of Computer, Communication and Control
92
For wireless mobile communications, the problem is how to transmit and receive signals effectively and reliably over a practical transmission channel; thus, a wideband wireless communication system should be able to combat the time-selective fading resulting from high-speed movement and overcome the frequency-selective fading caused by multipath propagation. For coherent transmission, real-time channel state information (CSI) is required to demodulate the signals correctly. However, for a rapidly fading channel, accurate and real-time channel estimation is a difficult task. To avoid this, we consider noncoherent transmission that employs differential space-time block coding (DSTBC) to combat time-selective fading. To obtain the best performance, the noncoherent maximum-likelihood sequence detector is studied. To avoid its high complexity, we also study its three special cases, namely the noncoherent one-shot detector, the linearly predictive decision-feedback detector, and the linearly predictive Viterbi receiver. To overcome frequency-selective fading, we choose the orthogonal frequency division multiplexing (OFDM) technique, which divides the transmission bandwidth into many narrowband subcarriers, each of which exhibits approximately flat fading. In summary, by combining the two advanced transmission techniques mentioned above, namely DSTBC and OFDM, we construct wideband wireless noncoherent 2IMO OFDM systems. Numerical results reveal that satisfactory performance can be obtained even when the systems operate in highly selective channels.
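The noncoherent principle behind DSTBC, detection without channel state information because the data rides on differences between consecutive transmissions, can be sketched in its simplest scalar form, differential BPSK (an illustrative analogue, not the thesis's 2IMO system):

```python
import numpy as np

def dpsk_encode(bits):
    """Differential BPSK: information rides on the phase *change*
    between consecutive symbols, starting from a reference symbol."""
    s = [1 + 0j]
    for b in bits:
        s.append(s[-1] * (-1 if b else 1))
    return np.array(s)

def dpsk_detect(r):
    """One-shot noncoherent detection: compare consecutive received
    samples; an unknown (slowly varying) channel phase cancels out."""
    return [int((r[k] * np.conj(r[k - 1])).real < 0)
            for k in range(1, len(r))]

bits = [0, 1, 1, 0, 1]
h = np.exp(1j * 1.234)   # unknown constant channel phase, never estimated
recovered = dpsk_detect(h * dpsk_encode(bits))
```

This corresponds to the "one-shot" detector of the abstract; the predictive detectors it studies improve on this by filtering over several past samples when the channel varies within a block.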
APA, Harvard, Vancouver, ISO, and other styles
