
Dissertations / Theses on the topic 'Multiple Mobile Signal Sources'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 24 dissertations / theses for your research on the topic 'Multiple Mobile Signal Sources.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses from a wide variety of disciplines and organise your bibliography correctly.

1

Habool, Al-Shamery Maitham. "Reconstruction of multiple point sources by employing a modified Gerchberg-Saxton iterative algorithm." Thesis, University of Sussex, 2018. http://sro.sussex.ac.uk/id/eprint/79826/.

Full text
Abstract:
Digital holograms have been developed and used in many applications. Holography is a technique by which a wavefront can be recorded and then reconstructed, often even in the absence of the original object. In this project, we use digital holography methods in which the original object amplitude and phase are recorded numerically, which allows these data to be downloaded to a spatial light modulator (SLM). This provides digital holography with capabilities that are not available using optical holographic methods. The digitally reconstructed holographic image can be refocused to different depths depending on the reconstruction distance. This remarkable aspect of digital holography can be useful in many applications, and one of the most beneficial is the study of biological cells. In this research, point-source digital in-line and off-axis digital holography with numerical reconstruction has been studied. The point-source hologram can be used in many biological applications. As the original object we use the binary amplitude Fresnel zone plate, which is made of rings with alternating opaque and transparent transmittance. The in-line hologram of a spherical wave of wavelength λ emanating from the point source is employed first in the project. We subsequently employ an off-axis point source in which the original point-source object is translated away from its original on-axis location. Firstly, we create the binary amplitude Fresnel zone plate (FZP), which is considered the hologram of the point source. We determine a phase-only digital hologram calculation technique for the single point-source object. We have used a modified Gerchberg-Saxton algorithm (MGSA) instead of the non-iterative algorithm employed in classical analogue holography. The first complex amplitude distribution, i(x, y), is the result of the Fourier transform of the point-source phase combined with a random phase; this complex field distribution is the input of the iteration process. Secondly, we propagate this light field using the Fourier transform method. Next, we apply the first constraint by modifying the amplitude distribution, that is, by replacing it with the measured modulus and keeping the phase distribution unchanged. We use the root mean square error (RMSE) between the reconstructed field and the target field to control the iteration process. The RMSE decreases at each iteration, giving rise to an error reduction in the reconstructed wavefront. We then extend this method to the reconstruction of multiple point sources. Thus the overall aim of this thesis has been to create an algorithm that is able to reconstruct multi-point-source objects from only their modulus. The method could then be used for biological microscopy applications in which it is necessary to determine the position of a fluorescing source from within a volume of biological tissue.
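As a rough illustration of the iterative loop described in this abstract, the following minimal sketch implements a generic Gerchberg-Saxton-style phase-retrieval iteration with a Fourier-transform propagation model and an amplitude constraint; it is not the thesis's exact MGSA, and all function names and parameters are illustrative.

```python
# Minimal Gerchberg-Saxton-style sketch (assumed names and parameters, not the thesis's MGSA).
import numpy as np

def gerchberg_saxton(target_modulus, n_iter=100, seed=0):
    """Estimate a phase-only hologram whose reconstruction matches target_modulus."""
    rng = np.random.default_rng(seed)
    # Start from the measured modulus combined with a random phase.
    field = target_modulus * np.exp(1j * 2 * np.pi * rng.random(target_modulus.shape))
    for _ in range(n_iter):
        # Propagate to the hologram plane and keep only the phase (phase-only constraint).
        holo = np.exp(1j * np.angle(np.fft.fft2(field)))
        # Propagate back and enforce the measured modulus while keeping the phase.
        recon = np.fft.ifft2(holo)
        field = target_modulus * np.exp(1j * np.angle(recon))
    # RMSE between reconstructed modulus and target controls/monitors convergence.
    rmse = np.sqrt(np.mean((np.abs(recon) - target_modulus) ** 2))
    return holo, rmse
```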
APA, Harvard, Vancouver, ISO, and other styles
2

Pather, Direshin. "A model for context awareness for mobile applications using multiple-input sources." Thesis, Nelson Mandela Metropolitan University, 2015. http://hdl.handle.net/10948/2969.

Full text
Abstract:
Context-aware computing enables mobile applications to discover and benefit from valuable context information, such as user location, time of day and current activity. However, determining the users' context throughout their daily activities is one of the main challenges of context-aware computing. With the increasing number of built-in mobile sensors and other input sources, existing context models do not effectively handle context information related to personal user context. The objective of this research was to develop an improved context-aware model to support the context awareness needs of mobile applications. An existing context-aware model was selected as the most complete model to use as a basis for the proposed model to support context awareness in mobile applications. The existing context-aware model was modified to address the shortcomings of existing models in dealing with context information related to personal user context. The proposed model supports four different context dimensions, namely Physical, User Activity, Health and User Preferences. A prototype, called CoPro, was developed based on the proposed model to demonstrate the effectiveness of the model. Several experiments were designed and conducted to determine if CoPro was effective, reliable and capable. CoPro was considered effective as it produced low-level context as well as inferred context. The reliability of the model was confirmed by evaluating CoPro using Quality of Context (QoC) metrics such as Accuracy, Freshness, Certainty and Completeness. CoPro was also found to be capable of dealing with the limitations of the mobile computing platform, such as limited processing power. The research determined that the proposed context-aware model can be used to successfully support context awareness in mobile applications. Design recommendations were proposed, and future work will involve converting the CoPro prototype into middleware in the form of an API to provide easier access to context awareness support in mobile applications.
APA, Harvard, Vancouver, ISO, and other styles
3

Shekaramiz, Mohammad. "Sparse Signal Recovery Based on Compressive Sensing and Exploration Using Multiple Mobile Sensors." DigitalCommons@USU, 2018. https://digitalcommons.usu.edu/etd/7384.

Full text
Abstract:
The work in this dissertation is focused on two areas within the general discipline of statistical signal processing. First, several new algorithms are developed and exhaustively tested for solving the inverse problem of compressive sensing (CS). CS is a recently developed sub-sampling technique for signal acquisition and reconstruction which is more efficient than traditional Nyquist sampling. It provides compressed data acquisition approaches that directly acquire just the important information of the signal of interest. Many natural signals are sparse or compressible in some domain, such as the pixel domain of images, time, frequency, and so forth. The notion of compressibility or sparsity here means that many coefficients of the signal of interest are either zero or of low amplitude in some domain, whereas a few coefficients are dominant. Therefore, we may not need to take many direct or indirect samples from the signal or phenomenon to capture its important information. As a simple example, one can think of a system of linear equations with N unknowns. Traditional methods suggest solving N linearly independent equations to solve for the unknowns. However, if many of the variables are known to be zero or of low amplitude, then intuitively speaking, there will be no need to have N equations. Unfortunately, in many real-world problems, the number of non-zero (effective) variables is unknown. In these cases, CS is capable of solving for the unknowns in an efficient way. In other words, it enables us to collect the important information of the sparse signal with a low number of measurements. Then, considering the fact that the signal is sparse, extracting the important information of the signal is the challenge that needs to be addressed. Since most of the existing recovery algorithms in this area need some prior knowledge or parameter tuning, applying them to real-world problems with good performance is difficult. In this dissertation, several new CS algorithms are proposed for the recovery of sparse signals. The proposed algorithms mostly do not require any prior knowledge of the signal or its structure. In fact, these algorithms can learn the underlying structure of the signal based on the collected measurements and successfully reconstruct the signal with high probability. The other merit of the proposed algorithms is that they are generally flexible in incorporating any prior knowledge on the noise, sparsity level, and so on. The second part of this study is devoted to the deployment of mobile sensors in circumstances where the number of sensors is inadequate to sample the entire region. Therefore, where to deploy the sensors, so as to both explore new regions and refine knowledge in already visited areas, is of high importance. Here, a new framework is proposed to decide on the trajectories of sensors as they collect the measurements. The proposed framework has two main stages. The first stage performs interpolation/extrapolation to estimate the phenomenon of interest at unseen locations, and the second stage decides on the informative trajectory based on the collected and estimated data. This framework can be applied to various problems such as tuning the constellation of sensor-bearing satellites, robotics, or any type of adaptive sensor placement/configuration problem. Depending on the problem, some modifications of the constraints in the framework may be needed. As an application of this work, the proposed framework is applied to a surrogate problem related to the constellation adjustment of sensor-bearing satellites.
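For readers unfamiliar with the linear-equations example above, the sketch below shows a generic sparse-recovery baseline, orthogonal matching pursuit, applied to an underdetermined system y = Ax; it is not one of the dissertation's Bayesian algorithms, and the matrix sizes, seed, and names are illustrative.

```python
# Minimal orthogonal matching pursuit (OMP) sketch for sparse recovery (illustrative only).
import numpy as np

def omp(A, y, sparsity):
    """Greedy recovery of a `sparsity`-sparse x from y = A x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Usage: recover a 3-sparse vector of length 100 from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, sparsity=3)
```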
APA, Harvard, Vancouver, ISO, and other styles
4

Djendi, Mohamed. "Méthodes améliorées de débruitage bi-capteurs dans un contexte automobile." Rennes 1, 2010. http://www.theses.fr/2010REN1S012.

Full text
Abstract:
In single-sensor noise cancellation systems, only one observation is available to separate speech from noise with an enhancement algorithm. This separation is usually performed under the assumption of stationary noise, an assumption that rarely holds in the car context and that results in many distortions of both the noise and the speech. This leads us to consider the use of a second microphone. This second sensor removes the need for the stationarity assumption and provides information about the spatial configuration of the signals. Two two-channel blind source separation (BSS) structures, known as the Forward and Backward structures, are often used in this context. The Forward BSS structure is limited by the difficulty of estimating an IIR-type post-filter at its output to correct the spectral distortions it introduces. In this thesis, we propose three new methods dedicated to the estimation of this post-filter: the first is based on a time-domain computation of the post-filter by an adaptive filtering algorithm; the second is based on a direct (open-loop) computation of the post-filter in the frequency domain; the third uses a frequency-domain adaptive filtering algorithm to estimate the post-filter, with a new robust form of the frequency-domain step size. We also propose the use of the Newton-type adaptive algorithm FNTF (Fast Newton Transversal Filter) in the Forward BSS structure to estimate the filters of the separation matrix. This new algorithm, called double FNTF (DFNTF), gave good results compared with other double algorithms previously proposed in the literature. The optimal approaches we propose for estimating the post-filter at the output of the Forward BSS structure bring significant spectral correction gains compared with classical source separation techniques and methods based on the Forward and Backward structures. A comparative study with state-of-the-art methods is carried out and presented; it confirms the superiority and good performance of our methods.
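As a rough illustration of the adaptive post-filter estimation mentioned above, here is a minimal normalized-LMS (NLMS) adaptive filter sketch for a generic two-channel setting; it does not reproduce the thesis's Forward-BSS structure or its frequency-domain variants, and the filter order, step size, and names are illustrative.

```python
# Minimal NLMS adaptive filter sketch (generic two-channel setting, illustrative names).
import numpy as np

def nlms(reference, desired, order=32, mu=0.5, eps=1e-6):
    """Adapt an FIR filter so that the filtered `reference` tracks `desired`."""
    w = np.zeros(order)
    error = np.zeros(len(desired))
    for n in range(order - 1, len(desired)):
        x = reference[n - order + 1:n + 1][::-1]   # current and past samples, newest first
        y = w @ x                                  # filter output
        error[n] = desired[n] - y                  # estimation error
        w += mu * error[n] * x / (x @ x + eps)     # normalized LMS update
    return w, error
```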
APA, Harvard, Vancouver, ISO, and other styles
5

Nion, Dimitri Fijalkow Inbar. "Méthodes PARAFAC généralisées pour l'extraction aveugle de sources Application aux systèmes DS-CDMA /." [S.l.] : [s.n.], 2008. http://biblioweb.u-cergy.fr/theses/07CERG0322.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Gabrea, Georghe Marcel. "Rehaussement de la parole en ambiance bruitée : méthodes monovoie et bivoie." Bordeaux 1, 1999. http://www.theses.fr/1999BOR12051.

Full text
Abstract:
Speech enhancement consists of improving one or more perceptual aspects of the speech signal. We assume that one or two microphones are used to capture the sound, and we seek to reduce the noise. We first present the state of the art of enhancing speech corrupted by additive noise when only a single captured signal is available. The new approaches we propose are then grouped into two classes. In the first, the speech signal is modelled as an AR process contaminated by additive white noise; three new methods are proposed. In the second, both the speech signal and the additive contaminating noise are modelled as AR processes; two methods are proposed in this framework. The remainder concerns speech denoising when two signals are captured separately in the same noisy environment. The first approach, proposed by Widrow and based on a single adaptive filter, is usable only when the speech component picked up by the second microphone is negligible. In the other cases, more complex schemes must be considered, such as the FFIS (FeedForward Implementation Scheme) and FBIS (FeedBack Implementation Scheme) schemes. Methods based on the FFIS scheme are grouped into two classes: in the first, methods based on the decorrelation of intermediate signals, and in the second, methods based on higher-order statistics. We propose a new algorithm using third-order statistics. [etc.]
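As a rough illustration of the first class of single-channel methods described above (speech modelled as an AR process in additive white noise), the sketch below applies a standard Kalman filter with a companion-form AR state model; the AR coefficients are assumed known, which is a simplification, and all names and noise variances are illustrative.

```python
# Minimal Kalman-filter sketch for an AR(p) signal in white noise (assumed known AR model).
import numpy as np

def kalman_ar_denoise(y, ar_coeffs, q, r):
    """Estimate the clean signal from noisy samples y, given AR coefficients,
    process-noise variance q and observation-noise variance r."""
    p = len(ar_coeffs)
    F = np.vstack([ar_coeffs, np.eye(p - 1, p)])   # companion-form state transition
    H = np.zeros(p)
    H[0] = 1.0                                     # we observe the newest state component
    x, P = np.zeros(p), np.eye(p)
    est = np.zeros(len(y))
    for n, obs in enumerate(y):
        # Predict.
        x = F @ x
        P = F @ P @ F.T
        P[0, 0] += q                               # process noise drives the newest sample
        # Update with the noisy observation.
        k = P @ H / (H @ P @ H + r)
        x = x + k * (obs - H @ x)
        P = P - np.outer(k, H @ P)
        est[n] = x[0]
    return est
```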
APA, Harvard, Vancouver, ISO, and other styles
7

Gabrea, Georghe Marcel. "Rehaussement de la parole en ambiance bruitée : méthodes monovoie et bivoie." Bordeaux 1, 1999. http://www.theses.fr/1999BOR10662.

Full text
Abstract:
Speech enhancement consists of improving one or more perceptual aspects of the speech signal. We assume that one or two microphones are used to capture the sound, and we seek to reduce the noise. We first present the state of the art of enhancing speech corrupted by additive noise when only a single captured signal is available. The new approaches we propose are then grouped into two classes. In the first, the speech signal is modelled as an AR process contaminated by additive white noise; three new methods are proposed. In the second, both the speech signal and the additive contaminating noise are modelled as AR processes; two methods are proposed in this framework. The remainder concerns speech denoising when two signals are captured separately in the same noisy environment. The first approach, proposed by Widrow and based on a single adaptive filter, is usable only when the speech component picked up by the second microphone is negligible. In the other cases, more complex schemes must be considered, such as the FFIS (FeedForward Implementation Scheme) and FBIS (FeedBack Implementation Scheme) schemes. Methods based on the FFIS scheme are grouped into two classes: in the first, methods based on the decorrelation of intermediate signals, and in the second, methods based on higher-order statistics. We propose a new algorithm using third-order statistics. [etc.]
APA, Harvard, Vancouver, ISO, and other styles
8

Ros, Laurent. "Réception multi-capteur pour un terminal radio-mobile dans un système d'accès multiple à répartition par codes : application au mode TDD de l'UMTS." Phd thesis, Grenoble INPG, 2001. http://tel.archives-ouvertes.fr/tel-00687474.

Full text
Abstract:
This thesis falls within the framework of cellular digital radio communication systems using code-division multiple access (CDMA), based on the spread-spectrum technique. The orders of magnitude are those of the downlink of the next third-generation mobile telephony system, UMTS, in its TDD mode. After a detailed description of the context, we derive the optimal "theoretical" linear multi-sensor, multi-user receive processing operating symbol by symbol at the mobile terminal for selective channels. This is done from a proposed frequency-domain representation, with emphasis on interpretation. Applying it to performance computation for various UMTS environment models quantifies the benefit of reception with 2 or 3 antenna elements in effectively combating the dual phenomena of interference and fading introduced by the channel, as well as the benefit of joint "multi-user" detection. The last, more pragmatic, part studies digital implementation structures in order to find good performance/complexity trade-offs. We first compare the "free" linear structure with an imposed structure that approximates the "theoretical" linear solution with finite duration, and identify desirable characteristics for new "intermediate" structures that we then propose and study. Finally, we illustrate the adaptive behaviour of these structures in a "vehicle" environment.
APA, Harvard, Vancouver, ISO, and other styles
9

Nion, Dimitri. "Méthodes PARAFAC généralisées pour l'extraction aveugle de sources : Application aux systèmes DS-CDMA." Cergy-Pontoise, 2007. http://www.theses.fr/2007CERG0322.

Full text
Abstract:
The goal of this PhD thesis is to develop generalized PARAFAC decompositions for blind source extraction in a multi-user DS-CDMA system. The spatial, temporal and code diversities make it possible to store the samples of the received signal in a third-order tensor. In order to blindly estimate the symbols of each user, we decompose this tensor into a sum of user contributions. Until now, this multilinear approach has been used in wireless communications for frequency-flat (instantaneous) channels, in which case the problem is solved by the PARAFAC decomposition. However, in multipath channels with inter-symbol interference, this decomposition has to be generalized. The main contribution of this thesis is to build tensor decompositions more general than PARAFAC to take this propagation scenario into account.
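As background for the PARAFAC model mentioned above, the following minimal sketch computes a rank-R CP/PARAFAC decomposition of a third-order tensor by alternating least squares; it is the textbook scheme, not the generalized decompositions developed in the thesis, and all names are illustrative.

```python
# Minimal PARAFAC/CP decomposition via alternating least squares (textbook scheme).
import numpy as np

def parafac_als(T, rank, n_iter=100, seed=0):
    """Factor a tensor T of shape (I, J, K) as T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings of the tensor.
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    # Column-wise Khatri-Rao product, with the second factor's index varying fastest.
    khatri_rao = lambda X, Y: np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])
    for _ in range(n_iter):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```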
APA, Harvard, Vancouver, ISO, and other styles
10

Nguyen, Van Quan. "Cartographie d'un environnement sonore par un robot mobile." Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0172/document.

Full text
Abstract:
L’audition est une modalité utile pour aider un robot à explorer et comprendre son environnement sonore. Dans cette thèse, nous nous intéressons à la tâche de localiser une ou plusieurs sources sonores mobiles et intermittentes à l’aide d’un robot mobile équipé d’une antenne de microphones en exploitant la mobilité du robot pour améliorer la localisation. Nous proposons d’abord un modèle bayésien pour localiser une seule source mobile intermittente. Ce modèle estime conjointement la position et l’activité de la source au cours du temps et s’applique à tout type d’antenne. Grâce au mouvement du robot, il peut estimer la distance de la source et résoudre l’ambiguïté avant-arrière qui apparaît dans le cas des antennes linéaires. Nous proposons deux implémentations de ce modèle, l’une à l’aide d’un filtre de Kalman étendu basé sur des mélanges de gaussiennes et l’autre à l’aide d’un filtre à particules, que nous comparons en termes de performance et de temps de calcul. Nous étendons ensuite notre modèle à plusieurs sources intermittentes et mobiles. En combinant notre filtre avec un joint probability data association filter (JPDAF), nous pouvons estimer conjointement les positions et activités de deux sources sonores dans un environnement réverbérant. Enfin nous faisons une contribution à la planification de mouvement pour réduire l’incertitude sur la localisation d’une source sonore. Nous définissons une fonction de coût avec l’alternative entre deux critères: l’entropie de Shannon ou l’écart-type sur l’estimation de la position. Ces deux critères sont intégrés dans le temps avec un facteur d’actualisation. Nous adaptons alors l’algorithme de Monte-Carlo tree search (MCTS) pour trouver, efficacement, le mouvement du robot qui minimise notre fonction de coût. Nos expériences montrent que notre méthode surpasse, sur le long terme, d’autres méthodes de planification pour l’audition robotique<br>Robot audition provides hearing capability for robots and helps them explore and understand their sound environment. In this thesis, we focus on the task of sound source localization for a single or multiple, intermittent, possibly moving sources using a mobile robot and exploiting robot motion to improve the source localization. We propose a Bayesian filtering framework to localize the position of a single, intermittent, possibly moving sound source. This framework jointly estimates the source location and its activity over time and is applicable to any micro- phone array geometry. Thanks to the movement of the robot, it can estimate the distance to the source and solve the front-back ambiguity which appears in the case of a linear microphone array. We propose two implementations of this framework based on an extended mixture Kalman filter (MKF) and on a particle filter, that we compare in terms of performance and computation time. We then extend our model to the context of multiple, intermittent, possibly moving sources. By implementing an extended MKF with joint probabilistic data association filter (JPDAF), we can jointly estimate the locations of two sources and their activities over time. Lastly, we make a contribution on long-term robot motion planning to optimally reduce the uncertainty in the source location. We define a cost function with two alternative criteria: the Shannon entropy or the standard deviation of the estimated belief. These entropies or standard deviations are integrated over time with a discount factor. 
We adapt the Monte Carlo tree search (MCTS) method for efficiently finding the optimal robot motion that will minimize the above cost function. Experiments show that the proposed method outperforms other robot motion planning methods for robot audition in the long run
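As a rough illustration of the Bayesian-filtering idea described above, this minimal sketch localizes a single static source from noisy bearing (DOA) measurements collected along a robot trajectory with a plain particle filter; it omits the thesis's mixture Kalman filter, source-activity model and multi-source data association, and the area size, noise level and names are illustrative.

```python
# Minimal bearings-only particle filter sketch (illustrative parameters, static source).
import numpy as np

def bearing_particle_filter(robot_poses, bearings, n_particles=2000, sigma=0.1, seed=0):
    """robot_poses: iterable of (x, y, theta); bearings: DOA measurements in robot frame."""
    rng = np.random.default_rng(seed)
    # Particles are candidate (x, y) source positions in a 10 m x 10 m area.
    particles = rng.uniform(-5.0, 5.0, size=(n_particles, 2))
    weights = np.full(n_particles, 1.0 / n_particles)
    for (rx, ry, rtheta), z in zip(robot_poses, bearings):
        # Predicted bearing of each particle as seen from the current robot pose.
        pred = np.arctan2(particles[:, 1] - ry, particles[:, 0] - rx) - rtheta
        innov = np.angle(np.exp(1j * (z - pred)))          # wrapped angular error
        weights *= np.exp(-0.5 * (innov / sigma) ** 2)
        weights /= weights.sum()
        # Systematic resampling keeps the particle set focused on likely positions.
        cum = np.cumsum(weights)
        cum[-1] = 1.0                                       # guard against rounding
        idx = np.searchsorted(cum, (rng.random() + np.arange(n_particles)) / n_particles)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)
    return particles.mean(axis=0)                           # posterior mean estimate
```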
APA, Harvard, Vancouver, ISO, and other styles
11

Nasser, Youssef. "Sensibilité des systèmes OFDM-CDMA aux erreurs de synchronisation en réception radio-mobile." Phd thesis, Grenoble INPG, 2006. http://tel.archives-ouvertes.fr/tel-00214147.

Full text
Abstract:
In the context of fourth-generation mobile radio communication systems, we are interested in systems combining code-spreading and multi-carrier transmission techniques. The thesis first studies the performance of the different combinations of OFDM and CDMA, grouped under the generic name "OFDM-CDMA", in a perfectly synchronized downlink under identical transmission conditions: system load, constellation, and coding rate. Once the comparison of the different systems has been established in a perfectly synchronized context, we address transmission imperfections: synchronization errors, radio-frequency (RF) impairments, channel estimation, and the Doppler effect. The synchronization errors studied in the manuscript are time-window synchronization, carrier-frequency synchronization, and sampling-clock synchronization. The RF impairments studied are phase noise and clock jitter. The sensitivity of OFDM-CDMA to these errors is evaluated in terms of the Signal-to-Interference-plus-Noise Ratio (SINR) at the detector output, taking into account the orthogonality between the spreading codes. Finally, we evaluate the performance of these systems in terms of the Bit Error Rate (BER) at the decoder output and relate the SINR at the detector output to the BER at the decoder output. As a conclusion of this work, tolerable limits on the transmission imperfections of these systems are derived, together with a comparison of their performance.
APA, Harvard, Vancouver, ISO, and other styles
12

Bacher, Raphael. "Méthodes pour l'analyse des champs profonds extragalactiques MUSE : démélange et fusion de données hyperspectrales ; détection de sources étendues par inférence à grande échelle." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAT067/document.

Full text
Abstract:
This work takes place in the context of the study of the hyperspectral deep fields produced by the European 3D spectrograph MUSE. These fields make it possible to explore the young, remote Universe and to study the physical and chemical properties of the first galactic and extragalactic structures. The first part of the thesis deals with the estimation of a spectral signature for each galaxy. As MUSE is a ground-based instrument, atmospheric turbulence strongly degrades its spatial resolving power, generating spectral mixing of multiple sources. To overcome this issue, data fusion approaches based on a linear mixing model and on complementary data from the Hubble Space Telescope are proposed, allowing the spectral separation of the sources. The second goal of this thesis is to detect the circumgalactic medium (CGM). The CGM, formed of clouds of gas surrounding some galaxies, is characterized by a spatially extended, faint spectral signature. To detect this kind of signal, a hypothesis-testing approach is proposed, based on a max-test strategy over a dictionary; the test statistics are learned from the data. This method is then extended to better take into account the spatial structure of the targets, improving the detection power while still ensuring global error control. All these developments are integrated into the software library of the MUSE consortium so that they can be used by the astrophysical community. Moreover, these works can easily be extended beyond MUSE data to other application fields that need faint extended-source detection and source separation methods.
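As a rough illustration of the max-test detection strategy mentioned above, the sketch below computes the largest matched-filter response over a dictionary of spectral templates and calibrates its threshold empirically from source-free spectra; this is only the generic idea, not the thesis's learned test statistics or its spatial extension, and all names and the significance level are illustrative.

```python
# Minimal max-test detection sketch over a spectral dictionary (illustrative only).
import numpy as np

def max_test_statistic(spectrum, dictionary):
    """dictionary: (n_templates, n_bands) with unit-norm rows; returns the max response."""
    return np.max(dictionary @ spectrum)

def calibrate_threshold(null_spectra, dictionary, alpha=0.01):
    """Empirical (1 - alpha) quantile of the statistic under source-free spectra."""
    stats = [max_test_statistic(s, dictionary) for s in null_spectra]
    return np.quantile(stats, 1 - alpha)
```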
APA, Harvard, Vancouver, ISO, and other styles
13

Chenouard, Nicolas. "Avancées en suivi probabiliste de particules pour l'imagerie biologique." Phd thesis, Télécom ParisTech, 2010. http://tel.archives-ouvertes.fr/tel-00560530.

Full text
Abstract:
Particle tracking is a method of choice for understanding intracellular mechanisms because it provides robust and precise means of characterizing the dynamics of mobile objects at the micro- and nanometre scale. This thesis deals with several aspects of the problem of tracking several hundred particles in noisy conditions. We present new techniques, based on robust mathematical methods, that allow us to track sub-resolution particles under the varied conditions encountered in cellular imaging. Particle detection: we first addressed the problem of detecting particles in fluorescence images containing a structured background. The key idea of the method is to use a source separation technique, the Morphological Component Analysis (MCA) algorithm, to separate the background from the particles by exploiting their different morphologies in the images. We made a number of modifications to MCA to adapt it to the characteristics of biological fluorescence images. For example, we proposed using the curvelet dictionary and a wavelet dictionary, with different sparsity priors, to separate the particle signal from the background. Once source separation has been performed, the background-free image can be analysed to robustly identify particle positions and to track them over time. Modelling the tracking problem: we proposed a global statistical framework that accounts for the many aspects of the particle tracking problem in noisy conditions. The probabilistic framework we developed contains numerous models dedicated to biological imaging, such as statistical models of particle motion in the cellular environment. We also defined the concept of target perceivability for biological particles. With this model, the existence of a particle is explicitly modelled and quantified, which allows us to solve the problems of track creation and termination within our probabilistic tracking framework itself. The proposed framework is highly flexible yet easy to adapt, since each model parameter has a simple and intuitive interpretation. Our probabilistic tracking model thus allowed us to model a large number of different biological systems exhaustively. Development of a tracking algorithm: we reformulated the Multiple Hypothesis Tracking (MHT) algorithm to include our probabilistic tracking model dedicated to biological particles, and we proposed a fast implementation that can track many particles in degraded imaging conditions. The Enhanced MHT (E-MHT) we proposed takes full advantage of the tracking model by incorporating knowledge of future images, which significantly increases the discriminating power of the statistical criteria. As a result, the E-MHT can automatically identify erroneous detections and detect particle appearance and disappearance events. We addressed the computational complexity of the tracking task through an algorithm design that exploits the tree topology of the solutions and the possibility of performing computations in parallel. A series of comparative tests between the E-MHT and existing tracking methods was carried out with synthetic 2D image sequences and with real 2D and 3D data sets. In every case the E-MHT showed superior performance compared with standard methods, with a remarkable ability to cope with highly degraded imaging conditions. We applied the proposed tracking methods in several biological projects, which led to original biological results. The flexibility and robustness of our method notably allowed us to track prions infecting cells, to characterize protein transport during Drosophila oocyte development, and to study messenger RNA trafficking in the Drosophila oocyte.
APA, Harvard, Vancouver, ISO, and other styles
14

Aloui, Nadia. "Localisation sonore par retournement temporel." Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT079/document.

Full text
Abstract:
The objective of this PhD is to propose an indoor location solution that is both simple and robust to the multipath that characterizes indoor environments. First, a location system that exploits the time domain of channel parameters is proposed. The system adopts the time of arrival of the path of maximum amplitude as a signature and estimates the target position through nonparametric kernel regression. The system was evaluated in experiments for two main configurations: a privacy-oriented configuration with code-division multiple-access operation and a centralized configuration with time-division multiple-access operation. A comparison between our privacy-oriented system and an existing acoustic location system based on code-division multiple-access operation and a lateration method confirms the results found in radiofrequency-based localization. However, our experiments are the first to demonstrate the detrimental effect that reverberation has on acoustic localization approaches. Second, a location system based on the time reversal technique, able to simultaneously localize sources with different location precisions, was tested through simulations for different numbers of sources and then validated by experiments. Finally, we were interested in reducing the audibility of the localization signal through psychoacoustics. A filter, set from the absolute threshold of hearing, is applied to the signal. Our results showed an improvement in precision compared to the location system without the psychoacoustic model, thanks to the use of a matched filter at the receiver. Moreover, we observed a significant reduction in the audibility of the filtered signal compared to that of the original signal.
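As a rough illustration of the nonparametric kernel regression step mentioned above, here is a minimal Nadaraya-Watson sketch that maps a measured signature to a weighted average of training positions; the signature dimension, bandwidth and data are illustrative and do not come from the thesis.

```python
# Minimal Nadaraya-Watson kernel-regression sketch for fingerprint-based positioning.
import numpy as np

def kernel_regression_locate(train_sig, train_pos, query_sig, bandwidth=1.0):
    """train_sig: (N, d) stored signatures; train_pos: (N, 2) known positions."""
    # Gaussian kernel weights between the query signature and every stored signature.
    d2 = np.sum((train_sig - query_sig) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return (w[:, None] * train_pos).sum(axis=0) / w.sum()

# Usage with made-up data: signatures are 4-dimensional, positions are (x, y) in metres.
rng = np.random.default_rng(0)
train_sig = rng.random((50, 4))
train_pos = rng.random((50, 2)) * 10.0
estimate = kernel_regression_locate(train_sig, train_pos, train_sig[0], bandwidth=0.2)
```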
APA, Harvard, Vancouver, ISO, and other styles
15

Krishnanand, K. N. "Glowworm Swarm Optimization : A Multimodal Function Optimization Paradigm With Applications To Multiple Signal Source Localization Tasks." Thesis, 2007. https://etd.iisc.ac.in/handle/2005/480.

Full text
Abstract:
Multimodal function optimization generally focuses on algorithms to find either a local optimum or the global optimum while avoiding local optima. However, there is another class of optimization problems which have the objective of finding multiple optima with either equal or unequal function values. The knowledge of multiple local and global optima has several advantages such as obtaining an insight into the function landscape and selecting an alternative solution when dynamic nature of constraints in the search space makes a previous optimum solution infeasible to implement. Applications include identification of multiple signal sources like sound, heat, light and leaks in pressurized systems, hazardous plumes/aerosols resulting from nuclear/ chemical spills, fire-origins in forest fires and hazardous chemical discharge in water bodies, oil spills, deep-sea hydrothermal vent plumes, etc. Signals such as sound, light, and other electromagnetic radiations propagate in the form of a wave. Therefore, the nominal source profile that spreads in the environment can be represented as a multimodal function and hence, the problem of localizing their respective origins can be modeled as optimization of multimodal functions. Multimodality in a search and optimization problem gives rise to several attractors and thereby presents a challenge to any optimization algorithm in terms of finding global optimum solutions. However, the problem is compounded when multiple (global and local) optima are sought. This thesis develops a novel glowworm swarm optimization (GSO) algorithm for simultaneous capture of multiple optima of multimodal functions. The algorithm shares some features with the ant-colony optimization (ACO) and particle swarm optimization (PSO) algorithms, but with several significant differences. The agents in the GSO algorithm are thought of as glowworms that carry a luminescence quantity called luciferin along with them. The glowworms encode the function-profile values at their current locations into a luciferin value and broadcast the same to other agents in their neighborhood. The glowworm depends on a variable local decision domain, which is bounded above by a circular sensor range, to identify its neighbors and compute its movements. Each glowworm selects a neighbor that has a luciferin value more than its own, using a probabilistic mechanism, and moves toward it. That is, they are attracted to neighbors that glow brighter. These movements that are based only on local information enable the swarm of glowworms to partition into disjoint subgroups, exhibit simultaneous taxis-behavior towards, and rendezvous at multiple optima (not necessarily equal) of a given multimodal function. Natural glowworms primarily use the bioluminescent light to signal other individuals of the same species for reproduction and to attract prey. The general idea in the GSO algorithm is similar in these aspects in the sense that glowworm agents are assumed to be attracted to move toward other glowworm agents that have brighter luminescence (higher luciferin value). We present the development of the GSO algorithm in terms of its working principle, various algorithmic phases, and evolution of the algorithm from the first version of the algorithm to its present form. 
Two major phases, splitting of the agent swarm into disjoint subgroups and local convergence of agents in each subgroup to peak locations, are identified at the group level of the algorithm and theoretical performance results related to the latter phase are obtained for a simplified GSO model. Performance of the GSO algorithm against a large class of benchmark multimodal functions is demonstrated through simulation experiments. We categorize the various constants of the algorithm into algorithmic constants and parameters. We show in simulations that fixed values of the algorithmic constants work well for a large class of problems and only two parameters have some influence on algorithmic performance. We also study the performance of the algorithm in the presence of noise. Simulations show that the algorithm exhibits good performance in the presence of fairly high noise levels. We observe graceful degradation only with significant increase in levels of measurement noise. A comparison with the gradient based algorithm reveals the superiority of the GSO algorithm in coping with uncertainty. We conduct embodied robot simulations, by using a multi-robot-simulator called Player/Stage that provides realistic sensor and actuator models, in order to assess the GSO algorithm's suitability for multiple source localization tasks. Next, we extend this work to collective robotics experiments. For this purpose, we use a set of four wheeled robots that are endowed with the capabilities required to implement the various behavioral primitives of the GSO algorithm. We present an experiment where two robots use the GSO algorithm to localize a light source. We discuss an application of GSO to ubiquitous computing based environments. In particular, we propose a hazard-sensing environment using a heterogeneous swarm that consists of stationary agents and mobile agents. The agents deployed in the environment implement a modification of the GSO algorithm. In a graph of minimum number of mobile agents required for 100% source-capture as a function of the number of stationary agents, we show that deployment of the stationary agents in a grid configuration leads to multiple phase-transitions in the heterogeneous swarm behavior. Finally, we use the GSO algorithm to address the problem of pursuit of multiple mobile signal sources. For the case where the positions of the pursuers and the moving source are collinear, we present a theoretical result that provides an upper bound on the relative speed of the mobile source below which the agents succeed in pursuing the source. We use several simulation scenarios to demonstrate the efficacy of the algorithm in pursuing mobile signal sources. In the case where the positions of the pursuers and the moving source are non-collinear, we use numerical experiments to determine an upper bound on the relative speed of the mobile source below which the pursuers succeed in pursuing the source.
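As a rough illustration of the glowworm swarm optimization loop described above, the sketch below implements one iteration with the luciferin update, probabilistic selection of a brighter neighbor within a fixed decision range, and a small step toward it; the adaptive decision-range update and other refinements of the full algorithm are omitted, and all constants are illustrative.

```python
# Minimal one-step GSO sketch (fixed decision range; illustrative constants).
import numpy as np

def gso_step(positions, luciferin, objective, rho=0.4, gamma=0.6, step=0.03, radius=1.0, rng=None):
    """positions: (n, d) glowworm locations; luciferin: (n,) current luciferin values."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(positions)
    # Luciferin update: decay plus reinforcement by the local function value.
    luciferin = (1 - rho) * luciferin + gamma * np.array([objective(p) for p in positions])
    new_positions = positions.copy()
    for i in range(n):
        dist = np.linalg.norm(positions - positions[i], axis=1)
        neighbors = np.where((dist < radius) & (luciferin > luciferin[i]))[0]
        if neighbors.size == 0:
            continue
        # Probabilistic selection of a brighter neighbor, biased toward larger luciferin gaps.
        prob = luciferin[neighbors] - luciferin[i]
        j = rng.choice(neighbors, p=prob / prob.sum())
        direction = positions[j] - positions[i]
        new_positions[i] = positions[i] + step * direction / np.linalg.norm(direction)
    return new_positions, luciferin
```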
APA, Harvard, Vancouver, ISO, and other styles
16

Krishnanand, K. N. "Glowworm Swarm Optimization : A Multimodal Function Optimization Paradigm With Applications To Multiple Signal Source Localization Tasks." Thesis, 2007. http://hdl.handle.net/2005/480.

Full text
Abstract:
Multimodal function optimization generally focuses on algorithms to find either a local optimum or the global optimum while avoiding local optima. However, there is another class of optimization problems which have the objective of finding multiple optima with either equal or unequal function values. The knowledge of multiple local and global optima has several advantages such as obtaining an insight into the function landscape and selecting an alternative solution when dynamic nature of constraints in the search space makes a previous optimum solution infeasible to implement. Applications include identification of multiple signal sources like sound, heat, light and leaks in pressurized systems, hazardous plumes/aerosols resulting from nuclear/ chemical spills, fire-origins in forest fires and hazardous chemical discharge in water bodies, oil spills, deep-sea hydrothermal vent plumes, etc. Signals such as sound, light, and other electromagnetic radiations propagate in the form of a wave. Therefore, the nominal source profile that spreads in the environment can be represented as a multimodal function and hence, the problem of localizing their respective origins can be modeled as optimization of multimodal functions. Multimodality in a search and optimization problem gives rise to several attractors and thereby presents a challenge to any optimization algorithm in terms of finding global optimum solutions. However, the problem is compounded when multiple (global and local) optima are sought. This thesis develops a novel glowworm swarm optimization (GSO) algorithm for simultaneous capture of multiple optima of multimodal functions. The algorithm shares some features with the ant-colony optimization (ACO) and particle swarm optimization (PSO) algorithms, but with several significant differences. The agents in the GSO algorithm are thought of as glowworms that carry a luminescence quantity called luciferin along with them. The glowworms encode the function-profile values at their current locations into a luciferin value and broadcast the same to other agents in their neighborhood. The glowworm depends on a variable local decision domain, which is bounded above by a circular sensor range, to identify its neighbors and compute its movements. Each glowworm selects a neighbor that has a luciferin value more than its own, using a probabilistic mechanism, and moves toward it. That is, they are attracted to neighbors that glow brighter. These movements that are based only on local information enable the swarm of glowworms to partition into disjoint subgroups, exhibit simultaneous taxis-behavior towards, and rendezvous at multiple optima (not necessarily equal) of a given multimodal function. Natural glowworms primarily use the bioluminescent light to signal other individuals of the same species for reproduction and to attract prey. The general idea in the GSO algorithm is similar in these aspects in the sense that glowworm agents are assumed to be attracted to move toward other glowworm agents that have brighter luminescence (higher luciferin value). We present the development of the GSO algorithm in terms of its working principle, various algorithmic phases, and evolution of the algorithm from the first version of the algorithm to its present form. 
Two major phases, splitting of the agent swarm into disjoint subgroups and local convergence of agents in each subgroup to peak locations, are identified at the group level of the algorithm and theoretical performance results related to the latter phase are obtained for a simplified GSO model. Performance of the GSO algorithm against a large class of benchmark multimodal functions is demonstrated through simulation experiments. We categorize the various constants of the algorithm into algorithmic constants and parameters. We show in simulations that fixed values of the algorithmic constants work well for a large class of problems and only two parameters have some influence on algorithmic performance. We also study the performance of the algorithm in the presence of noise. Simulations show that the algorithm exhibits good performance in the presence of fairly high noise levels. We observe graceful degradation only with significant increase in levels of measurement noise. A comparison with the gradient based algorithm reveals the superiority of the GSO algorithm in coping with uncertainty. We conduct embodied robot simulations, by using a multi-robot-simulator called Player/Stage that provides realistic sensor and actuator models, in order to assess the GSO algorithm's suitability for multiple source localization tasks. Next, we extend this work to collective robotics experiments. For this purpose, we use a set of four wheeled robots that are endowed with the capabilities required to implement the various behavioral primitives of the GSO algorithm. We present an experiment where two robots use the GSO algorithm to localize a light source. We discuss an application of GSO to ubiquitous computing based environments. In particular, we propose a hazard-sensing environment using a heterogeneous swarm that consists of stationary agents and mobile agents. The agents deployed in the environment implement a modification of the GSO algorithm. In a graph of minimum number of mobile agents required for 100% source-capture as a function of the number of stationary agents, we show that deployment of the stationary agents in a grid configuration leads to multiple phase-transitions in the heterogeneous swarm behavior. Finally, we use the GSO algorithm to address the problem of pursuit of multiple mobile signal sources. For the case where the positions of the pursuers and the moving source are collinear, we present a theoretical result that provides an upper bound on the relative speed of the mobile source below which the agents succeed in pursuing the source. We use several simulation scenarios to demonstrate the efficacy of the algorithm in pursuing mobile signal sources. In the case where the positions of the pursuers and the moving source are non-collinear, we use numerical experiments to determine an upper bound on the relative speed of the mobile source below which the pursuers succeed in pursuing the source.
APA, Harvard, Vancouver, ISO, and other styles
17

Peng, Chung-hao, and 彭俊豪. "Location-Specified Signal Extraction from Multiple Sources." Thesis, 2005. http://ndltd.ncl.edu.tw/handle/47885333071743241295.

Full text
Abstract:
Master's thesis, Tatung University, Department of Computer Science and Engineering.
Recently, microphone array techniques have been used in teleconferencing and distance education. Without modifying the hardware arrangement, the sound signal from a selected position can be enhanced by a microphone array. This thesis presents a new method of extracting the sound signal, based on the back-projection method commonly used in computerized tomography (CT). The typical method for enhancing a signal coming from the focus location in a multiple-source environment is the delay-and-sum technique, which is based on delay and shift operations. In CT, the back-projection method is usually used to reconstruct the image signal. The signal received by a microphone can be regarded as a kind of non-straight projection; therefore, the back-projection method can be used to reconstruct the sound signal. The system is simulated in MATLAB for a non-reverberant conference room, with the microphone array arranged at equal spacing along one wall. First, a simple sine wave is used to verify the extraction efficiency; then the positions of the two sources and the microphone array size are varied. The results show that the back-projection method can extract the sound coming from the specified source.
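For context on the baseline mentioned above, here is a minimal delay-and-sum sketch that aligns and averages microphone signals for a chosen focus point; the thesis proposes a back-projection alternative, so this only illustrates the technique it is compared against, and the array geometry, sample rate and names are illustrative.

```python
# Minimal delay-and-sum beamforming sketch (illustrative geometry and sample rate).
import numpy as np

def delay_and_sum(mic_signals, mic_positions, focus_point, fs=16000, c=343.0):
    """mic_signals: (n_mics, n_samples); mic_positions, focus_point: coordinates in metres."""
    dists = np.linalg.norm(mic_positions - focus_point, axis=1)
    # Delay each channel so that wavefronts from the focus point align, then average.
    delays = np.round((dists - dists.min()) / c * fs).astype(int)
    n_samples = mic_signals.shape[1]
    out = np.zeros(n_samples)
    for sig, d in zip(mic_signals, delays):
        out[:n_samples - d] += sig[d:]          # advance the later-arriving channels
    return out / len(mic_signals)
```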
APA, Harvard, Vancouver, ISO, and other styles
18

Lee, Chia-Fu, and 李佳福. "Hand Gesture Recognition with Multiple Wireless Signal Sources." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/75324114152290565375.

Full text
Abstract:
Master's thesis, National Taiwan University, Graduate Institute of Electrical Engineering.
This thesis presents a multi-source gesture recognition system. We leverage the characteristics of wireless signals, including whole-home coverage and a high penetration rate, to implement a gesture recognition system that is free from the line-of-sight and lighting limitations of vision-based systems and, unlike systems that are not device-free, requires no physical sensors on the user. However, according to our experimental measurement studies, the accuracy of a single-transmitter system depends on the angle (location) at which the gesture is performed. Fortunately, real-world environments are full of wireless signals transmitted by different devices arriving from various angles; for example, the signal sent by one source is likely to create a stronger Doppler effect than that sent by another. This thesis presents three approaches that exploit this diversity of multiple signal sources to tackle the location issue, thereby realizing whole-home gesture recognition in practice.
APA, Harvard, Vancouver, ISO, and other styles
19

Chan, Chen-Yu, and 詹鎮宇. "Simultaneous Localization of Mobile Robot and Unknown Number of Multiple Sound Sources." Thesis, 2009. http://ndltd.ncl.edu.tw/handle/11290653893234555455.

Full text
Abstract:
Master's thesis, National Chiao Tung University, Department of Electrical and Control Engineering, 2009. This work proposes a method that simultaneously localizes a mobile robot and an unknown number of multiple sound sources in the environment. The rationale for using sound sources as landmarks in a SLAM algorithm is presented. Several direction-of-arrival (DOA) estimation methods are described, and a combined method is used for real-time operation. Given the DOA information, a bearings-only SLAM (simultaneous localization and mapping) algorithm is introduced in detail, building on the theoretical structure of the Bayes filter. The estimated DOAs serve as the bearing measurements in the algorithm. Because the source signals are not persistent and the signal content carries no identification, the data association is unknown; this is resolved using a particle filter. Modifications of the algorithm are made for real-time operation. Experimental results are presented to verify the effectiveness of the proposed approaches.
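A minimal sketch of the bearing (DOA) measurement model underlying bearings-only SLAM, together with weighting source-position hypotheses by their bearing error, is shown below; the noise level and the toy hypotheses are assumptions, not the thesis's implementation.

```python
import numpy as np

# Bearing measurement model and hypothesis weighting for bearings-only SLAM;
# the noise sigma and hypothesized source positions are illustrative.

def predicted_bearing(robot_pose, landmark_xy):
    """Bearing of a landmark relative to the robot heading, wrapped to (-pi, pi]."""
    x, y, heading = robot_pose
    lx, ly = landmark_xy
    angle = np.arctan2(ly - y, lx - x) - heading
    return np.arctan2(np.sin(angle), np.cos(angle))   # wrap

def bearing_likelihood(measured, predicted, sigma=np.deg2rad(5.0)):
    """Gaussian likelihood on the wrapped bearing error."""
    err = np.arctan2(np.sin(measured - predicted), np.cos(measured - predicted))
    return np.exp(-0.5 * (err / sigma) ** 2)

# Toy example: weight three hypothesized source positions against one DOA.
robot = (0.0, 0.0, 0.0)                       # x, y, heading
measured_doa = np.deg2rad(30.0)
hypotheses = [(2.0, 1.15), (2.0, -1.0), (1.0, 3.0)]
weights = np.array([bearing_likelihood(measured_doa,
                                       predicted_bearing(robot, h))
                    for h in hypotheses])
weights /= weights.sum()
print(np.round(weights, 3))                   # the first hypothesis dominates
```

In a particle-filter formulation, the same likelihood weights particles over both the data association and the source positions when the association is unknown.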
APA, Harvard, Vancouver, ISO, and other styles
20

OuYang, Yien, and 歐陽宜恩. "Using Multiple Data Sources to Investigate the Effect of Mobile Application Store Menu Design on Usability." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/00829095449505764671.

Full text
Abstract:
Master's thesis, National Taichung University of Education, Master's Program in Digital Content and Technology, 2013. Nowadays, smartphones, tablet computers, and other mobile devices pervade people's lives, and mobile application stores have emerged as the focus of mobile commerce (M-Commerce) development. For the sake of integration, mobile application stores can also be browsed, and applications downloaded and installed, from a PC. The merits and drawbacks of interface design are key factors in continuing to attract consumers' attention. This study uses the Apple App Store and Google Play (the stores most used in Taiwan) as examples and combines multiple measurements (physiological: heart rate variability analysis; physical: eye tracking and mouse trajectory hits; psychological: the QUIS and NASA-TLX questionnaires) to assess menu interface design. The dimensions of assessment include performance, preference, and user cost. The results show that (1) the overall usability of the vertical menu style (Google Play) outperformed the dynamic menu style (App Store); (2) task type moderates the relationship between menu style and usability; and (3) all variables are significantly correlated with one another except the emotional variables. This study can serve as a reference for menu interface design, with the aim not only of providing more information but also of reducing user load and improving acceptance.
APA, Harvard, Vancouver, ISO, and other styles
21

Kim, Chang Young. "Robotic Searching for Stationary, Unknown and Transient Radio Sources." Thesis, 2012. http://hdl.handle.net/1969.1/ETD-TAMU-2012-05-11014.

Full text
Abstract:
Searching for objects in physical space is one of the most important tasks for humans, and mobile sensor networks can be great tools for the task. Transient targets refer to a class of objects that are not identifiable unless momentary sensing and signaling conditions are satisfied. The transient property is often introduced by target attributes, privacy concerns, environment constraints, and sensing limitations. Transient target localization problems are challenging because the transient property is often coupled with factors such as sensing range limits, various coverage functions, constrained mobility, signal correspondence, a limited number of searchers, and a vast searching region. To tackle these challenges, we gradually increase the complexity of the transient target localization problem, from Single Robot Single Target (SRST) to Multiple Robots Single Target (MRST), Single Robot Multiple Targets (SRMT), and Multiple Robots Multiple Targets (MRMT). We propose the expected searching time (EST) as a primary metric to assess the searching ability of a single robot, and the spatiotemporal probability occupancy grid (SPOG) method, which captures the transient characteristics of multiple targets and tracks the spatiotemporal posterior probability distribution of the target transmissions. In addition, we introduce a team of multiple robots and develop a sensor fusion model using the signal strength ratio from paired robots, in both centralized and decentralized forms. The algorithms have been implemented and validated in hardware-driven simulation and in physical experiments.
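The occupancy-grid idea behind SPOG can be illustrated with a minimal Bayesian grid update for detection and non-detection events during a listening interval; the grid size, sensing radius, and detection probabilities below are assumptions, not the SPOG method's actual parameters.

```python
import numpy as np

# Hedged sketch of a probability-occupancy-grid update for a transient radio
# source; all parameters are illustrative assumptions.
grid = np.full((40, 40), 1.0 / 1600)        # prior: source equally likely anywhere
p_detect, p_false = 0.8, 0.05               # P(hear | in range), P(hear | not in range)

def update(grid, robot_cell, sensing_radius, heard):
    ys, xs = np.indices(grid.shape)
    in_range = (ys - robot_cell[0])**2 + (xs - robot_cell[1])**2 <= sensing_radius**2
    if heard:
        likelihood = np.where(in_range, p_detect, p_false)
    else:
        likelihood = np.where(in_range, 1 - p_detect, 1 - p_false)
    posterior = grid * likelihood
    return posterior / posterior.sum()       # renormalize over the grid

# Toy run: the robot hears nothing at (10, 10), then hears a transmission at
# (25, 25); the posterior mass shifts toward the second neighborhood.
grid = update(grid, (10, 10), sensing_radius=8, heard=False)
grid = update(grid, (25, 25), sensing_radius=8, heard=True)
print(np.unravel_index(grid.argmax(), grid.shape))
```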
APA, Harvard, Vancouver, ISO, and other styles
22

Kurniawan, Adit. "Predictive power control in CDMA systems." 2003. http://arrow.unisa.edu.au:8081/1959.8/24961.

Full text
Abstract:
This study is aimed at solving several important problems relating to power control in CDMA systems, focusing on the mobile-to-base-station (reverse) link. We propose a new SIR estimator for CDMA systems that uses an auxiliary spreading sequence method. The proposed SIR estimator is employed at the base station to estimate the SIR, which serves as the control parameter in the power control algorithm. The effects of system parameters (step size, power-update rate, feedback delay, SIR measurement error, and command error) on the bit error rate (BER) performance of power control are investigated. Feedback delay is found to be the most critical parameter, causing a serious problem in the loop. To solve this problem, we propose using a channel prediction technique at the base station. To further improve the performance of power control, we then propose a diversity reception technique using antenna arrays at the base station. We show that this combination solves the problems associated with power control in a real system affected by multiple-access interference under fading conditions. Thesis (PhD), University of South Australia, 2003.
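The closed loop described above can be illustrated with a minimal sketch of fixed-step, SIR-based power control with a feedback delay; the step size, SIR target, delay, and fading model below are illustrative assumptions rather than the thesis's simulation settings.

```python
import numpy as np

# Fixed-step, SIR-based closed-loop power control on the reverse link: the
# base station compares estimated SIR with a target and feeds back a one-bit
# up/down command that the mobile applies after a delay. All parameters are
# illustrative assumptions.
rng = np.random.default_rng(2)
n_slots, step_db, target_db, delay = 2000, 1.0, 7.0, 2   # delay in control slots

tx_power_db = np.zeros(n_slots)       # mobile transmit power
commands = np.zeros(n_slots, dtype=int)
fading_db = 10 * np.log10(rng.rayleigh(scale=1.0, size=n_slots) ** 2)  # toy fading

sir_db = np.zeros(n_slots)
for k in range(n_slots - 1):
    # Received SIR this slot (interference lumped into a constant floor).
    sir_db[k] = tx_power_db[k] + fading_db[k] - (-5.0)
    # Base station issues +1 (up) if measured SIR is below target, else -1.
    commands[k] = 1 if sir_db[k] < target_db else -1
    # The mobile applies the command `delay` slots later.
    applied = commands[k - delay] if k >= delay else 0
    tx_power_db[k + 1] = tx_power_db[k] + step_db * applied

print(f"std of SIR error: {np.std(sir_db[500:-1] - target_db):.2f} dB")
```

Increasing `delay` in this toy loop widens the spread of the SIR error, which mirrors the thesis's finding that feedback delay is the most critical parameter and motivates channel prediction at the base station.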
APA, Harvard, Vancouver, ISO, and other styles
23

"Novel self-decorrelation and fractional self-decorrelation pre-processing techniques to enhance the output SINR of single-user-type DS-CDMA detectors in blind space-time RAKE receivers." 2002. http://library.cuhk.edu.hk/record=b5891142.

Full text
Abstract:
Cheung Shun Keung. Thesis (M.Phil.), Chinese University of Hong Kong, 2002. Includes bibliographical references (leaves 80-83). Abstracts in English and Chinese. Table of contents:
Chapter 1: Introduction (p. 1)
  1.1 The Problem (p. 1)
  1.2 Overview of CDMA (p. 2)
  1.3 Problems Encountered in Direct-Sequence (DS) CDMA (p. 3)
    1.3.1 Multipath Fading Scenario in DS-CDMA Cellular Mobile Communication (p. 3)
    1.3.2 Near-Far Problem (p. 4)
  1.4 Delimitation and Significance of the Thesis (p. 5)
  1.5 Summary (p. 7)
  1.6 Scope of the Thesis (p. 8)
Chapter 2: Literature Review of Blind Space-Time Processing in a Wireless CDMA Receiver (p. 9)
  2.1 General Background Information (p. 9)
    2.1.1 Time Model of K-User Chip-Synchronous CDMA (p. 9)
    2.1.2 Dispersive Channel Modelling (p. 10)
    2.1.3 Combination of the K-User CDMA Time Model with the Slow Frequency-Selective Fading Channel Model to Form a Complete Chip-Synchronous CDMA Time Model (p. 13)
    2.1.4 Spatial Channel Model with Antenna Array [9] (p. 15)
    2.1.5 Joint Space-Time Channel Model in Chip-Synchronous CDMA (p. 19)
    2.1.6 Challenges to Blind Space-Time Processing in a Base-Station CDMA Receiver (p. 23)
  2.2 Literature Review of Single-User-Type Detectors Used in Blind Space-Time DS-CDMA RAKE Receivers (p. 25)
    2.2.1 A Common Problem among the Signal Processing Schemes (p. 28)
Chapter 3: Novel "Self-Decorrelation" Technique (p. 29)
  3.1 Problem with "Blind" Space-Time RAKE Processing Using Single-User-Type Detectors (p. 29)
  3.2 Review of Zoltowski & Ramos [10,11,12] Maximum-SINR Single-User-Type CDMA Blind RAKE Receiver Schemes (p. 31)
    3.2.1 Space-Time Data Model (p. 31)
    3.2.2 The Blind Element-Space-Only (ESO) RAKE Receiver with Self-Decorrelation Pre-processing Applied (p. 32)
  3.3 Physical Meaning of Self-Decorrelation Pre-processing (p. 35)
  3.4 Simulation Results (p. 38)
Chapter 4: "Fractional Self-Decorrelation" Pre-processing (p. 45)
  4.1 The Blind Maximum-SINR RAKE Receivers in Chen et al. [1] and Wong et al. [2] (p. 45)
  4.2 Fractional Self-Decorrelation Pre-processing (p. 47)
  4.3 The Blind Element-Space-Only (ESO) RAKE Receiver with Fractional Self-Decorrelation Pre-processing Applied (p. 50)
  4.4 Physical Meaning of Fractional Self-Decorrelation Pre-processing (p. 54)
  4.5 Simulation Results (p. 55)
Chapter 5: Complexity Analysis and Schematics of Proposed Techniques (p. 64)
  5.1 Computational Complexity (p. 64)
    5.1.1 Self-Decorrelation Applied in the Element-Space-Only (ESO) RAKE Receiver (p. 64)
    5.1.2 Fractional Self-Decorrelation Applied in the Element-Space-Only (ESO) RAKE Receiver (p. 67)
  5.2 Schematics of the Two Proposed Techniques (p. 69)
Chapter 6: Summary and Conclusion (p. 74)
  6.1 Summary of the Thesis (p. 74)
    6.1.1 The Self-Decorrelation Pre-processing Technique (p. 75)
    6.1.2 The Fractional Self-Decorrelation Pre-processing Technique (p. 76)
  6.2 Conclusion (p. 78)
  6.3 Future Work (p. 78)
Bibliography (p. 80)
Appendix A: Generalized Eigenvalue Problem (p. 84)
  A.1 Standard Eigenvalue Problem (p. 84)
  A.2 Generalized Eigenvalue Problem (p. 84)
APA, Harvard, Vancouver, ISO, and other styles
24

Anderson, Kevin. "Practical multiuser detection algorithms for CDMA." Thesis, 1999. https://vuir.vu.edu.au/18140/.

Full text
Abstract:
The aim of this thesis is to investigate practical multiuser demodulation algorithms for mobile communications systems based on code division multiple access (CDMA) technologies. These include the adaptive receiver and the interference canceller. The overall complexity of implementing them is examined, and the effects of the proposed algorithms on reducing multiple-access interference (MAI) and their resistance to the near-far effect (NFE) are explored.
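As a concrete illustration of one interference-cancellation scheme of the kind the thesis investigates, a single-stage parallel interference canceller for synchronous BPSK DS-CDMA is sketched below; the spreading codes, user powers, and the assumption of known amplitudes are illustrative, not the thesis's system model.

```python
import numpy as np

# Hedged sketch of single-stage parallel interference cancellation (PIC) for
# synchronous BPSK DS-CDMA; codes, powers, and noise level are assumptions.
rng = np.random.default_rng(3)
n_users, spread_len = 4, 16
codes = rng.choice([-1.0, 1.0], size=(n_users, spread_len)) / np.sqrt(spread_len)
amps = np.array([1.0, 2.0, 2.0, 3.0])      # unequal powers -> near-far scenario
bits = rng.choice([-1.0, 1.0], size=n_users)

# Received chip vector: superposition of all users plus noise.
received = (amps * bits) @ codes + 0.1 * rng.standard_normal(spread_len)

# Stage 0: conventional matched-filter (despreading) decisions.
tentative = np.sign(codes @ received)

# Stage 1: for each user, rebuild and subtract every other user's signal
# (amplitudes assumed known here), then despread and decide again.
final = np.empty(n_users)
for k in range(n_users):
    others = [j for j in range(n_users) if j != k]
    interference = (amps[others] * tentative[others]) @ codes[others]
    final[k] = np.sign(codes[k] @ (received - interference))

print("true bits     :", bits)
print("matched filter:", tentative)
print("after PIC     :", final)
```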
APA, Harvard, Vancouver, ISO, and other styles