
Dissertations / Theses on the topic 'Telecommunication – Data processing'


Consult the top 50 dissertations / theses for your research on the topic 'Telecommunication – Data processing.'

Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Goldschneider, Jill R. "Lossy compression of scientific data via wavelets and vector quantization /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/5881.

2

Subramaniam, Suresh. "All-optical networks with sparse wavelength conversion /." Thesis, Connect to this title online; UW restricted, 1997. http://hdl.handle.net/1773/6032.

3

Johnson, Mary Holland. "Low bit rate compression of Marine imagery using fast ECVQ /." Thesis, Connect to this title online; UW restricted, 1999. http://hdl.handle.net/1773/5998.

4

Tsoi, Yiu-lun Kelvin, and 蔡耀倫. "Real-time scheduling techniques with QoS support and their applications in packet video transmission." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221786.

5

Chu, Chung Cheung. "Tree encoding of speech signals at low bit rates." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=65459.

6

Li, Jingyun. "Lapped transforms based on DLS and DLC basis functions and applications." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ30101.pdf.

7

Goodenow, Daniel P. "A reference guide to JPEG compression /." Online version of thesis, 1993. http://hdl.handle.net/1850/11714.

8

Smith, Craig M. "Efficient software implementation of the JBIG compression standard /." Online version of thesis, 1993. http://hdl.handle.net/1850/11713.

9

Chan, Ho Yin. "Graph-theoretic approach to the non-binary index assignment problem /." View abstract or full-text, 2008. http://library.ust.hk/cgi/db/thesis.pl?ECED%202008%20CHAN.

10

Tang, Sai-kin Owen. "Implementation of Low bit-rate image codec /." [Hong Kong] : University of Hong Kong, 1994. http://sunzi.lib.hku.hk/hkuto/record.jsp?B14804402.

11

Dong, Liqin. "Compressed voice in integrated services frame relay networks." Carleton University Dissertation, Electrical Engineering. Ottawa, 1992.

12

Danyali, Habibollah. "Highly scalable wavelet image and video coding for transmission over heterogeneous networks." Access electronically, 2004. http://www.library.uow.edu.au/adt-NWU/public/adt-NWU20041027.115306/index.html.

13

Klausutis, Timothy J. "Adaptive lapped transforms with applications to image coding." Diss., Georgia Institute of Technology, 1997. http://hdl.handle.net/1853/15925.

14

De Beste, Eugene. "Enabling the processing of bioinformatics workflows where data is located through the use of cloud and container technologies." University of the Western Cape, 2019. http://hdl.handle.net/11394/6767.

Abstract:
Magister Scientiae - MSc

The growing size of raw data and the lack of internet communication technology to keep up with that growth are introducing unique challenges to academic researchers. This is especially true for those residing in rural areas or countries with sub-par telecommunication infrastructure. In this project I investigate the usefulness of cloud computing technology, data analysis workflow languages and portable computation for institutions that generate data. I introduce the concept of a software solution that could be used to simplify the way that researchers execute their analysis on data sets at remote sources, rather than having to move the data. The scope of this project involved conceptualising and designing a software system to simplify the use of a cloud environment, as well as implementing a working prototype of said software for the OpenStack cloud computing platform. I conclude that it is possible to improve the performance of research pipelines by removing the need for researchers to have operating system or cloud computing knowledge, and that utilising technologies such as this can ease the burden of moving data.
15

Mohror, Kathryn Marie. "Infrastructure For Performance Tuning MPI Applications." PDXScholar, 2004. https://pdxscholar.library.pdx.edu/open_access_etds/2660.

Abstract:
Clusters of workstations are becoming increasingly popular as a low-budget alternative for supercomputing power. In these systems, message-passing is often used to allow the separate nodes to act as a single computing machine. Programmers of such systems face a daunting challenge in understanding the performance bottlenecks of their applications. This is largely due to the vast amount of performance data that is collected, and the time and expertise necessary to use traditional parallel performance tools to analyze that data. The goal of this project is to increase the level of performance tool support for message-passing application programmers on clusters of workstations. We added support for LAM/MPI into the existing parallel performance tool, Paradyn. LAM/MPI is a commonly used, freely-available implementation of the Message Passing Interface (MPI), and also includes several newer MPI features, such as dynamic process creation. In addition, we added support for non-shared filesystems into Paradyn and enhanced the existing support for the MPICH implementation of MPI. We verified that Paradyn correctly measures the performance of the majority of LAM/MPI programs on Linux clusters and show the results of those tests. In addition, we discuss MPI-2 features that are of interest to parallel performance tool developers and design support for these features for Paradyn.
16

Mihailescu, Patrik 1977. "MAE : a mobile agent environment for resource limited devices." Monash University, School of Network Computing, 2003. http://arrow.monash.edu.au/hdl/1959.1/5805.

17

鄧世健 and Sai-kin Owen Tang. "Implementation of Low bit-rate image codec." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1994. http://hub.hku.hk/bib/B31212670.

18

Tacic, Ivan. "Efficient Synchronized Data Distribution Management in Distributed Simulations." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6822.

Abstract:
Data distribution management (DDM) is a mechanism to interconnect data producers and data consumers in a distributed application. Data producers provide useful data to consumers in the form of messages. For each message produced, DDM determines the set of data consumers interested in receiving the message and delivers it to those consumers. We are particularly interested in DDM techniques for parallel and distributed discrete event simulations. Thus far, researchers have treated synchronization of events (i.e. time management) and DDM independently of each other. This research focuses on how to realize time-managed DDM mechanisms. The main reason for time-managed DDM is to ensure that changes in the routing of messages from producers to consumers occur in a correct sequence. Time-managed DDM also avoids non-determinism in the federation execution, which may result in non-repeatable executions. An optimistic approach to time-managed DDM is proposed where one allows DDM events to be processed out of time stamp order, but a detection and recovery procedure is used to recover from such errors. These mechanisms are tailored to the semantics of the DDM operations to ensure an efficient realization. A correctness proof is presented to verify that the algorithm correctly synchronizes DDM events. We have developed a fully distributed implementation of the algorithm within the framework of the Georgia Tech Federated Simulation Development Kit (FDK) software. A performance evaluation of the synchronized DDM mechanism has been completed in a loosely coupled distributed system consisting of a network of workstations connected over a local area network (LAN). We compare time-managed versus unsynchronized DDM for two applications that exercise different mobility patterns: one based on a military simulation and a second utilizing a synthetic workload. The experiments and analysis illustrate that synchronized DDM performance depends on several factors: the simulation model (e.g. lookahead), the application's mobility patterns, and the network hardware (e.g. size of network buffers). Under certain mobility patterns, time-managed DDM is as efficient as unsynchronized DDM. There are also mobility patterns where time-managed DDM overheads become significant, and we show how they can be reduced.
19

Lum, Randall M. G. "Differential pulse code modulation data compression." Scholarly Commons, 1989. https://scholarlycommons.pacific.edu/uop_etds/2181.

Abstract:
With the requirement to store and transmit information efficiently, an ever-increasing number of uses of data compression techniques has been generated in diverse fields such as television, surveillance, remote sensing, medical processing, office automation, and robotics. Rapid increases in processing capabilities and the speed of complex integrated circuits make data compression techniques a prime candidate for application in the areas mentioned above. This report addresses, from a theoretical viewpoint, three major data compression techniques: Pixel Coding, Predictive Coding, and Transform Coding. It begins with a project description and continues with data compression techniques, focusing on Differential Pulse Code Modulation.
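The differential pulse code modulation the report focuses on can be sketched in a few lines: predict each sample from the reconstructed previous one and transmit only the quantized residual. The sketch below is illustrative, not code from the thesis; the previous-sample predictor and the uniform step size of 4 are assumptions.

```python
# Minimal DPCM sketch: previous-sample predictor plus uniform
# quantization of the prediction residual. Encoder and decoder track
# the same reconstructed value, so quantization error does not
# accumulate beyond half a step per sample.

def dpcm_encode(samples, step=4):
    """Return quantizer indices for the residual of each sample."""
    codes = []
    prediction = 0
    for s in samples:
        residual = s - prediction
        q = round(residual / step)   # uniform quantizer
        codes.append(q)
        prediction += q * step       # mirror the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=4):
    """Rebuild the signal by accumulating dequantized residuals."""
    signal = []
    prediction = 0
    for q in codes:
        prediction += q * step
        signal.append(prediction)
    return signal
```

Because the encoder predicts from its own reconstruction rather than from the original signal, the reconstruction error stays bounded by half a quantizer step instead of drifting.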
20

Lewis, Sharon, and University of Lethbridge Faculty of Education. "Using telecommunications to enhance the grade 8 science curriculum." Thesis, Lethbridge, Alta. : University of Lethbridge, Faculty of Education, 1996. http://hdl.handle.net/10133/31.

Abstract:
The primary objective of this study was to implement a project that utilizes telecommunications as a tool to enhance the grade eight science curriculum. The process of becoming prepared to undertake this study was examined and documented so that teachers in all subject areas at all grade levels could use it as a guide for similar projects. It was proposed to do this by conducting a collaborative project in which the students would use the scientific method to develop research questions that could be used to discover why the incidence of asthma is so high in Central Alberta. These questions would be sent out to schools across Canada and then the data would be analyzed and interpreted. The results would be shared with all participants as well as asthma researchers. The study met with many barriers which impeded the progress as well as made it impossible to fulfil the original goal of having the students collaborate with the experts and contribute their own research to the field. When embarking on a new project using technology it is inevitable that there will be barriers. Through repeated reconnaissance we were able to adjust our goals and still pursue very worthwhile, but very different, computer and telecommunications projects. The students' attitudes towards learning science, science in society and computers were measured by pre- and post-surveys. The findings showed that the students were aware of the importance of all these factors in their lives. Without completing the asthma study, it is impossible to know how much of a difference there would have been in the results. The qualitative results showed very clearly that computers are a motivator for students. They enjoy working on them and the challenge they present. Many of them will do extra homework so that they can take advantage of every opportunity to work on the computer.

Unfortunately, many teachers do not have the time or support to learn enough about the Internet/Schoolnet and what is available to take full advantage of what it has to offer our students and ourselves. For the most part, there are few teachers in each district becoming involved. This will change over time only if there is a support system in place and the pioneers share what they have learned. We cannot run the risk of the forerunners becoming discouraged and giving up. The Internet is a global community. For that community to grow and flourish we must share what we have learned and provide the means to make the path smoother for those who follow. Through this study, the projects have been documented and resources have been prepared that are intended to help others get online and access a wide variety of resources that are sure to enhance all programs and professional development.

xiii, 228 leaves : ill. ; 28 cm.
21

Kahyaoglu, Nazli Deniz. "Spectral And Statistical Analyses Of Experimental Radar Clutter Data." Master's thesis, METU, 2010. http://etd.lib.metu.edu.tr/upload/12612799/index.pdf.

Abstract:
The performance of radar detection and imaging systems strongly depends on the characteristics of radar clutter. In order to improve the radar signal processing algorithms, successful analysis and modeling of radar clutter are required. For a successful model of radar clutter, both the spectral and statistical characteristics of the clutter should be revealed. Within the scope of this study, an experimental radar data acquisition system is established to analyze radar clutter. The hardware and the data processing system are first verified using generic signals and then a set of measurements is taken in the open terrain. In this thesis, the limitations and problems encountered during the establishment of the system are explained in detail. The spectral and statistical analyses performed on the recorded data are examined. The temporal and spatial behavior of the measured clutter data are explored. The hypothetical models proposed so far in the literature are tested on the experimental data and the fitting of models to the experimental data is confirmed using various goodness-of-fit tests. Finally, the results of the analyses are interpreted in the light of the radar system parameters and the characteristics of the illuminated terrain.
22

Marais, Hendrik Gideon. "Development of dynamically reconfigurable ground station software." Thesis, Link to the online version, 2007. http://hdl.handle.net/10019/675.

23

Sankara, Krishnan Shivaranjani. "Delay sensitive delivery of rich images over WLAN in telemedicine applications." Thesis, Atlanta, Ga. : Georgia Institute of Technology, 2009. http://hdl.handle.net/1853/29673.

Abstract:
Thesis (M. S.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2009. Committee Chair: Jayant, Nikil; Committee Members: Altunbasak, Yucel; Sivakumar, Raghupathy. Part of the SMARTech Electronic Thesis and Dissertation Collection.
24

Tessier, Sean Michael. "Ontology-based approach to enable feature interoperability between CAD systems." Thesis, Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/41118.

Abstract:
Data interoperability between computer-aided design (CAD) systems remains a major obstacle in the information integration and exchange in a collaborative engineering environment. The standards for CAD data exchange have remained largely restricted to geometric representations, causing the design intent portrayed through construction history, features, parameters, and constraints to be discarded in the exchange process. In this thesis, an ontology-based framework is proposed to allow for the full exchange of semantic feature data. A hybrid ontology approach is proposed, where a shared base ontology is used to convey the concepts that are common amongst different CAD systems, while local ontologies are used to represent the feature libraries of individual CAD systems as combinations of these shared concepts. A three-branch CAD feature model is constructed to reduce ambiguity in the construction of local ontology feature data. Boundary representation (B-Rep) data corresponding to the output of the feature operation is incorporated into the feature data to enhance data exchange. The Ontology Web Language (OWL) is used to construct a shared base ontology and a small feature library, which allows the use of existing ontology reasoning tools to infer new relationships and information between heterogeneous data. A combination of OWL and SWRL (Semantic Web Rule Language) rules are developed to allow a feature from an arbitrary source system expressed via the shared base ontology to be automatically classified and translated into the target system. These rules relate input parameters and reference types to expected B-Rep objects, allowing classification even when feature definitions vary or when little is known about the source system. In cases when the source system is well known, this approach also permits direct translation rules to be implemented. With such a flexible framework, a neutral feature exchange format could be developed.
25

Vu, Manh Tuan. "Feasibility, acceptability and utilization of a mobile cardiovascular risk factor profile e-platform amongst physicians and patients in Hong Kong." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B47869823.

Abstract:
Study methods: A mixed-method study design was used to investigate the feasibility of implementing a mobile-phone based behavioural intervention to reduce CVD risk factors among the Chinese population. Patients who were 45-79 years old, had fair English literacy, had access to a JAVA-enabled mobile phone, and had no mental health problems, cognitive impairment or severe illness were eligible for the study. Intervention: Patients recruited from three settings (1 GP, 1 specialist and 1 public clinic) had the study software installed on their phones. The software enabled patients to access their CVD risk profiles (including weight, BP, HbA1c, and lipoprotein profile), 10-year CVD risk prediction (based on the Framingham Cardiac Risk Score), and pre-set behavioural recommendations. Patients' CVD risk profiles were updated at 1-month and 3-month follow-up when their test results were available. Patients were alerted with healthy-behaviour recommendations. Outcomes: Outcomes were measured at baseline and 3-month follow-up. Clinical outcomes included the Cardiac Risk Factor Score and its components (BMI, systolic and diastolic BP, total cholesterol, HDL and HbA1c). Two sets of questionnaires were used to measure knowledge, risk reduction behaviour and attitude toward the usefulness of medical records (pre-intervention), and perceived ease of use, usefulness, satisfaction and utilisation of the software (post-intervention). Results and Discussion: 19 patients were recruited at baseline. 75% (14) were aged 45-55 years, 58% (11) were male, 79% (15) had secondary or lower education, 63% (12) were married, and 95% (18) had never smoked. Patients' understanding of CVD risk factors and risk reduction behaviour was moderate. Patients' attitude toward electronic medical records was positive. Overall, patients' perception of usefulness, ease of use and satisfaction with the software was satisfactory. Post-intervention, a decreasing trend was observed in patients' CVD risk profiles, i.e. weight, BMI, SBP and DBP, HbA1c and lipoprotein profile. Focus group discussions revealed that there was a mismatch between physicians' and patients' perspectives on the use of mobile phones in a behavioural intervention. Physicians tended to express concern about the quality of records, the security of the technology, and patients' actual benefit, while patients showed little concern about security and great excitement about further use of mobile phone technology in assisting their disease self-management. The public sector physicians admitted that their patients were passive in terms of seeking information about their health. Patients were willing to use the software for future care if it could provide more real-time data, tailored recommendations for behavioural change, and an interactive communication tool with their physicians. Physicians would like to try the software if it could ease the patient-management process, especially enhance patient-physician communication, and serve as a decision support system to help them keep track of changes that their patients made. Conclusion: This pilot study has provided preliminary evidence of the feasibility, acceptability, and utility of an e-platform in primary interventions for CVD in Hong Kong.

Master of Philosophy, Community Medicine.
26

Muller, Rikus. "A study of image compression techniques, with specific focus on weighted finite automata." Thesis, Link to the online version, 2005. http://hdl.handle.net/10019/1128.

27

Chan, Chun-ying, and 陳俊英. "A case study of how technology is used to create service value." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1995. http://hub.hku.hk/bib/B31266332.

28

Nolte, Ernst Hendrik. "Image compression quality measurement : a comparison of the performance of JPEG and fractal compression on satellite images." Thesis, Stellenbosch : Stellenbosch University, 2000. http://hdl.handle.net/10019.1/51796.

Abstract:
Thesis (MEng)--Stellenbosch University, 2000.

ENGLISH ABSTRACT: The purpose of this thesis is to investigate the nature of digital image compression and the calculation of the quality of the compressed images. The work is focused on greyscale images in the domain of satellite images and aerial photographs. Two compression techniques are studied in detail, namely the JPEG and fractal compression methods. Implementations of both these techniques are then applied to a set of test images. The rest of this thesis is dedicated to investigating the measurement of the loss of quality that was introduced by the compression. A general method for quality measurement (Signal-to-Noise Ratio) is discussed, as well as a technique that was presented in the literature quite recently (Grey Block Distance). Hereafter, a new measure is presented. After this, a means of comparing the performance of these measures is presented. It was found that the new measure for image quality estimation performed marginally better than the SNR algorithm. Lastly, some possible improvements on this technique are mentioned and the validity of the method used for comparing the quality measures is discussed.

AFRIKAANS SUMMARY: The purpose of this thesis is to investigate the nature of digital image compression and the calculation of image quality after compression. The focus is on greyscale images in the specific domain of satellite images and aerial photographs. Two specific compression techniques are investigated in depth, namely the JPEG and fractal compression methods. Implementations of both techniques are applied to a set of test images. The rest of the thesis is then devoted to investigating the measurement of the quality loss of these compressed images. A method in general practical use is examined, as well as a newer method that recently appeared in the literature. After this, a new technique is introduced. Furthermore, a comparison of these measures and an investigation into the interpretation of the 'quality' of these quality measures is carried out. It was found that the new quality measure works as well as, and even better than, the common measure of image quality, namely the Signal-to-Noise Ratio. Lastly, possible improvements to the measure are mentioned, followed by a discussion of the validity of the method used to determine the quality of the quality measures.
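The Signal-to-Noise Ratio named above as the general quality measure can be computed directly from pixel values. The following is a minimal sketch over flattened greyscale pixel sequences using the standard decibel formulation; it is illustrative only and not necessarily the exact variant the thesis implements.

```python
# Plain SNR in decibels: energy of the original signal divided by the
# energy of the error introduced by compression.

import math

def snr_db(original, degraded):
    """SNR between two equal-length pixel sequences, in dB."""
    signal_energy = sum(p * p for p in original)
    noise_energy = sum((a - b) ** 2 for a, b in zip(original, degraded))
    if noise_energy == 0:
        return float("inf")   # identical images: no noise at all
    return 10.0 * math.log10(signal_energy / noise_energy)
```

A higher value means less compression damage; a perfect reconstruction yields infinite SNR, and each factor-of-10 drop in error energy adds 10 dB.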
29

Sefara, Mamphoko Nelly. "Design of a forward error correction algorithm for a satellite modem." Thesis, Stellenbosch : Stellenbosch University, 2001. http://hdl.handle.net/10019.1/52181.

Abstract:
Thesis (MScEng)--University of Stellenbosch, 2001.

ENGLISH ABSTRACT: One of the problems with any deep space communication system is that information may be altered or lost during transmission due to channel noise. It is known that any damage to the bit stream may lead to objectionable visual quality distortion of images at the decoder. The purpose of this thesis is to design an error correction and data compression algorithm for image protection, which will allow the communication bandwidth to be better utilized. The work focuses on Sunsat (Stellenbosch Satellite) images as test images. Investigations were done on the JPEG 2000 compression algorithm's robustness to random errors, putting more emphasis on how much of the image is degraded after compression. Implementations of both the error control coding and the data compression strategy are then applied to a set of test images. The FEC algorithm combats some, if not all, of the simulated random errors introduced by the channel. The results illustrate that the random errors are reduced by a factor of 100 (x100) on all test images, and that with a probability of error of 10^-2 in the channel (10^-4 for the image data) the errors cause little degradation of the image quality.

AFRIKAANS SUMMARY: One of the problems with communication in space is that information may be lost and/or corrupted by noise during transmission through the channel. It is known that any damage to the bit stream may lead to objectionable distortion of the images received on earth. The purpose of this thesis is to develop error correction and data compression that will protect the satellite images during transmission and make better use of the communication channel's bandwidth. The work focuses on SUNSAT (Stellenbosch University Satellite) images as test images. Investigations were made into the JPEG 2000 compression algorithm's resistance to random errors, with emphasis on how much the image is degraded by the bit errors that occur. Both the compression and the error correction were implemented and applied to the test images. The error correction combats the simulated random errors as they occur on the channel. The results show that the error correction reduces the random errors by a factor of 100, and that an error probability of 10^-2 on the channel (10^-4 on the image data) causes little degradation in image quality.
30

Rathnam, Tarun. "Using Ontologies to Support Interoperability in Federated Simulation." Thesis, Georgia Institute of Technology, 2004. http://hdl.handle.net/1853/4788.

Abstract:
A vast array of computer-based simulation tools are used to support engineering design and analysis activities. Several such activities call for the simulation of various coupled sub-systems in parallel, typically to study the emergent behavior of large, complex systems. Most sub-systems have their own simulation models associated with them, which need to interoperate with each other in a federated fashion to simulate system-level behavior. The run-time exchange of information between federate simulations requires a common information model that defines the representation of simulation concepts shared between federates. However, most federate simulations employ disparate representations of shared concepts. Therefore, it is often necessary to implement transformation stubs that convert concepts between their common representation to those used in federate simulations. The tasks of defining a common representation for shared simulation concepts and building translation stubs around them adds to the cost of performing a system-level simulation. In this thesis, a framework to support automation and reuse in the process of achieving interoperability between federate simulations is developed. This framework uses ontologies as a means to capture the semantics of different simulation concepts shared in a federation in a formal, reusable fashion. Using these semantics, a common representation for shared simulation entities, and a corresponding set of transformation stubs to convert entities from their federate to common representations (and vice-versa) are derived automatically. As a foundation to this framework, a schema to enable the capture of simulation concepts in an ontology is specified. Also, a graph-based algorithm is developed to extract the appropriate common information model and transformation procedures between federate and common simulation entities. 
As a proof of concept, this framework is applied to support the development of a federated air traffic simulation. To progress with the design of an airport, the combined operation of its individual systems (air traffic control, ground traffic control, and ground-based aircraft services) in handling varying volumes of aircraft traffic is to be studied. To do so, the individual simulation models corresponding to the different sub-systems of the airport need to be federated, for which the ontology-based framework is applied.
31

Holkner, Bernard 1953. "Developing computer communications for professional collaboration." Monash University, Faculty of Education, 2001. http://arrow.monash.edu.au/hdl/1959.1/8468.

32

Petratos, Anastasia. "An ICT strategy to support a patient-centred approach to diabetes care." Thesis, Nelson Mandela Metropolitan University, 2017. http://hdl.handle.net/10948/14466.

Abstract:
Factors such as poverty, ethnicity, socio-economic status, poor infrastructure and governance, etc., are some of the reasons that effective and proven prevention and treatment interventions for most of the major causes of mortality and morbidity in the developing world continue to fail. Chronic diseases require complex interventions that these countries simply cannot maintain. Diabetes mellitus (DM) is a chronic disease that is on the rise worldwide. This disease is a lifestyle disease, which means, that it is brought on by poor health habits. Statistics show that 285 million (6.4%) people aged between 20 and 79 years will be affected by Diabetes in 2010 and a staggering 439 million (7.7%) by 2030. This is a projected growth of 69% in developing countries and 20% in developed countries. The findings from studies conducted from 1993 to 2003 in Sub-Saharan Africa, particularly in South Africa, around the health care services for diabetes highlights many challenges. Sadly, the challenges 10 years after that study, are very similar. The conditions of people with Diabetes can be improved through regular monitoring of patients, improvement and monitoring of health care provided, education on healthy lifestyle, as well as education on the importance of adherence to treatment plans for the successful management of the condition. The diabetes endemic in South Africa is exacerbated by the manual functions that are performed in all aspects of monitoring and management of the disease. With the advancements that have been made in ICT and the many apps that already exist for healthcare, it is sensible to state that ICT can assist in the monitoring and management of diabetes. Another factor that is considered is that of patient-centred care. The huge number of people who need acute care and treatment in hospitals and clinics have forced a previously caring environment, to turn into a cold, almost production line affair. 
The sick wait in long queues and are ushered in and out of the consulting rooms as fast as possible without so much as a "hello". This has left a void in healthcare delivery to South Africans, removing something that should never have been lost in the first place, namely patient-centred care. Patient-centred care means that the patient is at the centre of the treatment and fully involved in the decisions about his or her health. Every patient deserves to be recognised as a human being and treated with dignity and respect. Treatment plans for long-term chronic-care patients such as diabetics should be thoroughly discussed with the patient, who should believe in and commit to the treatment plan. These plans are life-long and require dedication, so it is vital that patients are part of decision-making and fully understand what they are expected to do. Bearing this in mind, this study investigated the needs and care plans of people with diabetes. Specialists in the field of diabetes were interviewed, and recognised care plans for diabetes, such as those from the WHO, IDF and SEMDSA, were studied. The study also established that, by practising a patient-centred approach, adherence to a treatment plan is likely to be higher. The strategy developed involves the person with diabetes, the healthcare worker and the support structure in the care plan of the diabetic. The use of ICT as part of the solution must consider the patient-centred requirements for using IT, so that the people using the strategy are comfortable and not intimidated by the technology. The need to incorporate e-health into governments' healthcare plans has grown over the last decade. The GSMA conducted research into mobile health opportunities in South Africa and found that the country now has a mobile penetration of 98%, making mobile the ideal medium to address the inaccessibility and inequality of healthcare in South Africa.
The causes playing a major role in the rise in diabetes were identified, and it was determined that many of them can be addressed through the implementation of an ICT strategy for diabetes care. These include the use of technology for improved monitoring and management, increased diabetes awareness and education, and the promotion of a healthy lifestyle. The study focuses on the self-management aspect of diabetes and produces a strategy that incorporates various ICT solutions to assist in the daily aspects of diabetes care, while following a patient-centred approach to diabetes care. The strategy developed in this study needs no intervention from government, as it is driven by the people who have diabetes and their healthcare workers, with the aid of the technology they currently have on hand.
APA, Harvard, Vancouver, ISO, and other styles
33

Ali, Khan Syed Irteza. "Classification using residual vector quantization." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/50300.

Full text
Abstract:
Residual vector quantization (RVQ) is a 1-nearest-neighbor (1-NN) type of technique and a multi-stage implementation of regular vector quantization: an input is successively quantized to the nearest codevector in each stage codebook. In classification, nearest-neighbor techniques are very attractive because they model the ideal Bayes class boundaries very accurately. However, nearest-neighbor classification techniques require a large, representative dataset. Since a test input is assigned a class membership only after an exhaustive search of the entire training set, a reasonably large training set can make the implementation cost of a nearest-neighbor classifier prohibitive. Although the k-d tree structure offers a far more efficient implementation of 1-NN search, the cost of storing the data points can still become prohibitive, especially in higher dimensions. RVQ offers a cost-effective implementation of 1-NN-based classification: because of the direct-sum structure of the RVQ codebook, the memory and computational cost of a 1-NN-based system is greatly reduced. Although the multi-stage implementation of the RVQ codebook compromises the accuracy of the class boundaries compared to an equivalent 1-NN system, the classification error has been empirically shown to be within 3% to 4% of the performance of an equivalent 1-NN-based classifier.
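As a hedged sketch of the multi-stage quantization the abstract describes (the NumPy representation and codebook shapes are our assumptions, not the dissertation's implementation), each stage quantizes the residual left by the previous one:

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Encode x by successive stage-wise nearest-codevector quantization."""
    residual = np.asarray(x, dtype=float)
    indices = []
    for cb in codebooks:  # one (K, d) codebook per RVQ stage
        d = np.linalg.norm(cb - residual, axis=1)
        j = int(np.argmin(d))       # nearest codevector in this stage
        indices.append(j)
        residual = residual - cb[j]  # the next stage quantizes what is left
    return indices

def rvq_decode(indices, codebooks):
    """Reconstruction is the direct sum of the selected stage codevectors."""
    return sum(cb[j] for j, cb in zip(indices, codebooks))
```

For classification, each direct-sum codevector would additionally carry a class label, and a test input would inherit the label of its reconstruction.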
APA, Harvard, Vancouver, ISO, and other styles
34

Nedstrand, Paul, and Razmus Lindgren. "Test Data Post-Processing and Analysis of Link Adaptation." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121589.

Full text
Abstract:
Analysing the performance of cell phones and other wireless devices connected to mobile networks is key when validating whether the standard of the system is achieved. This justifies having testing tools that can produce a good overview of the data exchanged between base stations and cell phones, revealing the performance of the cell phone. This master's thesis involves developing a tool that produces graphs with statistics from the traffic data in the communication link between a connected mobile device and a base station. The statistics are the correlation between two parameters in the channel's traffic data (e.g. throughput versus channel condition). The tool is oriented towards analysis of link adaptation, and from the produced graphs the testing personnel at Ericsson are able to analyse the performance of one or several mobile devices. We performed our own analysis of link adaptation using the tool to show that this type of analysis is possible with it. To show that the tool is useful to Ericsson, we let test personnel answer a survey on its usability and user-friendliness.
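The correlation statistic described above could, under our own assumptions about the trace format (the thesis does not specify an API, so the function name and inputs are illustrative), be computed per pair of trace parameters as:

```python
import numpy as np

def parameter_correlation(x, y):
    """Pearson correlation between two per-sample trace parameters,
    e.g. throughput vs. a channel-quality indicator."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```

A value near +1 or -1 would indicate that link adaptation is tracking the channel condition closely; values near 0 would flag behaviour worth inspecting in the graphs.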
APA, Harvard, Vancouver, ISO, and other styles
35

Rakotoarivelo, Thierry Electrical Engineering & Telecommunications Faculty of Engineering UNSW. "Distributed discovery and management of alternate internet paths with enhanced quality of service." Awarded by: University of New South Wales. School of Electrical Engineering and Telecommunications, 2006. http://handle.unsw.edu.au/1959.4/27316.

Full text
Abstract:
The convergence of recent technology advances opens the way to new ubiquitous environments, where network-enabled devices collectively form invisible pervasive computing and networking environments around the users. These users increasingly require extensive applications and capabilities from these devices. Recent approaches propose that cooperating service providers at the edge of the network offer these required capabilities (i.e. services), instead of having them provided directly by the devices. Thus, the network evolves from a plain communication medium into an endless source of services. Such a service, namely an overlay application, is composed of multiple distributed application elements, which cooperate via a dynamic communication mesh, namely an overlay association. The Quality of Service (QoS) perceived by the users of an overlay application greatly depends on the QoS of the communication paths of the corresponding overlay association. This thesis asserts and shows that it is possible to provide QoS to an overlay application by using alternate Internet paths resulting from the composition of independent consecutive paths. Moreover, this thesis demonstrates that it is possible to discover, select and compose these independent paths in a distributed manner within a community comprising a large number of autonomous cooperating peers, such as the aforementioned service providers. Thus, the main contributions of this thesis are i) a comprehensive description and QoS-characteristic analysis of these composite alternate paths, and ii) an original architecture, termed SPAD (Super-Peer based Alternate path Discovery), which allows the discovery and selection of these alternate paths in a distributed manner. SPAD is a fully distributed system with no single point of failure, which can be easily and incrementally deployed on the current Internet.
It empowers the end-users at the edge of the network, allowing them to directly discover and utilize alternate paths.
APA, Harvard, Vancouver, ISO, and other styles
36

Nehme, Rimma V. "Continuous query processing on spatio-temporal data streams." Link to electronic thesis, 2005. http://www.wpi.edu/Pubs/ETD/Available/etd-082305-154035/.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Daveau, Jean-Marc. "Spécifications systèmes et synthèse de la communication pour le co-design logiciel/matériel." Grenoble INPG, 1997. https://tel.archives-ouvertes.fr/tel-00002996.

Full text
Abstract:
Au fur et à mesure que la complexité s'accroit, il devient nécessaire de définir de nouvelles méthodes permettant de la gérer. Une des façons de maîtriser cette complexité est d'élever le niveau d'abstraction des spécifications en utilisant des langages de spécification systèmes. D'un autre côté, l'élévation du niveau d'abstraction augmente le fossé entre les concepts utilisés pour la spécification (processus communicants, communication abstraite) et ceux utilisés par les langages de description de matériel. Bien que ces langages soient bien adaptés à la spécification et la validation de systèmes complexes, les concepts qu'ils manipulent ne sont pas aisément transposables sur ceux des langages de description de matériels. Il est donc nécessaire de définir de nouvelles méthodes permettant une synthèse efficace à partir de spécifications systèmes. Le sujet de cette thèse est la présentation d'une approche de génération de code C et VHDL à partir de spécifications systèmes en SDL. Cette approche résout la principale difficulté rencontrée par les autres approches, à savoir la communication inter-processus. La communication SDL peut être traduite en VHDL en vue de la synthèse. Cela est rendu possible par l'utilisation d'une forme intermédiaire qui supporte un modèle de communication générale qui autorise la représentation pour la synthèse de la plupart des schémas de communication. Cette forme intermédiaire permet d'appliquer au système un ensemble d'étapes de raffinement pour obtenir la solution désirée. La principale étape de raffinement, appelée synthèse de la communication, détermine le protocole et les interfaces utilisés par les différents processus pour communiquer. La spécification raffinée peut être traduite en C et VHDL pour être utilisée par des outils du commerce. 
Nous illustrons la faisabilité de cette approche par une application à un système de télécommunication : le protocole TCP/IP sur ATM.

As the system complexity grows, there is a need for new methods to handle large system designs. One way to manage that complexity is to raise the level of abstraction of the specifications by using system-level description languages. On the other hand, as the level of abstraction rises, the gap widens between the concepts used for specification at the system level (communication channels, interacting processes, data types) and those used for hardware synthesis. Although these languages are well suited to the specification and validation of complex real-time distributed systems, the concepts they manipulate are not easy to map onto hardware description languages. It is thus necessary to define methods for system-level synthesis enabling efficient synthesis from system-level specifications. The subject of this thesis is the presentation of a new approach for generating C and VHDL code from system-level specifications in SDL. This approach solves the main problem encountered by previous approaches: inter-process communication. SDL communication can be translated into VHDL for synthesis. This is achieved by the use of a powerful intermediate form that supports the modelling, for synthesis, of a wide range of communication schemes. This intermediate form allows a set of transformations to be applied to the system in order to obtain the desired solution. The main refinement step, called communication synthesis, fixes the protocol and interfaces used by the different processes to communicate. The refined specification can be translated into C and VHDL and synthesised by commercial tools. We illustrate the feasibility of this approach through an application to a telecommunication example: the TCP/IP over ATM protocol.
APA, Harvard, Vancouver, ISO, and other styles
38

Abdullah, S. N. "Data transmission at 9600 bit/sec over an HF radio link." Thesis, Loughborough University, 1986. https://dspace.lboro.ac.uk/2134/7432.

Full text
Abstract:
The thesis is concerned with serial data transmission at 9600 bit/sec over a voiceband channel, where the main impairments are additive noise and intersymbol interference, the latter varying slowly with time. The thesis includes a brief description of the ionospheric propagation medium and presents an equivalent baseband model of the HF channel, suitable for computer simulation of quadrature amplitude modulation (QAM) systems. A study of 16-point QAM signals transmitted over voiceband HF channels is then carried out using the given channel model. Several cost-effective near-maximum-likelihood detection processes have been developed for HF modems. Each detector is preceded by an adaptive linear filter that is adjusted to make the sampled impulse response of the channel and filter minimum-phase. These detectors require an accurate knowledge of the sampled impulse response of the channel if their full potential is to be achieved. The results of computer-simulation tests on the near-maximum-likelihood detectors are given, where these tests assume that other receiver operations, such as channel estimation and adaptive linear filtering, together with element timing synchronisation and Doppler-shift correction, are carried out perfectly. A recently developed HF channel estimator, employing a simple feedforward transversal filter and requiring knowledge of the number of skywaves, is next investigated, and a start-up procedure is developed for it. The technique is then made fully adaptive, in the sense that it continues to operate correctly when the number of skywaves changes. Results of computer-simulation tests are then presented showing the performance of the above detectors when operating with a channel estimator and adaptive linear filtering. Finally, modem synchronisation is studied and various techniques of element-timing and carrier-frequency synchronisation are proposed.
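As a purely illustrative sketch (not the thesis's near-maximum-likelihood detector, which operates on symbol sequences distorted by intersymbol interference), symbol-by-symbol minimum-distance detection of a 16-point QAM constellation can be written as:

```python
import numpy as np

# 16-point rectangular QAM: in-phase/quadrature levels {-3, -1, 1, 3}
LEVELS = [-3, -1, 1, 3]
CONSTELLATION = np.array([complex(i, q) for i in LEVELS for q in LEVELS])

def detect(received):
    """Minimum-distance decisions: map each received complex sample
    to the nearest constellation point."""
    received = np.atleast_1d(np.asarray(received, dtype=complex))
    idx = np.argmin(np.abs(received[:, None] - CONSTELLATION[None, :]), axis=1)
    return CONSTELLATION[idx]
```

A near-maximum-likelihood detector would instead compare candidate sequences through the estimated channel impulse response, which is what makes the adaptive channel estimator in the abstract essential.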
APA, Harvard, Vancouver, ISO, and other styles
39

Li, Jin. "Window Queries Over Data Streams." PDXScholar, 2008. https://pdxscholar.library.pdx.edu/open_access_etds/2675.

Full text
Abstract:
Evaluating queries over data streams has become an appealing way to support various stream-processing applications. Window queries are commonly used in many stream applications; in a window query, certain query operators, especially blocking and stateful operators, appear in their windowed versions. Previous research on evaluating window queries typically requires ordered streams, and this order requirement limits the implementations of window operators and also carries performance penalties. This thesis presents efficient and flexible algorithms for evaluating window queries. We first present a new data model for streams, progressing streams, that separates stream progress from physical-arrival order. Then, we present our window semantic definitions for the most commonly used window operators: window aggregation and window join. Unlike previous research, which often requires ordered streams when describing window semantics, our window semantic definitions do not rely on physical stream-arrival properties. Based on these definitions, we present new implementations of window aggregation and window join, WID and OA-Join. Compared to existing implementations of stream query operators, our implementations do not require special stream-arrival properties, particularly stream order. In addition, for window aggregation, we present two further implementations extended from WID: Paned-WID, which improves execution time by sharing sub-aggregates, and AdaptWID, which improves memory usage for input with data-distribution skew. Leveraging our order-insensitive implementations of window operators, we present a new architecture for stream systems, OOP (Out-of-Order Processing). Instead of relying on ordered streams to indicate stream progress, OOP explicitly communicates stream progress to query operators, and thus is more flexible than the previous in-order processing (IOP) approach, which requires maintaining stream order.
We implemented our order-insensitive window query operators and the OOP architecture in NiagaraST and Gigascope. Our performance study in both systems confirms the benefits of our window operator implementations and the OOP architecture, compared to commonly used approaches, in terms of memory usage, execution time and latency.
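A minimal sketch of the order-insensitive idea behind WID (the class name and API are invented for illustration, not taken from the thesis): each tuple is mapped to a window id from its timestamp, so arrival order does not matter, and an explicit progress signal tells the operator which windows are complete:

```python
from collections import defaultdict

class TumblingWindowCount:
    """Order-insensitive tumbling-window count: state is keyed by
    window id, not by arrival position, so out-of-order tuples are
    handled without buffering and re-sorting the stream."""

    def __init__(self, size):
        self.size = size
        self.state = defaultdict(int)

    def insert(self, timestamp):
        # Window-id assignment from the tuple's own timestamp.
        self.state[timestamp // self.size] += 1

    def progress(self, watermark):
        """Emit every window that ends at or before the progress point."""
        done = sorted(w for w in self.state if (w + 1) * self.size <= watermark)
        return [(w, self.state.pop(w)) for w in done]
```

The `progress` call plays the role of the explicit stream-progress communication in the OOP architecture: results are released by progress signals, not by stream order.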
APA, Harvard, Vancouver, ISO, and other styles
40

Iwaya, Leonardo H. "Secure and Privacy-aware Data Collection and Processing in Mobile Health Systems." Licentiate thesis, Karlstads universitet, Institutionen för matematik och datavetenskap (from 2013), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-46982.

Full text
Abstract:
Healthcare systems have assimilated information and communication technologies in order to improve the quality of healthcare and the patient's experience at reduced costs. The increasing digitalization of people's health information, however, raises new threats regarding information security and privacy. Accidental or deliberate breaches of health data may lead to societal pressures, embarrassment and discrimination. Information security and privacy are paramount to achieving high-quality healthcare services and, further, to not harming individuals when providing care. With that in mind, we give special attention to the category of Mobile Health (mHealth) systems, that is, the use of mobile devices (e.g., mobile phones, sensors, PDAs) to support medical and public health practice. Such systems have been particularly successful in developing countries, taking advantage of the flourishing mobile market and the need to expand the coverage of primary healthcare programs. Many mHealth initiatives, however, fail to address security and privacy issues. This, coupled with the lack of specific legislation for privacy and data protection in these countries, increases the risk of harm to individuals. The overall objective of this thesis is to enhance knowledge regarding the design of security and privacy technologies for mHealth systems. In particular, we deal with mHealth Data Collection Systems (MDCSs), which consist of mobile devices for collecting and reporting health-related data, replacing paper-based approaches to health surveys and surveillance.
This thesis consists of publications contributing to mHealth security and privacy in various ways: a comprehensive literature review of mHealth in Brazil; the design of a security framework for MDCSs (SecourHealth); the design of an MDCS (GeoHealth); the design of a Privacy Impact Assessment template for MDCSs; and a study of ontology-based obfuscation and anonymisation functions for health data.
APA, Harvard, Vancouver, ISO, and other styles
41

Blum, Niklas. "Formalization of a converged internet and telecommunications service environment." Phd thesis, Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2011/5114/.

Full text
Abstract:
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS), and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA is becoming the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies, along with recent advances in network abstraction, service exposure, and service creation, to develop a paradigm for a service environment providing converged Internet and telecommunications services that we call a Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network-awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element, allowing the composition of cross-domain and cross-layer network and service resources.
The contribution of this research is threefold: first, we analyze and classify principles and technologies from Information Technology (IT) and telecommunications to identify and discuss issues in cross-domain composition within a converging service layer. Second, we discuss service composition methods allowing the creation of converged services at an abstract level; in particular, we present a formalized method for model-checking such compositions. Finally, we propose a Service Broker architecture converging Internet and telecom services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.

Das programmierbare Netz, das Ende des 20. Jahrhunderts in der Standardisierung und Forschung für das Intelligente Netz entworfen wurde, wird nun Realität in einem auf das Internet Protokoll basierenden Netz der nächsten Generation (Next Generation Network). Hierfür kommen Prinzipien aus der Informationstechnologie, insbesondere aus dem Bereich dienstorientierter Architekturen (Service-Oriented Architecture / SOA), für die Diensterstellung, -ausführung und den -betrieb zum Tragen. SOA bietet hierbei die theoretische Grundlage für Telekommunikationsnetze, vor allem jedoch für die dazugehörigen Dienstplattformen. Diese erlauben dem Telekommunikationsbetreiber, seine Position in einem offenen Marktplatz der Dienste auszubauen. Dazu bedarf es allerdings möglichst flexibler Dienstumgebungen, die die Kooperation zwischen Dienstanbietern und Nutzern aus unterschiedlichsten Domänen durch Unterstützung geeigneter Werkzeuge und Mechanismen fördern. Im Rahmen dieser Dissertation definieren wir, aufbauend auf Forschungsergebnissen in den Bereichen Overlay-Netze, Netzabstraktion und Zugriff auf exponierte Dienste, eine Service Broker genannte Dienstumgebung für konvergente Internet- und Telekommunikationsdienste. Dieser Service Broker stellt Mechanismen für die Komposition von Diensten und die Mediation zwischen unterschiedlichen Dienstparadigmen und Domänenspezifika beim Dienstaufruf zur Verfügung. Der Forschungsbeitrag dieser Arbeit findet auf unterschiedlichen Ebenen statt: Aufbauend auf einer Analyse und Klassifikation von Technologien und Paradigmen aus den Bereichen Informationstechnologie (IT) und Telekommunikation diskutieren wir die Problemstellung der Kooperation von Diensten und deren Komposition über Domänengrenzen hinweg. In einem zweiten Schritt diskutieren wir Methoden der Dienstkomposition und präsentieren eine formalisierte Methode der modellbasierten Diensterstellung. Der Schwerpunkt der Arbeit liegt auf der Spezifikation der Service-Broker-Dienstumgebung und einem zugrundeliegenden Informations- und Datenmodell. Diese Architektur erlaubt die Komposition und Kooperation von Diensten über Domänengrenzen hinweg, um konvergente Internet- und Telekommunikationsdienste zu realisieren. Hierfür wird ein auf Obligationspolitiken basierendes Regelsystem formalisiert, das Interaktionen zwischen Dienstmerkmalen während der Diensterstellung und -ausführung definiert.
APA, Harvard, Vancouver, ISO, and other styles
42

Silva, Fernando Silvestre da. "Procedimentos para tratamento e compressão de imagens e video utilizando tecnologia fractal e transformadas wavelet." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/260581.

Full text
Abstract:
Orientador: Yuzo Iano
Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação

Resumo: A excelente qualidade visual e taxa de compressão dos codificadores fractais de imagem têm aplicações limitadas devido ao exaustivo tempo de codificação inerente. Esta pesquisa apresenta um novo codificador híbrido de imagens que aplica a velocidade da transformada wavelet à qualidade da compressão fractal. Neste esquema, uma codificação fractal acelerada usando a classificação de domínios de Fisher é aplicada na sub-banda passa-baixas de uma imagem transformada por wavelet, e uma versão modificada do SPIHT (Set Partitioning in Hierarchical Trees) é aplicada nos coeficientes remanescentes. Os detalhes da imagem e as características de transmissão progressiva da transformada wavelet são mantidos; nenhum efeito de bloco comum às técnicas fractais é introduzido; e o problema de fidelidade de codificação comum aos codificadores híbridos fractal-wavelet é resolvido. O sistema proposto reduz o tempo médio de processamento das imagens em 94% comparado com o codificador fractal tradicional, com um ganho de 0 a 2,4 dB em PSNR sobre o algoritmo SPIHT puro. Em ambos os casos, o novo esquema proposto apresentou melhorias em termos de qualidade subjetiva das imagens para altas, médias e baixas taxas de compressão.

Abstract: The excellent visual quality and compression rate of fractal image coding have limited applications due to the exhaustive inherent encoding time. This research presents a new fast and efficient image coder that applies the speed of the wavelet transform to the image quality of fractal compression. In this scheme, fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet-transformed image, and a modified SPIHT coding (Set Partitioning in Hierarchical Trees) to the remaining coefficients. The image details and the progressive-transmission characteristics of the wavelet transform are maintained; no blocking effects from fractal techniques are introduced; and the encoding-fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme provides an average 94% reduction in encoding-decoding time compared to the traditional fractal coder, and a 0 to 2.4 dB gain in PSNR over pure SPIHT coding. In both cases, the new scheme improves the subjective quality of pictures at high, medium and low bit rates.

Doutorado
Telecomunicações e Telemática
Doutor em Engenharia Elétrica
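The PSNR figures quoted above use the standard peak signal-to-noise ratio; as a small self-contained sketch (illustrative code, not taken from the thesis), it is computed from the mean squared error against the signal's peak value:

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images (peak = 255)."""
    mse = np.mean((np.asarray(original, dtype=float)
                   - np.asarray(reconstructed, dtype=float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

A gain of 0 to 2.4 dB in this measure corresponds to a noticeably lower mean squared reconstruction error at the same bit rate.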
APA, Harvard, Vancouver, ISO, and other styles
43

Ribeiro, Moises Vidal. "Tecnicas de processamento de sinais aplicadas a transmissão de dados via rede eletrica e ao monitoramento da qualidade de energia." [s.n.], 2005. http://repositorio.unicamp.br/jspui/handle/REPOSIP/261263.

Full text
Abstract:
Orientador: João Marcos Travassos Romano
Tese (doutorado) - Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação

Resumo: A presente tese tem por objetivo propor e discutir o uso de algumas técnicas de processamento de sinais e de inteligência computacional para a melhoria da transmissão digital de dados via redes elétricas e da análise da qualidade da energia elétrica em sistemas de potência. No que tange à transmissão de dados via rede elétrica, novas técnicas são introduzidas para solucionar os problemas de cancelamento de ruídos impulsivos e equalização de canais de comunicação. Para a melhoria do monitoramento da qualidade da energia elétrica, propõem-se novas técnicas para a análise espectral das componentes fundamental e harmônicas, e para a detecção, a classificação e a compressão de distúrbios. As várias técnicas apresentadas no presente trabalho são fundamentadas no princípio de dividir e conquistar, largamente utilizado em diversas áreas do conhecimento. A aplicação adequada desse princípio, através de técnicas de processamento de sinais e de inteligência computacional, nos permitiu fornecer análises mais precisas dos problemas estudados e propor novas soluções para os mesmos. Os resultados numéricos obtidos nas simulações computacionais confirmam a relevância das técnicas propostas.

Abstract: This thesis proposes and discusses the use of signal processing and computational intelligence techniques to improve digital communication through power-line channels and to provide a more precise power-quality analysis of power systems. Regarding power-line communication applications, advanced techniques for impulse-noise mitigation and channel equalization are introduced. For power-quality monitoring applications, novel techniques are proposed for the spectral analysis of power-line signals and for the detection, classification and compression of disturbance events. The proposed techniques are developed in the light of the divide-and-conquer principle, widely used in many fields. The appropriate application of this principle, by means of signal processing and computational intelligence techniques, enables us to offer a more precise analysis of the problems investigated and novel solutions for them. By introducing a set of signal processing and computational intelligence techniques, this work offers improvements for all the problems investigated; numerical results obtained from computational simulations verify these improvements and confirm the relevance of the proposed techniques.

Doutorado
Telecomunicações
Doutor em Engenharia Elétrica
APA, Harvard, Vancouver, ISO, and other styles
44

Constantopoulos, Vassilios. "Highly variable real-time networks: an Ethernet/IP solution and application to railway trains." Doctoral thesis, Universite Libre de Bruxelles, 2006. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210864.

Full text
Abstract:
In this thesis we study the key requirements and solutions for the feasibility and application of Ethernet-TCP/IP technology to the networks we term Highly Variable Real-Time Networks (HVRN). This particular class of networks poses exceptionally demanding requirements because their physical and logical topologies are both temporally and spatially variable. We devised and introduced specific mechanisms for applying Ethernet-TCP/IP to HVRNs, with particular emphasis on effective and reliable modular connectivity. Using a railroad train as a reference, this work analyzes the unique requirements of HVRNs and focuses on the backbone architecture for such a system under Ethernet and TCP/IP.<br>Doctorate in applied sciences
APA, Harvard, Vancouver, ISO, and other styles
45

De, Vega Rodrigo Miguel. "Modeling future all-optical networks without buffering capabilities." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210455.

Full text
Abstract:
In this thesis we provide a model for bufferless optical burst switching (OBS) and optical packet switching (OPS) networks. The thesis is divided into three parts.

In the first part we introduce the basic functionality and structure of OBS and OPS networks. We identify the blocking probability as the main performance parameter of interest.

In the second part we study the statistical properties of the traffic that will likely run through these networks. We use for this purpose a set of traffic traces obtained from the Universidad Politécnica de Catalunya. Our conclusion is that traffic entering the optical domain in future OBS/OPS networks will be long-range dependent (LRD).

In the third part we present the model for bufferless OBS/OPS networks. This model takes into account the results from the second part of the thesis concerning the LRD nature of traffic. It also takes into account specific issues concerning the functionality of a typical bufferless packet-switching network. The resulting model presents scalability problems, so we propose an approximate method to compute the blocking probability from it. We empirically evaluate the accuracy of this method, as well as its scalability.<br>Doctorate in Engineering Sciences
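The abstract identifies the blocking probability as the main performance metric for bufferless networks. As a point of reference only, the classical Erlang B formula sketched below in Python gives the blocking probability for Poisson traffic offered to a set of channels with no buffering; the thesis's own model goes further, accounting for long-range dependent traffic, which Erlang B does not capture. The traffic figures are hypothetical.

```python
def erlang_b(offered_load, servers):
    """Blocking probability for Poisson traffic on `servers` channels, no buffer.

    Uses the numerically stable recursion:
        B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1)),
    where A is the offered load in Erlangs.
    """
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Hypothetical figures: 8 Erlangs offered to 10 wavelengths on one link.
p_block = erlang_b(8.0, 10)
# Adding wavelengths lowers blocking, as expected of a loss system.
assert erlang_b(8.0, 12) < p_block
```

With one channel and one Erlang of offered load, the recursion gives the textbook value of exactly 0.5.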
APA, Harvard, Vancouver, ISO, and other styles
46

Brandt, Ingrid Gisélle. "Models of internet connectivity for secondary schools in the Grahamstown circuit." Thesis, Rhodes University, 2006. http://hdl.handle.net/10962/d1006566.

Full text
Abstract:
Information and Communication Technologies (ICTs) are becoming more pervasive in South African schools and are increasingly considered valuable tools in education, promoting the development of higher cognitive processes and allowing teachers and learners access to a wealth of information. This study investigates models of Internet connectivity for secondary schools in the Grahamstown Circuit. The various networking technologies currently available to South African schools, or likely to become available in the future, are described, along with the telecommunications legislation that governs their use in South Africa. Furthermore, current ICT-in-education projects taking place in South Africa are described, together with current ICT-in-education policy in South Africa. This information forms the backdrop to a detailed survey conducted at all 13 secondary schools in the Grahamstown Circuit, enriched with experimental work on the provision of metropolitan network links to selected schools, mostly via Wi-Fi. The result of the investigation is the proposal of a Grahamstown Circuit Metropolitan Education Network.
APA, Harvard, Vancouver, ISO, and other styles
47

Chen, Kairang. "Energy-Efficient Data Converters for Low-Power Sensors." Doctoral thesis, Linköpings universitet, Elektroniska Kretsar och System, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-133231.

Full text
Abstract:
Wireless sensor networks (WSNs) are employed in many applications, such as monitoring bio-potential signals and environmental information. These applications require high-resolution (> 12-bit) analog-to-digital converters (ADCs) at low sampling rates (several kS/s). Such sensor nodes are usually powered by batteries or energy-harvesting sources, so low power consumption is a primary requirement for these ADCs. Normally, tens or hundreds of autonomously powered sensor nodes are deployed to capture and transmit data to the central processor. It is therefore profitable to fabricate the relevant electronics, such as the ADCs, in a low-cost standard complementary metal-oxide-semiconductor (CMOS) process. The two-stage pipelined successive approximation register (SAR) ADC has been shown to be an energy-efficient architecture for high resolution. This thesis further studies and explores the design limitations of the pipelined SAR ADC for high-resolution, low-speed applications. The first work is a 15-bit, 1 kS/s two-stage pipelined SAR ADC implemented in a 0.35-μm CMOS process. The use of aggressive gain reduction in the residue amplifier, combined with a suitable capacitive-array digital-to-analog converter (DAC) topology in the second stage, simplifies the design of the operational transconductance amplifier (OTA) while eliminating excessive capacitive load and the consequent power consumption. A comprehensive power consumption analysis of the entire ADC is performed to determine the number of bits in each stage of the pipeline. The choice of a segmented capacitive-array DAC for the first stage and an attenuation-capacitor-based DAC for the second stage enables a significant reduction in power consumption and area.
Fabricated in a low-cost 0.35-μm CMOS process, the prototype ADC achieves a peak signal-to-noise-and-distortion ratio (SNDR) of 78.9 dB, corresponding to an effective number of bits (ENOB) of 12.8-bit, at a sampling frequency of 1 kS/s, and provides a Schreier figure-of-merit (FoM) of 157.6 dB. Without any form of calibration, the ADC maintains an ENOB > 12.1-bit up to the Nyquist bandwidth of 500 Hz while consuming 6.7 μW. The core area of the ADC is 0.679 mm2. The second work is a 14-bit, tunable-bandwidth two-stage pipelined SAR ADC suitable for low-power, cost-effective sensor readout circuits. To overcome the high open-loop DC gain requirement of the OTA in the gain-stage, a 3-stage capacitive charge pump (CCP) is used to realize the gain-stage instead of a switched-capacitor (SC) amplifier. Unity-gain OTAs are used as analog buffers to prevent charge sharing between the CCP stages. Detailed design considerations are given in this work. The prototype ADC, designed and fabricated in a low-cost 0.35-μm CMOS process, achieves a peak SNDR of 75.6 dB at a sampling rate of 20 kS/s and 76.1 dB at 200 kS/s, while consuming 7.68 μW and 96 μW, respectively. The corresponding Schreier FoMs are 166.7 dB and 166.3 dB. Since the bandwidth of the CCP is tunable, the ADC maintains an SNDR > 75 dB up to 260 kHz. The core area occupied by the ADC is 0.589 mm2. Because low-power sensors may be active only for a very short time, triggered by an external pulse to acquire data, the third work is a 14-bit asynchronous two-stage pipelined SAR ADC designed and simulated in a 0.18-μm CMOS process. A self-synchronous loop based on an edge detector is used to generate an internal clock with variable phase. A tunable delay element allocates the available time between the switched-capacitor DACs and the gain-stage.
Three separate asynchronous clock generators create the control signals for the two sub-ADCs and the gain-stage between them. To reduce the power consumption of the gain-stage, simple source followers are implemented as the analog buffers in the 3-stage CCP gain-stage. Post-layout simulation results show that the ADC achieves an SNDR of 83.5 dB while consuming 2.39 μW at a sampling rate of 10 kS/s. The corresponding Schreier FoM is 176.7 dB.
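The figures quoted in this abstract can be cross-checked with the standard definitions ENOB = (SNDR − 1.76)/6.02 and Schreier FoM = SNDR + 10·log10(BW/P). The short Python sketch below applies them to the reported prototype numbers:

```python
import math

def enob(sndr_db):
    # Effective number of bits from SNDR: ENOB = (SNDR - 1.76) / 6.02
    return (sndr_db - 1.76) / 6.02

def schreier_fom(sndr_db, bandwidth_hz, power_w):
    # Schreier figure of merit: FoM_S = SNDR + 10*log10(BW / P), in dB
    return sndr_db + 10 * math.log10(bandwidth_hz / power_w)

# First prototype: 78.9 dB SNDR, 500 Hz Nyquist bandwidth, 6.7 uW
print(round(enob(78.9), 1))                        # 12.8 bits
print(round(schreier_fom(78.9, 500, 6.7e-6), 1))   # 157.6 dB
# Second prototype at 20 kS/s: 75.6 dB SNDR, 10 kHz bandwidth, 7.68 uW
print(round(schreier_fom(75.6, 1e4, 7.68e-6), 1))  # 166.7 dB
```

The computed values match the ENOB and FoM figures stated in the abstract, confirming the internal consistency of the reported measurements.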
APA, Harvard, Vancouver, ISO, and other styles
48

Van, Wyk Byron Jay. "E-trust: a building block for developing valuable online platforms in Higher Education." Thesis, Cape Peninsula University of Technology, 2013. http://hdl.handle.net/20.500.11838/1852.

Full text
Abstract:
Thesis submitted in fulfilment of the requirements for the degree Master of Technology Design in the Faculty of Informatics and Design at the Cape Peninsula University of Technology. Supervisor: Prof J Messeter. Cape Town, 2013<br>The aim of this research project was to provide an answer to the question: “How can an understanding of online trust be used to build valuable online applications in Higher Education?” To answer this question, a literature survey was conducted to establish: • an understanding of the phenomenon of online trust • the factors that cause a loss of trust in the online environment. The literature survey highlighted several factors that cause a loss of trust in the online environment, called trust cues. These factors, however, were often tested within the E-commerce environment, and not in organization-specific contexts, such as online platforms in use in Higher Education. To determine whether these factors would influence the development of trust in context-specific environments, the author grouped the identified trust factors into three focus areas, i.e. content, ease of use, and navigation. These factors were then incorporated into a series of nine different prototypes. The prototypes were different versions of a particular online platform currently in use at the Cape Peninsula University of Technology (CPUT). They were tested over a three-week period, with certain staff members at the institution recruited as test participants. During each week of user observations, a different focus area was targeted, in order to establish the impact it would have on the perceived trustworthiness of the platform in question. User observations were conducted while test participants completed a standard process using the various prototypes. Semi-structured interviews were also conducted while participants completed the specific process.
Participants were asked to evaluate each screen in the process according to its perceived trustworthiness, by assigning a trust level score. On completion of the three rounds of user observations, in-depth interviews were conducted with test participants. The participants’ trust level scores for each prototype were captured and graphed, with a detailed description of the score given for a particular screen presented on each graph. These scores were combined to provide an analysis of the focus area tested during the specific round. After the three rounds of user observations were completed, an analysis of all the trust factors tested was done. Data captured during interviews were transcribed, combined with feedback received from questionnaires, and analysed. An interpretation of the results showed that not all trust factors had a similar influence on the development of trust in the online platform under investigation. Trust cues such as content organization, clear instructions and useful content were by far the most significant trust factors, while others such as good visual design elements, professional images of products, and freedom from grammatical and typographical errors had little or no impact on the overall trustworthiness of the platform under investigation. From the analysis it was clear that the development of trust in organization-specific contexts differs significantly from developing trust in an E-commerce environment, and that factors that influence the development of trust in one context might not always be significant in another. In conclusion, it is recommended that when software applications are developed in organization-specific contexts, such as Higher Education, trust factors such as good content organization, clear instructions and useful content be considered the most salient.
Organization-specific contexts differ quite significantly in that the users of these systems often convey a certain degree of trust toward the online platforms that they work with on a daily basis. Trust factors that are geared toward developing an initial or basic trust in a particular platform, which is often the case with first time users engaging in an E-commerce platform, would therefore not be as significant in the development of a more developed level of trust, which is what is needed within the development of organization-specific online platforms.
APA, Harvard, Vancouver, ISO, and other styles
49

Chavan, Rohit. "JAVA synchronized collaborative multimedia toolkit: A collaborative communication tool." CSUSB ScholarWorks, 2004. https://scholarworks.lib.csusb.edu/etd-project/2549.

Full text
Abstract:
In this project a collaborative multimedia toolkit, JSCMT (Java Synchronized Collaborative Multimedia Toolkit), was developed, intended to connect a group of people in different geographical locations who are working on the same project.
APA, Harvard, Vancouver, ISO, and other styles
50

Sanga, Dione Aparecido de Oliveira. "Mineração de textos para o tratamento automático em sistemas de atendimento ao usuário." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2850.

Full text
Abstract:
Abstract (translated from the Portuguese): The explosion of new forms of communication between companies and customers provides new opportunities and means for companies to take advantage of this interaction. The way customers interact with companies has evolved in recent years, owing to the growth of mobile devices and internet access: customers who traditionally requested service by telephone have migrated to electronic service channels, whether via smartphone apps or customer service portals. As a result of this technological transformation of the communication medium, Text Mining has become an attractive way for companies to extract new knowledge from the records of customer interactions. In this context, the telecommunications environment provides the raw material for experiments, given the large volume of data generated daily in customer service systems. This work analyzes whether the use of Text Mining increases the accuracy of Data Mining models in applications involving free text. To this end, an application is developed to identify customers likely to leave internal service channels (CRM) and turn to the regulatory bodies of the telecommunications sector. The main problems encountered in Text Mining applications are also addressed. Finally, the results of applying classification algorithms to different data sets are presented, to assess the improvement obtained by including Text Mining in this type of application. The results show a consolidated accuracy gain of around 32%, making Text Mining a useful tool for this type of problem.<br>The explosion of new forms of communication between companies and customers provides new opportunities and means for companies to take advantage of this interaction.
The way customers interact with companies has evolved in recent years due to the increase in mobile devices and Internet access: clients who traditionally requested phone service have migrated to electronic means of service, whether via smartphone apps or via customer service portals. As a result of this technological transformation of the communication medium, text mining has become an attractive way for companies to extract new knowledge from the register of interactions carried out by customers. Within this context, the telecommunications environment provides the inputs for conducting experiments, due to the large volume of data generated daily in customer service systems. This work aims to analyze whether the use of text mining increases the accuracy of data mining models in applications involving free text. For this purpose, an application is developed to identify clients likely to leave internal service environments (CRM) and migrate to regulatory agencies in the telecommunications sector [Baeza-Yates and Ribeiro-Neto, 1999]. The main problems encountered in text mining applications are also addressed. Finally, the results of applying classification algorithms to different data sets are presented, to evaluate the improvement obtained with the inclusion of text mining for this type of application. The results obtained show a consolidated accuracy gain on the order of 32%, making text mining a useful tool for this type of problem.
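As a minimal illustration of the text-mining step described above (not the thesis's actual pipeline), the Python sketch below computes TF-IDF weights for a toy corpus of tokenized complaint records; such weights turn free text into numeric features that a classifier can consume alongside structured CRM fields. The terms and documents are invented for the example.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a small corpus of tokenized documents.

    tf  = term count / document length
    idf = log(N / document frequency of the term)
    """
    n = len(docs)
    # Document frequency: in how many documents does each term appear?
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# Toy complaint records (tokens invented for illustration).
docs = [
    "sinal internet lento suporte".split(),
    "cobranca indevida fatura suporte".split(),
    "internet instavel sinal fraco".split(),
]
w = tf_idf(docs)
# Terms concentrated in one document get high weight ("cobranca"),
# while terms spread across the corpus get lower weight ("suporte").
```

This is the simplest common weighting scheme; the thesis evaluates full classification algorithms on real CRM data, which this sketch does not attempt to reproduce.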
APA, Harvard, Vancouver, ISO, and other styles