
Dissertations / Theses on the topic 'Digital computing device'


Consult the top 17 dissertations / theses for your research on the topic 'Digital computing device.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Nguyen, Doc Lap. "Digital Receipt System Using Mobile Device Technologies." ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/705.

Abstract:
Cell phones are the most prevalent computing devices. They come pre-loaded with many different functions such as a digital camera, a mobile web browser, a streaming media player, games, GPS navigation, and more. However, if the banks have their way, the cell phone may also become the preferred method of payment for everyday purchases. When that happens, there will be a need to securely send and store receipt information so that it can be quickly analyzed. This thesis demonstrates the use of a Digital Receipt system to manage transactions, using Bluetooth technology to communicate between mobile devices. It expands on a previous thesis, "Bi-Directional Information Exchange with Handheld Computing Devices" (Qaddoura, 2006), by adding cell phones to the setup, thereby extending the Digital Receipt concept to many more affordable computing devices and increasing the likelihood that the application will be accepted by the general public.
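The abstract does not give implementation details; as a rough sketch of the kind of device-to-device exchange it describes, the Python snippet below sends a small receipt payload over a Bluetooth RFCOMM socket with PyBluez. The peer address, channel and receipt fields are hypothetical, and the thesis's actual mobile implementation is not reproduced here.

```python
import json
import bluetooth  # PyBluez; assumes a paired peer device with an open RFCOMM channel

# Hypothetical receipt payload; the real system's fields are not specified in the abstract.
receipt = {
    "merchant": "Example Store",
    "total": "12.34",
    "currency": "USD",
    "timestamp": "2008-05-01T10:15:00",
}

PEER_ADDR = "00:11:22:33:44:55"  # hypothetical Bluetooth address of the receiving phone
PORT = 3                          # hypothetical RFCOMM channel

sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
try:
    sock.connect((PEER_ADDR, PORT))
    sock.send(json.dumps(receipt).encode("utf-8"))  # send the serialized receipt
finally:
    sock.close()
```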
2

Павлов, Андрій Володимирович, Андрей Владимирович Павлов, Andrii Volodymyrovych Pavlov та И. Е. Бурик. "Методика повышения общей устойчивости цифровых систем управления в процессе их синтеза". Thesis, Видавництво СумДУ, 2011. http://essuir.sumdu.edu.ua/handle/123456789/10407.

Abstract:
An important place in the design of digital control systems (DCS) is occupied by the task of developing the operating algorithm of the digital computing device (DCD) that generates the control action applied to the controlled plant, capable of producing at the system output a transient process with an optimal ratio of all of its direct quality indices. The most widespread criterion for selecting DCD operating algorithms in the synthesis of a DCS can be considered the speed-of-response criterion, which allows the control law of the digital controller to be formed in such a way that the discrete step response of the system takes the form of a stable transient process of minimal, finite duration. When citing this document, use the link http://essuir.sumdu.edu.ua/handle/123456789/10407
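Assuming the speed-of-response criterion refers to the usual finite-duration (deadbeat) design, a textbook sketch of how the digital controller's law can be derived is the following, where $W(z)$ is the plant's pulse transfer function, $D(z)$ the digital controller and $\Phi(z)$ the desired closed-loop transfer function (standard material, not taken from the thesis):

```latex
% Deadbeat (finite-settling-time) synthesis sketch for a single control loop.
% W(z): pulse transfer function of the plant, D(z): digital controller,
% \Phi(z): desired closed-loop transfer function.
\[
  \Phi(z) \;=\; \frac{D(z)\,W(z)}{1 + D(z)\,W(z)}
  \qquad\Longrightarrow\qquad
  D(z) \;=\; \frac{\Phi(z)}{W(z)\,\bigl(1 - \Phi(z)\bigr)} .
\]
% Choosing, e.g., \Phi(z) = z^{-n} for a stable, minimum-phase plant of order n
% yields a step response that settles exactly after n sampling periods.
```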
3

Tesfaye, Mussie. "Secure Reprogramming of a Network Connected Device : Securing programmable logic controllers." Thesis, KTH, Kommunikationssystem, CoS, 2012. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-104077.

Abstract:
This is a master's thesis project entitled "Secure reprogramming of network connected devices". The thesis begins by providing some background information to enable the reader to understand the current vulnerabilities of network-connected devices, specifically with regard to cyber security and data integrity. Today, supervisory control and data acquisition systems utilizing network-connected programmable logic controllers are widely used in many industries and critical infrastructures. These network-attached devices have been under increasing attack for some time by malicious attackers (including, in some cases, possibly government-supported efforts). This thesis evaluates currently available solutions to mitigate these attacks. Based upon this evaluation, a new solution based on the Trusted Computing Group's (TCG) Trusted Platform Module (TPM) specification is proposed. This solution utilizes a lightweight version of TPM and TCG's Reliable Computing Machine (RCM) to achieve the desired security. The security of the proposed solution is evaluated both theoretically and using a prototype. This evaluation shows that the proposed solution helps to a great extent to mitigate the previously observed vulnerabilities when reprogramming network-connected devices. The main result of this thesis project is a secure way of reprogramming these network-attached devices so that only a valid user can successfully reprogram the device and no one else can reprogram it (either to return it to an earlier state, perhaps with a known attack vector, or, even worse, to prevent a valid user from programming the device).
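The abstract does not detail the TPM/RCM protocol. As a generic illustration of the core idea, namely that only firmware signed by an authorized key should be accepted for reprogramming, the sketch below verifies an RSA signature over an image with the Python cryptography package; the file names and key-handling scheme are hypothetical, and a real TPM-backed design would anchor the verification key and measurements in hardware.

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def firmware_is_authentic(image_path, sig_path, pubkey_path):
    """Return True only if the firmware image carries a valid RSA-PSS signature."""
    with open(pubkey_path, "rb") as f:
        public_key = serialization.load_pem_public_key(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(
            signature,
            image,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: only flash the controller if the check passes.
if firmware_is_authentic("plc_image.bin", "plc_image.sig", "vendor_pub.pem"):
    print("Signature valid: proceed with reprogramming")
else:
    print("Rejecting unsigned or tampered firmware")
```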
4

Clement, Jeffrey S. "The Spillable Environment: Expanding a Handheld Device's Screen Real Estate and Interactive Capabilities." BYU ScholarsArchive, 2007. https://scholarsarchive.byu.edu/etd/1166.

Abstract:
Handheld devices have a limited amount of screen real estate. If a handheld device could take advantage of larger screens, it would create a more powerful user interface and environment. Moore's law predicts that the computational power of handheld devices will increase dramatically in the future, promoting interaction with a larger screen. Users can then use their peripheral vision to recognize spatial relationships between objects and solve problems more easily with this integrated system. In the spillable environment, the handheld device uses a DiamondTouch table, a large, touch-sensitive horizontal table, to enhance the viewing environment. When the user moves the handheld device on the DiamondTouch, the orientation of the application changes accordingly. A user can let another person see the application by rotating the handheld device in that person's direction. A user could conveniently use this system in a public area. In a business meeting, a user can easily show documents and presentations to the other people around the DiamondTouch table. In an academic setting, a tutor could easily explain a concept to a student. A user could do all of this while keeping all of his or her information on the handheld device. A wide range of applications could be used in these types of settings.
5

Vadivelu, Somasundaram. "Sensor data computation in a heavy vehicle environment : An Edge computation approach." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-235486.

Abstract:
In a heavy vehicle, the internet connection is not reliable, primarily because the truck often travels to remote locations where a network might not be available. The data generated by the sensors in a vehicle might not be sent to the internet when the connection is poor, so it is appropriate to store and perform some basic computation on those data in the heavy vehicle itself and send them to the cloud when there is a good network connection. The process of performing computation near the place where data is generated is called edge computing. Scania has its own edge computing solution, which it uses for computations such as preprocessing of sensor data, storing data, etc. Scania's solution is compared with a commercial edge computing platform, AWS (Amazon Web Services) Greengrass. The comparison covers data efficiency, CPU load, and memory footprint. The conclusion shows that the Greengrass solution works better than the current Scania solution in terms of CPU load and memory footprint; in data efficiency the Scania solution is more efficient than the Greengrass solution, but as the volume of data produced by the truck grows, the Greengrass solution may become competitive with the Scania solution. One more topic explored in this thesis is the digital twin. A digital twin is the virtual form of a physical entity; it can be formed by obtaining real-time values from sensors attached to the physical device. With the help of these sensor values, a system representing an approximate state of the device can be framed, which can then act as the digital twin. The digital twin can be considered an important use case of edge computing and is realized here with the help of AWS Device Shadow.
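A minimal sketch of the Device Shadow mechanism mentioned above, using boto3 to report sensor state for an IoT thing so that the cloud-side shadow can serve as the digital twin; the thing name, region and sensor fields are hypothetical, and the actual Scania/Greengrass setup is not described in the abstract.

```python
import json
import boto3

# Assumes AWS credentials and an existing IoT "thing"; names and values are hypothetical.
client = boto3.client("iot-data", region_name="eu-west-1")

reported_state = {
    "state": {
        "reported": {
            "engine_temp_c": 87.5,    # hypothetical sensor readings from the truck
            "fuel_level_pct": 62,
            "odometer_km": 183422,
        }
    }
}

# Update the device shadow so the cloud-side "digital twin" reflects the vehicle's state.
response = client.update_thing_shadow(
    thingName="truck-001",
    payload=json.dumps(reported_state).encode("utf-8"),
)
print(json.loads(response["payload"].read()))  # echo of the accepted shadow document
```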
6

Blair, James M. "Architectures for Real-Time Automatic Sign Language Recognition on Resource-Constrained Device." UNF Digital Commons, 2018. https://digitalcommons.unf.edu/etd/851.

Abstract:
Powerful, handheld computing devices have proliferated among consumers in recent years. Combined with new cameras and sensors capable of detecting objects in three-dimensional space, new gesture-based paradigms of human computer interaction are becoming available. One possible application of these developments is an automated sign language recognition system. This thesis reviews the existing body of work regarding computer recognition of sign language gestures as well as the design of systems for speech recognition, a similar problem. Little work has been done to apply the well-known architectural patterns of speech recognition systems to the domain of sign language recognition. This work creates a functional prototype of such a system, applying three architectures seen in speech recognition systems, using a Hidden Markov classifier with 75-90% accuracy. A thorough search of the literature indicates that no cloud-based system has yet been created for sign language recognition and this is the first implementation of its kind. Accordingly, there have been no empirical performance analyses regarding a cloud-based Automatic Sign Language Recognition (ASLR) system, which this research provides. The performance impact of each architecture, as well as the data interchange format, is then measured based on response time, CPU, memory, and network usage across an increasing vocabulary of sign language gestures. The results discussed herein suggest that a partially-offloaded client-server architecture, where feature extraction occurs on the client device and classification occurs in the cloud, is the ideal selection for all but the smallest vocabularies. Additionally, the results indicate that for the potentially large data sets transmitted for 3D gesture classification, a fast binary interchange protocol such as Protobuf has vastly superior performance to a text-based protocol such as JSON.
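As a rough illustration of the HMM-based classification stage (not the thesis code), the sketch below trains one Gaussian HMM per gesture class with hmmlearn and labels a new feature sequence by maximum log-likelihood; the feature dimensionality and gesture names are hypothetical.

```python
import numpy as np
from hmmlearn import hmm

def train_gesture_models(training_data, n_states=5):
    """training_data maps gesture label -> list of (T_i, D) feature sequences."""
    models = {}
    for label, sequences in training_data.items():
        X = np.vstack(sequences)                    # stack all frames of this class
        lengths = [len(seq) for seq in sequences]   # per-sequence lengths for the EM fit
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify(models, sequence):
    """Pick the gesture whose HMM assigns the highest log-likelihood to the sequence."""
    return max(models, key=lambda label: models[label].score(sequence))

# Hypothetical usage with random 10-dimensional hand-feature vectors.
rng = np.random.default_rng(0)
data = {
    "hello": [rng.normal(0.0, 1.0, (30, 10)) for _ in range(5)],
    "thanks": [rng.normal(1.0, 1.0, (30, 10)) for _ in range(5)],
}
models = train_gesture_models(data)
print(classify(models, rng.normal(1.0, 1.0, (30, 10))))
```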
7

Wang, Changhai. "Optical bistability studies of liquid crystal based devices." Thesis, Heriot-Watt University, 1991. http://hdl.handle.net/10399/872.

8

Reichenbacher, Tumasch. "Mobile cartography : adaptive visualisation of geographic information on mobile devices /." München : Verlag Dr. Hut, 2004. http://purl.fcla.edu/UF/lib/MobileCartography.pdf.

9

Azevedo, Bernardo Lopes de Sá. "Reconstrução/processamento de imagem médica com GPU em tomossíntese." Master's thesis, Faculdade de Ciências e Tecnologia, 2011. http://hdl.handle.net/10362/7503.

Abstract:
Dissertation submitted for the degree of Master in Biomedical Engineering. Digital Breast Tomosynthesis (DBT) is a recent three-dimensional medical imaging technique based on digital mammography that allows better observation of overlapping tissues, especially in dense breasts. The technique consists of obtaining multiple images (slices) of the volume to be reconstructed, thereby allowing a more effective diagnosis, since the various tissues are not superimposed in a 2D image. The image reconstruction algorithms used in DBT are quite similar to those used in Computed Tomography (CT). There are two classes of image reconstruction algorithms: analytical and iterative. In this work, two iterative reconstruction algorithms were implemented: Maximum Likelihood – Expectation Maximization (ML-EM) and Ordered Subsets – Expectation Maximization (OS-EM). Iterative algorithms yield better results but are computationally very demanding, which is why analytical algorithms have been preferred in clinical practice. With technological advances in computing, it is now possible to considerably reduce the time needed to reconstruct an image with an iterative algorithm. The algorithms were implemented using General-Purpose computing on Graphics Processing Units (GPGPU). This technique allows a graphics card (GPU – Graphics Processing Unit) to process tasks usually assigned to the computer's processor (CPU – Central Processing Unit), instead of the graphics processing tasks GPUs are usually associated with. For this project an NVIDIA® GPU was used, with the Compute Unified Device Architecture (CUDA™) employed to code the reconstruction algorithms. The results showed that the GPU implementation reduced the reconstruction time by approximately 6.2 times relative to the time obtained on the CPU. Regarding image quality, the GPU reached a level of detail similar to the CPU images, with only minor differences.
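For reference, the ML-EM update named above has a compact generic form; the NumPy sketch below implements it for a small dense system matrix, a toy stand-in for the thesis's CUDA implementation. OS-EM applies the same multiplicative update cyclically over ordered subsets of the measurements to accelerate convergence.

```python
import numpy as np

def mlem(A, y, n_iters=50, eps=1e-12):
    """Standard ML-EM update: x <- x / (A^T 1) * A^T ( y / (A x) )."""
    n_pixels = A.shape[1]
    x = np.ones(n_pixels)                 # non-negative initial estimate
    sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
    for _ in range(n_iters):
        forward = A @ x                   # forward projection of the current estimate
        ratio = y / np.maximum(forward, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Hypothetical toy problem: 40 noiseless measurements of a 25-pixel "image".
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, (40, 25))
x_true = rng.uniform(0.0, 5.0, 25)
y = A @ x_true
print(np.round(mlem(A, y, n_iters=200)[:5], 2), np.round(x_true[:5], 2))
```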
10

Ruokamo, A. (Ari). "Parallel computing and parallel programming models:application in digital image processing in mobile systems and personal mobile devices." Bachelor's thesis, University of Oulu, 2018. http://urn.fi/URN:NBN:fi:oulu-201802271269.

Abstract:
Today, powerful parallel computer architectures empower numerous application areas in personal computing and consumer electronics, and parallel computation is an established mainstay in personal mobile devices (PMDs). During the last ten years, PMDs have been equipped with increasingly powerful parallel computation architectures (CPU+GPU), enabling rich gaming, photography and multimedia experiences and, ultimately, general-purpose parallel computation through application programming interfaces. This study reviews the current status of parallel computing and parallel programming, and specifically its application and practices in digital image processing in the domain of Mobile Systems (MS) and Personal Mobile Devices (PMD). The application of parallel computing and programming has become more common with changing user-application requirements and the increased demand for sustained high-performance applications and functionality. Furthermore, the shift of data consumption in personal computing towards PMDs and mobile devices has attracted increasing interest. The history of parallel computation in MS and PMD is a relatively new topic in academia and industry. The literature study revealed that, while a good amount of new application-specific research is emerging in this domain, the foundations of dominant and common parallel programming paradigms in the area of MS and PMD are still moving targets.
11

Homem, Irvin. "Towards Automation in Digital Investigations : Seeking Efficiency in Digital Forensics in Mobile and Cloud Environments." Licentiate thesis, Stockholms universitet, Institutionen för data- och systemvetenskap, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-130742.

Abstract:
Cybercrime and related malicious activity in our increasingly digital world has become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these challenges include the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not aided by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, the industry-standard tools developed are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reducing the time and human labour expended, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed in order to further automate the architecture by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture, through harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation.
Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication thereby enabling automation and reducing the need for manual human intervention.
12

Chenzira, Ayoka. "Haptic cinema: an art practice on the interactive digital media tabletop." Diss., Georgia Institute of Technology, 2011. http://hdl.handle.net/1853/39500.

Abstract:
Common thought about cinema calls to mind an audience seated in a darkened theatre watching projected moving images that unfold a narrative onto a single screen. Cinema is much more than this. There is a significant history of artists experimenting with the moving image outside of its familiar setting in a movie theatre. These investigations are often referred to as "expanded cinema". This dissertation proposes a genre of expanded cinema called haptic cinema, an approach to interactive narrative that emphasizes material object sensing, identification and management; viewer's interaction with material objects; multisequential narrative; and the presentation of visual and audio information through multiple displays to create a sensorially rich experience for viewers. The interactive digital media tabletop is identified as one platform on which to develop haptic cinema. This platform supports a subgenre of haptic cinema called tabletop cinema. Expanded cinema practices are analyzed for their contributions to haptic cinema. Based on this theoretical and artistic research, the thesis claims that haptic cinema contributes to the historical development of expanded cinema and interactive cinema practices. I have identified the core properties of a haptic cinema practice during the process of designing, developing and testing a series of haptic cinema projects. These projects build on and make use of methods and conventions from tangible interfaces, tangible narratives and tabletop computing.
13

Yoo, Heejong. "Low-Power Audio Input Enhancement for Portable Devices." Diss., Georgia Institute of Technology, 2005. http://hdl.handle.net/1853/6821.

Abstract:
With the development of VLSI and wireless communication technology, portable devices such as personal digital assistants (PDAs), pocket PCs, and mobile phones have gained a lot of popularity. Many such devices incorporate a speech recognition engine, enabling users to interact with the devices using voice-driven commands and text-to-speech synthesis. The power consumption of DSP microprocessors has been consistently decreasing by half about every 18 months, following Gene's law. The capacity of signal processing, however, is still significantly constrained by the limited power budget of these portable devices. In addition, analog-to-digital (A/D) converters can also limit the signal processing of portable devices. Many systems require very high-resolution and high-performance A/D converters, which often consume a large fraction of the limited power budget of portable devices. The proposed research develops a low-power audio signal enhancement system that combines programmable analog signal processing and traditional digital signal processing. By utilizing analog signal processing based on floating-gate transistor technology, the power consumption of the overall system as well as the complexity of the A/D converters can be reduced significantly. The system can be used as a front end of portable devices in which enhancement of audio signal quality plays a critical role in automatic speech recognition systems on portable devices. The proposed system performs background audio noise suppression in a continuous-time domain using analog computing elements and acoustic echo cancellation in a discrete-time domain using an FPGA.
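The acoustic echo cancellation stage mentioned above is commonly realized as an adaptive FIR filter; the normalized LMS sketch below is a generic textbook version, not the thesis's FPGA implementation, and the filter length, step size and test signals are hypothetical.

```python
import numpy as np

def nlms_echo_canceller(far_end, mic, n_taps=64, mu=0.5, eps=1e-8):
    """Cancel the echo of far_end present in mic using a normalized LMS adaptive filter."""
    w = np.zeros(n_taps)                 # adaptive filter weights (echo-path estimate)
    out = np.zeros(len(mic))             # echo-cancelled output
    for n in range(n_taps, len(mic)):
        x = far_end[n - n_taps:n][::-1]  # most recent far-end samples, newest first
        echo_est = w @ x                 # estimated echo at this sample
        e = mic[n] - echo_est            # error = near-end signal + residual echo
        w += mu * e * x / (x @ x + eps)  # NLMS weight update
        out[n] = e
    return out

# Hypothetical test: the microphone picks up a delayed, attenuated copy of the far-end signal.
rng = np.random.default_rng(2)
far = rng.normal(size=8000)
mic = 0.6 * np.concatenate([np.zeros(10), far[:-10]]) + 0.01 * rng.normal(size=8000)
residual = nlms_echo_canceller(far, mic)
print(float(np.var(mic[1000:])), float(np.var(residual[1000:])))  # residual power should be far lower
```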
14

Шатний, Сергій В'ячеславович. "Інформаційна технологія обробки та аналізу кардіосигналів з використанням нейронної мережі". Diss., Національний університет "Львівська політехніка", 2021. https://ena.lpnu.ua/handle/ntb/56259.

Abstract:
The dissertation is devoted to the development and improvement of models, methods and tools of an information technology for electrocardiogram (ECG) processing: increasing the speed and accuracy of cardiac signal processing, reducing the size and power consumption of the system intended for such processing, and implementing the system on analog and digital element bases. The relevance of the topic is substantiated, the purpose and main research tasks are formulated, the scientific novelty of the work and the practical significance of the obtained results are determined, and the connection of the work with scientific topics is shown. Information on the approbation of the results, the author's personal contribution and his publications is given. It was found that the efficiency of processing and analysis depends on the quality of signal pre-processing and on the nature of the signal itself. Analysis of approaches to the construction of biomedical signal processing systems showed the need to increase their efficiency. Analysis of existing cardiac signal processing systems showed that most of them have insufficient classification accuracy (not higher than 75%), low speed and high equipment cost caused by the monopoly of the manufacturing companies. A method for analysing the electrocardiogram by determining the amplitude and duration of each of the P, Q, R, S and T segments is presented. The method of cardiac signal pre-processing has been improved through the use of neural networks for identification and filtering, and the classification of cardiac signals has been improved by using a partially parallel fuzzy neural network. Software and hardware implementations of the information technology for cardiac signal processing were developed, including structural-functional and circuit-level schemes of input signal processing based on microcontrollers and programmable logic integrated circuits, and the data exchange between the structural elements of the system was modelled and optimized. The processing and analysis system was developed using open, free and conditionally free software, in particular the GCC toolchain and the NI LabVIEW visual programming and simulation environment. Programmable logic integrated circuits and field-programmable gate arrays were selected as the hardware platform: the NI RIO platform was used for software and hardware simulation, and a platform based on Microchip microcontrollers and Altera FPGAs was selected to create, design and implement the prototype. Specialized software products for ECG pre-processing and analysis were developed, as well as server-side tools for a remote web system implementing the logical "doctor-patient" interaction model. Comparative analyses against existing software and hardware platforms for cardiac signal processing, in particular the Holter device, show reduced energy consumption, increased accuracy of cardiac signal analysis, reduced reading errors and a more compact system. Overall, the proposed tools allow a full range of medical research to be carried out and the developed system to be implemented in medical and scientific institutions.
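As a minimal illustration of the P-QRS-T analysis described above (not the dissertation's neural-network pipeline), the sketch below detects R peaks in an ECG trace with scipy.signal.find_peaks; the sampling rate, thresholds and synthetic signal are hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 360  # hypothetical sampling rate, Hz

def detect_r_peaks(ecg, fs=FS):
    """Locate R peaks: the tallest, well-separated deflections of the QRS complex."""
    ecg = ecg - np.mean(ecg)                # remove baseline offset
    peaks, _ = find_peaks(
        ecg,
        height=0.5 * np.max(ecg),           # hypothetical amplitude threshold
        distance=int(0.25 * fs),            # enforce a ~250 ms refractory period
    )
    return peaks

# Hypothetical synthetic ECG: one sharp positive spike per second plus noise.
t = np.arange(0, 10, 1 / FS)
ecg = np.zeros_like(t)
ecg[np.arange(0, len(t), FS)] = 1.0         # "R peaks" at 1 s intervals
ecg += 0.05 * np.random.default_rng(3).normal(size=len(t))
r_peaks = detect_r_peaks(ecg)
print(len(r_peaks), "beats, mean RR interval ≈", np.mean(np.diff(r_peaks)) / FS, "s")
```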
15

Singh, Preeti. "Modeling Context-Adaptive Energy-Aware Security in Mobile Devices." UNF Digital Commons, 2019. https://digitalcommons.unf.edu/etd/883.

Abstract:
As increasing functionality in mobile devices leads to rapid battery drain, energy management has gained increasing importance. However, differences in users' usage contexts and patterns can be leveraged for saving energy. On the other hand, the increasing sensitivity of users' data, coupled with the need to ensure security in an energy-aware manner, demands careful analysis of the trade-offs between energy and security. The research described in this thesis addresses this challenge by 1) modeling the problem of context-adaptive energy-aware security as a combinatorial optimization problem (Context-Sec); 2) proving that the decision version of this problem is NP-complete, via a reduction from a variant of the well-known Knapsack problem; 3) developing three different algorithms to solve a related offline version of Context-Sec; and 4) implementing, testing and comparing the performance of the three algorithms with data sets derived from real-world smartphones on wireless networks. The first algorithm presented is a pseudo-polynomial dynamic programming (DP) algorithm that computes an allocation with optimal user benefit using a recurrence relation; the second algorithm is a greedy heuristic that allocates security levels based on user benefit per unit of power consumption for each level; and the third algorithm is a Fully Polynomial Time Approximation Scheme (FPTAS), which has a polynomial-time execution complexity as opposed to the pseudo-polynomial DP-based approach. To the best of the researcher's knowledge, this is the first work focused on the modeling, design, implementation and experimental performance evaluation of context-adaptive energy-aware security.
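The pseudo-polynomial DP described above is closely related to the classic 0/1 knapsack recurrence. The sketch below shows that generic recurrence, not the thesis's exact Context-Sec formulation, with "weight" standing in for a security configuration's power cost and "value" for its user benefit.

```python
def knapsack_dp(weights, values, capacity):
    """Pseudo-polynomial 0/1 knapsack: maximize total value within a power budget."""
    n = len(weights)
    # dp[w] = best value achievable with items considered so far and total weight <= w
    dp = [0] * (capacity + 1)
    for i in range(n):
        # iterate weights downwards so each item is used at most once
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], dp[w - weights[i]] + values[i])
    return dp[capacity]

# Hypothetical security options: (power cost, user benefit) pairs and a power budget of 9 units.
costs = [3, 4, 2, 5]
benefits = [8, 10, 4, 12]
print(knapsack_dp(costs, benefits, capacity=9))  # best total benefit within the budget
```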
16

Nishibe, Caio Arce. "Central de confrontos para um sistema automático de identificação biométrica: uma abordagem de implementação escalável." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/3142.

Abstract:
With the popularization of biometrics, personal identification is an increasingly common activity in several contexts: physical and logical access control, border control, criminal and forensic identification, and payments. Thus, there is a growing demand for faster and more accurate Automatic Biometric Identification Systems (ABIS) capable of handling a large volume of biometric data. This work presents an approach to implementing a scalable cluster-based matching platform for a large-scale ABIS using an in-memory computing framework. We conducted experiments involving a database with more than 50 million captured fingerprints on a cluster of up to 16 nodes. The results show the scalability of the proposed solution and its capability to handle a large biometric database.
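The abstract does not name the in-memory computing framework; as one hypothetical realization, the sketch below uses PySpark to distribute 1:N match-score computation across a cluster, with a placeholder similarity function standing in for a real fingerprint matcher.

```python
from pyspark.sql import SparkSession

def match_score(probe_template, gallery_template):
    """Placeholder similarity; a real ABIS would call a minutiae-based matcher here."""
    return sum(a == b for a, b in zip(probe_template, gallery_template))

spark = SparkSession.builder.appName("abis-matching-sketch").getOrCreate()
sc = spark.sparkContext

# Hypothetical gallery of (subject_id, template) pairs, cached in memory across the cluster.
gallery = [(i, [i % 7, i % 5, i % 3, i % 2]) for i in range(100_000)]
gallery_rdd = sc.parallelize(gallery, numSlices=64).cache()

probe = [1, 2, 0, 1]                 # hypothetical probe template
probe_bc = sc.broadcast(probe)       # ship the probe to every executor once

# Score the probe against every gallery entry in parallel and keep the best candidates.
top_candidates = (
    gallery_rdd
    .map(lambda rec: (rec[0], match_score(probe_bc.value, rec[1])))
    .top(10, key=lambda rec: rec[1])
)
print(top_candidates)
spark.stop()
```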
17

Magalhães, Tiago Emanuel da Cunha. "Spatial Coherence Mapping of Structured Astrophysical Sources." Doctoral thesis, 2019. http://hdl.handle.net/10451/45596.

Abstract:
All optical fields that we encounter in nature or in the laboratory have random fluctuations. Although light emerging from lasers can be considered a "well-behaved" electromagnetic field, that is certainly not the case for natural sources such as stars. Thus, they must be treated statistically using the theory of coherence, in particular, second-order statistics. The Mutual Coherence Function (MCF) and the Cross-Spectral Density Function (CSDF) are central quantities in the space-time and space-frequency domains, respectively, in the theory of coherence. Both quantities are connected through a Fourier transform. Moreover, all second-order optical quantities can be extracted from these central functions, for example, the intensity distribution and the spectral degree of coherence. Since, in general, the MCF and the CSDF change throughout propagation, all second-order optical quantities, such as the spectral density, also change throughout propagation. When the far-field normalized spectrum of light changes due to source correlations, we say that coherence-induced spectral changes occurred. This is known as the Wolf effect, and it is the driving force of this dissertation. In this thesis, we have investigated the use of heterogeneous computing for the propagation of partially coherent light, namely, the propagation of the CSDF. The main goal was to reduce the computation time. By defining the CSDF at the source plane, the software built is able to propagate the CSDF and retrieve second-order optical quantities such as the spectral density and the spectral degree of coherence. The implementation of this software was then used to perform numerical simulations of the propagation of the far-field normalized spectrum of planar sources. The main goal was to evaluate the presence of the Wolf effect in specific source models. The results obtained suggest that the far-field spectrum of source models which do not have analytical solutions can be computed using our implementation. We next designed, to first order, a conceptual space-based instrument, named the Solar Coherence Instrument (SCI), capable of performing spatial coherence measurements of individual solar granular cells (granules) present in the photosphere of the Sun. Two digital micromirror devices, which are reflective-type spatial light modulators, form the basis of our design. A signal-to-noise ratio estimation (> 10²) was performed and the results point to the feasibility of such an instrument. We then validated experimentally two crucial subsystems of the SCI, namely the subsystem responsible for selective imaging of a single solar granule and the subsystem responsible for spatial coherence measurements. In both cases, experiments were designed and constructed, and the results obtained are presented and discussed. By comparing the spatial coherence measurement results with those expected from the van Cittert-Zernike theorem, we obtained good agreement, suggesting that such a configuration is possible.
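For reference, the van Cittert-Zernike theorem against which the spatial coherence measurements were compared can be stated, up to a geometric phase factor, in the following standard form (generic notation, not the thesis's):

```latex
% van Cittert-Zernike theorem (requires amsmath for \boldsymbol): for a spatially
% incoherent, quasi-monochromatic planar source with intensity distribution I(\rho),
% the equal-time complex degree of coherence between two far-field points P_1 and P_2
% is the normalized Fourier transform of I.
\[
  \mu_{12} \;=\;
  \frac{\displaystyle\int I(\boldsymbol{\rho})\,
        e^{-\,i k\,\boldsymbol{\rho}\cdot(\mathbf{r}_2-\mathbf{r}_1)/z}\,\mathrm{d}^{2}\rho}
       {\displaystyle\int I(\boldsymbol{\rho})\,\mathrm{d}^{2}\rho},
  \qquad k=\frac{2\pi}{\lambda},
\]
% where z is the distance from the source plane to the observation plane,
% \mathbf{r}_1, \mathbf{r}_2 are the transverse positions of P_1 and P_2,
% and \lambda is the mean wavelength of the light.
```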