Dissertations / Theses on the topic 'Mass communication|Artificial intelligence|Computer science'

Consult the top 46 dissertations / theses for your research on the topic 'Mass communication|Artificial intelligence|Computer science.'
1

Singh, Anurag. "Multi-Resolution Superpixels for Visual Saliency Detection in a Large Image Collection." Thesis, University of Louisiana at Lafayette, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3718565.

Abstract:

Finding what attracts attention is an important task in visual processing. Visual saliency detection locates the focus of visual attention on the most important, stand-out object in an image or video sequence. These stand-out objects are composed of regions, or superpixels: clusters of pixels bound by the Gestalt principles of perceptual grouping, which also simulate the clustered nature of human fixations. The visual saliency detection algorithms presented in the dissertation build on the premise that salient regions are high in color contrast and stand out when compared to other regions.

The most intuitive way to find a salient region is to compare it to every other region: each region is ranked by its dissimilarity to the others, and the statistically salient regions are highlighted in proportion to their rank. Another approach compares each region to its local surroundings: each region is represented by its Dominant Color Descriptor, and the color difference between neighbors is measured using the Earth Mover's Distance. A multi-resolution framework ensures robustness to object size, location, and background type.

Image saliency detection using region contrast is often based on the premise that a salient region contrasts with the background, but the natural biological process involves comparison to a large collection of similar regions. A novel method is presented to efficiently compare an image region to regions derived from a large, stored collection of images, and video saliency is then derived as a special case of a large collection with temporal reference. The methods presented in the dissertation are tested on publicly available data sets and perform better than existing state-of-the-art methods.
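As a rough illustration of the region-contrast premise, the sketch below scores precomputed superpixels by their color distance to all other regions; the plain Euclidean distance in Lab space and the size weighting are simplifying assumptions standing in for the dissertation's Dominant Color Descriptor and Earth Mover's Distance pairing.

```python
import numpy as np

def region_contrast_saliency(mean_colors, region_sizes):
    """Score each superpixel by its color contrast against all others.

    mean_colors:  (N, 3) array of per-region mean Lab colors (assumed
                  precomputed by a superpixel algorithm such as SLIC).
    region_sizes: (N,) array of pixel counts, used to weight comparisons.
    Returns saliency scores normalized to [0, 1].
    """
    colors = np.asarray(mean_colors, dtype=float)
    weights = np.asarray(region_sizes, dtype=float)

    # Pairwise Euclidean color distances (a simple stand-in for EMD
    # between Dominant Color Descriptors).
    diff = colors[:, None, :] - colors[None, :, :]
    dist = np.linalg.norm(diff, axis=2)          # (N, N)

    # A region is salient if it is far, in color, from large regions.
    saliency = (dist * weights[None, :]).sum(axis=1)
    saliency -= saliency.min()
    return saliency / (saliency.max() + 1e-12)

# Example: three similar dark regions and one bright one -> the bright region wins.
scores = region_contrast_saliency(
    mean_colors=[[20, 5, 5], [22, 6, 4], [21, 5, 6], [80, 30, 40]],
    region_sizes=[500, 450, 520, 60],
)
```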

2

Gautam, Kumar. "Computer Vision-based Estimation of Body Mass Distribution, Center of Mass, and Body Types: Design and Comparative Study." Thesis, California State University, Long Beach, 2018. http://pqdtopen.proquest.com/#viewpdf?dispub=10838305.

Abstract:

Body mass distribution and center of mass (CoM) are important topics in human biomechanics and the healthcare industry. Increasing global obesity has led researchers to measure body parameters. This project focuses on developing an automatic computer vision approach to calculate body mass distribution and CoM, and to identify body types, with minimal setup cost.

In this project, a 3-D calibrated experimental setup was devised to take images of four male subjects in three views: front, left side, and right side. First, a method was devised to separate the human subject from the background. Second, a novel approach was developed to find the CoM, percentage body mass distribution, and body types using two models: the Simulated Skeleton Model (SSM) and the Simulated Skeleton Matrix (SSMA). The CoM found with this method was 94.36% of the CoM calculated with a reaction board experiment, and the total body mass was 96.6% of the total body mass measured with a weighing balance. The project has three components: (1) finding the body mass distribution and comparing the results with the weighing balance, (2) finding the CoM and comparing the results with the reaction board experiment, and (3) offering new ways to conceptualize the three body types (ectomorph, endomorph, and mesomorph) with ratings in the range of 0 to 5.
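The silhouette-based centre-of-mass step can be illustrated with simple image moments; this is a minimal sketch assuming uniform mass per silhouette pixel, not the thesis's SSM/SSMA models or its calibrated three-view fusion.

```python
import numpy as np

def silhouette_com(mask):
    """2-D centre of mass of a binary silhouette via image moments.

    mask: (H, W) boolean array, True where the subject was segmented
          from the background. Assumes uniform mass per pixel, which is
          a simplification of per-segment mass distribution models.
    Returns (row, col) of the centroid.
    """
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        raise ValueError("empty silhouette")
    return ys.mean(), xs.mean()

# Front view gives (x, y); a calibrated side view would supply depth (z),
# letting the three views be fused into a 3-D centre of mass.
mask = np.zeros((100, 60), dtype=bool)
mask[20:90, 15:45] = True          # crude torso-shaped blob
com_row, com_col = silhouette_com(mask)
```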

3

Sim, Robert. "On visual maps and their automatic construction." Thesis, McGill University, 2004. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=84842.

Abstract:
This thesis addresses the problem of automatically constructing a visual representation of an unknown environment that is useful for robotic navigation, localization and exploration. There are two main contributions. First, the concept of the visual map is developed, a representation of the visual structure of the environment, and a framework for learning this structure is provided. Second, methods for automatically constructing a visual map are presented for the case when limited information is available about the position of the camera during data collection.
The core concept of this thesis is that of the visual map, which models a set of image-domain features extracted from a scene. These are initially selected using a measure of visual saliency, and subsequently modelled and evaluated for their utility for robot pose estimation. Experiments are conducted demonstrating the feature learning process and the inferred models' reliability for pose inference.
The second part of this thesis addresses the problem of automatically collecting training images and constructing a visual map. First, it is shown that visual maps are self-organizing in nature, and the transformation between the image and pose domains is established with minimal prior pose information. Second, it is shown that visual maps can be constructed reliably in the face of uncertainty by selecting an appropriate exploration strategy. A variety of such strategies are presented and these approaches are validated experimentally in both simulated and real-world settings.
4

Hartmann, William. "ASR-Driven Binary Mask Estimation for Robust Automatic Speech Recognition." The Ohio State University, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=osu1338244649.

5

Navaroli, Nicholas Martin. "Generative Probabilistic Models for Analysis of Communication Event Data with Applications to Email Behavior." Thesis, University of California, Irvine, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=3668831.

Abstract:

Our daily lives increasingly involve interactions with others via different communication channels, such as email, text messaging, and social media. In this context, the ability to analyze and understand our communication patterns is becoming increasingly important. This dissertation focuses on generative probabilistic models for describing different characteristics of communication behavior, focusing primarily on email communication.

First, we present a two-parameter kernel density estimator for estimating the probability density over the recipients of an email (or, more generally, items that appear in an itemset). A stochastic gradient method is proposed for efficiently inferring the kernel parameters given a continuous stream of data. Next, we apply the kernel model and the Bernoulli mixture model to two important prediction tasks: given a partially completed email recipient list, 1) predict which others will be included in the email, and 2) rank potential recipients by their likelihood of being added to the email. Such predictions are useful for suggesting future actions to the user (e.g., which person to add to an email) based on their previous actions. We then investigate a piecewise-constant Poisson process model for describing the time-varying communication rate between an individual and several groups of their contacts, where changes in the Poisson rate are modeled as latent state changes within a hidden Markov model.

We next focus on the time it takes for an individual to respond to an event, such as receiving an email. We show that this response time depends heavily on the individual's typical daily and weekly patterns - patterns not adequately captured in standard models of response time (e.g. the Gamma distribution or Hawkes processes). A time-warping mechanism is introduced where the absolute response time is modeled as a transformation of effective response time, relative to the daily and weekly patterns of the individual. The usefulness of applying the time-warping mechanism to standard models of response time, both in terms of log-likelihood and accuracy in predicting which events will be quickly responded to, is illustrated over several individual email histories.
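As a toy illustration of the recipient-prediction task, the sketch below ranks candidate recipients by raw co-occurrence counts with a partially completed list; the dissertation's kernel density and Bernoulli mixture estimators are far more refined, so treat this purely as an assumed baseline.

```python
from collections import Counter
from itertools import combinations

def train_cooccurrence(histories):
    """Count how often each pair of recipients appears on the same email."""
    pair_counts = Counter()
    for recipients in histories:
        for a, b in combinations(sorted(set(recipients)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def rank_candidates(partial, candidates, pair_counts):
    """Rank candidates by total co-occurrence with the partial list."""
    def score(c):
        return sum(pair_counts[tuple(sorted((c, r)))] for r in partial)
    return sorted(candidates, key=score, reverse=True)

history = [["alice", "bob"], ["alice", "bob", "carol"], ["bob", "dave"]]
pairs = train_cooccurrence(history)
# Given an email already addressed to alice, who is likely to be added next?
ranking = rank_candidates(["alice"], ["bob", "carol", "dave"], pairs)
```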

6

Taylor, Julia Michelle. "Towards Informal Computer Human Communication: Detecting Humor in a Restricted Domain." Cincinnati, Ohio : University of Cincinnati, 2008. http://rave.ohiolink.edu/etdc/view.cgi?acc_num=ucin1226600183.

Abstract:
Thesis (Ph.D.)--University of Cincinnati, 2008.
Advisor: Lawrence J. Mazlack. Title from electronic thesis title page (viewed Feb. 16, 2009). Keywords: artificial intelligence; computational humor; natural language understanding. Includes abstract. Includes bibliographical references.
7

Shi, Shaohuai. "Communication optimizations for distributed deep learning." HKBU Institutional Repository, 2020. https://repository.hkbu.edu.hk/etd_oa/813.

Abstract:
With the increasing amount of data and growing computing power, deep learning techniques using deep neural networks (DNNs) have been successfully applied in many practical artificial intelligence applications. The mini-batch stochastic gradient descent (SGD) algorithm and its variants are the most widely used algorithms for training deep models. SGD is an iterative algorithm that updates the model parameters many times by traversing the training data, which is very time-consuming even on a single powerful GPU or TPU. It has therefore become common practice to exploit multiple processors (e.g., GPUs or TPUs) to accelerate training using distributed SGD. However, the iterative nature of distributed SGD requires the processors to communicate with each other repeatedly to collaboratively update the model parameters. This intensive communication cost easily becomes the system bottleneck and limits scalability. In this thesis, we study communication-efficient techniques for distributed SGD to improve system scalability and thus accelerate training. We identify the performance issues in distributed SGD through benchmarking and modeling, and then propose several communication optimization algorithms to address them.

First, we build a performance model with a directed acyclic graph (DAG) to model the training process of distributed SGD, and verify the model with extensive benchmarks on existing state-of-the-art deep learning frameworks, including Caffe, MXNet, TensorFlow, and CNTK. Our benchmarking and modeling show that existing optimizations for the communication problems are sub-optimal, which we address in this thesis.

Second, to address the startup problem (due to the high latency of each communication) of layer-wise communications with wait-free backpropagation (WFBP), we propose an optimal gradient-merging solution for WFBP, named MG-WFBP, which exploits the layer-wise property to overlap communication tasks with computing tasks and adapts to the training environment. Experiments conducted on dense-GPU clusters with Ethernet and InfiniBand show that MG-WFBP addresses the startup problem in distributed training of layer-wise structured DNNs.

Third, to make highly compute-intensive training tasks feasible on GPU clusters with low-bandwidth interconnects, we investigate gradient compression techniques in distributed training. Top-k sparsification can compress the communication traffic with little impact on model convergence, but it suffers from a communication complexity that is linear in the number of workers, so it cannot scale well in large clusters. To address this problem, we propose a global top-k (gTop-k) sparsification algorithm that reduces the communication complexity to logarithmic in the number of workers. We also provide a detailed theoretical analysis of the gTop-k SGD training algorithm, showing that gTop-k SGD has the same order of convergence rate as SGD. Experiments on a cluster of up to 64 GPUs verify that gTop-k SGD significantly improves system scalability with only a slight impact on model convergence.

Lastly, to enjoy the benefits of both the pipelining technique and gradient sparsification, we propose a new distributed training algorithm, layer-wise adaptive gradient sparsification SGD (LAGS-SGD), which supports layer-wise sparsification and communication, and we prove theoretically and empirically that LAGS-SGD preserves the convergence properties. To further alleviate the startup problem of layer-wise communications in LAGS-SGD, we also propose an optimal gradient-merging solution for LAGS-SGD, named OMGS-SGD, and theoretically prove its optimality. Experimental results on a 16-node GPU cluster connected with 1 Gbps Ethernet show that OMGS-SGD consistently improves system scalability without affecting the model's convergence properties.
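The top-k sparsification step at the centre of the gTop-k discussion can be sketched as follows; the flattened gradient, the error-feedback residual, and the parameter names are illustrative assumptions, and the actual gTop-k exchange of the selected values across workers is elided.

```python
import numpy as np

def topk_sparsify(grad, residual, k):
    """Keep the k largest-magnitude gradient entries; bank the rest locally.

    grad:     flattened local gradient for this iteration.
    residual: error accumulated from entries not sent in earlier iterations.
    Returns (indices, values, new_residual); only indices and values need
    to be exchanged, e.g. by a tree-based gTop-k scheme.
    """
    acc = grad + residual                          # error-feedback accumulation
    idx = np.argpartition(np.abs(acc), -k)[-k:]    # top-k by magnitude
    values = acc[idx]
    new_residual = acc.copy()
    new_residual[idx] = 0.0                        # sent entries are cleared
    return idx, values, new_residual

# Toy usage: a million-parameter gradient compressed to 1,000 values.
grad = np.random.randn(1_000_000).astype(np.float32)
residual = np.zeros_like(grad)
idx, vals, residual = topk_sparsify(grad, residual, k=1000)
```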
8

Shackelford, Philip Clayton. "On the Wings of the Wind: The United States Air Force Security Service and Its Impact on Signals Intelligence in the Cold War." Kent State University Honors College / OhioLINK, 2014. http://rave.ohiolink.edu/etdc/view?acc_num=ksuhonors1399284818.

9

Woodward, Mark P. "Framing Human-Robot Task Communication as a Partially Observable Markov Decision Process." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10188.

Abstract:
As general-purpose robots become more capable, pre-programming of all tasks at the factory will become less practical. We would like for non-technical human owners to be able to communicate, through interaction with their robot, the details of a new task; I call this interaction "task communication". During task communication the robot must infer the details of the task from unstructured human signals, and it must choose actions that facilitate this inference. In this dissertation I propose the use of a partially observable Markov decision process (POMDP) for representing the task communication problem, with the unobservable task details and unobservable intentions of the human teacher captured in the state, with all signals from the human represented as observations, and with the cost function chosen to penalize uncertainty. This dissertation presents the framework, works through an example of framing task communication as a POMDP, and presents results from a user experiment where subjects communicated a task to a POMDP-controlled virtual robot and to a human-controlled virtual robot. The task communicated in the experiment consisted of a single object movement, and the communication was limited to binary approval signals from the teacher. This dissertation makes three contributions: 1) it frames human-robot task communication as a POMDP, a widely used framework, which enables the leveraging of techniques developed for other problems framed as a POMDP; 2) it provides an example of framing a task communication problem as a POMDP; 3) it validates the framework through results from a user experiment. The results suggest that the proposed POMDP framework produces robots that are robust to teacher error, that can accurately infer task details, and that are perceived to be intelligent.
Engineering and Applied Sciences
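The belief-update core of such a framing can be sketched as a Bayes update over candidate task hypotheses driven by binary approval signals; the hypothesis encoding and teacher-noise model below are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def update_belief(belief, robot_action_ok, approved, p_correct=0.9):
    """One belief update after observing a binary approval signal.

    belief:          (N,) prior probability over N candidate task hypotheses.
    robot_action_ok: (N,) bool, whether the last action was consistent
                     with each hypothesis.
    approved:        the teacher's observed signal (True = approval).
    p_correct:       assumed probability the teacher signals accurately.
    """
    # Likelihood of the observed signal under each hypothesis.
    consistent = robot_action_ok.astype(float)
    p_approve = p_correct * consistent + (1 - p_correct) * (1 - consistent)
    likelihood = p_approve if approved else 1.0 - p_approve

    posterior = belief * likelihood
    return posterior / posterior.sum()

# Three hypotheses about where an object should go; the robot moved it to
# a spot consistent with hypotheses 0 and 2, and the teacher disapproved.
belief = np.array([1 / 3, 1 / 3, 1 / 3])
belief = update_belief(belief, np.array([True, False, True]), approved=False)
```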
10

Fiedler, Heather Starr. "Journalism and Mass Communication Education in The Age of Technology." NSUWorks, 2005. http://nsuworks.nova.edu/gscis_etd/516.

Abstract:
The developmental research project was undertaken to determine the best way to structure the future of journalism and mass communication education so that it remains a viable discipline within the academy. New media technology is an emerging discipline within the journalism field. While many new jobs exist for graduates who are skilled in the field, only a small number of colleges and universities are offering undergraduate programs to train students in new media technology. The goal of the dissertation was to propose a new undergraduate major in new media technology that schools may implement. The literature review traces the origins and development of journalism and mass communication education through the 19th and 20th centuries and focuses on the emerging field of new media technology and online journalism. To help answer the research questions, a survey questionnaire was distributed to journalism and mass communications educators at 108 programs in the United States and to more than 300 media professionals. All the programs are accredited by the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC), and the media professionals are all members of the Online News Association (ONA). The total number of participants was 102. In the surveys, participants shared their views on the current state of journalism and mass communication education as well as the new media industry through a combination of rank-order items, Likert-type scales, and open-ended questions. Results were used to correlate industry requirements with program offerings to prescribe the best possible undergraduate program in new media technology. The content, coverage and feasibility of the model program were validated by a panel of experts.
11

Antos, Dimitrios. "Deploying Affect-Inspired Mechanisms to Enhance Agent Decision-Making and Communication." Thesis, Harvard University, 2012. http://dissertations.umi.com/gsas.harvard:10107.

Abstract:
Computer agents are required to make appropriate decisions quickly and efficiently. As the environments in which they act become increasingly complex, efficient decision-making becomes significantly more challenging. This thesis examines the positive ways in which human emotions influence people's ability to make good decisions in complex, uncertain contexts, and develops computational analogues of these beneficial functions, demonstrating their usefulness in agent decision-making and communication. For decision-making by a single agent in large-scale environments with stochasticity and high uncertainty, the thesis presents GRUE (Goal Re-prioritization Using Emotion), a decision-making technique that deploys emotion-inspired computational operators to dynamically re-prioritize the agent's goals. In two complex domains, GRUE is shown to result in improved agent performance over many existing techniques. Agents working in groups benefit from communicating and sharing information that would otherwise be unobservable. The thesis defines an affective signaling mechanism, inspired by the beneficial communicative functions of human emotion, that increases coordination. In two studies, agents using the mechanism are shown to make faster and more accurate inferences than agents that do not signal, resulting in improved performance. Moreover, affective signals confer performance increases equivalent to those achieved by broadcasting agents' entire private state information. Emotions are also useful signals in agents' interactions with people, influencing people's perceptions of them. A computer-human negotiation study is presented, in which virtual agents expressed emotion. Agents whose emotion expressions matched their negotiation strategy were perceived as more trustworthy, and they were more likely to be selected for future interactions. In addition, to address similar limitations in strategic environments, this thesis uses the theory of reasoning patterns in complex game-theoretic settings. An algorithm is presented that speeds up equilibrium computation in certain classes of games. For Bayesian games, with and without a common prior, the thesis also discusses a novel graphical formalism that allows agents' possibly inconsistent beliefs to be succinctly represented, and reasoning patterns to be defined in such games. Finally, the thesis presents a technique for generating advice from a game's reasoning patterns for human decision-makers, and demonstrates empirically that such advice helps people make better decisions in a complex game.
Engineering and Applied Sciences
12

Balster, Stephanie Karen. "An earth image simulation and tracking system for the Mars Laser Communication Demonstration." Thesis, Massachusetts Institute of Technology, 2005. http://hdl.handle.net/1721.1/33109.

Abstract:
Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005.
Includes bibliographical references (p. 71-72).
In this thesis I created an Earth-image simulation and investigated Earth-tracking algorithms for the Mars Laser Communication Demonstration (MLCD). The MLCD mission will demonstrate the feasibility of high-data-rate laser communications between a Mars-orbiting satellite and an Earth ground station. One of the key challenges of the mission is the requirement to achieve 0.35-µrad-accuracy pointing and tracking of the laser beam to maintain the 1-30 Mbps communication downlink from Mars to Earth. The sunlit Earth is a bright source and, for most of the mission, can be tracked to stabilize the telescope against disturbances between 0.02 and 2 Hz, while other stabilization systems will cover the rest of the frequency spectrum. Before testing candidate Earth-tracking algorithms, simulated Earth image sequences were created to provide test data sets. A plain centroiding algorithm, a thresholded-centroiding algorithm, a cross-spectrum phase correlation method, and an optical flow algorithm were all tested under various Earth phase conditions and pixel resolutions on the simulated test data; the thresholded-centroiding algorithm was eventually chosen for its accuracy and low computational cost. The effect of short-term albedo variations on the performance of the thresholded-centroiding algorithm was shown to be limited by the Earth's rotation, which is too slow to change the visible surface enough to affect the centroid calculation between time frames. Differences between the geometric centroid and optical centroid were measured to be up to 10% of the Earth's diameter, or up to 2 focal plane array pixels during the mission at closest range. As such, the uncertainty area in which to search for the beacon at the ground receiving station is limited to a 2-pixel radius.
by Stephanie Karen Balster.
M.Eng.
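A minimal sketch of the thresholded-centroiding algorithm the thesis selects, assuming a single grayscale focal-plane frame: pixels at or below a threshold are zeroed so that background noise does not bias the intensity-weighted centroid.

```python
import numpy as np

def thresholded_centroid(image, threshold):
    """Intensity-weighted centroid over pixels above a threshold.

    image:     2-D array of detector intensities.
    threshold: pixels at or below this value are ignored, suppressing
               background noise that would otherwise bias the estimate.
    Returns (row, col) of the tracked Earth image.
    """
    img = np.where(image > threshold, image, 0.0)
    total = img.sum()
    if total == 0:
        raise ValueError("no pixels above threshold")
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Toy frame: a bright blob (the Earth) on a noisy background.
rng = np.random.default_rng(0)
frame = rng.normal(5.0, 1.0, (64, 64))
frame[20:30, 30:40] += 100.0
row, col = thresholded_centroid(frame, threshold=20.0)
```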
13

Reed, Aaron A. "Changeful Tales: Design-Driven Approaches Toward More Expressive Storygames." Thesis, University of California, Santa Cruz, 2017. http://pqdtopen.proquest.com/#viewpdf?dispub=10289007.

Abstract:

Stories in released games are still based largely on static and predetermined structures, despite decades of academic work to make them more dynamic. Making game narratives more playable is an important step in the evolution of games and playable media as culturally relevant art forms. In the same way interactive systems help students learn about complicated subjects like physics in a more intuitive and immediate way than static texts, more dynamic interactive stories open up new ways of understanding people and situations. Such dreams remain mostly unrealized in released and playable games.

In this dissertation I will describe a number of design and technical solutions to the problem of creating more expressive and dynamic storygames, informed by a practice-based approach to game production. I will first define a framework for the analysis of games, including especially the terms storygame (a playable system with units of narrative where the understanding of the interconnectedness between story and system is crucial) and the notion of narrative logics (the set of processes that define how player input affects the next unit of story presented by the system). I will exercise this framework on an existing and well-known storygame genre, the adventure game, and use it to make a number of claims about the mechanics and dynamics of narratives in this genre that are borne out by an analysis of how contemporary games adopting some of its aesthetics succeed and fail. I will then describe three emerging storygame modes that are still in the process of being defined, developing a critical framework for each informed by close readings and historical analysis, and considering what design and technical innovations are required to fully realize the new mode's potential. These three modes I discuss are sculptural fiction (which shifts the focus from navigating to building a structure of narrative nodes), social simulation (games that explore the possibility space created by a set of simulated characters and rules for social interaction), and collaborative storygames (in which the lexia are generated at least in part by the participants during play). Each theoretical chapter is paired with a case study of one or several fully completed and released games I have created or co-created in that mode, to see how these design ideas were realized and technical advancements implemented in practice. I will conclude each section with applied advice for game makers hoping to work in these new spaces, and new technological developments that will help storygames continue to evolve and prosper.

14

Gudmandsen, Magnus. "Using a robot head with a 3D face mask as a communication medium for telepresence." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-171402.

Abstract:
This thesis investigates the viability of a new communication medium for telepresence, namely a robotic head with a 3D face mask. In order to investigate this, a program was developed for an existing social robot, enabling the robot to be used as a device reflecting the facial movements of the operator. A study is performed with the operator located in front of a computer with a web camera, connected to speak through the robot to two interlocutors located in a room with the robot. This setup is then compared to a regular video call. The results from this study show that the robot improves the presence of the operator in the room, along with providing enhanced simulated gaze direction and eye contact with the interlocutors. It is concluded that using a robotic head with a 3D face mask is a viable option for a communication medium for telepresence.
15

Ampatzis, Christos. "On the evolution of autonomous decision-making and communication in collective robotics." Doctoral thesis, Universite Libre de Bruxelles, 2008. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/210445.

Abstract:
In this thesis, we use evolutionary robotics techniques to automatically design and synthesise behaviour for groups of simulated and real robots. Our contribution will be on the design of non-trivial individual and collective behaviour; decisions about solitary or social behaviour will be temporal and they will be interdependent with communicative acts. In particular, we study time-based decision-making in a social context: how the experiences of robots unfold in time and how these experiences influence their interaction with the rest of the group. We propose three experiments based on non-trivial real-world cooperative scenarios. First, we study social cooperative categorisation; signalling and communication evolve in a task where the cooperation among robots is not a priori required. The communication and categorisation skills of the robots are co-evolved from scratch, and the emerging time-dependent individual and social behaviour are successfully tested on real robots. Second, we show on real hardware evidence of the success of evolved neuro-controllers when controlling two autonomous robots that have to grip each other (autonomously self-assemble). Our experiment constitutes the first fully evolved approach on such a task that requires sophisticated and fine sensory-motor coordination, and it highlights the minimal conditions to achieve assembly in autonomous robots by reducing the assumptions a priori made by the experimenter to a functional minimum. Third, we present the first work in the literature to deal with the design of homogeneous control mechanisms for morphologically heterogeneous robots, that is, robots that do not share the same hardware characteristics. We show how artificial evolution designs individual behaviours and communication protocols that allow the cooperation between robots of different types, by using dynamical neural networks that specialise on-line, depending on the nature of the morphology of each robot. The experiments briefly described above contribute to the advancement of the state of the art in evolving neuro-controllers for collective robotics both from an application-oriented, engineering point of view, as well as from a more theoretical point of view.
Doctorat en Sciences de l'ingénieur
16

Bergfeldt, Niklas. "Cooperative Robotics : A Survey." Thesis, University of Skövde, Department of Computer Science, 2000. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-473.

Abstract:

This dissertation aims to present a structured overview of the state of the art in cooperative robotics research. As we illustrate in this dissertation, there are several interesting aspects that draw attention to the field, among which 'Life Sciences' and 'Applied AI' are emphasized. We analyse the key concepts and main research issues within the field, and discuss its relations to other disciplines, including cognitive science, biology, artificial life and engineering. In particular, it can be noted that the study of collective robot behaviour has drawn much inspiration from studies of animal behaviour. In this dissertation we also analyse one of the most attractive research areas within cooperative robotics today, namely RoboCup. Finally, we present a hierarchy of levels and mechanisms of cooperation in robots and animals, which we illustrate with examples and discussions.

17

Bishell, Aaron. "Designing application-specific processors for image processing : a thesis presented in partial fulfilment of the requirements for the degree of Master of Science in Computer Science, Massey University, Palmerston North, New Zealand." Massey University, 2008. http://hdl.handle.net/10179/1024.

Abstract:
Implementing a real-time image-processing algorithm on a serial processor is difficult to achieve because such a processor cannot cope with the volume of data in the low-level operations. However, a parallel implementation, required to meet timing constraints for the low-level operations, results in low resource utilisation when implementing the high-level operations. These factors suggested a combination of parallel hardware for the low-level operations and a serial processor for the high-level operations when implementing a high-level image-processing algorithm. Several types of serial processors were available. A general-purpose processor requires an extensive instruction set to be able to execute any arbitrary algorithm, resulting in a relatively complex instruction decoder and possibly extra functional units (FUs). An application-specific processor, which was considered in this research, implements just enough FUs to execute a given algorithm and implements a simpler, and more efficient, instruction decoder. In addition, an algorithm's behaviour on a processor can be represented either in hardware (i.e. hardwired logic), which limits the ability to modify the algorithm behaviour of a processor, or in "software" (i.e. programmable logic), which enables external sources to specify the algorithm behaviour. This research investigated hardware- and software-controlled application-specific serial processors for the implementation of high-level image-processing algorithms and compared these against parallel hardware and general-purpose serial processors. It was found that application-specific processors are easily able to meet the timing constraints imposed by real-time high-level image processing. In addition, the software-controlled processors had additional flexibility, performance penalties of 9.9% and 36.9%, and inconclusive footprint savings (and costs) when compared to hardware-controlled processors.
18

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/7754.

19

Manaf, Afwarman 1962. "Constraint-based software for broadband networks planning : a software framework for planning with the holistic approach." Monash University, Dept. of Electrical and Computer Systems Engineering, 2000. http://arrow.monash.edu.au/hdl/1959.1/8163.

20

Awan, Ammar Ahmad. "Co-designing Communication Middleware and Deep Learning Frameworks for High-Performance DNN Training on HPC Systems." The Ohio State University, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=osu1587433770960088.

21

Liu, MingHui. "Navel orange blemish identification for quality grading system : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Computer Science at Massey University, Albany, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1175.

Abstract:
Each year, the world's top orange producers output millions of oranges for human consumption, and production was projected to grow by as much as 64 million in 2010, so the demand for fast, low-cost and precise automated orange fruit grading systems is deemed to become increasingly important. There is, however, an underlying limit to most orange blemish detection algorithms. Most existing statistical-based, structural-based, model-based and transform-based orange blemish detection algorithms are plagued by the following problem: any pixels in an image of an orange having about the same magnitudes for the red, green and blue channels will almost always be classified as belonging to the same category (either a blemish or not). This presents a big problem, as the RGB components of the pixels corresponding to blemishes are very similar to those of pixels near the boundary of an orange. In light of this problem, this research utilizes a priori knowledge of the local intensity variations observed on rounded convex objects to classify the ambiguous pixels correctly. The algorithm has the effect of peeling off layers of the orange skin according to gradations of the intensity; any abrupt discontinuities detected along successive layers significantly help in identifying skin blemishes more accurately. A commercial-grade fruit inspection and distribution system was used to collect 170 navel orange images. Of these images, 100 were manually classified as good oranges by human inspection and the rest as blemished ones. We demonstrate the efficacy of the algorithm using these images as the benchmarking test set. Our results show that the system correctly classified 96% of good oranges and 97% of blemished oranges. The proposed system is easily customizable as it does not require any training: the fruit quality bands can be adjusted to meet market standards by specifying an agreeable percentage of blemishes for each band.
22

Neppalli, Venkata Kishore. "Extracting Useful Information from Social Media during Disaster Events." Thesis, University of North Texas, 2017. https://digital.library.unt.edu/ark:/67531/metadc984251/.

Abstract:
In recent years, social media platforms such as Twitter and Facebook have emerged as effective tools for broadcasting messages worldwide during disaster events. With millions of messages posted through these services during such events, it has become imperative to identify valuable information that can help emergency responders develop effective relief efforts and aid victims. Many studies have suggested that the role of social media during disasters is invaluable and can be incorporated into the emergency decision-making process. However, due to the "big data" nature of social media, it is very labor-intensive to employ human resources to sift through social media posts and categorize/classify them as useful information. Hence, there is a growing need for machine intelligence to automate the process of extracting useful information from social media data during disaster events. This dissertation addresses the following questions: In a social media stream of messages, what is the useful information to be extracted that can help emergency response organizations become more situationally aware during and following a disaster? What are the features (or patterns) that can contribute to automatically identifying messages that are useful during disasters? We explored a wide variety of features in conjunction with supervised learning algorithms to automatically identify messages that are useful during disaster events. The feature design includes sentiment features to extract the geo-mapped sentiment expressed in tweets, as well as tweet-content and user-detail features to predict the likelihood that the information contained in a tweet will be quickly spread through the network. Further experimentation examines how these features help in identifying informative tweets and filtering out those that are conversational in nature.
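A minimal sketch of the kind of supervised pipeline described above, using off-the-shelf scikit-learn components; the tiny inline dataset, the binary useful-versus-conversational labels, and the word n-gram features (standing in for the richer sentiment, content, and user features) are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1 = useful/situational-awareness tweet, 0 = conversational chatter.
tweets = [
    "Bridge on Main St flooded, avoid downtown",
    "Shelter open at Lincoln High, cots and water available",
    "omg this storm is crazy lol",
    "thoughts and prayers to everyone out there",
]
labels = [1, 1, 0, 0]

# Word n-grams stand in for the richer feature sets explored in the work.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)
print(model.predict(["road closed near the shelter"]))
```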
23

Petersson, Lantz Robert, and Andreas Alvarsson. "Creating access control maps and defining a security policy for a healthcare communication system." Thesis, Linköpings universitet, Institutionen för datavetenskap, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-121131.

Abstract:
This report covers the creation of an access control map and the definition of a security policy for a healthcare communication system. An access control map is a graphical way to describe the access controls of the subjects and objects in a system. We use a three-step method to produce a graphical overview of the parts of the system, the interactions between them, and the permissions of the subjects. Regarding the security policy, we create a read-up and read-down policy like the so-called Ring policy, but adopt a write-sideways approach. We also apply mandatory access control, in which a centralized authority defines the permissions of the subjects. Attribute restrictions are also included in the security levels, to set a lower limit on reading permissions.
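The read-up/read-down rule with write-sideways can be expressed as a small permission check; this is a sketch under the report's described policy, with integer level encodings and the per-subject read floor as illustrative assumptions.

```python
def can_read(subject_level, object_level, subject_read_floor=0):
    """Ring-like rule: subjects may read both up and down the hierarchy,
    except below a per-subject floor set by attribute restrictions.
    subject_level is unused because reads go both ways; it is kept for
    symmetry with can_write."""
    return object_level >= subject_read_floor

def can_write(subject_level, object_level):
    """Write-sideways rule: writes only within the subject's own level."""
    return subject_level == object_level

# A clinician at level 2 whose attributes forbid reading below level 1:
assert can_read(2, 3, subject_read_floor=1)       # read up: allowed
assert not can_read(2, 0, subject_read_floor=1)   # below the floor: denied
assert can_write(2, 2) and not can_write(2, 3)
```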
24

Susnjak, Teo. "Accelerating classifier training using AdaBoost within cascades of boosted ensembles : a thesis presented in partial fulfillment of the requirements for the degree of Master of Science in Computer Sciences at Massey University, Auckland, New Zealand." Massey University, 2009. http://hdl.handle.net/10179/1002.

Abstract:
This thesis seeks to address current problems encountered when training classifiers within the framework of cascades of boosted ensembles (CoBE). At present, a significant challenge facing this framework is inordinate classifier training runtimes. In some cases, it can take days or weeks (Viola and Jones, 2004; Verschae et al., 2008) to train a classifier. The protracted training runtimes are an obstacle to the wider use of this framework (Brubaker et al., 2006). They also hinder the process of producing effective object detection applications and make the testing of new theories and algorithms, as well as the verification of others' research, a considerable challenge (McCane and Novins, 2003). An additional shortcoming of the CoBE framework is its limited ability to train classifiers incrementally. Presently, the most reliable method of integrating new dataset information into an existing classifier is to re-train the classifier from the beginning using the combined new and old datasets. This process is inefficient. It lacks scalability and discards valuable information learned in previous training. To deal with these challenges, this thesis extends the research by Barczak et al. (2008) and presents alternative CoBE frameworks for training classifiers. The alternative frameworks reduce training runtimes by an order of magnitude over common CoBE frameworks and introduce additional tractability to the process. They achieve this while preserving the generalization ability of their classifiers. This research also introduces a new framework for incrementally training CoBE classifiers and shows how this can be done without re-training classifiers from the beginning. However, the incremental framework for CoBEs has some limitations. Although it is able to improve the positive detection rates of existing classifiers, currently it is unable to lower their false detection rates.
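One boosting round of the kind these cascades repeat many times can be sketched as below; the weak-learner predictions are passed in directly, since the stump search itself (the costliest part, and the one the thesis accelerates) is elided.

```python
import numpy as np

def adaboost_round(sample_weights, labels, stump_preds):
    """One AdaBoost round: score a weak learner and reweight samples.

    sample_weights: (N,) current weights, summing to 1.
    labels:         (N,) true labels in {-1, +1}.
    stump_preds:    (N,) weak-learner predictions in {-1, +1}.
    Returns (alpha, new_weights); CoBE training time is dominated by
    repeating this step, plus the stump search, a great many times.
    """
    miss = (stump_preds != labels).astype(float)
    err = np.dot(sample_weights, miss)
    alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))

    # Misclassified samples gain weight; correct ones lose weight.
    new_w = sample_weights * np.exp(-alpha * labels * stump_preds)
    return alpha, new_w / new_w.sum()

labels = np.array([1, 1, -1, -1])
weights = np.full(4, 0.25)
alpha, weights = adaboost_round(weights, labels, np.array([1, -1, -1, -1]))
```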
25

Wong, Hak Lim. "Signal strength-based location estimation in two different mobile networks." HKBU Institutional Repository, 2006. http://repository.hkbu.edu.hk/etd_ra/700.

26

Alsulami, Khalil Ibrahim D. "Application-Based Network Traffic Generator for Networking AI Model Development." University of Dayton / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=dayton1619387614152354.

27

Pack, Alicia. "New Media Photographic Representations of Women's Collegiate Volleyball: Game Faces, Action Shots, and Equipment." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3278.

Abstract:
Researchers consistently find that mainstream media often represent women athletes in stereotypical ways including trivialization, sexualization, infantilization, passivity, and utilization of camera down-angles. However, research on new media's visual representation of women athletes is still in its infancy. This study adds to the growing literature on new media's representation of women athletes and concurs with previous findings suggesting that new media might be an outlet that can counter old media gender stereotypes. This thesis used mixed methods of qualitative content analysis and photovoice in order to better understand how Big East volleyball players are represented in photographs on websites: Instances of stereotypes were few, action shots were numerous, and "extreme game faces" emerged as a new category for the visual representation of women athletes. These results might suggest that new media, specifically collegiate athletics' websites and volleyball fans, might defy traditional media's stereotypical gender representations. This thesis found that Big East women volleyball players were, overall, visually represented positively by NCAA.com, BigEast.org, Big East member schools' collegiate athletics websites, and fans of the University of South Florida's volleyball team during, and shortly after, the 2010 season.
28

Long, Daniel Wayne. "Exploring Generational Differences in Text Messaging Usage and Habits." Diss., NSUWorks, 2018. https://nsuworks.nova.edu/gscis_etd/1060.

Abstract:
Members of society today embrace multiple communication media for various purposes and intents. Text messaging has been identified as the medium of choice for continual relationship maintenance, and text messaging from mobile devices overshadows all other media forms for the support of social connections. Text messaging is changing everything from how operators market their plans to how advertisers and service providers reach consumers. But just as technology usage of social media and internet access differs across generational boundaries, text messaging usage and habits may also differ between generational groups. The majority of peer-reviewed research regarding text messaging usage habits has focused on adolescent and young adult users, with less attention on the usage habits of older adults; there is a scarcity of peer-reviewed research examining cross-generational text messaging habits and texting usage patterns. The primary goal of this study was to assess the similarities and differences in text messaging usage habits, purposes, and support of social connections across five of the commonly designated generational groups in America: the Post-War Silent Generation, Baby Boomers, Generation X, Millennials, and Generation Z. A mixed-methods study provided data on the text messaging usage habits of members of the generational groups, using a pool of adult college students, members of the researcher's LinkedIn network, and data from a survey service, to determine to what extent differences and similarities exist between users' text messaging usage habits within each generational group. Results indicated that generational group membership has a significant effect on a participant's messaging volume (UV), text messaging partner choices (TMPC), and text messaging social habits (SH), regardless of gender, education level, or employment status. The older the generational group, the more likely its members are to prefer talking over texting and to have issues with the device interface. The Post-War Silent Generation texts their spouses the least of any group, while Generation X texts their spouses the most, and all generational groups with the exception of Generation Z would limit texting while driving. Generational characteristics seem to have some influence over texting behaviors. Contributions to the existing body of knowledge in the human-computer interaction field include an investigation of factors that contribute to each generational group's willingness to embrace or reject the text messaging medium, and an investigation into how each generation views and exploits the texting medium.
29

Lebeau, Laura Ann. "USF's Coverage of Women's Athletics: A Census of the USF Athletics Home Web Page." Scholar Commons, 2011. http://scholarcommons.usf.edu/etd/3200.

Abstract:
This study examines the coverage of women's athletics at USF provided through photographic representations on the university's Athletics Internet home web page during the 2009-2010 academic year. Findings revealed that, consistent with recent research on coverage of female athletes and women's athletics on university web pages, women, compared to men, were underrepresented in the majority of the five areas of the home page analyzed. Studies such as this can be beneficial because, if gender coverage inequities are brought to the attention of university administrators and Athletics personnel, actions could be taken to reduce the inequities, thereby setting the tone for how we see and think about female athletes.
30

Cho, Kyung Jin. "Quantification of the normal patellofemoral shape and its clinical applications." Thesis, Stellenbosch : Stellenbosch University, 2013. http://hdl.handle.net/10019.1/80285.

Abstract:
Thesis (MScEng)--Stellenbosch University, 2013.
ENGLISH ABSTRACT: The shape of the knee's trochlear groove is a very important factor in the overall stability of the knee. However, a quantitative description of the normal three-dimensional geometry of the trochlea is not available in the literature. This is also reflected in the poor outcomes of patellofemoral arthroplasty (PFA). In this study, a standardised method for femoral parameter measurements on three-dimensional femur models was established. Using software tools, virtual femur models were aligned with the mechanical and the posterior condylar planes, and this framework was used to measure the femoral parameters in a repeatable way. An artificial neural network (ANN), incorporating the femoral parameter measurements and classifications done by experienced surgeons, was used to classify knees into normal and abnormal categories. As a result, 15 knees in the database were classified by the ANN as normal. Furthermore, the geometry of the normal knees was analysed by fitting B-spline curves and circular arcs to their sagittal surface curves to prove and reconfirm that the groove has a circular shape in the sagittal plane. Self-organising maps (SOM), a type of ANN, were trained with the acquired data of the normal knees, and in this way the normal trochlear geometry could be predicted. The prediction of the anterior-posterior (AP) distance and the trochlear heights showed an average agreement of 97% between the actual and the predicted normal geometries. A case study was conducted on four types of trochlear dysplasia to determine a normal geometry for these knees, and a virtual surface reconstruction was performed on them. The study showed that the trochlea was deepened after the surface reconstruction, having an average trochlear depth of 5.5 mm compared to the original average value of 2.9 mm. In summary, this research proposed a quantitative method for describing and predicting the normal geometry of a knee by making use of ANNs and the femoral parameters that are unaffected by trochlear dysplasia.
31

Cedergren, Daniel, and Gustaf Terning. "SMS i TV : ett sätt att skapa interaktion i TV?" Thesis, Södertörn University College, School of Communication, Technology and Design, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:sh:diva-345.

Abstract:

The purpose of this essay is to examine the phenomenon of SMS in TV, partly by identifying the communication process behind the phenomenon and partly by describing it from an editorial and technical point of view. We have analysed the communication process with respect to scientific theories of communication, interaction and convergence. With a qualitative approach we have carried out four interviews and two small ethnographic field observations. The objects of the interviews and observations were two Swedish-produced TV debate programs: Diskus in TV4 and Debatt in SVT2. Our conclusions show that the phenomenon of SMS in TV gives the traditionally one-way communicative medium of TV a new opportunity to create a feedback channel between the TV viewer and the TV producer. The main purpose of SMS in TV, from a TV producer's point of view, is to let the TV viewer send an SMS message with his or her thoughts or questions and thereby affect the TV program. Our conclusions also describe how the phenomenon is used editorially and how it is produced and used technically. Further discussion covers problems connected to the phenomenon and its future possibilities.


32

Sanga, Dione Aparecido de Oliveira. "Mineração de textos para o tratamento automático em sistemas de atendimento ao usuário." Universidade Tecnológica Federal do Paraná, 2017. http://repositorio.utfpr.edu.br/jspui/handle/1/2850.

Full text
Abstract:
The explosion of new forms of communication between companies and customers provides new opportunities and means for companies to take advantage of this interaction. The way customers interact with companies has evolved in recent years with the growth of mobile devices and Internet access: customers who traditionally requested service by telephone have migrated to electronic channels, whether smartphone apps or customer-service portals. As a result of this technological transformation of the communication medium, text mining has become an attractive way for companies to extract new knowledge from the record of customer interactions. Within this context, the telecommunications environment provides the inputs for experiments, owing to the large volume of data generated daily in customer-service systems. This work analyses whether text mining increases the accuracy of data mining models in applications involving free text. To that end, an application is developed to identify customers likely to abandon internal service channels (CRM) and escalate to the regulatory agencies of the telecommunications sector [Baeza-Yates and Ribeiro-Neto, 1999]. The main problems encountered in text mining applications are also addressed. Finally, the results of applying classification algorithms to different data sets are presented, to evaluate the improvement obtained by adding text mining to this type of application. The results show a consolidated accuracy gain of around 32%, making text mining a useful tool for this class of problem.
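To make the described setup concrete, here is a minimal, hypothetical sketch of the general approach, turning free-text service tickets into TF-IDF features for a classifier, using scikit-learn. The tickets, labels, and model choice are invented for illustration and are not the thesis's actual pipeline or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented Portuguese-language tickets; 1 = likely to escalate to the regulator
texts = ["sem sinal ha tres dias, vou reclamar na anatel",
         "duvida sobre fatura do mes",
         "internet lenta, quero cancelar e abrir reclamacao",
         "como trocar meu plano"]
labels = [1, 0, 1, 0]

# TF-IDF turns free text into numeric features the classifier can use
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["sinal ruim, vou na anatel"]))  # likely [1] on this toy data
```

In the thesis's comparison, such text-derived features would be appended to the structured CRM attributes and the resulting accuracy compared against a model trained on the structured attributes alone.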
APA, Harvard, Vancouver, ISO, and other styles
33

Miller, Vail Marie. "The Role of Consumers in the Success of the Consumer Driven Healthcare Movement." Cleveland, Ohio : Case Western Reserve University, 2010. http://rave.ohiolink.edu/etdc/view?acc_num=case1259787032.

Full text
Abstract:
Thesis (Ph.D.)--Case Western Reserve University, 2010.
Title from PDF (viewed on 2010-01-28). Department of Bioethics. Includes abstract. Includes bibliographical references and appendices. Available online via the OhioLINK ETD Center.
APA, Harvard, Vancouver, ISO, and other styles
34

Vlisides, James C. "Rendering the Other: Ideologies of the Neo-Oriental in World of Warcraft." Bowling Green State University / OhioLINK, 2013. http://rave.ohiolink.edu/etdc/view?acc_num=bgsu1363105916.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Berrebi, Johanna. "Contribution à l'intégration d'une liaison avionique sans fil. L'ingénierie système appliquée à une problématique industrielle." Phd thesis, Ecole Polytechnique X, 2013. http://pastel.archives-ouvertes.fr/pastel-00800141.

Full text
Abstract:
In a modern aircraft, helicopter, or launch vehicle, thousands of sensors, most of them non-critical, are used to measure various parameters (temperatures, pressures, positions, and so on). The results are then carried by wires to the onboard computers that process them. This requires hundreds of kilometres of cabling (500 km for an airliner) of considerable volume. The result is great design and manufacturing complexity, reliability problems (particularly at the connections), and significant mass. Moreover, some zones cannot be instrumented at all because there is no room to route cables to them. And while it is often attractive to install new sensors to upgrade an older aircraft, laying the necessary cables requires a partial, problematic, and costly dismantling of the vehicle. To solve these problems, an innovative idea has emerged among aerospace manufacturers: begin replacing the wired networks linking an aircraft's sensors to its decision centre with wireless networks. Wireless communication technologies are now widely used in consumer electronics and are starting to be deployed in industrial applications such as the automotive sector and remote reading of domestic meters. However, replacing cables with radio waves is a considerable technological challenge, involving propagation in confined environments, security, dependability, reliability, and electromagnetic compatibility. This thesis is motivated, on the one hand, by the significant advance that an onboard wireless network could represent for aerospace in addressing classic problems of weight reduction and instrumentation. The expected benefits are: better knowledge of the aircraft's environment and health; savings in weight; greater flexibility; greater malleability and upgradability; reduced complexity; and improved reliability. On the other hand, given the complexity of designing such a wireless sensor network, it was necessary to apply an evolving, tailored methodology inspired by systems engineering. Given the number of subsystems to consider, this methodology could well be reusable for other practical cases. A study as complete as possible was carried out of the existing work on the subject; reading this thesis gives a fairly precise picture of what has been done. All wireless technologies were catalogued, with their state of maturity, advantages, and drawbacks, in order to clarify the possible choices and the reasons behind them. Wireless sensor projects have been carried out, and high-performance, customisable wireless technologies have been developed and are reaching maturity in fields as varied as home automation, healthcare, the automotive industry, and even aeronautics. However, no wireless sensor has actually been installed in an aerospace environment, because many technological obstacles have not yet been overcome. Building on past experience and the maturity some technologies have reached, conclusions were drawn from earlier projects in order to move towards more viable solutions.
Once identified, the technological obstacles were isolated. Our solution had to be customised to work around these blocking points as far as possible with the means available. The methodology applied allowed us to identify as many constraints, needs, and requirements as possible, so as to focus the innovation effort on the most important ones and thereby choose the most suitable technologies.
APA, Harvard, Vancouver, ISO, and other styles
36

Olsen, Megan M. "Variations on stigmergic communication to improve artificial intelligence and biological modeling." 2011. https://scholarworks.umass.edu/dissertations/AAI3482652.

Full text
Abstract:
Stigmergy refers to indirect communication that was originally found in biological systems. It is used for self-organization by ants, bees, and flocks of birds, by allowing individuals to focus on local information. Through local communication among individuals, larger patterns are formed without centralized communication. This self-organization is just one type of system studied within complex systems. Systems of ants, bees, and flocks of birds are considered complex because they exhibit emergent behavior: the outcome is more than the sum of the individual parts. Emergent behavior can be found in many other systems as well. One example is the Internet, which is a series of computers organized in a self-organized fashion. Complexity can also be defined through properties other than emergent behavior, such as existing on multiple scales. Many biological systems are multi-scale. For instance, cancer exists on many scales, including the sub-cellular and cellular levels. Many computing systems are also multiscale, as there may be both individual and system-wide controls interacting together to determine the output. Many multi-agent systems would fall into this category, as would many large software systems. In this dissertation I examine complex systems in artificial intelligence and biology: the growth of cancer, population dynamics, emotions, multi-agent fault tolerance, and real-time strategic AI for games. My goal is twofold: (a) to develop novel computational models of complex biological systems, and (b) to tackle key AI research questions by proposing new algorithms and techniques that are inspired by those complex biological systems. In all of these cases I design variations on stigmergic communication to accomplish the task at hand. My contributions are a new agent-based cancer growth model, a proposed use of location communication for removing cancer, improved multi-agent fault tolerance through localized messaging, a new approach to modeling predator-prey dynamics using computational emotions, and improved strategic game AI through computational emotions.
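As a concrete illustration of the stigmergic communication the dissertation builds on, the toy simulation below has agents interact only through a shared, evaporating pheromone field; it is a generic sketch, not one of the dissertation's models, and every parameter is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 20
grid = np.zeros((size, size))              # shared pheromone field (the "medium")
agents = rng.integers(0, size, (10, 2))    # ten agents at random cells

for step in range(200):
    grid *= 0.95                           # evaporation: stale information fades
    for i, (r, c) in enumerate(agents):
        grid[r, c] += 1.0                  # deposit: indirect, purely local signal
        # move preferentially toward neighbouring cells with more pheromone
        neighbours = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if 0 <= r + dr < size and 0 <= c + dc < size]
        weights = np.array([grid[rr, cc] + 0.1 for rr, cc in neighbours])
        agents[i] = neighbours[rng.choice(len(neighbours), p=weights / weights.sum())]

print(grid.max().round(2))                 # trails emerge with no central controller
```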
APA, Harvard, Vancouver, ISO, and other styles
37

Jang, Myeong-Wuk. "Efficient communication and coordination for large-scale multi-agent systems /." 2006. http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqdiss&rft_dat=xri:pqdiss:3223620.

Full text
Abstract:
Thesis (Ph.D.)--University of Illinois at Urbana-Champaign, 2006.
Source: Dissertation Abstracts International, Volume: 67-07, Section: B, page: 3901. Adviser: Gul Agha. Includes bibliographical references (leaves 126-137) Available on microfilm from Pro Quest Information and Learning.
APA, Harvard, Vancouver, ISO, and other styles
38

Henriques, Roberto André Pereira. "Artificial Intelligence in geospatial analysis: applications of self-organizing maps in the context of geographic information science." Doctoral thesis, 2011. http://hdl.handle.net/10362/5723.

Full text
Abstract:
A thesis submitted in partial fulfillment of the requirements for the degree of Doctor in Information Management, specialization in Geographic Information Systems
The size and dimensionality of available geospatial repositories increase every day, placing additional pressure on existing analysis tools, which are expected to extract ever more knowledge from these databases. Most of these tools were created in a data-poor environment and thus rarely address concerns of efficiency, dimensionality, and automatic exploration. In addition, traditional statistical techniques rest on several assumptions that are not realistic in the geospatial data domain. An example is the statistical independence between observations required by most classical statistical methods, which conflicts with the well-known spatial dependence that exists in geospatial data. Artificial intelligence and data mining methods constitute a less assumption-dependent alternative for exploring and extracting knowledge from geospatial data. In this thesis, we study the possible adaptation of existing general-purpose data mining tools to geospatial data analysis. The characteristics of geospatial datasets seem similar in many ways to those of aspatial datasets for which several data mining tools have been used successfully to detect patterns and relations. It seems, however, that GIS-minded analysis and objectives require more than the results provided by these general tools, and adaptations are needed to meet the geographical information scientist's requirements. Thus, we propose several geospatial applications based on a well-known data mining method, the self-organizing map (SOM), and analyse the adaptations required in each application to fulfil those objectives and needs. Three main fields of GIScience are covered in this thesis: cartographic representation; spatial clustering and knowledge discovery; and location optimization. (...)
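For readers unfamiliar with the SOM at the core of this thesis, a compact training loop is sketched below; the grid size, decay schedules, and random data are illustrative assumptions, not the thesis's settings.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.random((500, 3))               # e.g., normalised geospatial attributes
rows, cols, dim = 10, 10, 3
weights = rng.random((rows, cols, dim))   # prototype vector per map cell
coords = np.dstack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"))

for t, x in enumerate(data):
    lr = 0.5 * np.exp(-t / 500)           # decaying learning rate
    sigma = 3.0 * np.exp(-t / 500)        # shrinking neighbourhood radius
    # best-matching unit: cell whose prototype is closest to the sample
    bmu = np.unravel_index(np.argmin(((weights - x) ** 2).sum(-1)), (rows, cols))
    d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-d2 / (2 * sigma**2))[..., None]  # Gaussian neighbourhood function
    weights += lr * h * (x - weights)     # pull BMU and its neighbours toward x

print(weights.shape)                      # trained 10x10 map of 3-D prototypes
```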
APA, Harvard, Vancouver, ISO, and other styles
39

"Machine Learning on Mars: A New Lens on Data from Planetary Exploration Missions." Doctoral diss., 2019. http://hdl.handle.net/2286/R.I.54942.

Full text
Abstract:
There are more than 20 active missions exploring planets and small bodies beyond Earth in our solar system today. Many more have completed their journeys or will soon begin. Each spacecraft has a suite of instruments and sensors that provide a treasure trove of data that scientists use to advance our understanding of the past, present, and future of the solar system and universe. As more missions come online and the volume of data increases, it becomes more difficult for scientists to analyze these complex data at the desired pace. There is a need for systems that can rapidly and intelligently extract information from planetary instrument datasets and prioritize the most promising, novel, or relevant observations for scientific analysis. Machine learning methods can serve this need in a variety of ways: by uncovering patterns or features of interest in large, complex datasets that are difficult for humans to analyze; by inspiring new hypotheses based on structure and patterns revealed in data; or by automating tedious or time-consuming tasks. In this dissertation, I present machine learning solutions to enhance the tactical planning process for the Mars Science Laboratory Curiosity rover and future tactically-planned missions, as well as the science analysis process for archived and ongoing orbital imaging investigations such as the High Resolution Imaging Science Experiment (HiRISE) at Mars. These include detecting novel geology in multispectral images and active nuclear spectroscopy data, analyzing the intrinsic variability in active nuclear spectroscopy data with respect to elemental geochemistry, automating tedious image review processes, and monitoring changes in surface features such as impact craters in orbital remote sensing images. Collectively, this dissertation shows how machine learning can be a powerful tool for facilitating scientific discovery during active exploration missions and in retrospective analysis of archived data.
Dissertation/Thesis
Doctoral Dissertation, Exploration Systems Design, 2019
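One generic way to rank observations by novelty, loosely in the spirit of the work described above, is to score each sample by its reconstruction error under a model of previously seen data; the PCA-based sketch below is an assumption-laden illustration with random stand-in spectra, not the dissertation's method.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
seen = rng.normal(0, 1, (500, 6))            # "typical" multispectral samples
new = np.vstack([rng.normal(0, 1, (9, 6)),
                 rng.normal(5, 1, (1, 6))])  # one deliberately unusual spectrum

pca = PCA(n_components=3).fit(seen)          # model of previously seen data
recon = pca.inverse_transform(pca.transform(new))
scores = np.linalg.norm(new - recon, axis=1)  # high error = poorly explained = novel
print(scores.argsort()[::-1][:3])            # most novel samples first
```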
APA, Harvard, Vancouver, ISO, and other styles
40

Shin, Heesang. "Finding near optimum colour classifiers : genetic algorithm-assisted fuzzy colour contrast fusion using variable colour depth : a thesis presented to the Institute of Information and Mathematical Sciences in partial fulfillment of the requirements for the degree of Master of Science in Computer Science at Massey University, Albany, Auckland, New Zealand." 2009. http://hdl.handle.net/10179/1096.

Full text
Abstract:
This thesis presents a complete self-calibrating, illumination-intensity-invariant colour classification system. We extend a novel fuzzy colour processing technique called Fuzzy Colour Contrast Fusion (FCCF) by combining it with a Heuristic-assisted Genetic Algorithm (HAGA) for automatic fine-tuning of colour descriptors. Furthermore, we have improved FCCF's efficiency by processing colour channels at varying colour depths in search of the optimal ones. In line with this, we introduce a reduced colour-depth representation of a colour image that maintains colour sensitivity sufficient for accurate real-time colour-based object recognition. We call the algorithm Variable Colour Depth (VCD) and propose a technique for building and searching a VCD look-up table (LUT). The first part of this work investigates the effects of applying fuzzy colour contrast rules to varying colour depths as we extract the optimal rule combination for any given target colour exposed under changing illumination intensities. The second part introduces the HAGA-based parameter optimisation for automatically constructing accurate colour classifiers. Our results show that in all cases the VCD algorithm, combined with HAGA for parameter optimisation, improves colour classification via a pie-slice colour classifier. For 6 different target colours, the hybrid algorithm yielded 17.63% higher overall accuracy than the pure fuzzy approach. Furthermore, it reduced LUT storage space by 78.06% compared to the full colour-depth LUT.
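A pie-slice colour classifier of the kind tuned here is easy to sketch: a pixel is in-class if its hue falls inside an angular "slice" and its saturation and value clear minimum thresholds. The thresholds, the 5-6-5 bit split, and both function names below are invented for illustration; the thesis's GA would search such parameters rather than hand-pick them.

```python
import colorsys

def pie_slice_classify(r, g, b, h_min=0.55, h_max=0.70, s_min=0.35, v_min=0.2):
    """True if the RGB triple (components in [0, 1]) falls inside the hue slice."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h_min <= h <= h_max and s >= s_min and v >= v_min

def vcd_index(r, g, b, bits=(5, 6, 5)):
    """Reduced colour-depth LUT index (an assumed 5-6-5 channel split)."""
    ri = int(r * 255) >> (8 - bits[0])      # drop low-order bits per channel
    gi = int(g * 255) >> (8 - bits[1])
    bi = int(b * 255) >> (8 - bits[2])
    return (ri << (bits[1] + bits[2])) | (gi << bits[2]) | bi

print(pie_slice_classify(0.1, 0.2, 0.9))    # saturated blue -> True
print(pie_slice_classify(0.9, 0.1, 0.1))    # red -> False
print(vcd_index(0.1, 0.2, 0.9))             # 16-bit table index
```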
APA, Harvard, Vancouver, ISO, and other styles
41

Watkins, Elizabeth Anne. "The Polysemia of Recognition: Facial Recognition in Algorithmic Management." Thesis, 2021. https://doi.org/10.7916/d8-6qwc-0t83.

Full text
Abstract:
Algorithmic management systems organize many different kinds of work across domains and have increasingly come under academic scrutiny. Under labels including gig work, piecemeal work, and platform labor, these systems have been richly theorized in disciplines including human-computer interaction, sociology, communications, economics, and labor law. When it comes to the relationships between such systems and their workers, current theory frames these interactions on a continuum between organizational control and worker autonomy. This has laid the groundwork for other ways of examining micro-level practices of workers under algorithmic management. As an alternative to the binary of control and autonomy, this dissertation takes its cue from feminist scholars in Science, Technology, and Society (STS) studies. Drawing on frameworks from articulation, repair, and mutual shaping, I examine workers' interpretations and interactions to ask how new subjectivities around identity and community emerge from these entanglements. To shed empirical light on these processes, this dissertation employs a mixed-methods research design examining the introduction of facial recognition into the sociotechnical systems of algorithmic management. Data include 22 in-person interviews with workers in New York City and Toronto, a survey of 100 workers in the United States who have been subjected to facial recognition, and analysis of over 2,800 comments posted to an online workers' forum over the course of four years. Facial recognition, like algorithmic management, suffers from a lack of empirical, on-the-ground insights into how workers communicate, negotiate, and strategize around and through these systems. Interviews with workers reveal that facial recognition evokes polysemia, i.e., a number of distinct yet interrelated interpretations. I find that for some workers, facial recognition means safety and security. To others it means violation of privacy and accusations of fraud. Some are impressed by the "science-fiction"-like capabilities of the system: "it's like living in the future." Others are wary, and science fiction becomes a vehicle to encapsulate their fears: "I'm in the [movie] The Minority Report." For some the technology is hyper-powerful: "It feels like I'm always being watched," yet others decry, "it's an obvious façade." Following the interviews, I build a body of research using empirical methods combined with frameworks drawn from STS and organizational theory to illuminate workers' perceptions of, and strategies for negotiating, their algorithmic managers. I operationalize Julian Orr's studies of storytelling among Xerox technicians to analyze workers' information-sharing practices in online forums, to better understand how gig workers, devices, forums, and algorithmic management systems engage in mutual shaping processes. Analysis reveals that opposing interpretations of facial recognition persist rather than dissolving into a consensus of "shared understanding." Rather than pursuing and relying on shared understanding of their work to maintain relationships, workers under algorithmic management, communicating in online forums about facial recognition, elide consensus. After the forum analysis, I conduct a survey to assess workers' fairness perceptions of facial recognition targeting and verification. The goal of this research is to establish an empirical foundation for determining whether algorithmic fairness perceptions are subject to theories of bounded rationality and decision-making.
Finally, for the last two articles, I turn back to the forums to analyze workers' experiences negotiating two other processes with threats or ramifications for safety, privacy, and risk. In one article, I focus on their negotiation of threats from scam attackers and their use of the forum itself as a "shared repertoire" of knowledge. In the other, I use the forums as evidence to illuminate workers' experiences and meaning-making around algorithmic risk management under COVID-19. In the conclusion, I engage in theory-building to examine how algorithmic management and its attendant processes demand that information-sharing mechanisms serve novel ends buttressing legitimacy and authenticity, in what I call "para-organizational" work: a world of work where membership and legitimacy are liminal and uncertain. Ultimately, this body of research illuminates mutual shaping processes in which workers' practices, identity, and community are entangled with technological artifacts and organizational structures. Algorithmic systems of work, and participants' interpretations of and interactions with related structures and devices, may be creating a world where sharing information is a process wielded not as a mechanism of learning but as one of belonging.
APA, Harvard, Vancouver, ISO, and other styles
42

Kloss, Guy Kristoffer. "Adaptation of colour perception through dynamic ICC profile modification : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Albany (Auckland), New Zealand." 2010. http://hdl.handle.net/10179/1683.

Full text
Abstract:
Digital colour cameras are dramatically falling in price, making them affordable for ubiquitous appliances in many applications. Changes in colour perception under changing light conditions induce errors that may escape a user's awareness. Colour constancy algorithms are based on inferring light properties (usually the white point) to correct colour. Other approaches use more data for colour correction: (ICC-based) colour management characterises a capturing device under given conditions through an input device profile, which can then be applied to correct deviating colour perception. But such a profile is only valid for the specific conditions at the time of characterisation, and fails when the light changes. This research presents a solution to the problem of long-duration observations under changing scene illumination for common natural sources (overcast or clear, blue sky) and artificial sources (incandescent or fluorescent lamps). Colour measurements used for colour-based reasoning need a robustly defined representation. One suitable and well-defined description is the CIE LAB colour space, a device-independent, visually linearised colour description. Colour transformations using ICC profiles are also based on CIE colour descriptions; therefore, the corrective colour processing here is likewise based on ICC colour management. To verify the viability of CIE LAB based corrective colour processing, colour constancy algorithms (White Patch Retinex and Grey World Assumption) were modified to operate on L*a*b* colour tuples, and the results were compared visually and numerically (using colour indexing) against the same algorithms operating on RGB colour tuples. We can take advantage of the fact that we are dealing with image streams over time, adding another dimension usable for analysis. A solution is presented for slowly changing light conditions in scenes with a static camera perspective. It exploits the small (frame-to-frame) changes in the appearance of colour within the scene over time. Recurring objects or (background) areas of the scene are tracked to gather data points for analysis. From these, a suitable colour-space distortion model was devised as a first-order Taylor approximation (an affine transformation). By performing a multidimensional linear regression analysis on the tracked data points, parameterisations of the affine transformation were derived. Finally, the device profile is updated by amalgamating the corrections from the model into the ICC profile, giving a single, comprehensive transformation. Subsequent applications of the ICC colour profile are very fast and can run in real time at the camera's capture frame rate (for current ordinary web cameras and low-spec desktop computers). As light conditions usually change on a much slower time scale than the capture rate of a camera, the computationally expensive profile adaptation proved usable across many frames. The goal was to find a solution for consistent colour capture with digital cameras that copes with changing light conditions. Theoretical background and strategies for such a system were devised and implemented successfully.
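The core corrective step described above, fitting an affine (first-order Taylor) map from drifted to reference colour tuples by linear regression, can be sketched in a few lines; the synthetic L*a*b* data and drift matrix below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
# Reference L*a*b* tuples of tracked scene points under the old, characterised light
reference = rng.uniform([20, -40, -40], [90, 40, 40], (100, 3))
# Simulated drift: an (invented) affine distortion caused by the new illumination
drift = np.array([[0.95, 0.02, 0.00],
                  [0.00, 1.05, 0.03],
                  [0.01, 0.00, 0.90]])
observed = reference @ drift.T + np.array([3.0, -1.0, 2.0])

# Multidimensional linear regression: solve the affine map observed -> reference
design = np.column_stack([observed, np.ones(len(observed))])
coeffs, *_ = np.linalg.lstsq(design, reference, rcond=None)
corrected = design @ coeffs
print(np.abs(corrected - reference).max())  # ~0 on this noise-free synthetic data
```

In the thesis's pipeline, the recovered correction would then be folded into the camera's ICC profile so that subsequent frames are corrected by a single fast transformation.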
APA, Harvard, Vancouver, ISO, and other styles
43

Liu, Mingzhe. "Theoretical investigation of traffic flow : inhomogeneity induced emergence : a dissertation presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Auckland, New Zealand." 2010. http://hdl.handle.net/10179/1350.

Full text
Abstract:
This research work is focused on understanding the effects of inhomogeneity on traffic flow through theoretical analysis and computer simulations. Traffic has been observed at almost all levels of natural and man-made systems (e.g., from microscopic protein motors to macroscopic objects like cars). For these varied kinds of traffic, the usual concerns are basic and emergent phenomena, modelling methods, theoretical analysis, and physical meaning. Inhomogeneities such as bottlenecks may cause traffic congestion or motor-protein crowding, and crowded protein motors may lead to some human diseases. The congested traffic patterns are not yet well understood. The modelling method in this research is based on the totally asymmetric simple exclusion process (TASEP). The following TASEP models are developed: TASEP with a single inhomogeneity, TASEP with zoned inhomogeneity, TASEP with junctions, and TASEP with site sharing and different boundary conditions. These models are motivated by vehicular traffic, pedestrian traffic, ant traffic, protein motor traffic, and/or Internet traffic. Theoretical solutions for the proposed models are obtained and verified by Monte Carlo simulations; these theoretical results can serve as a base for further developments. Emergent properties such as phase transitions, phase separations, and spontaneous symmetry breaking are observed and discussed. This study contributes to a deeper understanding of generic traffic dynamics, particularly in the presence of inhomogeneity, and has important implications for the explanation or guidance of future traffic studies.
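A minimal Monte Carlo simulation of a TASEP with a single defect ("slow") site conveys the kind of model analysed here; the lattice size, rates, and defect position below are illustrative assumptions, not the thesis's parameter choices.

```python
import random

L, alpha, beta, q, slow = 200, 0.6, 0.6, 0.3, 100   # entry, exit, defect hop rates
lattice = [0] * L                                   # 1 = site occupied by a particle
random.seed(0)

for _ in range(200_000):
    i = random.randrange(-1, L)                     # -1 = attempt entry at the left
    if i == -1:
        if not lattice[0] and random.random() < alpha:
            lattice[0] = 1                          # particle injected
    elif i == L - 1:
        if lattice[-1] and random.random() < beta:
            lattice[-1] = 0                         # particle exits on the right
    elif lattice[i] and not lattice[i + 1]:
        rate = q if i == slow else 1.0              # reduced hopping at the defect
        if random.random() < rate:
            lattice[i], lattice[i + 1] = 0, 1       # exclusion: hop only if empty

print(f"bulk density ~ {sum(lattice) / L:.2f}")     # the defect induces phase separation
```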
APA, Harvard, Vancouver, ISO, and other styles
44

Fan, Chao. "Real-time facial expression analysis : a thesis presented in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Ph.D.) in Computer Science at Massey University, Auckland, New Zealand." 2008. http://hdl.handle.net/10179/762.

Full text
Abstract:
As computers have become more and more advanced, with even the most basic computer capable of tasks almost unimaginable only a decade ago, researchers and developers are focusing on improving the way computers interact with people in their everyday lives. A core goal, therefore, is to develop a computer system that can understand and react appropriately to natural human behaviour. A key requirement for such a system is the ability to recognise human facial expressions automatically and in real time, regardless of the inherent differences between human faces or variations in lighting and other external conditions. The focus of this research was to develop such a system by evaluating and then utilising the most appropriate of the many image processing techniques currently available and, where appropriate, developing new methodologies and algorithms. The first key step is to detect a human face with acceptable levels of misses and false positives. This research analysed and evaluated a number of different face detection techniques before developing a novel algorithm that combines phase congruency and template matching. This algorithm offers key advantages over existing techniques: it can detect faces rotated to any angle and it works in real time, whereas existing techniques could only recognise faces rotated less than 10 degrees (in either direction) and most could not run in real time owing to excessive computational requirements. The next step is to enhance and extract the facial features. The enhancement and extraction must reduce the number of feature dimensions so the system can operate in real time, while providing features clear and detailed enough for facial expressions to be recognised accurately. This part of the system was completed by developing a novel algorithm, based on the existing Contrast Limited Adaptive Histogram Equalization technique, which quickly and accurately represents facial features, and another novel algorithm that reduces the number of feature dimensions by combining the radon transform and the fast Fourier transform, ensuring real-time operation. The final step is to use the information provided by the first two steps to recognise facial expressions accurately. This is achieved with an SVM trained on a database containing both real and computer-generated facial images with various expressions. The system developed during this research can be utilised in a number of ways and, most significantly, has the potential to revolutionise future interactions between humans and computers by helping those interactions become natural and intuitive. Individual components of the system also have significant potential; for example, the algorithms that allow the recognition of an object regardless of its rotation are under consideration as part of a project aiming at non-invasive detection of early-stage cancer cells.
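A heavily simplified sketch of the pipeline shape described above, a reduced frequency-domain feature vector feeding an SVM, follows; random noise stands in for face crops, the enhancement and detection stages are omitted, and none of it reproduces the thesis's actual algorithms.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def features(img):
    """Keep only low-frequency FFT magnitudes as a reduced descriptor."""
    spectrum = np.abs(np.fft.fft2(img))
    return spectrum[:8, :8].ravel()      # 64 values instead of 64*64 pixels

# Random 64x64 "face crops" with two stand-in expression labels
X = np.array([features(rng.random((64, 64))) for _ in range(40)])
y = np.array([0, 1] * 20)

clf = SVC(kernel="rbf").fit(X, y)        # expression classifier on reduced features
print(clf.score(X, y))
```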
APA, Harvard, Vancouver, ISO, and other styles
45

Kapur, Ajay. "Digitizing North Indian music: preservation and extension using multimodal sensor systems, machine learning and robotics." Thesis, 2007. http://hdl.handle.net/1828/202.

Full text
Abstract:
This dissertation describes how state of the art computer music technology can be used to digitize, analyze, preserve and extend North Indian classical music performance. Custom built controllers, influenced by the Human Computer Interaction (HCI) community, serve as new interfaces to gather musical gestures from a performing artist. Designs on how to modify a Tabla, Dholak, and Sitar with sensors and electronics are described. Experiments using wearable sensors to capture ancillary gestures of a human performer are also included. A twelve-armed solenoid-based robotic drummer was built to perform on a variety of traditional percussion instruments from around India. The dissertation also describes experimentation on interfacing a human sitar performer with the robotic drummer. Experiments include automatic tempo tracking and accompaniment methods. A framework is described for digitally transcribing performances of masters using custom designed hardware and software to aid in preservation. This work draws on knowledge from many disciplines including: music, computer science, electrical engineering, mechanical engineering and psychology. The goal is to set a paradigm on how to use technology to aid in the preservation of traditional art and culture.
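As a toy illustration of automatic tempo tracking (my simplification, not Kapur's method), one can estimate beats per minute from the median inter-onset interval of detected strokes:

```python
import numpy as np

# Invented stroke onset times in seconds, as a sensor system might report them
onsets = np.array([0.00, 0.52, 1.01, 1.49, 2.02, 2.51])
ioi = np.diff(onsets)                    # inter-onset intervals
bpm = 60.0 / np.median(ioi)              # median is robust to the odd missed stroke
print(f"estimated tempo: {bpm:.0f} BPM")  # ~120 BPM for this data
```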
APA, Harvard, Vancouver, ISO, and other styles
46

(8771429), Ashley S. Dale. "3D OBJECT DETECTION USING VIRTUAL ENVIRONMENT ASSISTED DEEP NETWORK TRAINING." Thesis, 2021.

Find full text
Abstract:

An RGBZ synthetic dataset consisting of five object classes in a variety of virtual environments and orientations was combined with a small sample of real-world image data and used to train the Mask R-CNN (MR-CNN) architecture in a variety of configurations. When the MR-CNN architecture was initialized with MS COCO weights and the heads were trained with a mix of synthetic and real-world data, F1 scores improved in four of the five classes: the average maximum F1 score over all classes and all epochs for the networks trained with synthetic data is F1* = 0.91, compared to F1 = 0.89 for the networks trained exclusively with real data, and the standard deviation of the maximum mean F1 score for synthetically trained networks is σ*_F1 = 0.015, compared to σ_F1 = 0.020 for the networks trained exclusively with real data. Varied backgrounds in the synthetic data were shown to have negligible impact on F1 scores, opening the door to abstract backgrounds and minimizing the need for intensive synthetic data fabrication. When the MR-CNN architecture was initialized with MS COCO weights and depth data was included in the training data, the network was shown to rely heavily on the initial convolutional input to feed features into the network; the image depth channel was shown to influence mask generation, and the image color channels were shown to influence object classification. A set of latent variables for a subset of the synthetic dataset was generated with a Variational Autoencoder, then analyzed using Principal Component Analysis and Uniform Manifold Approximation and Projection (UMAP). The UMAP analysis showed no meaningful distinction between real-world and synthetic data, and a small bias towards clustering based on image background.
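To clarify the aggregate metric quoted above (mean and spread of the per-run maximum F1), here is a small sketch with invented numbers:

```python
import numpy as np

# Invented F1 scores, shaped (training runs) x (epochs)
f1_synthetic = np.array([[0.88, 0.90, 0.91], [0.89, 0.92, 0.90]])
f1_real_only = np.array([[0.85, 0.89, 0.88], [0.87, 0.88, 0.86]])

for name, f1 in [("synthetic+real", f1_synthetic), ("real only", f1_real_only)]:
    best = f1.max(axis=1)                 # best F1 per run, across epochs
    print(name, best.mean().round(3), best.std().round(3))
```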

APA, Harvard, Vancouver, ISO, and other styles