To see the other types of publications on this topic, follow the link: Data collection from field.

Dissertations / Theses on the topic 'Data collection from field'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Data collection from field.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Richards, Kevin Tarn 1976. "Hydrologic and water quality modeling with HSPF : utilization of data from a novel field data collection system and historical archives." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/28243.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 2002. Includes bibliographical references (leaf 63). Catchment-scale hydrology and water quality studies are empowered by current mobile computing, wireless, and Internet technologies to new levels of technical assessment capability. These technical developments motivate an investigation into the modern uses of hydrologic and water quality models. The Hydrologic Simulation Program - FORTRAN (HSPF) is applied using data from the Williams River basin, New South Wales, Australia. The Williams River is an agricultural catchment with interesting physical characteristics and various non-point source water quality issues that warrant a modeling investigation to characterize the hydrology of this large and heavily utilized water resource. Model inputs include 1) a thorough set of Geographic Information System (GIS) files utilized in a closely coupled interface with the HSPF algorithms; 2) time series meteorologic and water quality datasets from historical archives; and 3) supplemental data obtained during a technically enabled field sampling campaign. These inputs are formatted for import to the HSPF routines, streamflow is simulated, and outputs are analyzed for accuracy. By Kevin Tarn Richards. M.Eng.
2

Lukeman, Ryan J. "Modeling collective motion in animal groups : from mathematical analysis to field data." Thesis, University of British Columbia, 2009. http://hdl.handle.net/2429/11873.

Abstract:
Animals moving together cohesively is a commonly observed phenomenon in biology, with bird flocks and fish schools as familiar examples. Mathematical models have been developed in order to understand the mechanisms that lead to such coordinated motion. The Lagrangian framework of modeling, wherein individuals within the group are modeled as point particles with position and velocity, permits construction of inter-individual interactions via 'social forces' of attraction, repulsion and alignment. Although such models have been studied extensively via numerical simulation, analytical conclusions have been difficult to obtain, owing to the large size of the associated system of differential equations. In this thesis, I contribute to the modeling of collective motion in two ways. First, I develop a simplified model of motion and, by focusing on simple, regular solutions, am able to connect group properties to individual characteristics in a concrete manner via derivations of existence and stability conditions for a number of solution types. I show that existence of particular solutions depends on the attraction-repulsion function, while stability depends on the derivative of this function. Second, to establish validity and motivate construction of specific models for collective motion, actual data is required. I describe work gathering and analyzing dynamic data on group motion of surf scoters, a type of diving duck. This data represents, to our knowledge, the largest animal group size (by almost an order of magnitude) for which the trajectory of each group member is reconstructed. By constructing spatial distributions of neighbour density and mean deviation, I show that frontal neighbour preference and angular deviation are important features in such groups. I show that the observed spatial distribution of neighbors can be obtained in a model incorporating a topological frontal interaction, and I find an optimal parameter set to match simulated data to empirical data.
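
For orientation, the Lagrangian framework the abstract describes is often written as a coupled ODE system per individual i (a schematic form; the thesis's exact social-force terms may differ):

    \dot{\mathbf{x}}_i = \mathbf{v}_i, \qquad
    \dot{\mathbf{v}}_i = \sum_{j \neq i} g(|\mathbf{x}_j - \mathbf{x}_i|)\,\hat{\mathbf{x}}_{ij} + \gamma \sum_{j \in \mathcal{N}_i} (\mathbf{v}_j - \mathbf{v}_i)

Here g is the attraction-repulsion function acting along the unit vector from i towards j, and the second sum aligns velocities with neighbours; in the abstract's terms, existence of particular solutions depends on g and their stability on its derivative g'.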
3

Ostrodka, Lenna Moy. "From Water Guns to Science Clubs: A Field-to-Classroom Internship with the USGS." Miami University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=miami1355003793.

4

Voborník, Petr. "Výzkum spolehlivosti statických elektroměrů [Research on the reliability of static electricity meters]." Master's thesis, Vysoké učení technické v Brně. Fakulta elektrotechniky a komunikačních technologií, 2013. http://www.nusl.cz/ntk/nusl-220102.

Abstract:
This work deals with determining the dependability of static electricity meters. The first two chapters cover electricity meters and dependability in general; then three possible ways of obtaining dependability parameters are introduced. The first method is data collection from the field. The second method is reliability prediction from component reliability. The third method is aging life tests. The conclusion contains an evaluation of the results and highlights their importance for practical usage.
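
For the second of these methods, a minimal worked form, assuming the common series-system model (the abstract does not specify the prediction standard used): the system failure rate is the sum of the component failure rates, and the mean time between failures is its reciprocal:

    \lambda_{\text{system}} = \sum_i \lambda_i, \qquad \mathrm{MTBF} = 1 / \lambda_{\text{system}}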
5

Brewer, Peter W., and Christopher H. Guiterman. "A new digital field data collection system for dendrochronology." Laboratory of Tree-Ring Research, University of Arizona, 2016. http://hdl.handle.net/10150/622364.

Abstract:
A wide variety of information or 'metadata' is required when undertaking dendrochronological sampling. Traditionally, researchers record observations and measurements on field notebooks and/or paper recording forms, and use digital cameras and hand-held GPS devices to capture images and record locations. In the lab, field notes are often manually entered into spreadsheets or personal databases, which are then sometimes linked to images and GPS waypoints. This process is both time consuming and prone to human and instrument error. Specialised hardware technology exists to marry these data sources, but costs can be prohibitive for small scale operations (>$2000 USD). Such systems often include proprietary software that is tailored to very specific needs and might require a high level of expertise to use. We report on the successful testing and deployment of a dendrochronological field data collection system utilising affordable off-the-shelf devices ($100-300 USD). The method builds upon established open source software that has been widely used in developing countries for public health projects as well as to assist in disaster recovery operations. It includes customisable forms for digital data entry in the field, and a marrying of accurate GPS location with geotagged photographs (with possible extensions to other measuring devices via Bluetooth) into structured data fields that are easy to learn and operate. Digital data collection is less prone to human error and efficiently captures a range of important metadata. In our experience, the hardware proved field worthy in terms of size, ruggedness, and dependability (e.g., battery life). The system integrates directly with the Tellervo software to both create forms and populate the database, providing end users with the ability to tailor the solution to their particular field data collection needs.
6

Haddock, Paul C. "TELEMETRY DATA COLLECTION FROM OSCAR SATELLITES." International Foundation for Telemetering, 1998. http://hdl.handle.net/10150/607347.

Abstract:
International Telemetering Conference Proceedings / October 26-29, 1998 / Town & Country Resort Hotel and Convention Center, San Diego, California. This paper discusses the design, configuration, and operation of a satellite station built for the Center for Space Telemetering and Telecommunications Laboratory in the Klipsch School of Electrical and Computer Engineering at New Mexico State University (NMSU). This satellite station consists of a computer-controlled antenna tracking system, 2m/70cm transceiver, satellite tracking software, and a demodulator. The satellite station receives satellite telemetry, allows for voice communications, and will be used in future classes. Currently this satellite station is receiving telemetry from an amateur radio satellite, UoSAT-OSCAR-11. Amateur radio satellites are referred to as Orbiting Satellites Carrying Amateur Radio (OSCAR) satellites.
7

Songar, Poonam. "Learning Assessment Data Collection from Educational Game Applications." Kent State University / OhioLINK, 2012. http://rave.ohiolink.edu/etdc/view?acc_num=kent1353900797.

8

Palencia, Arreola Daniel Heriberto. "Arguments for and field experiments in democratizing digital data collection : the case of Flocktracker." Thesis, Massachusetts Institute of Technology, 2019. https://hdl.handle.net/1721.1/121749.

Abstract:
Thesis: M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning, 2019. Cataloged from PDF version of thesis. Includes bibliographical references (pages [127]-131). Data is becoming increasingly relevant to urban planning, serving as a key input for many conceptions of a "smart city." However, most urban data generation results from top-down processes, driven by government agencies or large companies. This provides limited opportunities for citizens to participate in the ideation and creation of the data used to ultimately gain insights into, and make decisions about, their communities. Digital community data collection can give more inputs to city planners and decision makers while also empowering communities. This thesis derives arguments from the literature about why it would be helpful to have more participation from citizens in data generation and examines digital community mapping as a potential niche for the democratization of digital data collection. In this thesis, I examine one specific digital data collection technology, Flocktracker, a smartphone-based tool developed to allow users with no technical background to set up and generate their own data collection projects. I define a model of how digital community data collection could be "democratized" with the use of Flocktracker. The model envisions a process in which "seed" projects lead to a spreading of Flocktracker's use across the sociotechnical landscape, eventually producing self-sustaining networks of data collectors in a community. To test the model, the experimental part of this research examines four different experiments using Flocktracker: one in Tlalnepantla, Mexico and three in Surakarta, Indonesia. These experiments are treated as "seed" projects in the democratization model and were set up in partnership with local NGOs. The experiments were designed to help understand whether citizen participation in digital community mapping events might affect their perceptions about open data and the role of participation in community data collection, and whether this participation entices them to create other community datasets on their own, thus starting the democratization process. The results from the experiments reveal the difficulties in motivating community volunteers to participate in technology-based field data collection. While Flocktracker proved easy enough for the partner organizations to create data collection projects, the technology alone does not guarantee participation. The envisioned "democratization" model could not be validated: each of the experiments had relatively low levels of participation in the community events that were organized. This low participation, in turn, led to inconclusive findings regarding the effects of community mapping on participants' perceptions and on the organizations themselves. Nonetheless, numerous insights emerge, providing lessons for the technology and how it might be better used in the future to improve digital community mapping events. By Daniel Heriberto Palencia Arreola. M.C.P., Massachusetts Institute of Technology, Department of Urban Studies and Planning.
9

Baradaranshokouhi, Yashar. "Estimation of neural field models from spatiotemporal electrophysiological data." Thesis, University of Sheffield, 2015. http://etheses.whiterose.ac.uk/9665/.

Abstract:
The human brain is one of the most complex systems faced in research and science. Different methods and theories from various categories of science and engineering have contributed to understanding the functionality of the brain and its underlying structure. However, development of a complete theory remains a huge challenge. Among many different aspects of this field of research, one of the main branches is focused on brain disorders, their causes, and possible improvements to treatments and patients' quality of life. To tackle this challenge, experimental and clinical measurements have been used with computational models to analyse and contribute to treatments of brain disorders. Signal processing plays a key role in detecting key features in brain electrical recordings and in developing frameworks that can give insight into the underlying structure of recorded observations. Within the scope of this thesis, previous work has been extended by relaxing some of its assumptions and checking the performance of the developed framework under new conditions. The main focus of this thesis is the application of the Unscented Kalman Filter with an Amari-type model of human brain electrical activity. It is assumed that Amari-type models can represent the underlying dynamics of brain activity. The Amari-type model is presented in state-space form and, by use of a decomposition method, the estimation framework is used to estimate the states and connectivity kernel gains. Heterogeneous connectivity is considered as long-range connection in a neural network. The novelty introduced in this thesis is the introduction of a heterogeneous connectivity kernel in the Amari-type model and estimation of the connectivity strength. The developed methods are applied to synthetic data and to epilepsy data, and results are presented. By monitoring the parameters, it is possible to show that brain dynamics from normal to abnormal states can be detected. Further research and future work in this area can potentially lead to prediction of seizures and eventually to improving the quality of life of patients with epilepsy.
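
For reference, an Amari-type neural field model of the kind named in the abstract is commonly written as the integro-differential equation

    \tau \frac{\partial v(\mathbf{r},t)}{\partial t} = -v(\mathbf{r},t) + \int_{\Omega} w(\mathbf{r},\mathbf{r}')\, f\big(v(\mathbf{r}',t)\big)\, d\mathbf{r}'

where v is the post-synaptic activity field, w is the connectivity kernel whose gains are estimated, and f is a firing-rate nonlinearity; spatial discretization yields the nonlinear state-space form to which an Unscented Kalman Filter can be applied. (Schematic only; the thesis's kernel decomposition and heterogeneous connections extend this basic form.)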
10

Klingsbo, Lukas. "NoSQL: Moving from MapReduce Batch Jobs to Event-Driven Data Collection." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-260394.

Abstract:
Collecting and analysing data of analytical value is important for many service providers today. Many make use of NoSQL databases for their larger software systems; what is less known is how to effectively analyse and gather business intelligence from the data in these systems. This paper suggests a method of separating the most valuable analytical data from the rest in real time while providing an effective traditional database for the analyser. In this paper we analyse our given data sets to decide whether big data tools are required, and traditional databases are then compared to see how well they fit the context. A technique that makes use of an asynchronous logging system is used to insert the data from the main system into the dedicated analytical database. The tests show that our technique can efficiently be used with a traditional database even on large data sets (>1000000 insertions/hour per database node) and still provide both historical data and aggregate functions for the analyser.
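
As a rough illustration of the pattern described (an asynchronous logging system feeding a dedicated analytical database), here is a minimal Python sketch; the names and the SQLite backend are illustrative, not the stack used in the thesis:

    import queue
    import sqlite3
    import threading

    events = queue.Queue()  # buffer between the main system and analytics

    def log_event(kind, payload):
        # Called from the main system's hot path: enqueue and return at once.
        events.put((kind, payload))

    def analytics_writer(db_path="analytics.db"):
        # Background consumer: drains the queue into the analytical store.
        db = sqlite3.connect(db_path)
        db.execute("CREATE TABLE IF NOT EXISTS events (kind TEXT, payload TEXT)")
        while True:
            kind, payload = events.get()
            db.execute("INSERT INTO events VALUES (?, ?)", (kind, payload))
            db.commit()

    threading.Thread(target=analytics_writer, daemon=True).start()

The main system never blocks on the analytical database; batching several inserts per commit would be the natural next step at the quoted insertion rates.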
11

Katona, Gregory. "Field Theoretic Lagrangian From Off-Shell Supermultiplet Gauge Quotients." Doctoral diss., University of Central Florida, 2013. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/5958.

Abstract:
Recent efforts to classify off-shell representations of supersymmetry without a central charge have focused upon directed, supermultiplet graphs of hypercubic topology known as Adinkras. These encodings of Super Poincare algebras depict every generator of a chosen supersymmetry as a node-pair transformation between fermionic / bosonic component fields. This research thesis is a culmination of investigating novel diagrammatic sums of gauge quotients by supersymmetric images of other Adinkras, and the correlated building of field theoretic worldline Lagrangians to accommodate both classical and quantum venues. We find, Ref [40], that such gauge quotients do not yield other stand-alone or "proper" Adinkras as foresighted, nor can they be decomposed into supermultiplet sums, but are rather a connected "Adinkraic network". Their iteration, analogous to Weyl's construction for producing all finite-dimensional unitary representations in Lie algebras, sets off chains of algebraic paradigms in discrete-graph and continuous-field variables, the links of which feature distinct, supersymmetric Lagrangian templates. Collectively, these Adinkraic series air new symbolic genera for equation to phase moments in Feynman path integrals. Guided in this light, we proceed by constructing Lagrangian actions for the N = 3 supermultiplet YI /(iDI X) for I = 1, 2, 3, where YI and X are standard, Salam-Strathdee superfields: YI fermionic and X bosonic. The system, bilinear in the component fields, exhibits a total of thirteen free parameters, seven of which specify Zeeman-like coupling to external background (magnetic) fluxes. All but special subsets of this parameter space describe aperiodic oscillatory responses, some of which are found to be surprisingly controlled by the golden ratio, φ ≈ 1.61803, Ref [52]. It is further determined that these Lagrangians allow an N = 3 → 4 supersymmetric extension to the Chiral-Chiral and Chiral-twisted-Chiral multiplet, while a subset admits two inequivalent such extensions. In a natural progression, a continuum of observably and usefully inequivalent, finite-dimensional off-shell representations of worldline N = 4 extended supersymmetry are explored, which differ from one another only in the value of a tuning parameter, Ref [53]. Their dynamics turns out to be nontrivial already when restricting to just bilinear Lagrangians. In particular, we find a 34-parameter family of bilinear Lagrangians that couple two differently tuned supermultiplets to each other and to external magnetic fluxes, where the explicit parameter dependence is unremovable by any field redefinition and is therefore observable. This offers the evaluation of X-phase sensitive, off-shell path integrals with promising correlations to group product decompositions and to deriving source emergences of higher-order background flux-forms on 2-dimensional manifolds, the stacks of which comprise space-time volumes. Application to nonlinear sigma models would naturally follow, having potential use in M- and F- string theories. Ph.D., Physics.
12

Chowdhury, Rafiqul Islam. "Field implementation of polyacrylamide for runoff from construction sites." Master's thesis, University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4870.

Abstract:
Polyacrylamide (PAM) is often used as part of a treatment train for stormwater to reduce its turbidity. This study investigated the application of PAM within various treatment systems for a construction site environment. The general concept is to introduce hydraulic principles when placing PAM blocks within an open channel in order to yield high mixing energies leading to high turbidity removal efficiency. The first part of the study observed energy variation using a hydraulic flume for three dissimilar configurations. The flume was ultimately used to determine which configuration would be most beneficial when transposed into field-scale conditions. Three different configurations were tested in the flume, namely the Jump configuration, the Dispersion configuration, and the Staggered configuration. The field-scale testing served as both justification of the findings within the controlled hydraulic flume and comprehension of the elements introduced within the field when attempting to reduce the turbidity of stormwater. As a result, the Dispersion configuration proved to be the most effective at removing turbidity and displayed a greater energy used for mixing within the open channel. Consequently, an analysis aid is developed based on calculations from the results of this study to better serve the sediment control industry when implementing PAM blocks within a treatment system. Recommendations are made for modification and future applications of the research conducted. This innovative approach has great potential for expansion and future applications. Continued research on this topic can expand on key elements such as solubility of the PAM, toxicity of the configuration within the field, and additional configurations that may yield more advantageous energy throughout the open channel. Thesis (M.S.C.E.)--University of Central Florida, 2011. Includes bibliographical references (p. 271-274). M.S., Civil, Environmental and Construction Engineering.
13

Lou, Qiang. "LEARNING FROM INCOMPLETE HIGH-DIMENSIONAL DATA." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214785.

Abstract:
Computer and Information Science, Ph.D. Data sets with irrelevant and redundant features and a large fraction of missing values are common in real-life applications. Learning from such data usually requires some preprocessing, such as selecting informative features and imputing missing values based on observed data. These processes can provide more accurate and more efficient prediction as well as better understanding of the data distribution. In my dissertation I describe my work in both of these aspects, and also my follow-up work on feature selection in incomplete datasets without imputing missing values. In the last part of my dissertation, I present my current work on the more challenging situation where high-dimensional data is time-involving. The first two parts of my dissertation consist of my methods that focus on handling such data in a straightforward way: imputing missing values first, and then applying a traditional feature selection method to select informative features. We proposed two novel methods, one for imputing missing values and the other for selecting informative features. We proposed a new method that imputes the missing attributes by exploiting temporal correlation of attributes, correlations among multiple attributes collected at the same time and space, and spatial correlations among attributes from multiple sources. The proposed feature selection method aims to find a minimum subset of the most informative variables for classification/regression by efficiently approximating the Markov Blanket, which is a set of variables that can shield a certain variable from the target. I present, in the third part, how to perform feature selection in incomplete high-dimensional data without imputation, since imputation methods only work well when data is missing completely at random, when the fraction of missing values is small, or when there is prior knowledge about the data distribution. We define the objective function of the uncertainty margin-based feature selection method to maximize each instance's uncertainty margin in its own relevant subspace. In optimization, we take into account the uncertainty of each instance due to the missing values. The experimental results on synthetic and 6 benchmark data sets with few missing values (less than 25%) provide evidence that our method can select the same accurate features as the alternative methods which apply an imputation method first. However, when there is a large fraction of missing values (more than 25%) in data, our feature selection method outperforms the alternatives, which impute missing values first. In the fourth part, I introduce my method for handling the more challenging situation where high-dimensional data varies in time. The existing way to handle such data is to flatten temporal data into a single static data matrix and then apply a traditional feature selection method. In order to keep the dynamics in the time series data, our method avoids flattening the data in advance. We propose a way to measure the distance between multivariate temporal data from two instances. Based on this distance, we define a new objective function based on the temporal margin of each data instance. A fixed-point gradient descent method is proposed to solve the formulated objective function to learn the optimal feature weights. The experimental results on real temporal microarray data provide evidence that the proposed method can identify more informative features than the alternatives that flatten the temporal data in advance. Temple University--Theses.
14

Tileylioglu, Salih. "Evaluation of soil-structure interaction effects from field performance data." Diss., Restricted to subscribing institutions, 2008. http://proquest.umi.com/pqdweb?did=1666368201&sid=2&Fmt=2&clientId=1564&RQT=309&VName=PQD.

15

Ortiz, Logan A. "Highway work zone capacity estimation using field data from Kansas." Thesis, Kansas State University, 2014. http://hdl.handle.net/2097/18224.

Abstract:
Master of Science, Department of Civil Engineering, Sunanda Dissanayake. Although extensive research has been conducted on urban freeway capacity estimation methods, minimal research has been carried out for rural highway sections, especially sections within work zones. This study filled that void by estimating the capacity of rural highway work zones in Kansas. Six work zone locations were selected, and an average of six days' worth of field data was collected at each site from mid-October 2013 to late November 2013. Two capacity estimation methods were utilized: the Maximum Observed 15-minute Flow Rate Method and the Platooning Method divided into 15-minute intervals. The Maximum Observed 15-minute Flow Rate Method provided an average capacity of 1469 passenger cars per hour per lane (pcphpl) with a standard deviation of 141 pcphpl, while the Platooning Method provided a maximum average capacity of 1195 pcphpl and a standard deviation of 28 pcphpl. Based on observed data and the analysis carried out in this study, the recommended capacity when designing work zones for rural highways in Kansas is 1500 pcphpl. This research provides a proposed standard value of rural highway work zone capacity so engineers and city planners can effectively mitigate congestion that would otherwise occur due to impeding construction/maintenance.
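
A minimal sketch of the first method named above, under the usual reading of a 15-minute flow rate expanded to an hourly, per-lane, passenger-car-equivalent figure (the study's exact adjustment factors are not reproduced here):

    def max_15min_flow_pcphpl(counts_15min, lanes, f_hv):
        # Expand the busiest 15-minute vehicle count to an hourly rate (x4),
        # normalise per lane, and adjust to passenger-car equivalents with a
        # heavy-vehicle factor f_hv (0 < f_hv <= 1).
        return max(counts_15min) * 4 / (lanes * f_hv)

    # e.g. 15-minute counts of 310, 352, 330 vehicles on one open lane with
    # f_hv = 0.95 give roughly 1482 pcphpl.
    print(max_15min_flow_pcphpl([310, 352, 330], lanes=1, f_hv=0.95))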
16

Malchik, Alexander 1975. "An aggregator tool for extraction and collection of data from web pages." Thesis, Massachusetts Institute of Technology, 2000. http://hdl.handle.net/1721.1/86522.

Abstract:
Thesis (M.Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2000. Includes bibliographical references (p. 54-56). By Alexander Malchik. M.Eng.
17

Schwarte, Judith. "Modelling the earth's magnetic field of magnetospheric origin from CHAMP data." [S.l. : s.n.], 2004. http://deposit.ddb.de/cgi-bin/dokserv?idn=971057001.

18

De Elía, Ramón. "A study of wind field retrieval from single Doppler radar data." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1997. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/MQ37113.pdf.

19

BenGheit, Ali O. "Inversion of seismic reflection data from the Gialo Field, Sirte Basin." Thesis, Durham University, 1996. http://etheses.dur.ac.uk/5454/.

Abstract:
This project is concerned with the development of software to invert seismic reflection data for acoustic impedance, with application to the YY-reservoir area in Gialo Field, Sirte Basin. The problem was that of inverting post-stack seismic reflection data from two seismic lines into impedance profiles. The main input to the inversion process is an initial guess, or initial earth model, of the impedance profile, defined in terms of parameters. These parameters describe the impedance and the geometry of the layers that constitute the earth model. Additionally, an initial guess is needed for the seismic wavelet, defined in the frequency domain using nine parameters. The inversion is an optimisation problem subject to constraints. The optimisation problem is that of minimising the error energy function defined by the sum of squares of the residuals between the observed seismic trace and its prediction by the forward model for the given earth model parameters. To determine the solution we use the method of generalised linear inverses. The generalised inverse is possible only when the Hessian matrix, which describes the curvature of the error energy surface, is positive definite. When the Hessian is not positive definite, it is necessary to modify it to obtain the nearest positive definite matrix. To modify the Hessian we used a method based on the Cholesky factorisation. Because the modified Hessian is positive definite, we need to find the generalised inverse only once, but we may need to restrict the step length to obtain the minimum; such a method is a step-length based method. A step-length based method with linear equality and inequality constraints was implemented in a computer program to invert the observed seismic data for impedance. The linear equality and inequality constraints were used so that solutions that are geologically feasible and numerically stable are obtained. The strategy for the real data inversion was to first estimate the seismic wavelet at the well, then optimise the wavelet parameters, and then use the optimum wavelet to invert for impedance and layer boundaries in the seismic traces. In the three real data examples studied, this inversion scheme proved that the delineation of the Chadra sands in Gialo Field is possible. Better results could be obtained by using initial earth models that properly parameterise the subsurface, and linear constraints that are based on well data. Defining the wavelet parameters in the time domain may prove to be more stable and could lead to better inversion results.
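
Schematically, the inversion described here minimises the error energy

    E(\mathbf{m}) = \sum_i \big(d_i - f_i(\mathbf{m})\big)^2

and each update solves, with g the gradient and H̃ the nearest positive-definite modification of the Hessian obtained via the Cholesky factorisation,

    \tilde{H}\,\Delta\mathbf{m} = -\mathbf{g}, \qquad \mathbf{m} \leftarrow \mathbf{m} + \alpha\,\Delta\mathbf{m}

where the step length α is restricted when needed and the linear equality and inequality constraints keep the parameters geologically feasible. (Notation schematic, reconstructed from the abstract.)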
20

Hildreth, John C. "The Use of Short-Interval GPS Data for Construction Operations Analysis." Diss., Virginia Tech, 2003. http://hdl.handle.net/10919/26120.

Abstract:
The global positioning system (GPS) makes use of extremely accurate measures of time to determine position. The times required for electronic signals to travel at the speed of light from at least four orbiting satellites to a receiver on earth are measured precisely and used to calculate the distances from the satellites to the receiver. The calculated distances are used to determine the position of the receiver through triangulation. This research takes an approach opposite to that of the original GPS research, focusing on the use of position to determine the time at which events occur. Specifically, this work addresses the question: Can the information pertaining to position and speed contained in a GPS record be used to autonomously identify the times at which critical events occur within a production cycle? The research question was answered by determining the hardware needs for collecting the desired data in a usable format and developing a unique data collection tool to meet those needs. The tool was field-evaluated, and the data collected was used to determine the software needs for automated reduction of the data to the times at which key events occurred. The software tools were developed in the form of Time Identification Modules (TIMs). The TIMs were used to reduce data collected from a load and haul earthmoving operation to duration measures for the load, haul, dump, and return activities. The value of the developed system was demonstrated by investigating correlations between performance times in construction operations and by using field data to verify the results obtained from productivity estimating tools. Use of the system was shown to improve knowledge and provide additional insight into operations analysis studies. Ph.D.
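
The position-from-time calculation summarised above can be sketched as follows; this is a toy version (real receivers also solve for a receiver clock-bias term as a fourth unknown, omitted here for brevity):

    import numpy as np

    C = 299_792_458.0  # speed of light, m/s

    def receiver_position(sat_positions, travel_times, iterations=10):
        # Estimate a receiver position from signal travel times to four or
        # more satellites of known position, via Gauss-Newton iteration on
        # the range equations ||x - s_i|| = c * t_i.
        ranges = C * np.asarray(travel_times)        # distance = c * time
        x = np.zeros(3)                              # initial position guess
        for _ in range(iterations):
            d = np.linalg.norm(sat_positions - x, axis=1)  # predicted ranges
            J = (x - sat_positions) / d[:, None]           # d(range)/dx
            x = x + np.linalg.lstsq(J, ranges - d, rcond=None)[0]
        return x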
21

Grant, Andrea Nicole. "Arctic climate from an upper level perspective arising from a new collection of historical upper air data /." Zürich : ETH, 2008. http://e-collection.ethbib.ethz.ch/show?type=diss&nr=17868.

22

Guldemir, Hanifi. "Prediction of induction motor line current spectra from design data." Thesis, University of Nottingham, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.287180.

23

Maier, Thorsten. "Multiscale geomagnetic field modelling from satellite data theoretical aspects and numerical applications /." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=967076935.

24

Freimund, Jeremy Ronald. "Potential error in hydrologic field data collected from small semi-arid watersheds." Thesis, The University of Arizona, 1992. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu_e9791_1992_119_sip1_w.pdf&type=application/pdf.

25

Fourtounis, Peter D. "Field-test data from soil-structure interaction of shallow and deep foundations." Thesis, California State University, Long Beach, 2015. http://pqdtopen.proquest.com/#viewpdf?dispub=1587896.

Abstract:
This thesis presents an analysis of data from two soil-structure interaction field tests, one involving a deep foundation and the other a shallow foundation. The objective of this research is to use the field data to validate and inform models used by engineers. Soil-structure interaction fundamentals and background are first discussed. Field-test data was used in conjunction with a soil-structure system model to develop equations that can be used to determine the stiffness and damping of a rigid pile foundation system subjected to forced vibration loading. The stiffness and damping characteristics are presented through complex-valued impedance functions. The equations were applied to field data; however, the results were inconclusive due in part to the limited frequency range of the data used. Additionally, soil-foundation interface pressures are analyzed for a shallow foundation system. Analysis of the shallow foundation behavior indicated resonance of the field test structure and the corresponding pressure generation.
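
For reference, such foundation impedance functions are conventionally written as

    \bar{K}_j(\omega) = k_j(\omega) + i\,\omega\, c_j(\omega)

for each vibration mode j, where the real part k_j is the frequency-dependent dynamic stiffness and c_j the equivalent dashpot coefficient representing damping; the field-derived equations mentioned above target exactly these two quantities. (Standard notation, not reproduced from the thesis.)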
26

Eicker, Annette. "Gravity field refinement by radial basis functions from in-situ satellite data /." Bonn : Igg, 2008. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016738220&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

27

Al, Kanale Ahmed. "Investigation of recovery of stellar magnetic field geometries from simulated spectropolarimetric data." Thesis, Uppsala universitet, Institutionen för fysik och astronomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-316290.

Abstract:
Powerful remote sensing techniques can convert the time variability of polarization profiles of stellar spectral lines into vector magnetic field maps of stellar surfaces. These techniques are widely applied to interpret observations but have been validated using only simplistic tests. It would therefore be of interest to test magnetic inversion methods using polarization spectra simulated for realistic, physical models of stellar magnetic fields provided by recent 3D numerical simulations. Doppler Imaging is a method to reconstruct vector magnetic field maps of stellar surfaces from the variation of polarization profiles. This thesis presents numerical experiments to evaluate the performance of the Magnetic Doppler Imaging (MDI) code INVERS10. The numerical experiments showed that, given high-resolution observations in four Stokes parameters, the code is capable of reconstructing magnetic field vector distributions over the stellar surface simultaneously and without any prior assumptions about the magnetic field geometry. Input data consist of polarization measurements in the line profiles, and the reconstruction is performed by solving the regularized inverse problem. Correct results were obtained by testing different types of models covering simple, complex, and unusually complex magnetic field distributions. When using incomplete Stokes parameter datasets containing only Stokes I and V profiles, the INVERS10 code was able to reconstruct global stellar magnetic fields only for simple models, for which it gave accurate and reliable results. Testing the code with different inclination and azimuth angles gave the lowest deviation when the same values as in the true map were used.
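
Schematically, a regularised inversion of this kind minimises a functional of the form

    \Psi = \sum_k \frac{\big(S_k^{\mathrm{obs}} - S_k^{\mathrm{syn}}(\mathbf{B})\big)^2}{\sigma_k^2} + \Lambda\, R(\mathbf{B}) \;\rightarrow\; \min

where S_k are the observed and synthetic Stokes profile samples, B is the surface field map, and R penalises overly complex field geometries. (Illustrative form; INVERS10's exact functional is not reproduced here.)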
28

Parthepan, Vijayeandra. "Efficient Schema Extraction from a Collection of XML Documents." TopSCHOLAR®, 2011. http://digitalcommons.wku.edu/theses/1061.

Abstract:
The eXtensible Markup Language (XML) has become the standard format for data exchange on the Internet, providing interoperability between different business applications. Such wide use results in large volumes of heterogeneous XML data, i.e., XML documents conforming to different schemas. Although schemas are important in many business applications, they are often missing in XML documents. In this thesis, we present a suite of algorithms that are effective in extracting schema information from a large collection of XML documents. We propose using the cost of NFA simulation to compute the Minimum Description Length (MDL) to rank the inferred schemas. We also study using frequencies of the sample inputs to improve the precision of the schema extraction. Furthermore, we propose an evaluation framework to quantify the quality of the extracted schema. Experimental studies are conducted on various data sets to demonstrate the efficiency and efficacy of our approach.
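
In outline, MDL-style ranking prefers the schema that minimises the bits needed to describe the schema itself plus the bits needed to encode the documents given that schema. A minimal sketch (the cost functions are hypothetical placeholders for the NFA-simulation costs the abstract mentions):

    def mdl_score(schema, documents, schema_bits, doc_bits):
        # Total description length = cost of the schema plus the cost of
        # encoding every document under it; smaller is better.
        return schema_bits(schema) + sum(doc_bits(schema, d) for d in documents)

    # best = min(candidates, key=lambda s: mdl_score(s, docs, schema_bits, doc_bits))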
29

Lommen, Candice M. "How does the use of digital photography affect student observation skills and data collection during outdoor field studies?" Montana State University, 2012. http://etd.lib.montana.edu/etd/2012/lommen/LommenC0812.pdf.

Abstract:
The purpose of this project was to determine if adding digital photography as a tool for collecting data during outdoor field study would increase student engagement and also improve the quality of the data students brought back to the classroom. Too often my students would come in from the field with data that focused on surface or irrelevant features. They were unable to use their data to make connections to the ecology concepts we were learning in the classroom. During the non-treatment phase of the study, students recorded all of their data through drawings and written observations. While at their plots, students inventoried the vegetation present and also took specific measurements such as tree circumference, canopy cover and invasive plant cover. Before taking the cameras out to the field, students practiced with the macro settings to take close up pictures of vegetation brought into the classroom. During the treatment phase, students took digital cameras out to their new plots to inventory and measure plants. Student engagement data was measured using a self-assessment questionnaire, outside observer behavior checklist and teacher field journal. Although interest and engagement were high for most students during the entire study, students who were not initially engaged in the field study activities reported higher engagement levels when cameras were used. The outside observer and teacher journal data supported this finding. The quality of student data was measured using both the student self-assessment questionnaire and drawing or photo rubrics. Rubric scores increased when students used photographs, rather than drawings, to write observations. Students felt they had more to write about when looking at their pictures as compared to their drawings. Interestingly, students reported they wrote less while at their plots when they had the camera, relying on their pictures to tell the story of their plot. Using photos only slightly increased students' ability to positively identify their plants. Pictures lacked those complex features that would enable students to easily work their way through a basic key. To increase the complexity of observations, additional content knowledge about plant structure and ecology is needed.
30

Daniel, Gayon Monique. "Web-Based Evaluation Survey of Campus Mediation Programs: Perceptions from the Field." Diss., Temple University Libraries, 2009. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/36941.

Abstract:
Educational Psychology, Ph.D. Campus mediation programs (CMPs) experienced rapid growth in higher education institutions, from 18 programs in 1990 to more than 200 programs in 1998 (Warters, 2000). During that period, CMPs became a widely accepted approach for addressing conflict within US colleges and universities. However, recent data indicate that there are just over 100 programs, which points to a decline and raises questions as to the value of campus mediation programs to higher education institutions. A hindrance to addressing the questions raised has been the limited amount of empirical research and published data on evaluation use within campus mediation programs. Accordingly, the purpose of this study was to gather information from US campus mediation program directors regarding their use of program evaluation in order to suggest ways to improve their evaluation efforts. Campus mediation program directors were surveyed on their perceptions of evaluation use in their respective programs. This study was conducted over a period of six months using a web survey and follow-up telephone interviews. The web-based survey used in this study was adapted from an online campus mediation program survey developed by Rick Olshak and modified. The web survey consisted of four sections: Demographics, Description of Services, Evaluation, and Program Profile. The population consisted of 108 campus mediation program directors in US higher education institutions who were solicited for this study and agreed to participate. Of the 108 directors, there were a total of 59 respondents, representing a 55% response rate. Nine respondents participated in a follow-up telephone interview. Data analysis for the research questions utilized rank order, frequencies, and averages; supplemental analyses utilized an independent samples t-test, one-way ANOVAs, and Pearson correlations. Results indicated that evaluation received one of the lowest priority rankings as a program goal; however, most of the directors indicated that they would be very interested in learning different ways of improving their evaluation methods and having a standard evaluation process. The most prevalent concerns and recommendations from the telephone follow-up interviews focused on acquiring buy-in of administration and campus affiliates, improving program surveys, addressing budget cuts, and decreasing high staff turnover. Temple University--Theses.
31

Zhang, Ping. "Learning from Multiple Knowledge Sources." Diss., Temple University Libraries, 2013. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/214795.

Abstract:
Computer and Information Science, Ph.D. In supervised learning, it is usually assumed that true labels are readily available from a single annotator or source. However, recent advances in corroborative technology have given rise to situations where the true label of the target is unknown. In such problems, multiple sources or annotators are often available that provide noisy labels of the targets. In these multi-annotator problems, building a classifier in the traditional single-annotator manner, without regard for the annotator properties, may not be effective in general. In recent years, how to make the best use of the labeling information provided by multiple annotators to approximate the hidden true concept has drawn the attention of researchers in machine learning and data mining. In our previous work, a probabilistic method (the MAP-ML algorithm) of iteratively evaluating the different annotators and giving an estimate of the hidden true labels was developed. However, that method assumes the error rate of each annotator is consistent across all the input data. This is an impractical assumption in many cases, since annotator knowledge can fluctuate considerably depending on the groups of input instances. In this dissertation, one of our proposed methods, the GMM-MAPML algorithm, follows MAP-ML but relaxes the data-independent assumption, i.e., we assume an annotator may not be consistently accurate across the entire feature space. GMM-MAPML uses a Gaussian mixture model (GMM) and the Bayesian information criterion (BIC) to find the fittest model to approximate the distribution of the instances. Then the maximum a posteriori (MAP) estimation of the hidden true labels and the maximum-likelihood (ML) estimation of the quality of multiple annotators at each Gaussian component are provided alternately. Recent studies show that it is not the case that employing more annotators regardless of their expertise will result in improved highest aggregating performance. In this dissertation, we also propose a novel algorithm to integrate multiple annotators by Aggregating Experts and Filtering Novices, which we call AEFN. AEFN iteratively evaluates annotators, filters the low-quality annotators, and re-estimates the labels based only on information obtained from the good annotators. The noisy annotations we integrate are from any combination of human and previously existing machine-based classifiers, and thus AEFN can be applied to many real-world problems. Emotional speech classification, CASP9 protein disorder prediction, and biomedical text annotation experiments show a significant performance improvement of the proposed methods (i.e., GMM-MAPML and AEFN) as compared to the majority voting baseline and the previous data-independent MAP-ML method. Recent experiments include predicting novel drug indications (i.e., drug repositioning) for both approved drugs and new molecules by integrating multiple chemical, biological, or phenotypic data sources. Temple University--Theses.
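
Schematically, methods in this family alternate between estimating the hidden true labels given the current annotator-quality estimates and re-estimating each annotator's quality given the current labels. A toy sketch for binary labels (the accuracy-weighted voting below is illustrative, not the dissertation's exact probabilistic formulation):

    def estimate_labels(annotations, n_iters=20):
        # annotations: list of {annotator: 0-or-1 label} dicts, one per item.
        annotators = {a for ann in annotations for a in ann}
        quality = {a: 0.8 for a in annotators}            # initial guess
        labels = [0] * len(annotations)
        for _ in range(n_iters):
            # (a) label step: weight each vote by annotator quality
            for i, ann in enumerate(annotations):
                score = sum((2 * lbl - 1) * quality[a] for a, lbl in ann.items())
                labels[i] = 1 if score > 0 else 0
            # (b) quality step: agreement of each annotator with the labels
            for a in annotators:
                votes = [(lbl, labels[i]) for i, ann in enumerate(annotations)
                         for aa, lbl in ann.items() if aa == a]
                quality[a] = sum(l == t for l, t in votes) / len(votes)
        return labels, quality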
32

Laroche, Stéphane. "Variational analysis methods for retrieval of wind field from single-doppler radar data." Thesis, McGill University, 1994. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=28818.

Abstract:
The variational analysis methods are applied to retrieve the steady state wind field from single-Doppler radar data. The wind field is retrieved by fitting, in the least-squares sense, constraining model equations to observations measured during a short assimilation period (2 or 3 time sequences). The weak and strong constraint formalisms are reviewed and examined using the one-dimensional linear advection equation as a constraint. It is shown that the retrieval is not unique, but the problem can be controlled by a smoothness constraint. Variational two-dimensional and three-dimensional wind retrieval algorithms are developed and tested using actual dual-Doppler radar data. The conservation of reflectivity and the radial momentum equation are used as weak constraints in both algorithms. The anelastic form of the continuity equation is also included as a strong constraint in the three-dimensional algorithm. The two-dimensional algorithm is tested and compared to echo tracking methods using Doppler radar observations in the clear-air planetary boundary layer. The resolution at which the methods can effectively retrieve the horizontal wind field is examined in detail. The variational algorithm can properly retrieve wind structures greater than 10 km wavelength. The three-dimensional algorithm is tested using observations of a precipitating microburst. It is demonstrated that the three-dimensional wind field can be retrieved, but the method fails near the ground level. In addition, the retrieval is sensitive to the radar position relative to the observational domain due to systematic model errors. The computational efficiency of the three-dimensional wind retrieval algorithm allows its semi-operational implementation at the J. S. Marshall Radar Observatory of McGill University.
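
Schematically, the weak-constraint least-squares fit described above takes the form

    J(\mathbf{u}) = \sum_{t,i} \big(v_r^{\mathrm{obs}} - \mathbf{u}\cdot\hat{\mathbf{r}}\big)^2 + \lambda_1 \sum \Big(\frac{\partial Z}{\partial t} + \mathbf{u}\cdot\nabla Z\Big)^2 + \lambda_2\, \mathcal{S}(\mathbf{u})

where the first term fits the observed radial velocities over the short assimilation period, the second weakly enforces conservation of reflectivity Z (the radial momentum equation contributes an analogous weak term), and S is a smoothness penalty; in the three-dimensional algorithm the anelastic continuity equation is imposed exactly as a strong constraint. (Schematic form reconstructed from the abstract.)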
33

Williams, Simon E. "Extended Euler deconvolution and interpretation of potential field data from Bohai Bay, China." Thesis, University of Leeds, 2006. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.432651.

34

Ghaemi, Omid. "Collection and Examination of Lab Test and Field Performance Data on Friction and Polishing of Hot Mix Asphalt Surface." University of Akron / OhioLINK, 2011. http://rave.ohiolink.edu/etdc/view?acc_num=akron1323290412.

35

Puerto, Valencia J. (Jose). "Predictive model creation approach using layered subsystems quantified data collection from LTE L2 software system." Master's thesis, University of Oulu, 2019. http://jultika.oulu.fi/Record/nbnfioulu-201907192705.

Abstract:
The road map to a continuous and efficient improvement process for a complex software system has multiple stages and many interrelated ongoing transformations, these being direct responses to its constantly evolving environment. The system's scalability under these ongoing transformations depends, to a great extent, on the prediction of resource consumption and of systemic emergent properties; thus, as systems grow in size and complexity, their predictability decreases in accuracy. A predictive model is used to address the inherent growth in complexity and to increase the predictability of a complex system's performance. The model creation process is driven by the collection of quantified data from different layers of the Long-Term Evolution (LTE) Data-layer (L2) software system. The creation of such a model is possible due to the multiple system analysis tools Nokia has already implemented, allowing a multiple-layer data-gathering flow. The process consists of, first, stating the differences between the system layers; second, using a layered benchmark approach for data collection at the different levels; third, designing a process flow that organizes the data transformations from collection through filtering, pre-processing, and visualization; and fourth, as a proof of concept, comparing different Performance Measurement (PM) predictive models trained on the collected pre-processed data. In parallel to the model creation process, the thesis explores and compares various data visualization techniques that address the non-trivial graphical representation of the relations between subsystems' data. Finally, the current results of the model creation process are presented and discussed. The models were able to explain 54% and 67% of the variance in the two test configurations used in the instantiation of the model creation process proposed in this thesis.
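
The variance-explained figures quoted here are, under the standard reading of "variance explained", the coefficient of determination R²; for reference, a minimal computation:

    def r_squared(y_true, y_pred):
        # R^2 = 1 - SS_res / SS_tot: the fraction of output variance
        # accounted for by the predictive model.
        mean = sum(y_true) / len(y_true)
        ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
        ss_tot = sum((t - mean) ** 2 for t in y_true)
        return 1 - ss_res / ss_tot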
36

Jonasson, Fredrik. "A system for GDPR-compliant collection of social media data: from legal to software requirements." Thesis, Uppsala universitet, Institutionen för informationsteknologi, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-397110.

Abstract:
As of 2018 there is a new regulation regarding data protection in the European Union. The legislation, often referred to as the General Data Protection Regulation (GDPR), has led to increased demands on organizations that process personal data. This thesis investigated the legal consequences of social media data collection, with a particular focus on the collection of tweets. The legal findings were then translated into possible enhancements of a tweet-collecting software. The software was extended with a method for pseudonymization; however, it turned out that our implementation had some serious performance issues. There was also work done on an implementation of a method providing automatic tweet posting, with the purpose of repeatedly informing followers of a hashtag that a collection of tweets regarding that hashtag is taking place. Lastly, some findings about possible future enhancements to the software were laid out.
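
The thesis's implementation is not reproduced here; as an illustration only, one common approach to pseudonymising collected tweets is a keyed hash of the identifying field, so that analyses can still group tweets by author while re-identification requires a separately stored key:

    import hashlib
    import hmac

    SECRET_KEY = b"example-key-stored-outside-the-dataset"  # illustrative

    def pseudonymize(user_id: str) -> str:
        # Same input -> same pseudonym, so joins across the dataset still
        # work; reversing the mapping requires the separately held key.
        return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

Note that under the GDPR pseudonymised data remains personal data; the key must be stored separately and protected.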
37

Burns, Jonathan Allen. "Prehistoric Rockshelters of Pennsylvania: Revitalizing Behavioral Interpretation from Archaeological Spatial Data." Diss., Temple University Libraries, 2009. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/48182.

Abstract:
Anthropology, Ph.D. The size of archaeological data collection units and provenience controls affect data resolution, types of analyses, and the interpretations that archaeologists draw from the spatial patterning of material evidence. This research examines the use of fine-grained data collection units and the analyses that they support in the study of two Pennsylvania rockshelters to: 1) provide a better understanding of rockshelter use and the importance of rockshelters in Pennsylvania and Middle Atlantic region prehistory and, 2) reveal the impact that archaeological units can have on the reconstruction and interpretation of human behaviors in general. Insights from behavioral theory, ethnoarchaeology and previous archaeological research influenced the units and methods employed in the excavation of the Mykut and Camelback rockshelters. This analysis reveals the range of behaviors that can be reconstructed from these data, which can then be compared and contrasted with interpretations of other rockshelters and site contexts in the region. Temple University--Theses.
38

Gligorijevic, Jelena. "Context-aware Learning from Partial Observations." Diss., Temple University Libraries, 2018. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/484799.

Abstract:
Computer and Information Science, Ph.D. The Big Data revolution brought an increasing availability of data sets of unprecedented scales, enabling researchers in the machine learning and data mining communities to escalate in learning from such data and providing data-driven insights, decisions, and predictions. However, on their journey, they are faced with numerous challenges, including dealing with missing observations while learning from such data or making predictions on previously unobserved or rare (“tail”) examples, which are present in a large span of domains including climate, medical, social networks, consumer, or computational advertising domains. In this thesis, we address this important problem and propose tools for handling partially observed or completely unobserved data by exploiting information from its context. Here, we assume that the context is available in the form of a network or sequence structure, or as additional information to point-informative data examples. First, we propose two structured regression methods for dealing with missing values in partially observed temporal attributed graphs, based on the Gaussian Conditional Random Fields (GCRF) model, which draw power from the network/graph structure (context) of the unobserved instances. The Marginalized Gaussian Conditional Random Fields (m-GCRF) model is designed for dealing with missing response variable values (labels) in graph nodes, whereas Deep Feature Learning GCRF is able to deal with missing values in explanatory variables while learning feature representation jointly with learning complex interactions of nodes in a graph, together with the overall GCRF objective. Next, we consider unsupervised and supervised shallow and deep neural models for monetizing web search. We focus on two sponsored search tasks here: (i) query-to-ad matching, where we propose a novel shallow neural embedding model, worLd2vec, with improved local query context (location) utilization, and (ii) click-through-rate prediction for ads and queries, where the Deeply Supervised Semantic Match model is introduced for dealing with the unobserved and tail queries click-through-rate prediction problem, while jointly learning the semantic embeddings of a query and an ad, as well as their corresponding click-through rate. Finally, we propose a deep learning approach for ranking investigators based on their expected enrollment performance on new clinical trials, which learns from both investigator- and trial-related heterogeneous (structured and free-text) data sources, and is applicable to matching investigators to new trials from partial observations, and to recruitment of experienced investigators as well as new investigators with no previous experience in enrolling patients in clinical trials. Experimental evaluation of the proposed methods on a number of synthetic and diverse real-world data sets shows surpassing performance over their alternatives. Temple University--Theses.
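
For reference, the GCRF family underlying the structured regression models above is usually written as

    P(\mathbf{y} \mid \mathbf{x}) \propto \exp\Big(-\sum_i \alpha\,\big(y_i - R(\mathbf{x}_i)\big)^2 - \sum_{i \sim j} \beta\, S_{ij}\,\big(y_i - y_j\big)^2\Big)

where R is an unstructured predictor of each node's output and S_ij encodes the network context between nodes i and j; both quadratic terms keep the distribution Gaussian, so inference remains closed-form. (Standard form; the marginalized and deep-feature variants named in the abstract build on it.)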
APA, Harvard, Vancouver, ISO, and other styles
39

Dupaix, Taylor Meredith Ireene. "Statistical Analysis and Extraction of Quantitative Data from Elliptical-Signal-Model bSSFP MRI." BYU ScholarsArchive, 2019. https://scholarsarchive.byu.edu/etd/7475.

Full text
Abstract:
Osteoarthritis is the most common type of arthritis and is characterized by the loss of articular cartilage in a joint, which eventually leads to painful bone-on-bone interactions. Since the loss of cartilage is permanent, the main treatment for this disease is pain management until a full joint replacement is needed. Testing new potential treatments requires a consistent way to measure cartilage thickness. The current standard in the knee represents cartilage by the joint space width measured from x-rays; this measurement is highly variable and does not image cartilage directly. Unlike x-rays, magnetic resonance imaging (MRI) provides direct visualization of soft tissues in the body, such as cartilage. One specific MRI method, balanced steady-state free precession (bSSFP), provides useful contrast between cartilage and its surrounding tissues, allowing easy delineation of the cartilage for volume and thickness measurements. Using bSSFP has unique challenges, but it can provide quantitative MR tissue-parameter information in addition to volume and thickness measurements.

Although bSSFP provides useful contrast, it is highly sensitive to variations in the main magnetic field, which produce dark bands of signal null across an image referred to as banding artifacts. An analysis of banding-artifact reduction methods is presented in this dissertation. The new methods are shown to reduce banding artifacts better than traditional methods, although in most cases they do not provide as high a signal-to-noise ratio. This analysis is helpful in obtaining artifact-free images for volume and thickness measurements.

Image distortion arises when there is a magnetic susceptibility mismatch between bordering substances being imaged, as in the sinuses where air and body tissues meet. A map of the main magnetic field variation can be used to correct this distortion in post-processing. A novel method for obtaining such a map using bSSFP is developed in this dissertation; in cases where bSSFP contrast is desirable, the map can be obtained with no additional scan time.

A new way to extract the MR tissue parameters T2, T1, and M0 using bSSFP is also presented. This method obtains biomarkers that can potentially show the presence of osteoarthritis before cartilage degeneration begins, at the same time as the anatomical images are acquired. No adjunct scans need to be run to obtain these extra parameters, saving scan time, and unlike many adjunct scans the result is resolution-matched to the anatomical images.
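For context on the elliptical signal model named in the title: the complex bSSFP signal is commonly written as a point on an ellipse parameterized by the off-resonance precession angle per TR. The sketch below implements one widely published form of that model; the parameter names and toy values are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

def bssfp_signal(theta, T1, T2, TR, flip_deg, M0=1.0):
    """Magnitude of the elliptical-signal-model bSSFP signal (sketch).

    theta : off-resonance precession per TR, in radians; with this sign
            convention the signal null (band centre) falls at theta = 0.
    """
    E1, E2 = np.exp(-TR / T1), np.exp(-TR / T2)
    ca, sa = np.cos(np.deg2rad(flip_deg)), np.sin(np.deg2rad(flip_deg))
    d = 1 - E1 * ca - E2**2 * (E1 - ca)
    M = M0 * (1 - E1) * sa / d                  # ellipse scale
    a, b = E2, E2 * (1 - E1) * (1 + ca) / d     # ellipse shape parameters
    return np.abs(M * (1 - a * np.exp(1j * theta)) / (1 - b * np.cos(theta)))

# Sweeping theta over a full cycle shows the periodic dip that appears
# spatially as a dark banding artifact wherever the main field varies.
theta = np.linspace(-np.pi, np.pi, 9)
print(bssfp_signal(theta, T1=1.0, T2=0.05, TR=0.005, flip_deg=30))
```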
APA, Harvard, Vancouver, ISO, and other styles
40

Liu, Xianglin. "Global gravity field recovery from satellite-to-satellite tracking data with the acceleration approach." Delft : NCG Nederlandse Commissie voor Geodesie, 2008. http://opac.nebis.ch/cgi-bin/showAbstract.pl?u20=9789061323096.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Eicker, Annette [Verfasser]. "Gravity field refinement by radial basis functions from in-situ satellite data / Annette Eicker." Bonn : Universitäts- und Landesbibliothek Bonn, 2019. http://d-nb.info/1199005266/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Morán, Toledo Carlos A. "Framework for Estimating Congestion performance measures : from data collection to reliability analysis: case study of Stockholm." Licentiate thesis, KTH, Trafik och Logistik (closed 20110301), 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4625.

Full text
Abstract:
For operational and planning purposes it is important to observe and predict the traffic performance of congested urban road links and networks. Congestion can be defined either as traffic conditions caused by a downstream bottleneck or as excess travel time relative to light or free-flow travel conditions. Factors affecting definitions of congestion for specific studies are reviewed, and an inventory of proposed congestion performance measures is presented for both definitions of congestion. The Swedish Road Administration has recognized the reliability of congestion-level estimates as an important factor when describing traffic conditions in the road network. Traffic data collected for the Stockholm congestion charging trials was used to estimate selected congestion performance measures and to analyze their statistical characteristics and applicability. A comparative analysis of data collection methods is provided, with recommendations for their use in estimating congestion performance measures. The reliability of the estimates of each congestion performance measure is evaluated for different area networks and different time periods of the day. These series of observations are further studied with the aim of identifying systematic differences in the reliability of the estimates. A reliability ranking is provided to guide the selection of estimators in future studies. Finally, a simplified methodology for estimating recommended sample sizes under budget-constrained conditions is provided.
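To make two of the quantities in this abstract concrete, the sketch below computes a travel time index, one widely used congestion performance measure, and a classic sample-size estimate for a mean. Both formulas are standard textbook material rather than the thesis's own methodology, and the numbers are illustrative.

```python
import numpy as np

def travel_time_index(travel_times, free_flow_time):
    """Travel Time Index: mean observed travel time over free-flow time.
    Values above 1 indicate congestion under the excess-travel-time view."""
    return np.mean(travel_times) / free_flow_time

def required_sample_size(sigma, margin, z=1.96):
    """Textbook sample size for estimating a mean to within +/- margin at
    ~95% confidence: n = (z * sigma / margin)^2, rounded up."""
    return int(np.ceil((z * sigma / margin) ** 2))

tt = np.array([6.1, 7.4, 8.0, 6.8, 9.2])   # observed link travel times (min)
print(travel_time_index(tt, free_flow_time=5.0))    # ~1.5: 50% excess time
print(required_sample_size(sigma=1.2, margin=0.5))  # observations needed
```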
APA, Harvard, Vancouver, ISO, and other styles
43

Drury, William B. "A data collection system for the study of RF interference from industrial, scientific, and medical equipment." Ohio : Ohio University, 1986. http://www.ohiolink.edu/etd/view.cgi?ohiou1183129782.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Morán, Toledo Carlos A. "Framework for estimating congestion performance measures : from data collection to reliability analysis : case study of Stockholm /." Stockholm : Transporter och samhällsekonomi, Kungliga Tekniska högskolan, 2008. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-4625.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Seager, Kimberly. "An exploratory data collection approach for the assessment of level of service from a traveler's perspective." [Gainesville, Fla.] : University of Florida, 2004. http://purl.fcla.edu/fcla/etd/UFE0003401.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Comrie, Fiona S. "An evaluation of the effectiveness of tailored dietary feedback from a novel online dietary assessment method for changing the eating habits of undergraduate students." Thesis, Available from the University of Aberdeen Library and Historic Collections Digital Resources, 2008. http://digitool.abdn.ac.uk:80/webclient/DeliveryManager?application=DIGITOOL-3&owner=resourcediscovery&custom_att_2=simple_viewer&pid=25224.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Koziel, Sylvie Evelyne. "From data collection to electric grid performance : How can data analytics support asset management decisions for an efficient transition toward smart grids?" Licentiate thesis, KTH, Elektroteknisk teori och konstruktion, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-292323.

Full text
Abstract:
Physical asset management in the electric power sector encompasses the scheduling of maintenance and replacement of grid components, as well as decisions about investments in new components. Data plays a crucial role in these decisions, and its importance is increasing with the transformation of the power system and its evolution toward smart grids. This thesis deals with questions related to data management as a way to improve the performance of asset management decisions. Data management is defined as the collection, processing, and storage of data; the focus here is on collection and processing. First, the influence of data on asset-related decisions is explored. In particular, the impact of data quality on the replacement time of a generic component (a line, for example) is quantified using a scenario approach and failure modeling. Decisions based on data of poor quality are most likely not optimal; in this case, faulty data about the age of the component leads to non-optimal scheduling of its replacement. The corresponding costs are calculated for different levels of data quality, and a framework is developed to evaluate the amount of investment needed in data quality improvement and its profitability. Then, ways to use available data efficiently are investigated, in particular the possibility of applying machine learning algorithms to real-world datasets. New approaches are developed that use only available data for component ranking and failure prediction, two important concepts often used to prioritize components and to schedule maintenance and replacement. A large part of the scientific literature assumes that the future of smart grids lies in big data collection and in developing algorithms to process huge amounts of data. On the contrary, this work shows how automation and machine learning techniques can actually reduce the need to collect huge amounts of data by using the available data more efficiently. One major challenge is the trade-off between the precision of modeling results and the costs of data management.
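To illustrate the kind of failure modeling the abstract refers to, the sketch below evaluates the classic expected cost rate of age-based replacement under a Weibull failure model, then shows how an error in the recorded component age shifts the schedule away from the optimum. The renewal-theory formula is standard; the parameter values are illustrative assumptions, not the thesis's.

```python
import numpy as np

def cost_rate(T, beta, eta, c_prev, c_fail):
    """Expected cost per year of replacing a component at age T when its
    lifetime is Weibull(beta, eta):
    (c_prev * R(T) + c_fail * (1 - R(T))) / E[min(lifetime, T)]."""
    t = np.linspace(0.0, T, 2000)
    R = np.exp(-(t / eta) ** beta)       # survival function on [0, T]
    expected_life = np.trapz(R, t)       # E[min(lifetime, T)]
    return (c_prev * R[-1] + c_fail * (1 - R[-1])) / expected_life

ages = np.linspace(5.0, 40.0, 200)
costs = [cost_rate(T, beta=3.0, eta=30.0, c_prev=1.0, c_fail=10.0)
         for T in ages]
T_opt = ages[int(np.argmin(costs))]
print(f"optimal replacement age: {T_opt:.1f} years")

# Data-quality scenario: if the recorded age overstates the true age by
# 5 years, replacement effectively happens 5 years early; the penalty is
# the difference between the two cost rates.
early = cost_rate(T_opt - 5.0, 3.0, 30.0, 1.0, 10.0)
print(f"cost-rate penalty from the age error: {early - min(costs):.4f}")
```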
APA, Harvard, Vancouver, ISO, and other styles
48

O'Brien, T. "Sustaining data quality : lessons from the field : creating and sustaining data quality within diverse enterprise resource planning and information systems." Thesis, Nottingham Trent University, 2011. http://irep.ntu.ac.uk/id/eprint/304/.

Full text
Abstract:
This research identified a gap in the literature surrounding the process of improving and sustaining the quality of data within enterprise resource planning and information (ERP) systems. The study not only established firmly that quality data is an absolute necessity for all organisations, none more so than those operating ERP systems, but also identified that for any improvement process to be worthwhile it must gain some degree of sustainability. For this reason the study set out to discover the means by which the quality of data can be improved and, more fundamentally, become embedded within an organisation. A detailed review of the literature unearthed rich material, in particular around the concept of data quality and its application within business systems, from which a correlation was established between the concepts of a planning and information system and a product manufacturing system. A conceptual framework was then developed based on three elements seen as key to any data quality programme: people, processes, and data. A qualitative study was undertaken within the researcher's own organisation, Remploy, employing an action research/focus group approach aligned to a data quality improvement initiative already in place within the organisation. A series of site meetings and conference calls took place, embracing forty-eight of the fifty-four factories together with seven business groups. A quantitative survey was then undertaken using a web-based self-administered questionnaire distributed to a number of the researcher's colleagues within Remploy. The findings from both the qualitative study and the quantitative survey provided unique material in terms of key findings and themes. A number of principal findings emerged, relating to: the significance of the role of a 'champion' at various levels within a project; the importance of measurement, reporting and feedback in any improvement process; the necessity for systems, and the people that use them, to be allowed to mature; and the considerable inconsistency in people's perceptions of and attitudes toward data and data quality. In conclusion, the outcomes of this study have the potential to both improve and sustain data quality within enterprise systems when applied to practical business and professional settings, whilst also offering the academic community a contribution to the body of knowledge.
APA, Harvard, Vancouver, ISO, and other styles
49

Pratt, Marrett Caroline. "A VIEW FROM THE FIELD: URBAN SPECIAL EDUCATION DIRECTORS' PERCEPTIONS OF ESSENTIAL COMPETENCIES FOR NEWLY APPOINTED SPECIAL EDUCATION ADMINISTRATORS." Doctoral diss., University of Central Florida, 2008. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/2436.

Full text
Abstract:
The purpose of this exploratory study was to determine which competencies urban directors of special education perceive to be essential for newly appointed urban special education administrators. Two research questions and two null hypotheses were generated to investigate the underlying factors in the competencies perceived by urban special education directors to be essential for newly appointed special education administrators, and to investigate the relationship between years of experience as a director of special education and these underlying factors. A factor analysis revealed three underlying factors reported to be essential for newly appointed special education administrators. A multiple regression analysis indicated that the relationship between years of experience as a director of special education and the underlying factors (Management, Instruction and Change; Supervision of Faculty; and Team Building Skills) was not statistically significant. A post hoc test was conducted to further detect differences between years of experience as an urban director of special education and the underlying factors. The results were sufficient to reject the null hypotheses in both cases.
Ph.D.
Department of Child, Family and Community Sciences
Education
Education PhD
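For readers unfamiliar with the statistical pipeline described above, the sketch below runs an exploratory factor analysis on synthetic survey ratings and then regresses each factor's scores on years of experience. All data, dimensions, and variable names are illustrative assumptions; only the sequence of methods mirrors the abstract.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
ratings = rng.normal(size=(60, 12))    # 60 directors rating 12 competencies
years = rng.integers(1, 25, size=60)   # years of experience as a director

# Step 1: extract underlying factors from the competency ratings.
fa = FactorAnalysis(n_components=3, random_state=0)
scores = fa.fit_transform(ratings)     # one column of scores per factor

# Step 2: test each factor's relationship with years of experience.
for k in range(scores.shape[1]):
    fit = sm.OLS(scores[:, k], sm.add_constant(years)).fit()
    print(f"factor {k}: experience p-value = {fit.pvalues[1]:.3f}")
```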
APA, Harvard, Vancouver, ISO, and other styles
50

Norcross, Stuart John. "Deriving distributed garbage collectors from distributed termination algorithms." Thesis, University of St Andrews, 2004. http://hdl.handle.net/10023/14986.

Full text
Abstract:
This thesis concentrates on the derivation of a modularised version of the DMOS distributed garbage collection algorithm and the implementation of this algorithm in a distributed computational environment. DMOS appears to exhibit a unique combination of attractive characteristics for a distributed garbage collector, but the original algorithm is known to contain a bug and, prior to this work, lacked a satisfactory, understandable implementation. The relationship between distributed termination detection algorithms and distributed garbage collectors is central to this thesis. A modularised DMOS algorithm is developed using a previously published derivation methodology for distributed garbage collectors that centres on mapping centralised collection schemes to distributed termination detection algorithms. In examining the utility and suitability of the derivation methodology, a family of six distributed collectors is developed and an extension to the methodology is presented. The research work described in this thesis incorporates the definition and implementation of a distributed computational environment based on the ProcessBase language, together with a generic definition of a previously unimplemented distributed termination detection algorithm called Task Balancing. The role of distributed termination detection in the DMOS collection mechanisms is defined through a process of step-wise refinement. The implementation of the collector is achieved in two stages: the first stage defines the implementation of two distributed termination mappings with the Task Balancing algorithm; the second defines the DMOS collection mechanisms.
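As a hint of how termination detection underpins such collectors: a distributed computation has terminated when every process is passive and no messages are in flight, and message-counting detectors check exactly that invariant. The sketch below shows only the bare invariant; it is not the Task Balancing algorithm (whose details the abstract does not give), and a real detector must also obtain a consistent snapshot of these counters across processes.

```python
from dataclasses import dataclass

@dataclass
class Process:
    active: bool = False   # is the process still doing work?
    sent: int = 0          # messages (or tasks) sent so far
    received: int = 0      # messages (or tasks) received so far

def terminated(processes):
    """True when all processes are passive and sends equal receives,
    i.e. no message that could reactivate a process is still in flight."""
    all_passive = all(not p.active for p in processes)
    in_flight = sum(p.sent - p.received for p in processes)
    return all_passive and in_flight == 0

procs = [Process(), Process(sent=2), Process(received=2)]
print(terminated(procs))   # True: everyone passive, 2 sent == 2 received
```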
APA, Harvard, Vancouver, ISO, and other styles