To see the other types of publications on this topic, follow the link: BBC Data.

Dissertations / Theses on the topic 'BBC Data'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Select a source type:

Consult the top 36 dissertations / theses for your research on the topic 'BBC Data.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Marques, Iuri Lammel. "ORGANIZAÇÃO E GERENCIAMENTO DE CONTEÚDOS JORNALÍSTICOS NA WEB SEMÂNTICA." Universidade Federal de Santa Maria, 2011. http://repositorio.ufsm.br/handle/1/6327.

Full text
Abstract:
Among the technologies that have modified digital journalism since its inception, two stand out: 1) the World Wide Web (web), a network of digital documents that has been used as a platform for the practice of journalism on the Internet and that determined the three generations of digital journalism; and 2) the databases aggregated to the web, which have become the main structuring technology of journalistic products in the transition between the third and fourth generations of digital journalism. In 2001, the scientist Tim Berners-Lee, inventor of the web, published a paper proposing an extension to this network, which was called the Semantic Web. The paper proposed a change in the concept of the web: from the traditional network of documents to a network of data, with the technical ability to represent real-world concepts such as people, places and objects. A great advantage of this proposal is that computers would be able to interpret the data and identify their meanings. With a semantic network, information could be organized and managed more efficiently and in an automated way, and the connections between data would be richer than the current hyperlinks between documents. The concept of the Semantic Web is still maturing, but it is already possible to find digital products that implement it. This research analyzes two real cases that apply the concept of the Semantic Web in digital journalism, specifically in the organization and management of journalistic information. For the theoretical background of the research, we conducted a literature review on digital journalism, on the paradigm of Digital Journalism on Databases (JDBD), and on how the standard technologies of the Semantic Web, such as RDF and ontologies, work. This is an exploratory study that uses the case study as its method; the cases are the BBC World Cup 2010 site and the BBC Wildlife site.
The analysis was performed using eight categories applicable to the study of JDBD. Among the results, it was found that the Semantic Web enhances some of the characteristics of JDBD, mainly due to the automation of management tasks. Moreover, the analysis identified automated interoperability as the most advantageous benefit of the Semantic Web in both digital journalism cases, one that could become a point of rupture if the Semantic Web project succeeds.
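The network of data the abstract describes rests on RDF's subject-predicate-object triples. A minimal sketch of the idea, with every resource name invented for illustration (these are not identifiers from the BBC sites studied):

```python
# Minimal illustration of the RDF idea behind the Semantic Web: facts are
# stored as (subject, predicate, object) triples so that software can query
# meaning directly instead of parsing documents. All names are invented.
triples = [
    ("bbc:WorldCup2010", "rdf:type", "schema:SportsEvent"),
    ("bbc:WorldCup2010", "schema:location", "dbpedia:South_Africa"),
    ("bbc:Article42", "schema:about", "bbc:WorldCup2010"),
]

def query(store, subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None = wildcard)."""
    return [
        (s, p, o)
        for s, p, o in store
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Which resources is Article42 about?
print(query(triples, subject="bbc:Article42", predicate="schema:about"))
```

Real deployments use SPARQL and ontology-aware stores rather than a list scan, but the triple pattern match is the same primitive.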
APA, Harvard, Vancouver, ISO, and other styles
2

Pokta, Suriani. "Bayesian model selection using exact and approximated posterior probabilities with applications to Star Data." Diss., Texas A&M University, 2004. http://hdl.handle.net/1969.1/1121.

Full text
Abstract:
This research consists of two parts. The first part examines the posterior probability integrals for a family of linear models which arises from the work of Hart, Koen and Lombard (2003). Applying Laplace's method to these integrals is not entirely straightforward. One of the requirements is to analyze the asymptotic behavior of the information matrices as the sample size tends to infinity. This requires a number of analytic tricks, including viewing our covariance matrices as tending to differential operators. The use of differential operators and their Green's functions can provide a convenient and systematic method to asymptotically invert the covariance matrices. Once we have found the asymptotic behavior of the information matrices, we will see that in most cases BIC provides a reasonable approximation to the log of the posterior probability and Laplace's method gives more terms in the expansion and hence provides a slightly better approximation. In other cases, a number of pathologies will arise. We will see that in one case, BIC does not provide an asymptotically consistent estimate of the posterior probability; however, the more general Laplace's method will provide such an estimate. In another case, we will see that a naive application of Laplace's method will give a misleading answer and Laplace's method must be adapted to give the correct answer. The second part uses numerical methods to compute the "exact" posterior probabilities and compare them to the approximations arising from BIC and Laplace's method.
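The role of BIC as an approximation to the log posterior probability can be illustrated on a toy model comparison. A sketch with synthetic data and two Gaussian linear models (a constant-mean model vs. a least-squares line); the data and models are invented and are not the star-data models of the thesis:

```python
import math

def bic(rss, n, k):
    """BIC = k*log(n) - 2*(max log-likelihood) for a Gaussian linear model,
    where the maximised log-likelihood is -n/2 * (log(2*pi*rss/n) + 1)."""
    loglik = -0.5 * n * (math.log(2 * math.pi * rss / n) + 1)
    return k * math.log(n) - 2 * loglik

# Synthetic data with a genuine linear trend plus small alternating "noise".
xs = list(range(20))
ys = [2.0 + 0.5 * x + 0.3 * ((-1) ** x) for x in xs]
n = len(xs)

# Model 1: constant mean (parameters: mean, variance -> k = 2).
mean_y = sum(ys) / n
rss1 = sum((y - mean_y) ** 2 for y in ys)

# Model 2: least-squares line (parameters: slope, intercept, variance -> k = 3).
mean_x = sum(xs) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x
rss2 = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))

# The model with the smaller BIC is preferred; here the linear model wins.
print(bic(rss1, n, 2), bic(rss2, n, 3))
```

Laplace's method adds further terms of the asymptotic expansion beyond this k*log(n) penalty, which is where the pathologies analysed in the thesis arise.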
APA, Harvard, Vancouver, ISO, and other styles
3

Neloy, Md Naim Ud Dwla. "Validation of theoritical approach to measure biodiversity using plant species data." Thesis, Högskolan i Skövde, Institutionen för biovetenskap, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-19431.

Full text
Abstract:
Measuring biodiversity is important for serving our ecology well and keeping the environment sound. Biodiversity is the variety of life at different levels, such as an ecosystem, the life forms on a site, or a landscape, and it is measured as a combination of species richness and evenness; separate formulas, indices and equations are widely used to measure it at each level. The Swedish Environmental Protection Agency aims to establish an index consisting of landscape functionality and landscape heterogeneity. For the landscape-functionality assessment, the Biotope Biodiversity Capacity Index (BBCI) is to be used; a high BBCI indicates high biodiversity for each biotope. However, how well empirically estimated species richness matches the BBCI has not been evaluated. The aim of this paper is to examine the relationship between empirically estimated biodiversity and the BBCI; the relationship between the Shannon diversity index and the BBCI was also examined. Empirical data for 15 selected landscapes were collected from Artportalen.se and sorted for further calculation. The results showed a strong positive relationship between empirically estimated biodiversity and the BBCI, and the Shannon diversity index and the BBCI were likewise positively correlated. The BBCI could explain 60%-69% of the species richness data and 17%-22% of the Shannon diversity index, which supports this theoretical approach to measuring biodiversity.
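The Shannon diversity index compared against the BBCI above is H' = -sum(p_i * ln p_i) over species proportions. A small sketch with invented plant-species counts:

```python
import math

def shannon_index(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over proportions."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# Hypothetical plant-species counts for two landscapes.
even_site = [10, 10, 10, 10]   # four equally abundant species
uneven_site = [37, 1, 1, 1]    # one dominant species, same richness

# Evenness raises H': the even site reaches the maximum ln(4) for 4 species.
print(round(shannon_index(even_site), 3), round(shannon_index(uneven_site), 3))
```

This is why the index captures more than raw species richness: both sites have four species, but only one scores near the maximum.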
APA, Harvard, Vancouver, ISO, and other styles
4

Wu, Jingwen. "Model-based clustering and model selection for binned data." Thesis, Supélec, 2014. http://www.theses.fr/2014SUPL0005/document.

Full text
Abstract:
This thesis studies Gaussian mixture model-based clustering approaches and model selection criteria for the clustering of binned data. Fourteen binned-EM algorithms and fourteen bin-EM-CEM algorithms are developed for fourteen parsimonious Gaussian mixture models. These new algorithms combine the computation-time advantages of binning data with the parameter-estimation advantages of parsimonious Gaussian mixture models. The complexities of the binned-EM and bin-EM-CEM algorithms are calculated and compared to those of the EM and CEM algorithms, respectively. In order to select the right model, one which fits the data well and satisfies the clustering precision requirements within a reasonable computation time, the AIC, BIC, ICL, NEC, and AWE criteria are extended to binned data clustering when the proposed binned-EM and bin-EM-CEM algorithms are used. The advantages of the different proposed methods are illustrated through experimental studies.
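The selection criteria named above trade goodness of fit against parameter count. A sketch of AIC and BIC applied to two hypothetical fitted mixtures; the log-likelihoods and parameter counts are invented for illustration and are not results from the thesis:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2*k - 2*log-likelihood."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Bayesian information criterion: k*log(n) - 2*log-likelihood."""
    return k * math.log(n) - 2 * loglik

# Hypothetical fits of two parsimonious Gaussian mixtures to the same
# binned data set: a spherical model with few parameters and a
# full-covariance model with many more (all numbers invented).
n = 1000
candidates = {
    "spherical, K=3": {"loglik": -2510.0, "k": 11},
    "full covariance, K=3": {"loglik": -2490.0, "k": 29},
}

for name, fit in candidates.items():
    print(name, aic(fit["loglik"], fit["k"]), bic(fit["loglik"], fit["k"], n))
```

Lower is better for both criteria. In this invented example BIC penalises the 18 extra parameters more heavily than AIC, so the two criteria disagree about which model to keep, which is exactly why several criteria are compared in the thesis.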
APA, Harvard, Vancouver, ISO, and other styles
5

Knuth, Tobias. "Fraud prevention in the B2C e-Commerce mail order business : a framework for an economic perspective on data mining." Thesis, Edinburgh Napier University, 2018. http://researchrepository.napier.ac.uk/Output/1256175.

Full text
Abstract:
A remarkable gap exists between the financial impact of fraud in the B2C e-commerce mail order business and the amount of research, whether qualitative or quantitative, conducted on fraud prevention in this area. Projecting published fraud rates of only approximately one percent onto e-commerce sales data, the affected sales volume amounts to $651 million in the German market and $5.22 billion in the North American market; empirical data, however, indicate even higher fraud rates. Low profit margins amplify the financial damage caused by fraudulent activities. Hence, companies show increasing concern about the rising incidence of internet fraud. The problem motivates companies to invest in data analytics and, as a more sophisticated approach, in automated machine learning systems, in order to inspect and evaluate the high volume of transactions in which potential fraud cases can be buried. In other areas that face fraud (e.g. automobile insurance), machine learning has been applied successfully. However, there is little evidence yet about which variables may act as fraud risk indicators and how to design such systems in the e-commerce mail order business. In this research, mixed methods are applied in order to investigate how computer-aided systems can help detect and prevent fraudulent transactions. In the qualitative part, experts from fraud prevention companies are interviewed in order to understand how fraud prevention has conventionally been conducted in the e-commerce mail order business. The quantitative part, for which a dataset containing transactions from one of the largest e-commerce firms in Europe was analyzed, consists of three analytical components. First, feature importance is evaluated by computing information gain and training a decision tree in order to find out which features are relevant fraud indicators. Second, a prediction model is built using logistic regression and gradient boosted trees; the model makes it possible to estimate the fraud risk of future transactions. Third, because risk estimation alone does not equal profit maximization, utility theory is woven into the prioritization of transactions so that the model optimizes the financial value of fraud prevention activities. Results indicate that the interviewed companies want intelligent computer-aided systems that support manual inspection activities through data mining techniques. Feature analysis reveals that some features, such as whether a shipment has been sent to a parcel shop, separate fraudulent from legitimate orders better than others. The predictive model yields promising results, correctly identifying approximately 86% of the 2% most suspicious transactions as fraud. When the model is used to optimize the financial outcome instead of pure classification quality, results suggest that the company providing the dataset could achieve substantial additional savings of up to 87% by introducing expected utility as a ranking measure when constrained by limited inspection resources.
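The expected-utility ranking described in the third component can be sketched as follows; the fraud probabilities, order amounts and the fixed inspection cost are all invented for illustration:

```python
# Sketch of ranking transactions for manual fraud inspection by expected
# utility rather than by raw fraud probability. All numbers are invented.
INSPECTION_COST = 5.0  # assumed fixed cost of manually reviewing one order

transactions = [
    {"id": "A", "fraud_prob": 0.90, "amount": 20.0},
    {"id": "B", "fraud_prob": 0.30, "amount": 500.0},
    {"id": "C", "fraud_prob": 0.05, "amount": 1000.0},
]

def expected_utility(t):
    """Expected saving from inspecting: p(fraud) * amount - inspection cost."""
    return t["fraud_prob"] * t["amount"] - INSPECTION_COST

# With limited inspection resources, review the highest expected-utility
# orders first: a high-value order with moderate risk can outrank a
# low-value order that is almost certainly fraudulent.
ranked = sorted(transactions, key=expected_utility, reverse=True)
print([t["id"] for t in ranked])
```

Note how order A, the most probable fraud, ranks last: its small amount means inspecting it saves little, which is the intuition behind optimizing financial outcome instead of pure classification quality.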
APA, Harvard, Vancouver, ISO, and other styles
6

Camargo, André Pierro de. "Modelos de regressão sobre dados composicionais." Universidade de São Paulo, 2011. http://www.teses.usp.br/teses/disponiveis/45/45132/tde-21052012-170807/.

Full text
Abstract:
Compositional data consist of vectors whose components are the proportions of some whole, that is, vectors with positive entries summing to 1. The problem of estimating the portions $y_1, y_2, \dots, y_D$ corresponding to the sectors $SE_1, SE_2, \dots, SE_D$ of some whole $Q$ arises frequently in several domains of knowledge. Typical examples are the percentages $y_1, y_2, \dots, y_D$ of voting intentions for the candidates $Ca_1, Ca_2, \dots, Ca_D$ in governmental elections, or the market shares of competing industries. Naturally, it is of great interest to study how such proportions vary with certain contextual changes, for example geographic location or time; in any competitive environment, such information is of great help to competitors devising their strategies. In this work we present and discuss some approaches proposed in the literature for regression on compositional data, as well as some model selection methods based on Bayesian inference.
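A standard device in compositional regression, used by several of the approaches discussed in this literature, is to map the simplex to unconstrained space with a log-ratio transform so that ordinary regression machinery applies. A sketch of the additive log-ratio (alr) transform; the vote shares are invented:

```python
import math

def alr(composition):
    """Additive log-ratio transform: map proportions (p1..pD) summing to 1
    to unconstrained coordinates log(p_i / p_D), i = 1..D-1."""
    last = composition[-1]
    return [math.log(p / last) for p in composition[:-1]]

def alr_inverse(coords):
    """Map alr coordinates back to a composition on the simplex."""
    expanded = [math.exp(c) for c in coords] + [1.0]
    total = sum(expanded)
    return [v / total for v in expanded]

# Hypothetical vote shares of three candidates.
shares = [0.5, 0.3, 0.2]
coords = alr(shares)            # ordinary regression can be run on these
recovered = alr_inverse(coords) # and predictions mapped back to the simplex
print([round(p, 3) for p in recovered])
```

Fitting a model on the alr coordinates and mapping predictions back guarantees the predicted proportions are positive and sum to 1.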
APA, Harvard, Vancouver, ISO, and other styles
7

Mwenze, Tshipeng. "The implications of Sr and Nd isotope data on the genesis of the Platreef and associated BMS and PGE mineralisation, Bushveld Igneous Complex, South Africa." University of the Western Cape, 2019. http://hdl.handle.net/11394/6922.

Full text
Abstract:
Philosophiae Doctor - PhD
The Platreef is a platinum group element (PGE) deposit located in the Northern Limb of the Bushveld Igneous Complex (BIC). It is a series of mafic and ultramafic sills overlain by rocks of the Main Zone (MZ) of the BIC. In comparison to the PGE deposits of the Critical Zone (CZ) of the Eastern and Western Limbs of the BIC (i.e., the Merensky Reef and the UG-2 chromitite), which are less than 1 m thick, the Platreef is 10 to 400 m thick and comprises a variety of rocks. PGE mineralisation in the Platreef is not confined to a specific rock type, and its distribution and style also vary with depth and along strike. Despite the numerous studies that have been conducted, the genesis of the Platreef is still poorly understood. New major- and trace-element data, in conjunction with Sr–Nd isotope data generated from whole-rock analyses of different Platreef rocks, were collected from four drill cores along its strike. The data were examined to determine the source of the magmas and to identify the processes involved in their genesis. The study also aimed to establish whether a genetic link exists between the Platreef magmas and the magmas that formed the Lower Zone (LZ), CZ and MZ of the Rustenburg Layered Suite (RLS) of the BIC. The petrography revealed that the Platreef in the four drill cores consists of harzburgite, olivine pyroxenite, pyroxenite, feldspathic pyroxenite and norite. Based on textural and modal mineralogy variations, the feldspathic pyroxenite was subdivided into five types (I, II, III, IV and V). The variation in the average contents of MgO, LaN/YbN and ΣREE for the Platreef rocks is consistent with the modal mineralogy from the least to the most differentiated rocks. However, the Sr–Nd isotope data of the Platreef rocks revealed two distinct groups of samples with decreasing ɛNd2060. Group 1 consists of pyroxenite and feldspathic pyroxenite II, III and V, with ɛNd2060 values ranging from –8.4 to –2.9 and 87Sr/86Sr2060 values from 0.707281 to 0.712106. Group 2 consists of olivine pyroxenite and feldspathic pyroxenite Type I, with ɛNd2060 ranging from –12.6 to –10.8 and 87Sr/86Sr2060 ranging from 0.707545 to 0.710042. In comparison to the LZ, CZ and MZ rocks, which have ɛNd values ranging from –8.5 to –5.1 and 87Sr/86Sr ranging from 0.704400 to 0.709671, the Platreef pyroxenite of group 1 has less negative ɛNd2060 values (from –3.8 to –2.9) and higher 87Sr/86Sr2060 values (from 0.709177 to 0.710492), whereas the feldspathic pyroxenite of group 1 has overlapping ɛNd2060 values (from –8.4 to –4.9) but also higher 87Sr/86Sr2060 values (from 0.707281 to 0.712106). The Platreef olivine pyroxenite and feldspathic pyroxenite of group 2, in contrast, have highly negative ɛNd2060 values and overlapping 87Sr/86Sr2060 values. It is therefore suggested that the Platreef magmas derived from the partial melting of a heterogeneous mantle source comprising depleted mantle melts together with both metasomatized, slightly unradiogenic-Nd-enriched melts and highly unradiogenic-Nd-enriched melts from the subcontinental lithospheric mantle. These magmas ascended through the continental crust along different paths and interacted with rocks of different Sr–Nd isotopic compositions, which resulted in the formation of hybrid magmas. The study speculates that sulphide saturation in the Platreef magmas was reached in staging chambers at depth, and that the varying styles of PGE mineralisation in the Platreef rocks are the result of varying degrees of partial melting of the heterogeneous source of their magmas. In conclusion, this study suggests that the genesis of the Platreef is much more complex and should be considered largely independent from the processes involved in the genesis of the RLS in the Eastern and Western Limbs of the BIC, in agreement with earlier studies.
NRF Inkaba ye Africa Iphakade
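The ɛNd notation used throughout the abstract expresses a measured 143Nd/144Nd ratio as a parts-per-10^4 deviation from the chondritic uniform reservoir (CHUR). A present-day sketch; the sample ratio below is invented, and age-correcting to 2060 Ma (the ɛNd2060 of the thesis) additionally requires the sample's 147Sm/144Nd ratio:

```python
# epsilon-Nd expresses small 143Nd/144Nd differences in parts per 10^4
# relative to CHUR. The sample ratio here is invented for illustration.
CHUR_143ND_144ND = 0.512638  # commonly used present-day CHUR value

def epsilon_nd(sample_ratio, chur_ratio=CHUR_143ND_144ND):
    """epsilon-Nd = (sample / CHUR - 1) * 10^4."""
    return (sample_ratio / chur_ratio - 1.0) * 1e4

# A ratio below CHUR gives the negative epsilon-Nd values reported for
# the Platreef (e.g. -12.6 to -2.9 at 2060 Ma in the abstract).
print(round(epsilon_nd(0.512200), 1))
```

Negative values indicate a source less radiogenic than CHUR, which is why the abstract reads them as evidence for enriched subcontinental lithospheric mantle contributions.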
APA, Harvard, Vancouver, ISO, and other styles
8

Ben Slimen, Yosra. "Knowledge extraction from huge volume of heterogeneous data for an automated radio network management." Thesis, Lyon, 2018. http://www.theses.fr/2018LYSE2046.

Full text
Abstract:
In order to help mobile operators with the management of their radio access networks, three models are proposed. The first model is a supervised approach to the prevention of mobile network anomalies. Its objective is to detect future malfunctions of a set of cells by observing only key performance indicators (KPIs), which are treated as functional data. By alerting engineers as well as self-organizing networks, mobile operators can thus be saved from a certain performance degradation. The model has proven its efficiency in an application on real data that aims to detect capacity degradation, accessibility problems and call drops in LTE networks. Due to the diversity of mobile network technologies, the volume of data that has to be observed by mobile operators on a daily basis has become enormous, and this huge volume has become an obstacle to mobile network management. The second model aims to provide a simplified representation of KPIs for easier analysis. Hence, a model-based co-clustering algorithm for functional data is proposed. The algorithm relies on the latent block model, in which each curve is identified by its functional principal components, modeled by a multivariate Gaussian distribution whose parameters are block-specific. The latter are estimated by a stochastic EM algorithm embedding Gibbs sampling. This model is the first co-clustering approach for functional data, and it has proven its efficiency on simulated data and in a real application that helps to optimize the topology of 4G mobile networks. The third model aims to summarize the information in data issued from KPIs and also from network alarms. A model-based co-clustering algorithm for mixed data, functional and binary, is therefore proposed. The approach relies on the latent block model, and three algorithms are compared for its inference: stochastic EM within Gibbs sampling, classification EM and variational EM. The proposed model is the first co-clustering algorithm for mixed data that deals with functional and binary features. It has proven its efficiency on simulated data and on real data extracted from live 4G mobile networks.
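The row-and-column clustering idea behind the latent block model can be illustrated with a much simpler, distance-based alternating scheme. This is a toy stand-in (squared-error blocks on a tiny binary matrix), not the SEM-Gibbs or variational EM inference used in the thesis:

```python
# Toy alternating co-clustering on a binary matrix: rows and columns are
# clustered simultaneously so that each (row-cluster, column-cluster)
# block is as homogeneous as possible. A simplified stand-in for the
# latent block model, for illustration only.
X = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

def block_means(X, rows, cols, G, H):
    """Mean of the entries in each (row-cluster, column-cluster) block."""
    sums = [[0.0] * H for _ in range(G)]
    counts = [[0] * H for _ in range(G)]
    for i, g in enumerate(rows):
        for j, h in enumerate(cols):
            sums[g][h] += X[i][j]
            counts[g][h] += 1
    return [[sums[g][h] / counts[g][h] if counts[g][h] else 0.0
             for h in range(H)] for g in range(G)]

def coclust(X, rows, cols, G=2, H=2, iters=10):
    for _ in range(iters):
        mu = block_means(X, rows, cols, G, H)
        # Reassign each row to the row-cluster whose block means fit it best.
        rows = [min(range(G), key=lambda g: sum(
            (X[i][j] - mu[g][cols[j]]) ** 2 for j in range(len(cols))))
            for i in range(len(rows))]
        mu = block_means(X, rows, cols, G, H)
        # Then reassign each column symmetrically.
        cols = [min(range(H), key=lambda h: sum(
            (X[i][j] - mu[rows[i]][h]) ** 2 for i in range(len(rows))))
            for j in range(len(cols))]
    return rows, cols

# Start from a deliberately wrong partition; the blocks are recovered.
rows, cols = coclust(X, rows=[0, 1, 1, 1], cols=[0, 0, 1, 1])
print(rows, cols)
```

The latent block model replaces the squared-error blocks with per-block probability distributions (Gaussian for functional principal components, Bernoulli for binary entries) and the hard reassignment with EM-style inference.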
APA, Harvard, Vancouver, ISO, and other styles
9

Hasselgren, Elizabeth. "The crustal structure of the northern Juan de Fuca plate from multichannel seismic reflection data." Thesis, University of British Columbia, 1991. http://hdl.handle.net/2429/29868.

Full text
Abstract:
The crustal structure of a young (<10 My) ocean basin is imaged by two multichannel seismic reflection lines comprising 230 km recorded over the central part of the northern Juan de Fuca plate off western Canada. The more northerly line ties previously interpreted deep seismic reflection lines across the Juan de Fuca ridge and the Cascadia subduction zone; the southern line ties with another interpreted line across the subduction zone. Both lines trend obliquely to the spreading direction. A marine refraction profile crossing the eastern end of the lines provides velocity constraints. The processing sequence applied to the data includes a prestack inside-trace mute of CMP gathers to reduce noise levels in the deep data, CMP stack, post-stack dip filtering, f-k migration and bandpass filtering. Coherency-filtered stacks are helpful in tracing weaker reflectors. The stacked sections reveal a horizontally layered sedimentary sequence overlying a rugged and prominent basement reflector dipping slightly landward. A strong, fairly continuous reflection from the base of the crust at about 2 s two-way time below the basement surface generally mimics the basement topography and shows the characteristic doubling and tripling of reflections seen in other similar surveys. Although in general the crust appears acoustically transparent, weaker, discontinuous intracrustal reflectors are observed over 40 km at the eastern end of the northern line, and are interpreted to arise from the oceanic Layer 3A/3B and Layer 2/3 boundaries. The impersistence of these reflectors is an indication of the complexity of the processes producing intracrustal reflectivity, and of the lateral variability of crustal formation. Pseudofault traces of propagating rifts are crossed at three different locations on the two lines, the first MCS crossings of such structures.
Crust associated with the pseudofault traces is related to both subhorizontal and dipping subcrustal events, which are interpreted as zones of crustal thickening or underplating. Although the crustal thickness elsewhere on the lines varies by only about 10%, crust associated with the pseudofaults is as much as about 25% thicker than average, suggesting that magma supply at transform-type offsets may at times be large. A small seamount discovered on the southern line may result from the excess magma production at the ridge postulated at propagating rift zones.
Faculty of Science; Department of Earth, Ocean and Atmospheric Sciences.
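The thickness figures quoted above come from converting two-way traveltime below basement into distance. A back-of-envelope sketch; the interval velocity is an assumed typical value for young oceanic crust, not a figure from the thesis:

```python
# Convert two-way traveltime to thickness: one-way distance is
# velocity * (two-way time / 2). Velocity value is assumed, for illustration.
def thickness_km(two_way_time_s, interval_velocity_km_s):
    """Thickness = interval velocity * (two-way time / 2)."""
    return interval_velocity_km_s * two_way_time_s / 2.0

v_crust = 6.0  # assumed average crustal interval velocity (km/s)
t_moho = 2.0   # ~2 s two-way time from basement to crustal base (abstract)

base = thickness_km(t_moho, v_crust)  # normal crustal thickness, in km
thick = base * 1.25                   # ~25% thicker crust at pseudofaults
print(base, thick)
```

With these assumed numbers the 2 s reflection corresponds to roughly 6 km of igneous crust, and the 25% anomaly at the pseudofaults to about 1.5 km of extra thickness.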
APA, Harvard, Vancouver, ISO, and other styles
10

Beaudet, Nicolas. "Prévalence et incidence de la douleur lombaire récurrente au Québec : une perspective administrative." Thèse, Université de Sherbrooke, 2014. http://savoirs.usherbrooke.ca/handle/11143/115.

Full text
Abstract:
R??sum?? : La douleur lombaire (DL) est l???une des conditions musculosquelettiques les plus fr??quentes et co??teuses au Canada. La pr??valence annuelle de DL aig??e varierait de 19 % ?? 57 %, et un patient sur quatre souffrirait de r??currence dans la m??me ann??e. La pr??sente ??tude vise donc ?? produire une analyse descriptive de l?????pid??miologie de la DL r??currente ?? l?????chelle de la population. Une nouvelle approche m??thodologique est propos??e afin d???optimiser l???identification de vrais cas incidents de DL r??currente ?? partir d???une analyse secondaire de donn??es administratives. Puisque 10 % des patients ayant de la DL seraient responsables de 80 % des co??ts qui y sont associ??s, nous avons ??galement d??termin?? la tendance s??culaire des co??ts d???interventions m??dicales des patients r??currents incidents entre 2003 et 2008. En utilisant le fichier des services m??dicaux r??mun??r??s ?? l???acte de la R??gie de l???assurance maladie du Qu??bec, des cohortes pr??valentes ont ??t?? construites ?? partir de 401 264 dossiers de patients ayant consult?? au moins trois fois pour de la DL entre 1999 et 2008. Onze ans d???historique m??dical des 81 329 patients de la cohorte de 2007 ont ensuite ??t?? analys??s afin d???exclure les patients ayant eu des consultations ant??rieures de DL. Une valeur pr??dictive positive et un coefficient de Kappa ??lev??s ont permis d???identifier une clairance optimale pour r??cup??rer les cas v??ritablement incidents. Les co??ts de consultations ont ensuite ??t?? calcul??s pour tous les patients incidents de 2003 ?? 2007 ?? partir des manuels de facturation. Nous avons observ?? une pr??valence annuelle de la DL r??currente de 1,64 % en 2000 chez les hommes diminuant ?? 1,33 % en 2007. Cette baisse a majoritairement eu lieu dans le groupe d?????ge des 35-59 ans. Les femmes ??g??es (> 65 ans) ??taient 1,4 fois plus ?? risque de consulter un m??decin de mani??re r??currente que les hommes du m??me ??ge. 
L???incidence annuelle de la DL en 2007 ??tait de 242 par 100 000 personnes. Les hommes de 18 ?? 34 ans ??taient 1,2 fois plus ?? risque que les femmes de d??velopper un premier ??pisode r??current et les personnes ??g??es 1,9 fois plus ?? risque que les jeunes. L???incidence annuelle a diminu?? de 12 % entre 2003 et 2007 pendant que les co??ts totaux augmentaient de 1,4 %. La m??diane des co??ts ??tait la plus ??lev??e chez les femmes ??g??es et tendait ?? augmenter dans le temps. Ces analyses secondaires sugg??rent de s???int??resser particuli??rement ?? la DL chez les personnes tr??s ??g??es, et de d??terminer si la baisse de fr??quence de consultations r??currentes observ??e dans le temps est li??e ?? une meilleure gestion de la DL ou ?? un probl??me d???accessibilit??. Les co??ts devraient faire l???objet d???un suivi continu pour limiter les hausses. // Abstract : Low back pain (LBP) is one of the most frequent and costly musculoskeletal health conditions in Canada. Annual prevalence was found to vary between 19 % and 57 % and likely one out of four patients experience a LBP recurrence within one year. The body of knowledge on the prevalence of recurrent LBP is still limited. This study sought to present a descriptive analysis on the epidemiology of recurrent LBP in a medical population. A new methodology is also proposed to identify true cases of incident recurrent LBP. Since 10 % of LBP patients have been reported to generate 80 % of the costs, we will sought to determine the secular trend of medical costs for the incident cohorts of 2003 to 2008. Using the Canadian province of Quebec medical administrative physicians??? claims database, 401 264 prevalent claims-based recurrent LBP patients were identified between 1999 to 2008 for having consulted at least three times for LBP in a period of 365 days. The medical history of 81 329 prevalent patients in 2007 was screened for a retrospective period of 11 years. 
High positive predictive values and Kappa statistics were used to determine the optimal clearance period for capturing true incident cases among patients with no prior encounters for LBP. Physicians' claims manuals were then used to assign a price to every intervention provided to incident LBP patients in their index year and follow-up years. We observed a decrease in the annual prevalence of LBP among men from 1.64 % in 2000 to 1.33 % in 2007. This decrease was mostly observed between 35 and 59 years of age. Older women (≥ 65 years) were 1.4 times more likely than older men to consult a physician for LBP in a recurrent manner. The annual incidence of adult claims-based recurrent LBP in 2007 was 242 per 100 000 persons. Males 18 to 34 years of age were 1.2 times more at risk than their female counterparts. Altogether, the elderly were 1.9 times more at risk than young adults of consulting in a recurrent manner for LBP. The annual incidence decreased by 12 % between 2003 and 2007, while direct costs increased by 1.4 %. The median cost of consultations was highest for elderly women and increased over time. These secondary analyses emphasize the importance of monitoring LBP among the very old, and of determining whether the decrease in recurrent consultations over time is related to improvements in LBP management or to a medical accessibility issue. Costs will also need to be surveyed on a regular basis to limit the impact of future increases.
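The clearance-period idea described above — counting a patient as incident only when no earlier LBP encounter falls within a retrospective window before the first visit of the index year — can be sketched in a few lines. This is a simplified illustration, not the thesis's exact algorithm; the patient records and the three-year window length are assumptions made for the example.

```python
from datetime import date, timedelta

def incident_patients(visits, index_year, clearance_years=3):
    """visits: dict patient_id -> sorted list of LBP visit dates.
    A patient counts as incident in index_year if their first visit of
    that year has no LBP visit in the preceding clearance window."""
    window = timedelta(days=365 * clearance_years)
    incident = set()
    for pid, dates in visits.items():
        in_year = [d for d in dates if d.year == index_year]
        if not in_year:
            continue
        first = in_year[0]
        # Any encounter inside the clearance window marks the case as prevalent.
        prior = [d for d in dates if first - window <= d < first]
        if not prior:
            incident.add(pid)
    return incident

visits = {
    "A": [date(2007, 3, 1), date(2007, 5, 2)],  # no history -> incident
    "B": [date(2005, 6, 1), date(2007, 4, 1)],  # visit inside window -> prevalent
    "C": [date(2002, 1, 1), date(2007, 8, 1)],  # old visit outside window -> incident
}
print(sorted(incident_patients(visits, 2007)))  # ['A', 'C']
```

Lengthening `clearance_years` trades sensitivity for specificity, which is exactly the trade-off the positive predictive value and Kappa statistics above are used to optimize.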
APA, Harvard, Vancouver, ISO, and other styles
11

Rezaei, Mona. "Combining Balanced Score Card and Data Envelopment Analysis for Analyzing the Performance of Small Scale Fisheries." Thesis, Université d'Ottawa / University of Ottawa, 2014. http://hdl.handle.net/10393/31892.

Full text
Abstract:
The balanced scorecard (BSC) is an accepted methodology for putting strategy into action. The BSC provides a comprehensive performance measurement for an organization with respect to both financial and non-financial perspectives, including the triple bottom line of planet, people, and profit. Through implementations in companies, organizations, and sectors, balanced scorecards have been used widely both for strategic purposes and for the more tactical task of auditing current performance. BSC implementation is particularly effective when integrated with the operational processes of the organization. The integration between the strategic plan and the financial and operational plans proceeds via the business process model, which covers the operational processes associated with the objectives of the organization in the strategy map. In this way, the BSC is a tool for real-time monitoring of performance as well as the crucial linkage to the organization's strategy that enables its proper implementation. Data envelopment analysis (DEA) has been widely applied for measuring the efficiency of a specific decision-making unit (DMU) against a projected point on an efficiency frontier. DEA is therefore particularly suitable for measuring organizational efficiency based on the BSC indicators, which are defined as key performance indicators (KPIs). In the commercial fisheries sector, a sustainable strategy for fisheries organizations can be attained by running current operations more effectively and by integrating processes that enable adaptation to change. The efficiency frontier of the DEA model can be used to calculate the efficiency of fisheries operations.
The proposed research is undertaken as part of the Canadian Fisheries Research Network (CFRN) to investigate the application of BSC and DEA to defining commercial fisheries performance evaluation variables with respect to the objectives of environmental sustainability, economic viability, and social and cultural stability, in compliance with, and in the absence of, the performance monitoring called for in the Fisheries and Oceans Canada Integrated Fisheries Management Plans (IFMPs). The combined BSC-DEA methodology is developed in this research as a performance monitoring system suitable for IFMPs, used to analyze the relative efficiency of commercial fisheries case studies across Canada with a view to incorporating best sustainable practices in the industry.
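The DEA scoring step described above simplifies nicely in the special case of one input and one output: the CCR efficiency of each DMU reduces to its output/input ratio normalized by the best ratio observed, so no linear-programming solver is needed. The sketch below uses that reduction with made-up fleet data (the names and numbers are illustrative, not from the thesis):

```python
def dea_ccr_single(units):
    """units: dict name -> (input, output). Returns CCR efficiency scores
    in (0, 1]; with one input and one output, the DEA linear program
    collapses to normalizing each output/input ratio by the best ratio."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical DMUs: (fishing effort in vessel-days, landed value in $k)
fleets = {"fleet_A": (10, 50), "fleet_B": (8, 48), "fleet_C": (12, 36)}
scores = dea_ccr_single(fleets)
print(scores)  # fleet_B sits on the frontier with score 1.0
```

With several inputs and outputs (as when BSC perspectives supply multiple KPIs), each score instead comes from solving a small linear program per DMU, but the interpretation — distance to the efficiency frontier — is the same.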
APA, Harvard, Vancouver, ISO, and other styles
12

Alsabi, Qamar. "Characterizing Basal-Like Triple Negative Breast Cancer using Gene Expression Analysis: A Data Mining Approach." Wright State University / OhioLINK, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=wright1578936915199438.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Eriksson, Annelie. "Faktorer som påverkar konsumentens val av betalningssätt vid elektronisk handel." Thesis, University of Skövde, Department of Computer Science, 2003. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-791.

Full text
Abstract:
Electronic commerce is a growing field of application, since the Internet opens entirely new opportunities for all companies, large and small, to reach a global market. The number of consumers is growing rapidly and constitutes a group with strong purchasing power. For this growth to continue, it is crucial that the various payment options offered work well and have consumers' trust. Since both the consumer and the web shop save money when electronic payment methods are used, this can be a decisive factor for the future. The aim of this study has therefore been to find out which factors influence Swedish consumers when choosing a payment method.

Based on the results of the literature review and the questionnaire and interview study carried out in this work, it is shown that 35% of consumers use invoice as their payment method. This payment method is considered safer and easier to use in the event of a complaint. The results also show that the price of the product is important: the cheapest product is sought out, and it does not matter whether the shop is well known or not. However, consumers do consider whether the shop appears professional. A professional shop has a good interface and makes it easy to navigate between its pages. The shop should also provide good information about the products it sells and about the payment methods it offers. Factors that influence the consumer's choice of payment method thus include the interface, good information, and price.
APA, Harvard, Vancouver, ISO, and other styles
14

Smed, Karl-Oskar. "Efficient and Accurate Volume Rendering on Face-Centered and Body-Centered Cubic Grids." Thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-257177.

Full text
Abstract:
The body-centered cubic (BCC) and face-centered cubic (FCC) grids offer improved sampling properties compared to the Cartesian grid. Despite this, there is little software and hardware support for volume rendering of data stored in one of these grids. This project continues an earlier project that added support for such grids to the volume rendering engine Voreen, and it has three aims. First, to implement new interpolation methods capable of rendering at interactive frame rates. Second, to improve the software by adding an alternate volume storage format offering improved frame rates for BCC methods. Third, because aliasing makes image-quality comparisons between grid types difficult, to implement a method that is unbiased in terms of post-aliasing. The existing methods are compared to the newly implemented ones in terms of frame rate and image quality. The results show that the new volume format improves the frame rate significantly, that the new BCC interpolation method offers similar image quality at better performance than existing methods, and that the unbiased method produces images of good quality at the expense of speed.
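The two lattices mentioned above have a simple constructive description: a BCC grid is a Cartesian grid plus a second copy shifted by half the spacing along all three axes, while an FCC grid adds three face-centre copies instead of one body centre. A minimal sketch of generating sample positions (illustration of the lattices only, unrelated to Voreen's storage format):

```python
def bcc_points(n, spacing=1.0):
    """BCC lattice: Cartesian points plus a body-centre copy offset by
    half the spacing in x, y, and z. Returns 2*n^3 sample positions."""
    pts = []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                pts.append((i * spacing, j * spacing, k * spacing))
                pts.append(((i + 0.5) * spacing, (j + 0.5) * spacing,
                            (k + 0.5) * spacing))
    return pts

def fcc_points(n, spacing=1.0):
    """FCC lattice: each cell carries a corner point and three face
    centres. Returns 4*n^3 sample positions."""
    offsets = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
    return [((i + a) * spacing, (j + b) * spacing, (k + c) * spacing)
            for i in range(n) for j in range(n) for k in range(n)
            for a, b, c in offsets]

print(len(bcc_points(4)), len(fcc_points(4)))  # 128 256
```

The sampling-efficiency argument for these grids is that, for band-limited signals, BCC needs roughly 30% fewer samples than a Cartesian grid of equivalent quality; the interpolation methods the thesis benchmarks are what make rendering from such lattices practical.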
APA, Harvard, Vancouver, ISO, and other styles
15

Flennfors, Martin. "Tillit vid mCommerce : Hur presenteras betalningslösningar på e-handelsplattformar i Sverige och hur påverkar presentationen kundens tillit." Thesis, Högskolan i Skövde, Institutionen för informationsteknologi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:his:diva-15947.

Full text
Abstract:
E-commerce is now an integrated part of everyday life, and 65% of Internet users shop online. E-commerce is an exchange of value between several parties; this report focuses on Business to Consumer (B2C). What is most often exchanged is payment for a product or service. For more and more people, e-commerce takes place via smartphone (mCommerce), and this form of e-commerce is expected to grow further. Trust is an expectation that the other party will fulfil its part of the anticipated agreement. In the case of e-commerce, the e-commerce platform artefact is also included in that trust. Trust in an e-commerce site depends, among other things, on how payment solutions are presented on the platform. This report examined Sweden's 105 most common e-commerce platforms and how they present their payment solutions, as well as how the trust of participants (10 in the first experiment and 18 in the second) varied depending on how payment solutions were presented. The report shows that few e-retailers present their payment solutions to smartphone users, and that trust in the platform, as well as willingness to make a purchase, is lower when payment solutions are not presented. The report suggests further research in the area and gives developers an indication that displaying payment solutions increases trust in e-commerce via smartphone.
APA, Harvard, Vancouver, ISO, and other styles
16

Ledolter, Johannes. "Multi-Unit Longitudinal Models with Random Coefficients and Patterned Correlation Structure: Modelling Issues." Department of Statistics and Mathematics, WU Vienna University of Economics and Business, 1999. http://epub.wu.ac.at/432/1/document.pdf.

Full text
Abstract:
The class of models studied in this paper, multi-unit longitudinal models, combines both the cross-sectional and the longitudinal aspects of observations. Many empirical investigations involve the analysis of data structures that are both cross-sectional (observations are taken on several units at a specific time period or at a specific location) and longitudinal (observations on the same unit are taken over time or space). Multi-unit longitudinal data structures arise in economics and business, where panels of subjects are studied over time; in biostatistics, where groups of patients on different treatments are observed over time; and in situations where data are taken over time and space. Modelling issues in multi-unit longitudinal models with random coefficients and patterned correlation structure are illustrated in the context of two data sets. The first data set deals with short time series data on annual death rates and alcohol consumption for twenty-five European countries. The second data set deals with glaciologic time series data on snow temperature at 14 different locations within a small glacier in the Austrian Alps. A practical model building approach, consisting of model specification, estimation, and diagnostic checking, is outlined. (author's abstract)
Series: Forschungsberichte / Institut für Statistik
APA, Harvard, Vancouver, ISO, and other styles
17

Baudry, Jean-Patrick. "Sélection de modèle pour la classification non supervisée. Choix du nombre de classes." Phd thesis, Université Paris Sud - Paris XI, 2009. http://tel.archives-ouvertes.fr/tel-00461550.

Full text
Abstract:
The main setting of this thesis is unsupervised classification, treated by a statistical approach within the framework of mixture models. In particular, we are interested in the choice of the number of classes and in the ICL model selection criterion. A fruitful approach to its theoretical study is to consider a contrast adapted to unsupervised classification: in doing so, a new estimator as well as new model selection criteria are proposed and studied. Practical solutions for their computation come with positive side effects for the computation of the maximum likelihood in mixture models. The slope heuristics method is applied to calibrate the penalized criteria under consideration; its theoretical foundations are therefore recalled in detail, and two approaches to its application are studied. Another approach to unsupervised classification is also considered: each class may itself be modeled by a mixture. A method is proposed to answer, in particular, the question of which components to merge. Finally, a criterion is proposed that links the choice of the number of components, when it is identified with the number of classes, to a possible external classification known a priori.
APA, Harvard, Vancouver, ISO, and other styles
18

Guler, Sevil. "Secure Bitcoin Wallet." Thesis, KTH, Skolan för informations- och kommunikationsteknik (ICT), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-177587.

Full text
Abstract:
Virtual currencies and mobile banking are technology advancements that are receiving increased attention in the global community because of their accessibility, convenience and speed. However, this popularity comes with growing security concerns, like increasing frequency of identity theft, leading to bigger problems which put user anonymity at risk. One possible solution for these problems is using cryptography to enhance security of Bitcoin or other decentralised digital currency systems and to decrease frequency of attacks on either communication channels or system storage. This report outlines various methods and solutions targeting these issues and aims to understand their effectiveness. It also describes Secure Bitcoin Wallet, standard Bitcoin transactions client, enhanced with various security features and services.
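One cryptographic primitive any Bitcoin client relies on is the double SHA-256 hash, which Bitcoin uses to identify transactions and blocks. The hashing step itself is easy to reproduce with Python's standard library; this is a generic illustration of the primitive, not code from the Secure Bitcoin Wallet described above:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style hash: SHA-256 applied twice to the input bytes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

digest = double_sha256(b"example transaction bytes")
print(digest.hex())
# Note: Bitcoin conventionally displays many of these hashes (e.g. txids)
# with the byte order reversed relative to the raw digest.
print(digest[::-1].hex())
```

Securing the wallet itself is a separate matter of key storage and channel protection; the hash only guarantees integrity of what is signed.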
APA, Harvard, Vancouver, ISO, and other styles
19

Pavlová, Petra. "Měření výkonnosti podniku." Master's thesis, Vysoká škola ekonomická v Praze, 2012. http://www.nusl.cz/ntk/nusl-165086.

Full text
Abstract:
This thesis deals with the application of Business Intelligence (BI) to support corporate performance management in ISS Europe, spol. s r. o. The company licenses and implements original software products as well as third-party software products. First, an analysis is conducted in the company, which then serves as the basis for the implementation of a BI solution interconnected with the company's strategies. The main goal is the implementation of a pilot BI solution to aid the monitoring and optimisation of corporate performance. Secondary goals include the analysis of related concepts, business strategy analysis, identification of strategic goals and systems, and the proposition and implementation of a pilot BI solution. In its theoretical part, the thesis focuses on the analysis of concepts related to corporate performance and BI implementations, and briefly describes the company together with its business strategy. The practical part builds on the theoretical findings. An analysis of the company is carried out using the Balanced Scorecard (BSC) methodology, the result of which is depicted in a strategy map. This methodology is then supplemented by the Activity Based Costing (ABC) analytical method, which divides expenses according to activities. The result is information about which expenses are linked to handling individual development, implementation, and operations demands for particular contracts. This is followed by an original proposition and implementation of a BI solution, which includes the creation of a Data Warehouse (DWH), the design of Extract, Transform and Load (ETL) and Online Analytical Processing (OLAP) systems, and the generation of sample reports.
The main contribution of this thesis is that it provides company management with a multidimensional analysis of company data, which can serve as a basis for prompt and correct decision-making, realistic planning, and performance and product optimisation.
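The OLAP layer mentioned above answers aggregate questions over the warehouse's fact table; the essence of such a roll-up can be sketched in plain Python. The contract/phase records below are hypothetical and merely mimic an ABC-style breakdown of expenses, not the company's actual schema:

```python
from collections import defaultdict

# Hypothetical fact rows: (contract, phase, cost) -- an ABC-style split of
# expenses into development / implementation / operations demands.
facts = [
    ("contract_1", "development", 120.0),
    ("contract_1", "operations", 30.0),
    ("contract_2", "development", 80.0),
    ("contract_2", "implementation", 45.0),
]

def roll_up(rows, dimension):
    """Aggregate the cost measure along one dimension of the fact table
    (0 = contract, 1 = phase)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[dimension]] += row[2]
    return dict(totals)

print(roll_up(facts, 0))  # total cost per contract
print(roll_up(facts, 1))  # total cost per phase
```

A real OLAP system stores such aggregates over many dimensions at once (the "cube"), but every report it serves is some combination of roll-ups like these.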
APA, Harvard, Vancouver, ISO, and other styles
20

Mallqui, Morales Nayda Isabel. "Diseño de migración de nodos B aplicado para una RNC caida de una red movil." Bachelor's thesis, Universidad Ricardo Palma, 2015. http://cybertesis.urp.edu.pe/handle/urp/1276.

Full text
Abstract:
La presente tesina consiste en el diseño de migración de nodos B aplicado para una RNC caída de una red móvil, con la finalidad de solucionar los problemas que se presenten ante un incidente que afecte los servicios de voz y datos de los usuarios de una red móvil. En el desarrollo de la tesina se describe el planteamiento del problema y el marco teórico de la tecnología UMTS, y posteriormente nos centramos en los elementos principales de esta tecnología. También describimos los equipos importantes a utilizar en el desarrollo del proyecto; en este caso nos enfocamos en la descripción de la RNC. Finalmente describimos el desarrollo del proyecto, el diseño de la solución y su implementación, y se presentan los resultados del diseño. — This thesis presents the design of a Node B migration scheme to be applied when an RNC of a mobile network goes down, in order to resolve the problems that arise when an incident affects the voice and data services of the network's users. The thesis states the problem, presents the theoretical framework of UMTS technology, and then focuses on the main elements of that technology. It also describes the key equipment used in the project, with emphasis on the RNC. Finally, it describes the development of the project, the design of the solution and its implementation, and presents the results of the design.
APA, Harvard, Vancouver, ISO, and other styles
21

Scarlato, Michele. "Sicurezza di rete, analisi del traffico e monitoraggio." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2012. http://amslaurea.unibo.it/3223/.

Full text
Abstract:
The work was divided into three macro-areas. The first is a theoretical analysis of how intrusions work, of the software used to carry them out, and of how to protect against them (using the devices generically known as firewalls). The second macro-area analyses an intrusion from the outside against sensitive servers of a LAN. This analysis is conducted on files captured by the two network interfaces configured in promiscuous mode on a probe present in the LAN. There are two interfaces so as to connect to two LAN segments with different subnet masks. The attack is analysed with various tools, which defines a third part of the work: the captured files are examined first with software that analyses full-content data, such as Wireshark, then with software that analyses session data, handled with Argus, and finally the statistical data, handled with Ntop. The penultimate chapter, before the conclusions, covers the installation of Nagios and its configuration for monitoring, through plugins, the remaining disk space on a remote agent machine and the MySQL and DNS services. Naturally, Nagios can be configured to monitor any type of service offered on the network.
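The distinction drawn above between full-content data (Wireshark) and session data (Argus) comes down to aggregation: a session record summarizes every packet sharing the same endpoints, discarding payloads. A toy sketch of that collapse, with made-up packet tuples rather than a real capture:

```python
from collections import defaultdict

# Hypothetical captured packets: (src_ip, dst_ip, dst_port, size_in_bytes)
packets = [
    ("10.0.0.5", "10.0.1.9", 80, 512),
    ("10.0.0.5", "10.0.1.9", 80, 1024),
    ("10.0.0.7", "10.0.1.9", 53, 64),
]

def to_sessions(pkts):
    """Collapse packets into session records keyed by (src, dst, port),
    keeping only packet and byte counts -- the Argus-style view."""
    sessions = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, port, size in pkts:
        record = sessions[(src, dst, port)]
        record["packets"] += 1
        record["bytes"] += size
    return dict(sessions)

for key, stats in to_sessions(packets).items():
    print(key, stats)
```

Statistical tools such as Ntop go one step further still, aggregating sessions into per-host and per-protocol counters.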
APA, Harvard, Vancouver, ISO, and other styles
22

Su, Shie-Shin, and 蘇十信. "BBS User Interactive Behavior Analysis with Data Mining Techniques." Thesis, 2002. http://ndltd.ncl.edu.tw/handle/71719242598608440195.

Full text
Abstract:
Master's thesis, Chung Yuan Christian University, Institute of Information Engineering, 2001 (ROC year 90).
A virtual community is a collection of users who interact with each other by message transfer. Such interaction relations are long-term and stable. The interaction relation mentioned above is a kind of communication process, which depicts how messages are transferred. A formal description of a communication process includes the communication context (the messages), the transmitter, the receiver, and the communication sequence. A special kind of graph, called an interaction graph (IG) and derived from the messages, is proposed for representing the interaction relations between users. Based on conceptual graphs and rough set theory, this thesis presents three algorithms for user clustering. The first ensures that the cluster rules of each user cluster are correct. The second considers not only temporal but also spatial issues. The third is an incremental user clustering algorithm, which can efficiently reduce the computational complexity of user behavior modeling. Classification is a commonly used data mining technique besides clustering. The user clusters can be processed for cluster interpretation, and two indices, called DB and CF in this thesis, help find hidden rules in a virtual community easily, and even support prediction. An experimental system is implemented to validate the approach: the demonstration system is evaluated on data from a real BBS site, and examples show that the proposed methods work correctly.
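Building an interaction graph from message transfers and grouping connected users can be sketched very simply. This is only a minimal illustration of the data structure, not the thesis's conceptual-graph/rough-set clustering algorithms, and the message log is invented:

```python
from collections import defaultdict

# Hypothetical BBS message log: (sender, receiver) pairs.
messages = [("ann", "bob"), ("bob", "ann"), ("bob", "cal"), ("dee", "eve")]

def interaction_graph(msgs):
    """Weighted directed graph: edge weight = number of messages sent."""
    graph = defaultdict(lambda: defaultdict(int))
    for sender, receiver in msgs:
        graph[sender][receiver] += 1
    return graph

def user_clusters(msgs):
    """Group users linked by any interaction (undirected connected
    components of the interaction graph)."""
    neighbours = defaultdict(set)
    for a, b in msgs:
        neighbours[a].add(b)
        neighbours[b].add(a)
    seen, clusters = set(), []
    for user in neighbours:
        if user in seen:
            continue
        stack, component = [user], set()
        while stack:
            u = stack.pop()
            if u in component:
                continue
            component.add(u)
            stack.extend(neighbours[u] - component)
        seen |= component
        clusters.append(component)
    return clusters

print(user_clusters(messages))  # [{'ann', 'bob', 'cal'}, {'dee', 'eve'}]
```

The thesis's algorithms refine this picture by constraining clusters with rules over temporal and spatial attributes of the messages rather than bare connectivity.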
APA, Harvard, Vancouver, ISO, and other styles
23

Ming-Sung, Yang, and 楊明松. "A BAC Detection and Data Management System based on an Android Platform." Thesis, 2015. http://ndltd.ncl.edu.tw/handle/03914732270762928940.

Full text
Abstract:
Master's thesis, National Taiwan University of Science and Technology, Department of Electronic Engineering, 2014 (ROC year 103).
Many serious car accidents are caused by people driving after drinking alcohol, and many countries have begun to enact strict laws punishing drinking and driving. As a result, many portable alcohol testers have appeared on the market. After comparing the advantages and disadvantages of different alcohol testers, we developed an alcohol tester for blood alcohol concentration (BAC) that can connect to an Android platform. Furthermore, we developed a data management system, delivered as an app, to record BAC values and display them properly on the Android platform.
APA, Harvard, Vancouver, ISO, and other styles
24

"Bayesian Networks and Gaussian Mixture Models in Multi-Dimensional Data Analysis with Application to Religion-Conflict Data." Master's thesis, 2012. http://hdl.handle.net/2286/R.I.14952.

Full text
Abstract:
This thesis examines the application of statistical signal processing approaches to data arising from surveys intended to measure psychological and sociological phenomena underpinning human social dynamics. The use of signal processing methods for the analysis of signals arising from measurement of social, biological, and other non-traditional phenomena has been an important and growing area of signal processing research over the past decade. Here, we explore the application of statistical modeling and signal processing concepts to data obtained from the Global Group Relations Project, specifically to understand and quantify the effects and interactions of social psychological factors related to intergroup conflicts. We use Bayesian networks to specify prospective models of conditional dependence between social psychological factors and conflict variables: the networks are modeled by directed acyclic graphs, while the significant interactions are modeled as conditional probabilities. Since the data are sparse and multi-dimensional, we fit Gaussian mixture models (GMMs) to the data to estimate the conditional probabilities of interest. The parameters of the GMMs are estimated using the expectation-maximization (EM) algorithm. However, the EM algorithm may suffer from over-fitting due to the high dimensionality and the limited number of observations in this data set. Therefore, the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) are used for GMM order estimation. To assist intuitive understanding of the interactions between social variables and intergroup conflicts, we introduce a color-based visualization scheme in which the intensities of colors are proportional to the observed conditional probabilities.
Dissertation/Thesis
M.S. Electrical Engineering 2012
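The EM fit and BIC-based order choice described above can be illustrated on a one-dimensional two-component mixture. This is a self-contained toy with synthetic data, not the thesis's multivariate setting; BIC here compares a two-component fit against a single-Gaussian baseline:

```python
import math, random

def gauss_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def em_gmm2(data, iters=50):
    """EM for a two-component 1-D Gaussian mixture; returns parameters
    and the final log-likelihood."""
    mu, sd, w = [min(data), max(data)], [1.0, 1.0], [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * gauss_pdf(x, mu[k], sd[k]) for k in range(2)]
            tot = sum(p)
            resp.append([pk / tot for pk in p])
        # M-step: weighted updates of weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
    ll = sum(math.log(sum(w[k] * gauss_pdf(x, mu[k], sd[k]) for k in range(2)))
             for x in data)
    return mu, sd, w, ll

def bic(ll, n_params, n):
    return -2 * ll + n_params * math.log(n)

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(150)]
        + [random.gauss(10, 1) for _ in range(150)])
mu, sd, w, ll2 = em_gmm2(data)

# Single-Gaussian baseline has a closed-form maximum likelihood fit.
m = sum(data) / len(data)
s = math.sqrt(sum((x - m) ** 2 for x in data) / len(data))
ll1 = sum(math.log(gauss_pdf(x, m, s)) for x in data)

# K=2 has 5 free parameters (2 means, 2 sds, 1 weight); K=1 has 2.
print(bic(ll2, 5, len(data)) < bic(ll1, 2, len(data)))  # True: BIC prefers K=2
```

AIC replaces the `n_params * log(n)` penalty with `2 * n_params`; with sparse, high-dimensional data the heavier BIC penalty is what guards against the over-fitting the abstract warns about.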
APA, Harvard, Vancouver, ISO, and other styles
25

Apolinário, Beatriz Arribança. "Customer journey e data marketing no encalço da fidelização de clientes : caso da Sportaddict." Master's thesis, 2019. http://hdl.handle.net/10773/30020.

Full text
Abstract:
A fidelização de clientes é crucial para o sucesso das empresas num mercado cada vez mais competitivo. De modo a que essa fidelização seja conseguida é fundamental que, através do marketing relacional, seja estabelecida uma relação de confiança tanto com os atuais como potenciais clientes. Para a Sportaddict, uma empresa relativamente recente e que atua tanto no mercado B2B como B2C, essa relação torna-se ainda mais importante. Com o objetivo primordial de reter e fidelizar clientes, recorreu-se à recolha de informações sobre os mesmos para a concetualização de uma base de dados de clientes a aplicar na empresa; bem como se desenvolveu o customer journey do mercado B2B para um acompanhamento rigoroso deste. Além disso, através da aplicação de um inquérito por questionário sobre o valor da marca, recolheram-se informações diretamente dos clientes relativas à sua satisfação e intenção de recomendação. Após a construção do estudo de caso, e através da aplicação das ferramentas de recolha, acompanhamento e análise de dados, entendeu-se que quanto mais elevada a satisfação maior a probabilidade de fidelização e recomendação, encontrando-se fortemente relacionados.
Customer loyalty is crucial for any company that wants to succeed in an increasingly competitive global market. To obtain that loyalty, it is essential for the company, through its relationship marketing, to establish relationships of trust with both its current and potential customers. For Sportaddict, a relatively recent company that operates in both B2B and B2C markets, establishing that relationship is even more important. With the main objective of retaining loyal customers, information about customers was collected in order to conceive and structure a customer database to be applied in the company, and the customer journey of the B2B market was developed to allow its close monitoring. In addition, through a questionnaire survey on brand value, information was gathered directly from customers about their satisfaction and intention to recommend. After constructing the case study, and by applying the data collection, monitoring, and analysis tools, it became clear that satisfaction, loyalty, and recommendation are strongly related: the higher the satisfaction, the greater the likelihood of loyalty and recommendation.
Mestrado em Marketing
APA, Harvard, Vancouver, ISO, and other styles
26

Gaucher, Beverly Jane. "Factor Analysis for Skewed Data and Skew-Normal Maximum Likelihood Factor Analysis." Thesis, 2013. http://hdl.handle.net/1969.1/149548.

Full text
Abstract:
This research explores factor analysis applied to data from skewed distributions for the general skew model, the selection-elliptical model, the selection-normal model, the skew-elliptical model, and the skew-normal model for finite sample sizes. In terms of asymptotics, or large sample sizes, quasi-maximum likelihood methods are broached numerically. The skewed models are formed using selection distribution theory, which is based on Rao's weighted distribution theory. The models assume the observed variable of the factor model is from a skewed distribution, by defining the distribution of the unobserved common factors as skewed and that of the unobserved unique factors as symmetric. Numerical examples are provided using maximum likelihood selection skew-normal factor analysis. The numerical examples, such as maximum likelihood parameter estimation with the resolution of the "sign switching" problem and model fitting using likelihood methods, illustrate that the selection skew-normal factor analysis model fits skew-normal data better than the normal factor analysis model does.
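The selection construction underlying these models has a simple stochastic form in the scalar skew-normal case: conditioning one coordinate of a correlated bivariate normal on being positive yields Z = δ|U₀| + √(1−δ²)·U₁ for independent standard normals U₀, U₁. This is the standard textbook representation of the skew-normal, shown here only to illustrate the selection mechanism, not the thesis's factor-analytic estimation:

```python
import math, random

def skew_normal_sample(delta, n, seed=0):
    """Draw n skew-normal variates via the selection representation:
    Z = delta*|U0| + sqrt(1 - delta^2)*U1, with U0, U1 iid N(0, 1)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        u0, u1 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append(delta * abs(u0) + math.sqrt(1 - delta ** 2) * u1)
    return out

def sample_skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / s2 ** 1.5

z = skew_normal_sample(delta=0.9, n=20000)
print(sample_skewness(z))  # clearly positive for delta > 0
```

Setting `delta = 0` recovers the symmetric normal; the factor models above push this selection mechanism onto the common factors while keeping the unique factors symmetric.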
APA, Harvard, Vancouver, ISO, and other styles
27

Willoughby, Keith Allan. "BUBLS : a mixed integer program for transit centre location in the Lower Mainland." Thesis, 1993. http://hdl.handle.net/2429/1542.

Full text
Abstract:
A mixed integer optimization model is developed to determine both the optimal location of transit centres to serve BC Transit's Lower Mainland route network and the optimal allocation of buses to those centres. The existing five transit centres are explored as well as five candidate facilities. The model considers non-revenue transportation cost (deadhead), the capital cost of constructing candidate transit centres, and the salvage values of existing centres. A linear regression is fitted to estimate the travel times from the terminus of a route to potential transit centre locations. The optimal solution results in potential annual savings of over $560,000 compared to the current location-allocation strategy. Various experiments are performed to examine the sensitivity of model parameters and to determine the effect of different planning scenarios. The effect of the optimal solution on driver relief is considered. Conclusions as well as directions for further research are offered.
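At toy scale, the structure of such a location-allocation model can be explored by brute force: enumerate which centres to open, then assign each route to its cheapest open centre. All costs below are hypothetical, the example ignores capacities and salvage values, and a real instance would use a MIP solver rather than enumeration:

```python
from itertools import combinations

# Hypothetical data: annual deadhead cost of serving each route from each
# candidate centre, plus an annualized capital cost of opening each centre.
routes = ["r1", "r2", "r3"]
centres = {"C1": 100, "C2": 120}          # capital cost if opened
deadhead = {                               # route -> {centre: deadhead cost}
    "r1": {"C1": 10, "C2": 40},
    "r2": {"C1": 35, "C2": 15},
    "r3": {"C1": 30, "C2": 20},
}

def best_plan():
    """Enumerate nonempty centre subsets; without capacity constraints the
    MIP's allocation step collapses to a min over open centres per route."""
    best = None
    for k in range(1, len(centres) + 1):
        for opened in combinations(centres, k):
            cost = sum(centres[c] for c in opened)
            cost += sum(min(deadhead[r][c] for c in opened) for r in routes)
            if best is None or cost < best[0]:
                best = (cost, opened)
    return best

print(best_plan())  # (175, ('C1',)): opening only C1 is cheapest here
```

Capacity limits on each centre are what turn the per-route `min` into genuine assignment variables and make the full problem a mixed integer program.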
APA, Harvard, Vancouver, ISO, and other styles
28

FUE-HSIUNG, YANG, and 楊福雄. "Health survey about net-user: establishing cohort of long-standing heavy net users by explicit count data on BBS." Thesis, 2000. http://ndltd.ncl.edu.tw/handle/43347361244256066653.

Full text
Abstract:
Master's thesis, National Taiwan University, Institute of Occupational Medicine and Industrial Hygiene, 1999 (ROC year 88).
BACKGROUND: More and more people's lives involve networks. Unfortunately, problems have emerged among net users, especially at the psychological level, such as what has been termed pathological Internet use (PIU). Health researchers have explored the impacts of this new medium, focusing mainly on the psychological domain; in addition, some researchers have applied this modern communication medium to various health-related studies. A review of the literature shows that studies performed through the Internet have encountered many problems, such as self-selection bias and questionable sampling assumptions. The main problem has been that the population using the Internet is hard to bring completely under control. Most of these studies have proceeded by convenience sampling, which makes further inference difficult. The Bulletin Board System (BBS) in Taiwan, a transformed "medium" with integrated Internet functions, has gained much popularity among the young cohort in Taiwan, especially among college students. OBJECTIVE: To establish a cohort of long-standing heavy net users for a follow-up study focusing on health impacts, taking BBS in Taiwan as an example. MATERIAL: BBS users and their objective and explicit count data from one of the largest BBSs in Taiwan. METHOD: The legitimate functions supplied by the BBS were used to sample its users on-line and to set up a user-name list for further exploration. The basic released count data of the sampled users were then obtained by querying each user name. RESULTS: The "population" of the target BBS was confirmed to be at least 21,000 (and could be estimated to be as large as 100,000), with a name list available. Furthermore, this population could be stratified to obtain long-standing (one or more years) heavy BBS users for a further follow-up study.
A pilot study was performed by giving self-rated, self-administered health-related quality of life questionnaires to 250 randomly sampled subjects from the identified heavy net users. Nine users completed the questionnaires. Comparison of the results suggests that there is a significant difference, in the physical and social health domains, between these 9 subjects and the 213 healthy subjects previously surveyed in another formal study. This study thus far implies that the explicit data on BBS users are useful for identifying a cohort of long-standing heavy net users. However, the unsolved problem of the low response rate to the questionnaires in this study and the aforementioned fundamental problem of biased sampling still demand other reasonable incentives and innovative methods.
APA, Harvard, Vancouver, ISO, and other styles
29

Li, Li. "Model Selection via Minimum Description Length." Thesis, 2011. http://hdl.handle.net/1807/31834.

Full text
Abstract:
The minimum description length (MDL) principle originated in the data-compression literature and has been used to derive statistical model selection procedures. Most existing methods based on the MDL principle focus on models for independent data, particularly in the context of linear regression. The data considered in this thesis take the form of repeated measurements, and the exploration of the MDL principle begins with classical linear mixed-effects models. We distinguish two research focuses: one concerns the population parameters and the other concerns the cluster/subject parameters. When the research interest is at the population level, we propose a class of MDL procedures that incorporate the dependence structure within each individual or cluster through data-adaptive penalties and enjoy the advantages of Bayesian information criteria. When the number of covariates is large, the penalty term is adjusted by a data-adaptive structure to mitigate BIC's tendency to underselect and to mimic the behaviour of AIC. Theoretical justifications are provided from both data-compression and statistical perspectives. Extensions to categorical responses modelled by generalized estimating equations and to functional data modelled by functional principal components are illustrated. When the interest is at the cluster level, we use the group LASSO to set up a class of candidate models, and then derive an MDL criterion for this group-wise LASSO technique to select the final model via its tuning parameters. Extensive numerical experiments demonstrate the usefulness of the proposed MDL procedures at both the population and cluster levels.
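For intuition only (this is not the thesis's dependence-aware procedure): for Gaussian linear regression, a two-part description length reduces to the familiar BIC-style form L(M) = (n/2) log(RSS/n) + (k/2) log n, where the second term is the coding cost of the k parameters. A minimal sketch:

```python
import math

def mdl_score(rss, n, k):
    """Two-part description length for a Gaussian linear model:
    a data-fit term plus a (k/2) log n parameter-coding cost."""
    return 0.5 * n * math.log(rss / n) + 0.5 * k * math.log(n)

# With identical fit (same RSS), the model with fewer parameters
# gets the shorter code and is preferred.
small = mdl_score(rss=10.0, n=100, k=2)
large = mdl_score(rss=10.0, n=100, k=5)
```

The data-adaptive penalties proposed in the thesis replace the fixed (k/2) log n term; the sketch shows only the baseline criterion they improve upon.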
APA, Harvard, Vancouver, ISO, and other styles
30

Konečný, David. "Hodnocení efektivity fotbalových klubů Premier League pomocí analýzy obalu dat." Master's thesis, 2018. http://www.nusl.cz/ntk/nusl-388236.

Full text
Abstract:
Title: Evaluation of the efficiency of football clubs in the Premier League by Data Envelopment Analysis. Goals: The aim of the thesis is to identify the efficiency of football clubs in the Premier League in the 2016/2017 season and, in a post-optimization analysis, to evaluate which of the observed clubs were effective in transforming inputs into outputs and which clubs showed deficiencies in this transformation. Methods: Data Envelopment Analysis (DEA) is used to evaluate the efficiency of the individual clubs in the Premier League. DEA determines which units are efficient and, for the inefficient units, their deviations from the efficient frontier. The measurements are made with an input-oriented CCR model and a BCC model; the CCR model assumes constant returns to scale, while the BCC model assumes variable returns to scale. Results: The results section identifies the productive efficiency of the individual Premier League clubs in the 2016/2017 season. A total of 7 clubs reached the efficient frontier in both the CCR and BCC models. The average efficiency is 87 % in the CCR model and 91 % in the BCC model. As a result, the Premier League as a competition is highly efficient. Key words: data envelopment...
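The input-oriented CCR model used above is a small linear program per club: for unit o, minimize θ subject to Σⱼ λⱼxⱼ ≤ θx_o and Σⱼ λⱼyⱼ ≥ y_o with λ ≥ 0; adding the convexity constraint Σλ = 1 yields the BCC model. A minimal sketch with made-up data (not the thesis's club inputs and outputs), using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o, vrs=False):
    """Input-oriented DEA efficiency of unit o.
    X: (m inputs x n units), Y: (s outputs x n units).
    Decision vector: [theta, lambda_1 .. lambda_n].
    vrs=True adds sum(lambda) = 1 (BCC, variable returns to scale)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # minimize theta
    # inputs:  sum_j lambda_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    # outputs: -sum_j lambda_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    A_eq = np.hstack([[[0.0]], np.ones((1, n))]) if vrs else None
    b_eq = [1.0] if vrs else None
    bounds = [(None, None)] + [(0, None)] * n      # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.fun

# Two clubs, one input, one output: the second uses twice the input
# for the same output, so it is only 50 % efficient.
X = np.array([[1.0, 2.0]])
Y = np.array([[1.0, 1.0]])
eff = [ccr_efficiency(X, Y, o) for o in range(2)]
```

A score of 1 places the unit on the efficient frontier; a score below 1 gives the proportional input contraction needed to reach it, which is the "deviation" reported for inefficient clubs.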
APA, Harvard, Vancouver, ISO, and other styles
31

Lin, Shih-Yang, and 林世陽. "Using Data Envelopment Analysis for the business efficiency of building corporation in Taiwan stock exchange market-Comparison of AR and BCC Model." Thesis, 2007. http://ndltd.ncl.edu.tw/handle/91584579006433358037.

Full text
Abstract:
Master's thesis<br>National Kaohsiung University of Applied Sciences<br>Graduate Institute of Business Administration<br>95<br>This thesis uses Data Envelopment Analysis (DEA), together with the management characteristics of construction companies, to combine several performance indicators into a single measure and to identify a performance evaluation model suited to construction companies; it also explores the environmental and management variables faced by listed and over-the-counter construction companies in an attempt to identify the factors influencing performance. DEA is applied to the operating performance of all listed construction companies from 2004 to 2005, and improvement directions are provided for the inefficient companies. The empirical results are as follows. Across the different DEA models, 10 companies have an overall efficiency index of 1 (i.e., are operationally efficient), accounting for 42% of all evaluated units; these 10 listed construction companies are efficient under every DEA model. Next, to verify that the selected input and output factors are appropriate, this study conducts a Pearson correlation analysis in addition to sensitivity analyses over various input-output combinations; the stability of the relatively efficient units remains very high, which confirms that the selected input and output factors are appropriate. The sensitivity analyses of the input-output combinations also confirm that, as the number of input and output variables grows, the efficiency scores of the evaluated units tend to become overstated.
APA, Harvard, Vancouver, ISO, and other styles
32

Rodgers, Lisa. "Synthesis of Water Quality Data and Modeling Non-Point Loading in Four Coastal B.C. Watersheds: Implications for Lake and Watershed Health and Management." Thesis, 2015. http://hdl.handle.net/1828/6999.

Full text
Abstract:
I compared and contrasted nitrogen and phosphorus concentrations and land use differences in two oligotrophic lakes (Sooke and Shawnigan) and two meso-eutrophic lakes (St. Mary and Elk) in order to evaluate nutrient concentrations over time and the relationship between in-lake nutrients and land use in the surrounding watershed. I used MapShed© nutrient transport modeling software to estimate the mass load of phosphorus and nitrogen to each lake, and evaluated the feasibility of land use modifications for reducing in-lake nutrients. In comparing nitrogen and phosphorus data for Sooke and Shawnigan Lakes, I determined that natural watershed characteristics (i.e., precipitation, topography, and soils) did not account for the elevated nutrient concentrations in Shawnigan versus Sooke Lake: those characteristics indicated that external loads into Shawnigan Lake would be less than or equal to those into Sooke Lake if both watersheds were completely forested. I evaluated trends in in-lake nutrient concentrations for Sooke and Shawnigan Lakes, as well as for the two eutrophic lakes, St. Mary and Elk. Ten- to 30-year trends indicate that nitrogen and phosphorus levels in these lakes have not changed significantly over time. Time-segmented data showed that nutrient trends are mostly in decline or maintaining a steady state. Most nutrient concentration data are not precipitation-dependent, and this, coupled with significant correlations to water temperature and dissolved oxygen, indicates that in-lake processes, not external loading, are the primary influence on lake nutrient concentrations. External loading was estimated using MapShed©, a GIS-based watershed loading software program. Model validation results indicate that MapShed© could be used to determine the effect of external loading on lake water quality if accurate outflow volumes are available. 
Based on various land-cover scenarios, some reduction in external loading may be achieved through land-based restoration (e.g., reforestation), but the feasibility of restoration activities is limited by private property. Given that most of the causal loads were determined to be due to in-lake processes, land-based restoration may not be the most effective solution for reducing in-lake nitrogen and phosphorus concentrations.<br>Graduate
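At its core, watershed loading software of this kind performs an export-coefficient calculation: each land-use class contributes area times an areal export rate to the annual mass load, so land-cover scenarios change the load by changing the area terms. A toy sketch with invented coefficients (these are illustrative values, not MapShed©'s calibrated parameters):

```python
# Annual phosphorus mass load from land use, export-coefficient style.
# Coefficients (kg P per ha per year) are assumptions for illustration.
EXPORT_KG_PER_HA = {"forest": 0.1, "agriculture": 1.0, "residential": 0.5}

def phosphorus_load(areas_ha):
    """Sum area_i * export_coefficient_i over land-use classes (kg P / yr)."""
    return sum(EXPORT_KG_PER_HA[use] * ha for use, ha in areas_ha.items())

# A reforestation scenario would move hectares from the higher-export
# classes into "forest", lowering the computed external load.
watershed = {"forest": 800.0, "agriculture": 150.0, "residential": 50.0}
load = phosphorus_load(watershed)   # 0.1*800 + 1.0*150 + 0.5*50 = 255 kg P/yr
```

The thesis's point stands independently of the coefficients chosen: when in-lake processes dominate, even a large reduction in this external term may change in-lake concentrations little.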
APA, Harvard, Vancouver, ISO, and other styles
33

Topinka, Jiří. "Efektivita fotbalových klubů v Premier League." Master's thesis, 2017. http://www.nusl.cz/ntk/nusl-367826.

Full text
Abstract:
Title: Efficiency of football clubs in the Premier League. Objectives: The aim of this diploma thesis is to investigate the technical efficiency of Premier League clubs in the 2015/2016 season using a Data Envelopment Analysis approach, determine which clubs operated efficiently, and identify the weaknesses of individual teams. Methods: The efficiency of each Premier League club is analysed by Data Envelopment Analysis (DEA). Efficiency is computed for the input-oriented CCR model (constant returns to scale) and the BCC model (variable returns to scale). Results: The technical efficiency of each club in the 2015/2016 Premier League season is evaluated in the practical part of the thesis, with eight clubs achieving maximum efficiency in both the CCR and BCC models, which indicates that the competition as a whole is highly efficient. Keywords: Premier League, technical efficiency, data envelopment analysis, CCR, BCC
APA, Harvard, Vancouver, ISO, and other styles
34

"Cataloguing images for life six feet under: a comparative study on old kingdom Egyptian and Han Chinese visual data." 2015. http://repository.lib.cuhk.edu.hk/en/item/cuhk-1291559.

Full text
Abstract:
Huang, Tzu-hsuan.<br>Thesis (Ph.D.), Chinese University of Hong Kong, 2015.<br>Includes bibliographical references (leaves 633-641).<br>Abstracts also in Chinese.<br>Title from PDF title page (viewed on 24 October 2016).
APA, Harvard, Vancouver, ISO, and other styles
35

(8082655), Gustavo A. Valencia-Zapata. "Probabilistic Diagnostic Model for Handling Classifier Degradation in Machine Learning." Thesis, 2019.

Find full text
Abstract:
Several studies point to different causes of performance degradation in supervised machine learning. Problems such as class imbalance, overlapping classes, small disjuncts, noisy labels, and sparseness limit the accuracy of classification algorithms. Even though a number of approaches, in the form of either a methodology or an algorithm, try to minimize performance degradation, they have been isolated efforts with limited scope. This research consists of three main parts. In the first part, a novel probabilistic diagnostic model based on identifying the signs and symptoms of each problem is presented. Secondly, the behavior and performance of several supervised algorithms are studied when training sets exhibit such problems, so that the success of treatments can be predicted across classifiers. Finally, a probabilistic sampling technique based on training-set diagnosis is proposed for avoiding classifier degradation.
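One of the symptoms listed above, class imbalance, admits a simple diagnostic check and a simple treatment. The sketch below is a hedged illustration of that idea (plain random oversampling), not the dissertation's probabilistic diagnostic model or its sampling technique:

```python
import random
from collections import Counter

def imbalance_ratio(labels):
    """Majority-to-minority class size ratio; 1.0 means balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

def oversample(samples, labels, seed=0):
    """Randomly duplicate minority-class examples until every class
    matches the majority class size."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out = list(zip(samples, labels))
    for cls, n in counts.items():
        pool = [(x, y) for x, y in zip(samples, labels) if y == cls]
        out += [rng.choice(pool) for _ in range(target - n)]
    return zip(*out)

X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 0, 0, 1]               # 5:1 imbalance
ratio = imbalance_ratio(y)           # 5.0
Xb, yb = oversample(X, y)            # classes now 5:5
```

A diagnosis-driven approach would choose the treatment (and its strength) per detected problem rather than always oversampling, which is precisely the gap the dissertation addresses.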
APA, Harvard, Vancouver, ISO, and other styles
36

Paulo, José António. "Gestão integrada do tempo e custo : uma contribuição para a gestão de projectos de construção em Portugal." Master's thesis, 1997. http://hdl.handle.net/10400.2/1444.

Full text
Abstract:
Master's dissertation in Project Management presented to Universidade Aberta<br>This work addresses the management of the time and cost factors in construction projects from the perspective of the Owner/Project Manager. It begins by developing time management, subdivided into time planning and time control. In the former, the Work Breakdown Structure (WBS) is presented: the project's activities are identified, their sequence is established, the duration of each activity is estimated, and the schedule that will represent the time baseline is produced. In the latter, control is planned and monitored, with reference to the most commonly used corrective actions. Cost management is then addressed, subdivided into cost planning and cost control, as was done for the time factor. Cost planning accompanies the development of the project through its various phases: first, a feasibility study and economic evaluation of the project are carried out; later, the budget and cash flow are prepared, establishing the project's cost baseline. In a second phase, control planning, progress monitoring, situation analysis, and corrective measures follow. Since managing these factors separately offers no advantage to the Owner, an integrated management of time and cost was developed, based on the C/SCSC criteria and the Earned Value concept. 
Integrated management is likewise subdivided into integrated planning, which develops the steps leading to the creation of the baseline and its control curve (BCWS), and integrated control, in which the BCWP and ACWP curves are built from the value of the work performed and the actual execution costs, respectively. The analysis of these curves gives the project manager important information about the project, allowing trends to be extrapolated and the project's final value at completion (EAC) and associated execution time to be estimated. In recent years, claims management has also gained particular prominence, as a result of countless factors internal and external to projects, which almost always translate into schedule extensions and cost increases over the initial forecasts. The topic is addressed in one of the chapters of this work, so that its study allows measures that minimize the impact of claims on project costs and schedules to be incorporated into the methodology to be developed. The theoretical development presented and knowledge of the project environment in Portugal served as the basis for a proposed methodology for the integrated management of time and cost in future construction projects. In parallel, a survey on the management of these factors was distributed to project managers and/or planning and cost-control directors and to project management companies. The proposed methodology was applied to a case study, in which the integrated management of time and cost was used to verify control over the impacts of changes and claims on the project's final schedule and cost. 
Based on the methodology presented, the survey results, and this experimentation, the initial proposal was revised and a final methodology for the integrated management of time and cost was established. The last chapter of the dissertation presents the final conclusions and discusses future developments on this topic.<br>This work aims to present time and cost management in construction from a Client/Project Manager perspective. The work begins with time management, which is subdivided into time planning and time control. In the former, a Work Breakdown Structure (WBS) is presented, in which the project's activities are identified with their own sequence. Thereafter, the duration of each activity is estimated and the schedule is drawn up, the result being the "time baseline". In the latter, we proceed with planning and control monitoring, including any corrective measures used. The management of cost follows and is subdivided into cost planning and cost control, as was done with time management. The development of cost planning follows the project's development at its various stages. Therefore, at a first stage a viability study is carried out, as well as an economic evaluation of the project. Later on, we proceed with budget estimation and prepare a cash-flow statement, establishing the "baseline" of the project's cost. In the second stage, we proceed with cost-control planning, monitor progress, and carry out the corrective measures. Taking into account that the individual management of these factors does not bring any advantage to the Client, an integrated concept of time and cost management was studied, based on the C/SCSC criteria and the "Earned Value" concept. 
The integrated management is also divided into integrated planning, in which the steps that lead to the creation of a "baseline" and its respective control curve (BCWS) are developed, and integrated control, in which the values of the work carried out and its real execution cost build the BCWP and ACWP curves, respectively. The analysis of these curves gives the project manager full information about the project, allowing trends to be forecast and final estimates of time and cost to be made. After that, we studied claims in construction contracts and carried out an inquiry that gave us insight into the dynamics of cost and time management in Portugal. As a consequence, we developed an integrated methodology of time and cost management within the construction framework. We used a case study to test the above-mentioned methodology, and the results allowed us to reach some conclusions leading to the introduction of small changes in the methodology. Our main conclusions, as well as future perspectives, are presented.
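The three control curves above reduce, at any status date, to the standard earned-value indicators: SV = BCWP - BCWS, CV = BCWP - ACWP, SPI = BCWP/BCWS, CPI = BCWP/ACWP, and a common estimate at completion EAC = BAC / CPI. These are textbook C/SCSC formulas, not figures from the dissertation's case study; a minimal sketch:

```python
def earned_value_metrics(bcws, bcwp, acwp, bac):
    """Standard earned-value indicators from the three control curves
    (BCWS = planned value, BCWP = earned value, ACWP = actual cost)."""
    cpi = bcwp / acwp                 # cost performance index
    spi = bcwp / bcws                 # schedule performance index
    return {
        "SV": bcwp - bcws,            # schedule variance
        "CV": bcwp - acwp,            # cost variance
        "SPI": spi,
        "CPI": cpi,
        "EAC": bac / cpi,             # estimate at completion (CPI method)
    }

# A project behind schedule (SPI < 1) and over cost (CPI < 1):
m = earned_value_metrics(bcws=100.0, bcwp=80.0, acwp=90.0, bac=500.0)
```

The EAC extrapolation here assumes current cost performance persists; other EAC formulas weight SPI as well, which matters when claims extend the schedule.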
APA, Harvard, Vancouver, ISO, and other styles
