
Dissertations / Theses on the topic 'Data base industry'



Consult the top 50 dissertations / theses for your research on the topic 'Data base industry.'




1

Thomas, Howard LaVann. "Analysis of defects in woven fabrics : development of the knowledge base." Thesis, Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/9185.

2

Almadi, Kanika. "Quantitative study of the movie industry based on IMDb data." Thesis, Massachusetts Institute of Technology, 2017. http://hdl.handle.net/1721.1/113502.

Abstract:
Thesis: S.M. in Engineering and Management, Massachusetts Institute of Technology, System Design and Management Program, 2017.
Cataloged from PDF version of thesis.
Includes bibliographical references (page 47).
Big Data Analytics is an emerging business capability that provides companies with far more intelligence for making well-informed decisions and better formulating their business strategies. This has been made possible by the easy, secure accessibility of immense volumes of data stored in the cloud. As a result, online product review platforms have gained enormous popularity and successfully provide various services to consumers, primarily via user-generated content. The thesis takes the raw, unstructured data available on the IMDb website, cleans it up, and organizes it in a structured format suitable for quick analysis by various analytical software. It then examines the available literature on analytics of the IMDb movie dataset and identifies that little work has been carried out on predicting the financial success of movies. The thesis therefore carries out data analytics on the IMDb movie dataset and highlights several parameters, such as movie interconnectedness and the director's credentials, that correlate positively with a movie's gross revenue. It then loosely defines a movie innovativeness index encompassing parameters such as the number of references, the number of follows, and the number of remakes, and discusses how an abundance of some of these parameters has a positive impact on a movie's box office success. Conversely, a movie lacking these parameters, and thereby characterized as innovative, may not be as well received by audiences, leading to poor box office performance. The thesis also proposes that a director's credentials in the film industry, measured by his or her total number of Oscar nominations and wins, have a positive impact on the financial success of the movie and on the director's own career advancement.
by Kanika Almadi.
S.M. in Engineering and Management
3

Randhawa, Tarlochan Singh. "Incorporating Data Governance Frameworks in the Financial Industry." ScholarWorks, 2019. https://scholarworks.waldenu.edu/dissertations/6478.

Abstract:
Data governance frameworks are critical to reducing operational costs and risks in the financial industry. Corporate data managers face challenges when implementing data governance frameworks. The purpose of this multiple case study was to explore the strategies that successful corporate data managers in some banks in the United States used to implement data governance frameworks to reduce operational costs and risks. The participants were 7 corporate data managers from 3 banks in North Carolina and New York. Servant leadership theory provided the conceptual framework for the study. Methodological triangulation involved assessment of nonconfidential bank documentation on the data governance framework, Basel Committee on Banking Supervision's standard 239 compliance documents, and semistructured interview transcripts. Data were analyzed using Yin's 5-step thematic data analysis technique. Five major themes emerged: leadership role in data governance frameworks to reduce risk and cost, data governance strategies and procedures, accuracy and security of data, establishment of a data office, and leadership commitment at the organizational level. The results of the study may lead to positive social change by supporting approaches to help banks maintain reliable and accurate data as well as reduce data breaches and misuse of consumer data. The availability of accurate data may enable corporate bank managers to make informed lending decisions to benefit consumers.
4

Fang, Yuan, and 方媛. "A cost-based model for optimising the construction logistics schedules." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2011. http://hub.hku.hk/bib/B46080351.

5

Flamand, Marina. "Le déploiement de l'intelligence technologique dans le processus d'innovation des firmes : quels objectifs, enjeux et modalités pratiques ? : Une application à l'industrie automobile." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0083/document.

Abstract:
Challenged by an increasingly turbulent business environment, firms must make extra efforts in order to thrive. Technology intelligence, as a vector of knowledge about innovation dynamics, is an instrument at firms' disposal to help steer their economic activities. The aim of this thesis, funded by Groupe PSA, is to contribute to strengthening the technology intelligence practices of a large industrial group. The first part of the thesis aims to make technology intelligence more comprehensible in order to establish the legitimacy of its integration into firms' innovation processes. To this end, we draw on the resource- and competence-based view of the firm to address three questions: Why is understanding the external environment a necessity for the firm? What place should this capability hold within the organization of the firm? And what are the concrete contributions of technology intelligence to the strategic and operational management of innovation? Putting technology intelligence into practice is at the heart of the second part of the thesis, which focuses on improving methods for collecting information on the firm's external environment. More precisely, the intent is not only to determine the informational benefits of patent data and of currently under-exploited financial data, but also to formulate practical recommendations for their exploitation.
6

Williams, Trefor P. "Knowledge-based productivity analysis of construction operations." Diss., Georgia Institute of Technology, 1987. http://hdl.handle.net/1853/20195.

7

Rosales, Vizuete Jonathan P. "IIoT based Augmented Reality for Factory Data Collection and Visualization." University of Cincinnati / OhioLINK, 2020. http://rave.ohiolink.edu/etdc/view?acc_num=ucin1592136317716895.

8

Tan, Lujiao. "Data-Driven Marketing: Purchase Behavioral Targeting in Travel Industry based on Propensity Model." Thesis, Blekinge Tekniska Högskola, Institutionen för industriell ekonomi, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-14745.

Abstract:
Using data-driven marketing and big data technology, this paper presents a case study from the travel industry, implemented through a combination of a propensity model and a business model, "2W1H". The "2W1H" model represents the purchasing behaviours "What to buy", "When to buy", and "How to buy". The paper presents the process of building propensity models for behavioural targeting in the travel industry. By combining the propensity scores from predictive analysis and logistic regression with appropriate marketing and CRM strategies when communicating with travellers, the "2W1H" model enables personalized targeting for evaluating marketing strategy and performance. By analysing the "2W1H" business model and the propensity model for each of its components, and by validating the model both on training and test data sets and against actual marketing activities, it is shown that predictive analytics plays a vital role in the behavioural targeting of travellers' purchases.
9

Selvaraju, Sathishkumar. "Simulation based scheduling using interactive data and lean concepts in a manufacturing industry." Diss., Online access via UMI:, 2009.

Abstract:
Thesis (M.S.)--State University of New York at Binghamton, Thomas J. Watson School of Engineering and Applied Science, Department of Systems Science and Industrial Engineering, 2009.
Includes bibliographical references.
10

Stephenson, Paul. "Estimating, planning and control systems based on production data in the construction industry." Thesis, Sheffield Hallam University, 1988. http://shura.shu.ac.uk/20400/.

Abstract:
The processes of estimating, planning and control within the building industry are seldom fully integrated. This study considers the integration of these processes based on production data collected from several projects. The aim of the research is to investigate the feasibility of the integrated approach as a means of improving the estimating, planning and control processes within the construction industry. Selected cost-significant work sections are considered, and production data are formulated from feedback information on a first sample of projects. Comparisons are made between average production data and individual project data. A structured systems analysis of the collaborating body identifies existing processes and production-orientated information requirements. A model and working system prototype are developed which illustrate the integration of the processes and the generation of management information. Application of the model as a basis for estimating and planning at various levels of detail is demonstrated. Forecast-observation diagrams provide the necessary control mechanism for monitoring production outputs. Forecasts on a second, independent sample of projects are assessed based on tolerances of performance from the first sample. The accuracy of average forecasts from the model is compared with other data sources: estimators' data used in the preparation of the estimate, and bonus surveyors' targets used during the production process. The research concludes that the production data and model give a worthwhile improvement over existing methods in forecasting average productivity performance when methods of placing can be clearly identified and related to work packages. They are insufficiently accurate to give a worthwhile improvement when measured items cover work packages of varying complexity, or when proportioning methods are used to obtain production data for different categories of items that collectively represent work packages. An assessment of the model, together with refinements, is also discussed.
11

Melhem, Mariam. "Développement des méthodes génériques d'analyses multi-variées pour la surveillance de la qualité du produit." Thesis, Aix-Marseille, 2017. http://www.theses.fr/2017AIXM0543.

Abstract:
The microelectronics industry is a highly competitive field, constantly confronted with several challenges. To evaluate the manufacturing steps, quality tests are applied during and at the end of production. As these tests are discontinuous, a defect or failure of the equipment can cause a deterioration in product quality and a loss in manufacturing yield. Alarms are set off to indicate problems, but periodic alarms can be triggered, resulting in alarm flows. On the other hand, a large quantity of equipment data obtained from sensors is available. Alarm management, interpolation of quality measurements, and reduction of correlated equipment data are therefore required. We aim in this work to develop generic multivariate analysis methods that aggregate all the available information (equipment health indicators, alarms) to predict product quality while taking into account the quality of the various manufacturing steps. Based on the pattern-recognition principle, degradation-trajectory data are compared with health indices of failing equipment. The objective is to predict the number of products remaining before the loss of performance related to customer specifications, and to isolate the equipment responsible for the degradation. In addition, regression-based methods are used to predict product quality while accounting for the correlation and dependency relationships existing in the process. A model for alarm management is constructed in which criticality and similarity indices are proposed. Alarm data are then used to predict product scrap. An application to industrial data from STMicroelectronics is provided.
12

Zou, Haichuan. "Investigation of hardware and software configuration on a wavelet-based vision system--a case study." Thesis, Georgia Institute of Technology, 1998. http://hdl.handle.net/1853/8719.

13

Zhang, Jingshu Ph D. Massachusetts Institute of Technology. "Simulation based micro-founded structural market analysis : a case study of the copper industry." Thesis, Massachusetts Institute of Technology, 2018. http://hdl.handle.net/1721.1/118223.

Abstract:
Thesis: Ph. D. in Engineering Systems, Massachusetts Institute of Technology, School of Engineering, Institute for Data, Systems, and Society, June 2018.
Cataloged from PDF version of thesis.
Includes bibliographical references (pages 102-115).
This work aims to provide a widely applicable modeling framework that can be used to credibly investigate materials scarcity risks for various types of commodities. Unlike the existing literature, it contributes to a better understanding of commodity scarcity risk, specifically future copper consumption, on several fronts. Firstly, it introduces an elaborate price mechanism absent in comparable materials flow assessments: it teases out short-term and long-term substitution, allowing consumers to switch from one type of commodity to another based on price signals and their respective price elasticities of demand. Secondly, the model allows for individual deposit tracking, which lets the modeler extract ore-grade information as a function of consumption and reserve size. Thirdly, it models the supply side on an agent-based basis, allowing for the aggregation of granular information and capturing potential emergent phenomena. We believe these three aspects, which are the least addressed (none of the existing work has addressed the first, and few have addressed the second or third), are important in assessing scarcity risks; without them, scarcity assessment is likely to be biased. We hope our work may serve as a foundation upon which more reliable future work on mineral scarcity evaluation can be carried out.
by Jingshu Zhang.
Ph. D. in Engineering Systems
14

Shasha, Ziphozakhe Theophilus. "Measurement of the usability of web-based hotel reservation systems." Thesis, Cape Peninsula University of Technology, 2016. http://hdl.handle.net/20.500.11838/2353.

Abstract:
Thesis (MTech (Business Information Systems))--Cape Peninsula University of Technology, 2016.
The aim of this research project was to determine the degree of usability of a sample of online reservation systems of Cape Town hotels. The literature indicates that the main aim of website usability is to make engagement with a website a more efficient and enjoyable experience. Researchers have noted that well-designed, high-quality websites with grammatically accurate content create a trustworthy online presence, and that user-friendly sites attract far more traffic. Previous research has also shown that poor website design can lead to a loss of potential sales, as users are unable to find what they want, and to a loss of potential income from repeat visits due to a negative user experience. The research instrument employed was usability testing, a technique for evaluating product development that incorporates user feedback in an attempt to create instruments and products that meet user needs and to decrease costs. The research focused on Internet-based hotel reservation systems, and only usability was measured, using a combined quantitative and qualitative research design. The outcomes of this study indicated interesting patterns: the reservation systems met user requirements more often than expected, yet the acceptability figures obtained were still below the generally accepted norms for usability. The amount of time spent completing a booking also decreased as users worked on more than one reservation system.
15

Zhang, Daqun. "Incentive Regulation with Benchmarking in the Electricity Distribution Industry." Diss., Temple University Libraries, 2015. http://cdm16002.contentdm.oclc.org/cdm/ref/collection/p245801coll10/id/367047.

Abstract:
Business Administration/Accounting
Ph.D.
This dissertation investigates two broad management accounting questions in the context of electric utility industry: How do regulators for electricity industry use the information generated from accounting systems to make pricing decisions? What are the economic consequences of these decisions? In Chapter 2, I review regulatory reforms and discuss existing issues of using DEA models for efficiency benchmarking in four aspects. Suggestions are given for improving the use of DEA models based on the review and discussion. In Chapter 3, I empirically investigate the effect of incentive regulation with DEA benchmarking on operational efficiency using a panel of electricity distribution firms in Brazil. In Chapter 4, I examine the effect of restructuring and retail competition on cost reduction using a sample of US investor-owned electric utilities. The effects of privatization, industrial restructuring, incentive regulation and benchmarking are effectively disentangled from one another using the research setting in Brazil and US electricity industry. In Chapter 5, I combine the idea of activity based costing and data envelopment analysis to further develop a detailed benchmarking model for incentive regulation.
Temple University--Theses
16

Nilsson, Valentin, and André Dahlgren. "Business Analytics Maturity Model : An adaptation to the e-commerce industry." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-388358.

Abstract:
Maturity models have become a widely used framework for assessing various capabilities and technologies among businesses. This thesis develops a maturity model for assessing Business Analytics (BA) in Swedish e-commerce firms. Business Analytics has become an increasingly important part of modern businesses, and firms are continuously looking for new ways to perform analysis of the data available to them. The prominent previous maturity models within BA have mainly been developed by IT-consultancy firms with the underlying intent of selling their IT services. Consequently, these models have a primary focus on the technical factors towards Business Analytics maturity, partly neglecting the importance of organisational factors. This thesis develops a Business Analytic Maturity Model (BAMM) which fills an identified research gap of academic maturity models with emphasis on the organisational factors of BA maturity. Using a qualitative research design, the BAMM is adapted to the Swedish e-commerce industry through two sequential evaluation stages. The study finds that organisational factors have a greater impact on BA maturity than previous research suggests. The BAMM and the study's results contribute with knowledge of Business Analytics, as well as providing e-commerce firms with insights into how to leverage their data.
17

Diaz, Zarate Gerardo Daniel. "A knowledge-based system for estimating the duration of cast in place concrete activities." FIU Digital Commons, 1992. http://digitalcommons.fiu.edu/etd/2806.

18

Lindström, Maja. "Food Industry Sales Prediction : A Big Data Analysis & Sales Forecast of Bake-off Products." Thesis, Umeå universitet, Institutionen för fysik, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-184184.

Abstract:
In this thesis, the sales of bread and coffee bread at Coop Värmland AB have been studied. The aim was to find which factors are important for sales and then predict future sales in order to reduce waste and increase profits. Big data analysis and data exploration were used to get to know the data and find the factors that affect sales the most. Time series forecasting and supervised machine learning models were used to predict future sales. The main focus was five models that were compared and analysed: Decision tree regression, Random forest regression, Artificial neural networks, Recurrent neural networks, and a time series model called Prophet. Comparing the observed values to the models' predictions indicated that the models based on the time series, Prophet and the Recurrent neural network, are to be preferred: these two models gave the lowest errors and, by that, the most accurate results. Prophet yielded mean absolute percentage errors of 8.295% for bread and 9.156% for coffee bread; the Recurrent neural network gave 7.938% for bread and 13.12% for coffee bread. That is about twice as accurate as the models used at Coop today, which are based on the mean of previous sales.
19

Théry, Clément. "Model-based covariable decorrelation in linear regression (CorReg) : application to missing data and to steel industry." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10060/document.

Abstract:
This thesis was motivated by correlation issues in real datasets, particularly industrial ones. The main idea is to model the correlations between covariates explicitly through a structure of sub-regressions, that is, a system of linear regressions between the covariates. This points out redundant covariates that can be deleted in a pre-selection step, improving matrix conditioning without significant loss of information and with strong explanatory potential, because the pre-selection is explained by the sub-regression structure, itself easy to interpret. An algorithm to find the sub-regression structure inherent in the dataset is provided, based on a full generative model and a Monte Carlo Markov Chain (MCMC) method. This pre-treatment does not depend on a response variable and can therefore be used more generally with any correlated dataset. In a second part, a plug-in estimator for linear regression is defined to re-inject the residual information sequentially: all the covariates are then used, but the sequential approach acts as a protection against correlations between covariates. Finally, the full generative model can be used to manage missing values, enabling multiple imputation of missing data as a prerequisite for classical methods that are incompatible with missing values. Linear regression is used to illustrate the benefits of the method, but it remains a pre-treatment that can be used in other contexts, such as clustering. The R package CorReg implements the methods developed during this thesis.
20

Pascarella, Pietro. "Fault detection e diagnosis di macchine automatiche con tecniche di data mining." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2017.

Abstract:
The goal of this project was to build a diagnostic system for preventive maintenance, able to assess the state of an automatic machine in real time, identify anomalies, and suggest interventions to the operator in order to anticipate stops and simplify troubleshooting. The idea is to use Data Mining techniques to analyse and extract useful information from historical data, and then build a model that processes the data coming from the machine in real time. The project was carried out entirely at G.D SpA, a world-leading manufacturer of automatic machines. It should be noted that the project also involved another intern, who wrote a thesis entitled "Metodi e modelli diagnostici per la manutenzione su condizione di macchine automatiche", so part of the work was shared.
APA, Harvard, Vancouver, ISO, and other styles
21

Galletti, Alessandro, and Dimitra-Christina Papadimitriou. "How Big Data Analytics are perceived as a driver for Competitive Advantage : A qualitative study on food retailers." Thesis, Uppsala universitet, Företagsekonomiska institutionen, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-205508.

Full text
Abstract:
The recent explosion of digital data has led the business world into a new era of more evidence-based decision making. Companies nowadays collect, store and analyze huge amounts of data, and terms such as Big Data Analytics are used to describe those practices. This paper investigates how Big Data Analytics (BDA) can be perceived and used as a driver for companies' Competitive Advantage (CA). It thus contributes to the debate about the potential role of IT assets as a source of CA, through a Resource-Based View approach, by introducing a new phenomenon such as BDA into that traditional theoretical background. A conceptual model developed by Wade and Nevo (2010) is used as guidance, where the synergy developed between IT assets and other organizational resources is seen as crucial for creating such a CA. We focus our attention on the food retail industry and specifically investigate two case studies, ICA Sverige AB and Masoutis S.A. The evidence shows that, although the process is at an embryonic stage, the companies perceive the implementation of BDA as a key driver for the creation of CA. Efforts are being made to implement BDA successfully within the company as a strategic tool for several departments; however, some hurdles have been spotted which might impede that practice.
APA, Harvard, Vancouver, ISO, and other styles
22

Mirzaie, Shra Afroz. "A New Insight into Data Requirements Between Discrete Event Simulation and Industry 4.0 : A simulation-based case study in the automotive industry supporting operational decisions." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-42724.

Full text
Abstract:
Current industrial companies are under heavy pressure from growing competitiveness and globalization while striving for increased production effectiveness. Meanwhile, turbulent markets and amplified customer demands are causing manufacturers to shift strategy. Hence, international companies are challenged to pursue changes in order to remain competitive on global markets. Consequently, a new industrial revolution has taken place, introduced as Industry 4.0. This new concept incorporates organizational improvement and digitalization of current information and data flows, accomplished by bringing data from embedded systems of connected machines, devices and humans into a combined interface. Companies are thus given possibilities to improve current production systems while saving operational costs and minimizing insufficient production development. Smart Factories, the foundation of Industry 4.0, make it possible to take more accurate and precise operational decisions through the ability to test industrial changes in a virtual world before real-life implementation. However, to ensure these functions work as intended, enormous amounts of data must be collected, analysed and evaluated. This data will aid companies in making more self-aware and automated decisions, resulting in increased production effectiveness. The concept will thus clearly change how operational decisions are made today. Nowadays, Discrete Event Simulation is a commonly applied tool founded on specific data requirements, as operational changes can be tested in virtual settings. Accordingly, it is believed that simulation can aid companies striving to implement Industry 4.0. As a result, the data requirements between Discrete Event Simulation and Industry 4.0 need to be established, while detecting the current data gap in an operational context.
Hence, the purpose of this thesis is to analyse the data requirements of Discrete Event Simulation and Industry 4.0 for improving operational decisions in production systems. In order to fulfil the purpose, the following research questions have been stated: RQ1: What are the data challenges in existing production systems? RQ2: What data is required for implementing Industry 4.0 in production systems? RQ3: How can data requirements from Discrete Event Simulation benefit operational decisions when implementing Industry 4.0? The research questions were answered by conducting a case study in collaboration with Scania CV AB, comprising observations, interviews and other relevant data collection. In parallel, a literature review focusing on data requirements for operational decisions was compared with the empirical findings. The analysis identified the current data gap in existing production systems in relation to Industry 4.0, affecting the accuracy of operational decisions. In addition, it was shown that simulation can undoubtedly give a positive outcome for the adoption of Industry 4.0, and a clear insight into data requirements.
APA, Harvard, Vancouver, ISO, and other styles
23

Watson, Iain David. "An investigation of the use of market and industry data in financial distress modelling : based on data derived from the Unlisted Securities Market and Official List." Thesis, University of Ulster, 1995. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.339298.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Rogers, Jennifer Kathleen. "Safety Benchmarking of Industrial Construction Projects Based on Zero Accidents Techniques." Thesis, Virginia Tech, 2012. http://hdl.handle.net/10919/42859.

Full text
Abstract:
Safety is a continually significant issue in the construction industry. The Occupational Safety and Health Administration, as well as individual construction companies, constantly work to verify that their chosen safety plans have a positive effect on reducing workplace injuries. Worker safety is a major concern for both workers and employers in construction, and the government also attempts to impose effective regulations concerning minimum safety requirements. There are many different methods for creating and implementing a safety plan, most notably the Construction Industry Institute's (CII) Zero Accidents Techniques (ZAT). This study attempts to identify a relationship between the level of ZAT implementation and safety performance on industrial construction projects. This research also proposes that focusing efforts on certain ZAT elements over others will show different safety performance results. There are three findings in this study that can be used to assist safety professionals in designing efficient construction safety plans. The first is a significant log-log relationship identified between the DEA efficiency scores and the Recordable Incident Rate (RIR). There is also a significant difference in safety performance between the Light Industrial and Heavy Industrial sectors. Lastly, regression is used to show that the pre-construction and worker-selection ZAT components can predict better safety performance.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
25

LEIJON, ANNA. "The Right Price – At What Cost? : A Multi-industry Approach in the Context of Technological Advancement." Thesis, KTH, Hållbarhet och industriell dynamik, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-214969.

Full text
Abstract:
The business climate is undergoing a transformation and managers are faced with several challenges, not the least of which is related to pricing strategy. With increased transparency in the market as well as increased competitive pressure, and with more sophisticated and well-informed consumers, retail businesses find it hard to navigate the pricing jungle. At the same time, the conventional wisdom in the field of pricing and the theoretical models on the topic originate from a time long before digitalization. Old models are not a problem in themselves, but when there are new forces in the pricing ecosystem, driven by technological advancement, an assessment of the incumbent models is in the best interest of both businesses and academia. The reason for this is that the use of old models that rely on inaccurate assumptions may impact businesses' prioritizing of resources or their overall business strategy. In addition, researchers might be distracted and the research field disrupted. Thus, the purpose of this study is to discuss whether or not there are additional dimensions in pricing strategy that are not covered by the incumbent pricing models. Here, dimensions refer to the key components of businesses' strategic decision making in regards to pricing. This thesis examines pricing models in today's business context in order to answer the research question: "Are there additional dimensions of the empirical reality of pricing strategy that are not covered by the incumbent pricing models?" The research question has been studied qualitatively through a literature review, a pilot study and twelve case studies, where the pilot study had the purpose of exploring the depth, whereas the multiple case studies focused on the breadth, of pricing strategies.
The case studies cover businesses in different retail industries and of different sizes, namely the industries of Clothing & Accessories, Daily Goods, Furniture and Toys & Tools, and the following sizes: micro, small, medium and large. The empirical data has mainly been gathered by conducting interviews with production, sales and management personnel at the case businesses. The data has been structured, reduced and analysed with the help of a framework of analysis developed throughout the pilot study. The results of this study lean on previous research, and a main divider in pricing strategies has been identified, as businesses use either a data-driven or an intuition-driven approach in their strategic work with pricing. As such, it is proposed that this division of pricing strategies needs to be acknowledged, since the separate methodological approaches may lead to different results while implying different costs, resources and required knowledge. Furthermore, the division may form a basis for competitive advantage, be extended to other areas of strategic management and become clearer, since the adoption of technology and its impact will increase in the future. As a result, in the future of pricing, the key is going to be to account for both the strategic perspectives and the methodological approaches in the strategic decision-making process of pricing.
APA, Harvard, Vancouver, ISO, and other styles
26

Kostis, Angelos. "The role of Big Data in the evolution of Platform based Ecosystems : A case study of an emerging platform-based ecosystem in the software engineering industry." Thesis, Umeå universitet, Institutionen för informatik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-116871.

Full text
Abstract:
Platform-based ecosystems are becoming dominant models in the software engineering industry. 'Big data' has recently gained increased attention from both academia and practitioners, and it is believed that big data affects every sector and industry. While an abundance of research focuses on big data and platform-based ecosystems, the two are typically approached as secluded spheres. This study investigates big data's role in the evolution of platform-based ecosystems in the software engineering industry. In the present thesis, the influence of big data on the software engineering industry and, more specifically, its impact on the evolution of software ecosystems is examined. A case study focused on a platform owner and pioneer in the software engineering industry has been conducted. The study identifies challenges and opportunities triggered by the advent of big data in the context of platform-based ecosystems, providing considerable insight into the impact of big data on contemporary platform providers and the evolution of platform-centric ecosystems. The findings illustrate that software ecosystems are affected by big data in a positive manner, but some identified challenges emerge and have to be tackled. Additionally, this paper suggests that both academia and practitioners should dig deeper into this relationship and identify how the evolution of platform-based ecosystems is affected by the advent of big data.
APA, Harvard, Vancouver, ISO, and other styles
27

Cavallin, Petter. "Data driven support of production through increased control of information parameters : A case study." Thesis, Uppsala universitet, Institutionen för samhällsbyggnad och industriell teknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-419891.

Full text
Abstract:
The current manufacturing business environment is becoming more dependent on digital tools to increase business opportunities and customer value. An organization's ability to embrace digital tools depends on its digital maturity position. Organizational structure, information systems and communication are variables that affect this position and enable or disable the possibility of data-based decision-making (DBDM). To determine this ability, the information systems and information flow are analyzed through a case study at one of the production departments. The areas studied in the case study are the information flows of metal powder and compression tools. The case study examines the organization's ability to connect information, studies the information flow and assesses potential information disruption, using digital maturity assessments. The result provides insight into how these factors affect the DBDM abilities within the department; the areas studied are common in a general production setting. The metal powder area is analyzed through an experiment in which the metal powder containers are manually measured to compare the real weight with the depreciated weight in the information system. The compression tool analysis is performed by extracting and analyzing structured and unstructured machine data from production. This analytical angle depends on reliable data, and information disruption between the production processes and the servers was noticed during the extraction of data. This extraction process and analysis resembles what is needed when implementing machine learning and other automatic applications. The 360DMA assesses a general view of the organization's position and follows up with a method for reaching certain goals in order to advance one of the five levels. The Acatech model is used to assess two structural areas: resources and information systems.
The metal powder container analysis shows a discrepancy between the information stored in the systems and the actual weight of the metal powder containers. The compression tool analysis shows that the stored data about the compression tools and the count of the different components is not correct. This, together with difficulties in manually and automatically extracting data from the servers, causes information disruptions and decreases the reliability and validity of production process information. This reduces the ability to use production data to make data-driven decisions and gain insights about production. The digital maturity assessment positions the organization at the connectivity level (Acatech model) regarding information systems and resources, meaning that data is unreliable; once it is reliable, the next level is within reach. The varying position within the 360DMA model calls for management to synchronize development between processes by introducing strategies, defining responsibilities and understanding the information flow.
APA, Harvard, Vancouver, ISO, and other styles
28

Schaefer, Kerstin J. [Verfasser]. "Competitiveness through R&D internationalization : patent- and interview data based studies on a latecomer in the telecommunications industry / Kerstin J. Schaefer." Hannover : Gottfried Wilhelm Leibniz Universität Hannover, 2020. http://d-nb.info/1204459355/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Samagaio, Antonio. "Essays on managing english football clubs." Doctoral thesis, Instituto Superior de Economia e Gestão, 2015. http://hdl.handle.net/10400.5/9106.

Full text
Abstract:
Doutoramento em Gestão (Doctorate in Management)
This dissertation examines the corporate performance of English professional football clubs and the determinants of clubs' performance, with particular emphasis on homegrown locally-trained players. The study shows that there is a positive and significant association between the financial and sporting performance of English clubs over the 1993/94 to 2010/11 seasons. Cointegration tests show that sporting performance and financial variables are linked by a set of long-run structural relationships. Our study lends support to the theoretical stream that argues that the maximisation of sporting goals, subject to the constraint of long-term financial sustainability, constitutes the two main objectives of football clubs. Granger causality tests show that there are different causal relationships between clubs. Homegrown club-trained players had a negative impact on the sporting performance and revenue functions, yet they had a positive effect on reducing salary expenditure. The association-trained players option had a negative influence on the sporting performance function, but a positive effect on profitability and wage expenses for English football clubs. The results suggest that there is a need to improve the productivity of the system for developing young players in England. Finally, we observed that English clubs are heterogeneous, which signals that idiosyncrasies exist in each club; this is important for understanding both performance and how to develop sustainable competitive advantages.
APA, Harvard, Vancouver, ISO, and other styles
30

Oliveira, Barros Filipe Marinho. "Proposal for a LCA improvement roadmap in the Agri-food sector based on information exchange requirements and the enclosed data in recent LCAs works." Doctoral thesis, Universitat Politècnica de València, 2017. http://hdl.handle.net/10251/89082.

Full text
Abstract:
Innovation is essential to promote human progress and to improve people's quality of life, but it should be done in a social and environmental context and in accordance with the principles of sustainable development. To meet this challenge, environmental innovation guidelines should be taken into account. Along these lines, it is necessary to analyze the life cycle assessment (LCA) of any product, process or service and compute its environmental impacts. Despite the rapid evolution of environmental methods and tools and the increase in sustainability studies in recent years, LCA remains an area that still faces major development challenges. This thesis provides an analysis from a new perspective, intended to support the conceptual and empirical application of LCA in the Agri-food sector. It consists of a qualitative analysis designed to identify the type of relationship between the different actors involved and their information-exchange needs. The case study made it possible to compare the differences between the academic and industrial fields, as well as the differences between Spanish and Brazilian LCA experts. Through expert panels, 40 specialists were interviewed and asked to complete a survey evaluating experts' relationships using the Social Network Analysis (SNA) method. Moreover, the network flow of environmental information in Brazil and Spain was mapped. A second, quantitative study reviewed 70 scientific publications on LCA in the Agri-food sector, drawn from the leading journals and conferences worldwide between 2010 and 2016, against a checklist based on the definition of 20 control variables. The objective was to evaluate the quantity and quality of the information enclosed in the different works. To do this, the entropy and diversity of information were calculated with the Shannon and equitability indexes, based on the number of inputs considered in each impact category.
A threshold of minimum information is proposed, using percentiles 25 and 75 (Tukey values) of the Shannon indexes calculated from the sample of papers. Moreover, a cluster analysis was done using 10 of the 20 control variables to classify the LCAs into clusters with similar performance levels within each group and different levels between groups. Based on the analysis of the centers of the resulting groups, the strengths and weaknesses of each group were identified. A roadmap or improvement plan was then succinctly defined, pointing out the actions to be taken to improve the performance levels of each group in the short, medium and long term. Finally, a set of actions to improve and facilitate the implementation of LCA in the Agri-food sector was defined as a kind of good-practice manual. In sum, it can be concluded that this thesis could serve to improve the performance levels of LCA studies in industry and, at the same time, serve as a baseline against which to compare the standards of more academic works.
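The Shannon and equitability (Pielou) indexes used in the review above are standard diversity measures. A minimal sketch follows; the counts-per-impact-category values are invented for illustration, since the thesis's actual inputs are not reproduced here:

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln(p_i)) over non-zero proportions."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

def equitability(counts):
    """Pielou's evenness J = H / ln(S), where S is the number of categories
    actually used; J reaches 1 when inputs are spread perfectly evenly."""
    s = sum(1 for c in counts if c > 0)
    if s <= 1:
        return 0.0  # a single category carries no diversity
    return shannon_index(counts) / math.log(s)

# Hypothetical counts: number of inputs considered in each impact category
inputs_per_category = [12, 7, 3, 3, 1]
H = shannon_index(inputs_per_category)
J = equitability(inputs_per_category)
```

Applied to each reviewed paper, such an index quantifies how much (and how evenly) information the paper reports across impact categories, which is what the percentile-based thresholds are then derived from.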
Oliveira Barros, FM. (2017). Proposal for a LCA improvement roadmap in the Agri-food sector based on information exchange requirements and the enclosed data in recent LCAs works [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/89082
TESIS
APA, Harvard, Vancouver, ISO, and other styles
31

Dahlqvist-Sjöberg, Philip, and Robin Strandlund. "Predicting the area of industry : Using machine learning to classify SNI codes based on business descriptions, a degree project at SCB." Thesis, Umeå universitet, Statistik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-160806.

Full text
Abstract:
This study is part of an experimental project at Statistics Sweden, which aims to use natural language processing and machine learning to predict Swedish businesses' area-of-industry codes based on their business descriptions. The response to predict consists of the 30 most frequent of the 88 main groups of Swedish Standard Industrial Classification (SNI) codes, each of which represents a unique area of industry. The transformation from business description text to numerical features was done with the bag-of-words model. SNI codes are set when companies are founded, and due to the human factor, errors can occur. Using data from the Swedish Companies Registration Office, the purpose is to determine whether gradient boosting can provide high enough classification accuracy to automatically correct SNI codes that differ from the actual response; today these corrections are made manually. The best gradient boosting model correctly classified 52 percent of the observations, which is not considered high enough to implement automatic code correction in a production environment.
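The pipeline this abstract describes — bag-of-words features feeding a gradient boosting classifier — can be sketched with scikit-learn. All descriptions and SNI main-group labels below are invented for illustration; the actual study used Swedish-language descriptions and the 30 most frequent SNI main groups:

```python
# A rough sketch of the bag-of-words + gradient boosting pipeline.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer

descriptions = [
    "retail sale of food and beverages",
    "retail sale of clothing and shoes",
    "construction of residential buildings",
    "construction of roads and bridges",
    "software development and it consulting",
    "computer programming and software activities",
]
sni_main_groups = ["47", "47", "41", "42", "62", "62"]  # hypothetical codes

vectorizer = CountVectorizer()  # bag-of-words term counts
X = vectorizer.fit_transform(descriptions).toarray()

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X, sni_main_groups)

# Predicting the code for a new, unseen description
new_doc = vectorizer.transform(["development of software systems"]).toarray()
predicted = clf.predict(new_doc)[0]
```

In production use, the predicted code would be compared against the registered one, and a mismatch on a high-confidence prediction would flag the record for correction.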
APA, Harvard, Vancouver, ISO, and other styles
32

Johansson, David. "Automatic Device Segmentation for Conversion Optimization : A Forecasting Approach to Device Clustering Based on Multivariate Time Series Data from the Food and Beverage Industry." Thesis, Luleå tekniska universitet, Institutionen för system- och rymdteknik, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:ltu:diva-81476.

Full text
Abstract:
This thesis investigates a forecasting approach to clustering device behavior based on multivariate time series data. Identifying an equitable selection to use in conversion optimization testing is a difficult task, and as devices collect ever larger amounts of data about their behavior, it becomes increasingly difficult to select segments manually in traditional conversion optimization systems. Forecasting the segments can be done automatically to reduce the time spent on testing while increasing test accuracy and relevance. The thesis evaluates the results of using multiple forecasting models, clustering models and data pre-processing techniques. Under optimal conditions, the proposed model achieves an average accuracy of 97.7%.
33

王晨 and Chen Wang. "The impact of the Internet on development strategies of real estate agencies: a qualitative study based on Beijing's real estate agency industry." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 2003. http://hub.hku.hk/bib/B31244853.

Full text
34

Fridholm, Victoria. "IMPROVE MAINTENANCE EFFECTIVENESS AND EFFICIENCY BY USING HISTORICAL BREAKDOWN DATA FROM A CMMS : Exploring the possibilities for CBM in the Manufacturing Industry." Thesis, Mälardalens högskola, Akademin för innovation, design och teknik, 2018. http://urn.kb.se/resolve?urn=urn:nbn:se:mdh:diva-39816.

Full text
Abstract:
Purpose: Explore how historical data from a CMMS can be used to improve the effectiveness and efficiency of maintenance activities, and investigate the possibilities for CBM in the manufacturing industry in the context of digitalization.  Research questions: RQ1: To what extent could condition-based maintenance or other maintenance types be used to predict, prevent or otherwise eliminate historical breakdowns/faults?  RQ2: What significance does an organization's degree of maturity have for reducing the number of breakdowns?  Method: A case study was performed at Volvo Construction Equipment Operations in Eskilstuna, which manufactures machinery for the construction industry. The case study was carried out in two phases. Phase one was a quantitative study in which raw data were collected from a CMMS and tabulated for later in-depth analysis. Phase two was designed to collect information that generated a wider understanding of the research area, through interviews and observations. A literature study was performed to compare the empirical findings with peer-reviewed information to ensure the quality of the study. The data were compiled and analyzed with an abductive approach. The analysis was followed by a discussion of how the research findings could support identifying the possibilities of different maintenance types in the future.  Conclusion: The results showed that historical breakdown data from a CMMS can be useful for identifying an organization's current state and the potential of different maintenance types to decrease the number of breakdowns. The extent to which breakdowns can be decreased depends not only on the maintenance type but also on the organization's maturity level. The case study's results showed that by combining different maintenance types and increasing the degree of maturity, Volvo could decrease the historical breakdowns by 86.5%. 
By using CBM alone at the current maturity level, 56% of the historical breakdowns could be predicted. However, deciding how many breakdowns it is cost-effective to prevent, and precisely which maintenance type should be used, requires a cost analysis that this study does not cover.
35

Leijon, Anna Mikaelsdotter. "The Right Price - At What Cost? : A Multi-industry Approach in the Context of Technological Advancement." Thesis, KTH, Skolan för industriell teknik och management (ITM), 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-217898.

Full text
Abstract:
The business climate is undergoing a transformation and managers are faced with several challenges, not least those related to pricing strategy. With increased transparency in the market, increased competitive pressure, and more sophisticated and well-informed consumers, retail businesses find it hard to navigate the pricing jungle. At the same time, the conventional wisdom in the field of pricing and the theoretical models on the topic originate from a time long before digitalization. Old models are not a problem in themselves, but when there are new forces in the pricing ecosystem, driven by technological advancement, an assessment of the incumbent models is in the best interest of both businesses and academia. The reason is that the use of old models relying on inaccurate assumptions may affect businesses' prioritization of resources or their overall business strategy. In addition, researchers might be distracted and the research field disrupted. Thus, the purpose of this study is to discuss whether there are additional dimensions in pricing strategy that are not covered by the incumbent pricing models. Here, dimensions refer to the key components of businesses' strategic decision making with regard to pricing. This thesis examines pricing models in today's business context in order to answer the research question: "Are there additional dimensions of the empirical reality of pricing strategy that are not covered by the incumbent pricing models?" The research question has been studied qualitatively through a literature review, a pilot study and twelve case studies, where the pilot study explored the depth, and the multiple case studies the breadth, of pricing strategies. 
The case studies cover businesses in different retail industries and of different sizes, namely the industries of Clothing & Accessories, Daily Goods, Furniture and Toys & Tools, and the following sizes: micro, small, medium and large. The empirical data has mainly been gathered through interviews with production, sales and management personnel at the case businesses. The data has been structured, reduced and analysed with the help of a framework of analysis developed throughout the pilot study. The results of this study lean on previous research, and a main divide in pricing strategies has been identified: businesses use either a data-driven or an intuition-driven approach in their strategic work with pricing. It is therefore proposed that this division of pricing strategies be acknowledged, since the two methodological approaches may lead to different results, while implying different costs, resources and required knowledge. Furthermore, the division may form a basis for competitive advantage, be extended to other areas of strategic management, and become clearer as the adoption of technology and its impact increase in the future. As a result, in the future of pricing, the key is going to be to account for both the strategic perspectives and the methodological approaches in the strategic decision-making process of pricing.
36

Yang, Cheng-Yun (Mark). "Understanding the role of b2b social and relational factors on web-based EDI adoption : a collaborative approach in the container liner shipping industry." Thesis, Royal Holloway, University of London, 2013. http://repository.royalholloway.ac.uk/items/8fbbe328-4d42-43ed-b4cf-5c3ec721c248/1/.

Full text
Abstract:
Organisations today operate in a complex, unpredictable, globalised and competitive business environment and a challenging marketplace, with an emphasis on just-in-time deliveries and service quality through the integration of resources. In response to these changing business dynamics, web-based EDI (WEDI) has been adopted by the global container shipping industry to cost-effectively utilise available resources to build and maintain competitive advantage. To improve the current understanding of WEDI adoption factors, this research explores inter-organisational collaboration in WEDI adoption, focusing on the organisational adoption stage, and examines how business-level social and relational factors influence WEDI adoption in the context of the container liner shipping industry. Based on theoretical and literature reviews of previous EDI adoption research, in particular three key empirical studies of inter-organisational system adoption (Lee and Lim, 2005; Boonstra and de Vries, 2005; Zhu et al., 2006), an integrated research model was established featuring ‘Social Resources' (trading partner power, trading partner dependence and social network effect), ‘Relational Resources' (trading partner trust, top management commitment and guanxi), ‘Reward' (perceived interests), and ‘Technological State' (technological trust and e-readiness) as prominent antecedents. Through an e-mail and web survey approach, we examine the nine independent constructs in the research model quantitatively on a dataset of 164 respondents from the top 20 container shipping lines in 2009 and 195 respondents from the top 20 container shipping lines in 2012, complemented by 3 case studies conducted through online surveys. After examining the reliability, validity and correlations of the constructs, PLS structural equation modelling was applied to test the hypotheses. The empirical results update our knowledge of how firms exchange business data, in particular the use of WEDI in the industry. 
This study demonstrated that the ‘Social Resources' of trading partner power, trust and guanxi are positively associated with the perceived interest of WEDI adoption, and that the ‘Relational Resources' of trading partner trust, top management commitment and guanxi are positively associated with the perceived interest of WEDI adoption. It also confirms that the nine constructs are positively associated with WEDI adoption decisions. Drawing upon social exchange theory, we argue that firms simultaneously modify and adjust their social and relational resources to affect other firms' expected benefit as a reward. Overall, based on a rigorous empirical analysis of two different international datasets, this research provides valuable and up-to-date insights into a set of key factors that influence WEDI adoption. By recognising what may influence WEDI adoption in the context of container liner shipping, this study will be useful in suggesting strategies to overcome the constraints that inhibit adoption. Researchers will benefit from the study's theoretical insights and can explore WEDI adoption and diffusion patterns further. Practitioners who learn why organisations adopt WEDI and what related factors influence the adoption process will make better strategic decisions concerning the adoption of WEDI.
37

Tabosa, Florencio Filho Roberto. "Uma aplicação de mineração de dados ao programa bolsa escola da prefeitura da cidade do Recife." Universidade Federal de Pernambuco, 2009. https://repositorio.ufpe.br/handle/123456789/2440.

Full text
Abstract:
Faculdade dos Guararapes
Data mining involves a set of statistical and artificial-intelligence techniques aimed at discovering information that the tools usually employed for extracting and storing data in large databases cannot find. Data mining can be applied in any field of knowledge (exact, human, social, biological, health and agrarian sciences, among others), yielding previously unknown information and knowledge in each of them. This work presents an application of data mining to the Bolsa Escola programme of the City Hall of Recife (PCR), in particular an investigation of the situation of the benefited families, with the goal of offering the municipal administration a decision-support tool capable of improving the benefit-granting process. A socio-economic dataset of initially around 60 thousand families registered in the programme was analysed. A MultiLayer Perceptron (MLP) artificial neural network was used to classify the benefited families based on their socio-economic characteristics. The performance evaluation and the results obtained, together with the feedback of the specialist in the application domain, demonstrate the viability of this application in the benefit-granting process of the Bolsa Escola programme of the City Hall of Recife.
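The classification step described above, an MLP over socio-economic characteristics, can be sketched with scikit-learn. The features and their values below are synthetic stand-ins, not the Recife dataset:

```python
# Minimal sketch of the approach above: an MLP classifying families by
# socio-economic features. All data here is synthetic and illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Invented features per family: income, household size, children in school
eligible = rng.normal([0.3, 5.0, 2.0], 0.1, size=(100, 3))
not_eligible = rng.normal([2.0, 3.0, 1.0], 0.1, size=(100, 3))
X = np.vstack([eligible, not_eligible])
y = np.array([1] * 100 + [0] * 100)  # 1 = granted benefit, 0 = not granted

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

In a decision-support setting, the trained network's predictions would be reviewed by the domain specialist rather than applied automatically.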
38

Fuga, Alba. "Harmonisation de l'information géo-scientifique de bases de données industrielles par mesures automatiques de ressemblance." Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066184/document.

Full text
Abstract:
In order to harmonize industrial seismic navigation databases, a methodology and a piece of software have been developed. The Similarity Measurement Automation methodology provides protocols for building a model and a hierarchy of the comparison criteria that serve as points of reference for the automation. With its set of tolerance thresholds, the model has been used as a scaled filter within the automatic classification process, whose aim is to find very similar data as quickly as possible. Similarity is measured by combinations of elementary metrics giving scores, and also by a global and contextual procedure, giving access to three levels of results: similarity between attributes, between individuals, and between groups. Accurate automated analyses by the expert system, as well as human interpretations on multiple criteria, are now possible thanks to these similarity estimations, reducing the work of a geophysicist from three weeks to two days. Classification strategies have been designed to suit different data management issues, such as harmonization, reconciliation and geo-referencing. The methodology has been implemented in software for automatic comparisons named LAC, developed for the Data Management and Technical Documentation services at TOTAL. The software has been industrialized and used for three years, although it is no longer under technical maintenance. The latest database visualization functionalities that have been developed have not yet been integrated into the software, but should provide a better visualization of the phenomena. This latest way of visualizing data is based on similarity measurement and yields a reasonably clear image of complex and voluminous data. It also highlights information useful for harmonization and for evaluating the quality of the databases. 
Would it be possible to characterize, compare, analyze and manage data flows, to monitor their evolution and figure out new machine learning methods by developing further this kind of data base imaging?
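The core idea of combining elementary per-attribute metrics into a global, weighted similarity score filtered by tolerance thresholds can be sketched as follows; the field names, weights, values and threshold are invented for illustration, not taken from the LAC software:

```python
# Hedged illustration of weighted similarity scoring between two records.
# Elementary metric: string similarity per attribute; global score: weighted mean.
from difflib import SequenceMatcher

def attribute_similarity(a: str, b: str) -> float:
    """Elementary metric: normalized string similarity between two attributes."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def record_similarity(rec1: dict, rec2: dict, weights: dict) -> float:
    """Global score: weighted combination of per-attribute similarities."""
    total = sum(weights.values())
    return sum(w * attribute_similarity(rec1[k], rec2[k])
               for k, w in weights.items()) / total

# Invented example records and weights
weights = {"survey_name": 0.5, "area": 0.3, "vessel": 0.2}
r1 = {"survey_name": "NORTH-SEA-2001-3D", "area": "North Sea", "vessel": "Geo Explorer"}
r2 = {"survey_name": "NORTHSEA 2001 3D", "area": "north sea", "vessel": "Geo Explorer"}

score = record_similarity(r1, r2, weights)
print(score > 0.8)  # above an assumed tolerance threshold: candidate duplicates
```

A sieve-like classification would then keep only pairs whose global score exceeds the threshold, passing the survivors on to stricter criteria.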
39

Parada, Medina Raúl. "RFID based people-object interaction detection." Doctoral thesis, Universitat Pompeu Fabra, 2016. http://hdl.handle.net/10803/381250.

Full text
Abstract:
Internet of Things (IoT) technologies introduced the "things" entity to interact with computers and people, a key factor in enhancing services in the Smart City context. An example of an IoT technology is radio-frequency identification (RFID), which provides identification and communication capabilities to simple objects. Building on RFID, our goal is to enable context-aware scenarios that provide further information on people-object interaction in the environment. Our contributions focus on hardware infrastructure and intelligent systems applied to the IoT. Specifically, we propose the design and implementation of low-cost antennas providing the context-aware hardware infrastructure, and empirical methods to detect people-object interactions for the following applications: user interest, loss prevention and direction of pass. Finally, we evaluate part of the contributions of this dissertation in real retail environments. We believe this dissertation may contribute towards improving the state of the art in IoT and Smart City technologies.
40

Rezgui, Yacine. "Intégration des systèmes d'informations techniques pour l'exploitation des ouvrages." Phd thesis, Ecole Nationale des Ponts et Chaussées, 1994. http://tel.archives-ouvertes.fr/tel-00523175.

Full text
Abstract:
The management of the technical and administrative information produced during the life cycle of a construction project can be envisaged through a structured description of the data. This description, intended for the user but also for the computer, can be expressed in a language (such as EXPRESS) that should allow the interoperability of the computer systems implementing it. These systems thus manipulate a single, unambiguous data structure, thereby ensuring the integrity and consistency of the information produced and manipulated. The description of this structure is commonly called a "Data Model". The document is the favoured medium for describing an engineering project. It constitutes the conceptual and regulatory basis of any industrial process. The analysis of the documents produced during the life cycle of a project reveals the importance of their descriptive, legislative and legal framework, as illustrated by the example of the Cahier des Clauses Techniques Particulières (CCTP). The CCTP is one of the essential documents resulting from the detailed design studies of a project. It stands out in particular by its volume and by the relevance of its content: it defines the particular conditions of execution of the works and complements the description of them given in the technical drawings. Consulting the contractors requires dividing the trades into work packages. The essential consequences of such an approach concern the coordination of the execution of the works and the responsibilities subsequent to their completion. All these details, in particular those concerning the boundaries between work packages, must be handled judiciously by the package in question. 
Thus, the current concern of building professionals is to be able to produce, at the right moment and for a given specifier, a quality specification, compatible with those previously approved and faithful to the actual description of the project, as provided by a building data model. This thesis sets out to demonstrate that written documents can be generated via a data model supporting the formal, physical and performance-related description of a construction project. The approach consists in proposing a logical document structure, from which the reference CCTP document type definition (CCTP DTD) is derived in the SGML language. The elements of the DTD are then instantiated to produce the tagged version of the CCTP. Such an implementation allows, among other things, the generation of the table of contents, the reference lists, and the hypertext links internal and external to the document. We then propose an association model allowing the concepts of the building data model to be indexed by documentary items of the tagged CCTP. It is through the instances of this model that the project CCTP is produced, subject to all the consistency checks internal and external to the document. This approach ensures the highest quality of the descriptive documents of a project and contributes to reducing the risk of errors linked to the complex design / construction / maintenance process of a building operation. By way of conclusion, we propose a generalization of this approach to any type of "project" document.
41

Tröger, Ralph. "Supply Chain Event Management – Bedarf, Systemarchitektur und Nutzen aus Perspektive fokaler Unternehmen der Modeindustrie." Doctoral thesis, Universitätsbibliothek Leipzig, 2014. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-155014.

Full text
Abstract:
Supply Chain Event Management (SCEM) denotes a subdiscipline of supply chain management and offers companies a starting point for optimizing logistics performance and costs by reacting early to critical exceptional events in the value chain. Owing to conditions such as global logistics structures, a high variety of articles and volatile business relationships, the fashion industry is among the sectors particularly vulnerable to critical disruptive events. In this light, after covering the essential fundamentals, this dissertation first examines to what extent there actually is a need for SCEM systems in the fashion industry. Building on this, and after presenting existing SCEM architecture concepts, it shows design options for a system architecture based on the design principles of service orientation. In this context, SCEM-relevant business services are identified, among other things. The advantages of a service-oriented design are illustrated in detail using the EPCIS (EPC Information Services) specification. The work is rounded off by a consideration of the potential benefits of SCEM systems. After presenting approaches suitable for determining these benefits, the benefit is demonstrated with a practical example and, together with the results of a literature review, flows into a consolidation of SCEM benefit effects. This also sheds light on the additional advantages a service-oriented architecture design offers companies. The concluding section summarizes the key findings of the work and, in an outlook, discusses both the relevance of the results for mastering future challenges and the starting points they offer for subsequent research.
42

Fenollosa, Artés Felip. "Contribució a l'estudi de la impressió 3D per a la fabricació de models per facilitar l'assaig d'operacions quirúrgiques de tumors." Doctoral thesis, Universitat Politècnica de Catalunya, 2019. http://hdl.handle.net/10803/667421.

Full text
Abstract:
This doctoral thesis has focused on the challenge of obtaining, by means of Additive Manufacturing (AM), models for surgical rehearsal, under the premise that the equipment to produce them should be affordable for hospitals. The aim is to facilitate the wider use of prototypes as a tool for preparing surgical operations, transforming current medical practice in the same way that technologies such as those that enabled the use of X-rays once did. The reason for using AM, instead of more traditional technologies, is its capacity to directly materialize the digital data obtained from the patient's anatomy through three-dimensional scanning systems, making it possible to obtain personalized models. The results focus on generating new knowledge on how to achieve affordable multi-material 3D printing equipment that allows models mimetic to living tissues to be obtained. To facilitate this sought-after spread of the technology, the work has focused on open-source technologies such as Fused Filament Fabrication (FFF) and similar ones based on catalysable liquids. The research is aligned with the AM development activity at CIM UPC, and in this specific area with the collaboration with the Hospital Sant Joan de Déu in Barcelona (HSJD). The first block of the thesis includes the description of the state of the art, detailing the existing technologies and their application to the medical environment. For the first time, a basis for characterizing living tissues (especially soft ones) has been established to support the selection of materials that can mimic them in an AM process, in order to improve the surgeons' rehearsal experience. The rigid character of the materials mostly used in 3D printing makes them of little use for simulating tumours and other anatomical references. 
Successively, parameters such as density, viscoelasticity, the characterization of soft materials in industry, the study of the elastic modulus of soft tissues and vessels, their hardness, and requirements such as the sterilization of the models are addressed. The second block begins by exploring 3D printing by FFF. The variants of the process are classified from the point of view of multi-materiality, essential for making surgical rehearsal models, distinguishing between multi-nozzle solutions and mixing at the printhead. The study of the materials (filaments and liquids) that would be most useful for mimicking soft tissues has been included. It is found that with liquids, compared with filaments, the complexity of working in AM processes is higher, and ways of printing very soft materials are determined. Finally, six real cases of collaboration with the HSJD are presented, a selection of those in which the doctoral candidate has been involved in recent years. Their origin lies in the difficulty of approaching resection operations of childhood tumours such as neuroblastoma, and in the initiative of Dr. Lucas Krauel. Finally, Block 3 aims to explore numerous concepts (up to 8), an activity completed over the last five years with the support of the resources of CIM UPC and of the activity associated with final-year projects of UPC students, culminating in experimental equipment built to validate them. The broad and systematic research in this respect brings a desktop multi-material 3D printing solution closer. It is determined that the best way forward is to have a plurality of independent printheads, so as to enable the 3D printer to integrate several of the concepts studied, and a possible solution is materialized. Closing the thesis, the question of what a 3D printing system for surgical rehearsal models would look like is considered, so as to serve as a basis for future developments.
APA, Harvard, Vancouver, ISO, and other styles
43

Almeida, António Manuel Galinho Pires de. "Modelo de sistemas de informação técnica baseado numa plataforma SIG." Master's thesis, 2006. http://hdl.handle.net/10362/3642.

Full text
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems
This work aims to develop a conceptual model of a Technical Information System (SIT) based on a GIS platform, applied to industry — more specifically to the electrical network of a factory — while presenting the methodology to follow for integrating the model into an organization and the advantages such a tool can provide. The conceptual model of the SIT is first specified and documented in UML, a process in which two subsystems were identified in its constitution; these are then transposed to a GIS platform and to a relational DBMS platform, using for that purpose the entity-attribute-relationship (EAR) model of [CHEN, 1976] and the transposition rules of [BENNET et al., 1999]. Once the model had been transposed to the GIS and DBMS platforms, simulations of its applicability to a large organization were carried out, specifically at VW Autoeuropa, the company selected for the case study. The simulations covered the three types of analysis supported by the SIT: analysis of routine equipment-location problems; analysis of problems using information integrated from other information systems, such as SAP and the Energy Management System (SGE); and analysis of complex problems using geoprocessing operations, in which case the SIT can be regarded as a decision support system. The model created suggests that it can be extended to other types of infrastructure, namely water, sanitation, gas, and IT networks. The approach taken throughout this dissertation, through the inclusion of several types of models, makes it a kind of guideline for integrating GIS or other information systems into organizations.
APA, Harvard, Vancouver, ISO, and other styles
44

Lai, Ruei-Yang, and 賴瑞陽. "Ontology based Big Data Analytics for the Benchmarking Hospitality Industry." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/86094624228790627954.

Full text
Abstract:
Master's thesis
Yuan Ze University
Department of Industrial Engineering and Management
105
The hospitality industry is a data-rich industry that captures, at high velocity, huge volumes of data of various types, including arrival times, frequency of use of public facilities, meal preferences, customer service records, social network comments, etc. These data encapsulate useful information regarding every phase of the customer journey, and the effective use of analytics can dramatically improve how the business is run, delivering memorable and personalized guest experiences while maximizing revenue and profits. The challenge of Big Data stems not just from the volume and velocity of the data sets themselves, but also from the variety challenge posed by gaining big-data insight in the context of an industry — for example, how to score the performance of hotels based on a variety of semantic data sources. An ontology is a formal representation of knowledge as a hierarchy of concepts within a domain, using a shared vocabulary to denote the types, properties, and interrelationships of those concepts. Ontology-driven Big Data analytics has the potential to provide a pragmatic framework for addressing the semantic challenges presented by Big Data sets. This research aims to develop an ontology-driven Big Data Semantic Analytics Platform that enables hotels to improve their overall performance in furnishing better customer experiences by infusing analytics through every phase of the guest journey. The platform allows businesses in the hospitality industry not only to capture and store the influx of semantic data effectively, but also to evaluate their key performance factors using ontology-organized international standard knowledge bases such as the U.S. AAA Diamond evaluation system.
APA, Harvard, Vancouver, ISO, and other styles
45

"Cluster Metrics and Temporal Coherency in Pixel Based Matrices." Master's thesis, 2014. http://hdl.handle.net/2286/R.I.24849.

Full text
Abstract:
In this thesis, the application of pixel-based vertical axes within parallel coordinate plots is explored in an attempt to improve how existing tools can explain complex multivariate interactions across temporal data. Several promising visualization techniques are combined, such as: visual boosting to allow for quicker consumption of large data sets, the bond energy algorithm to find finer patterns and anomalies through contrast, multi-dimensional scaling, flow lines, user-guided clustering, and row-column ordering. User input is applied to precomputed data sets to provide real-time interaction. The general applicability of the techniques is tested against industrial trade, social networking, financial, and sparse data sets of varying dimensionality.
Dissertation/Thesis
M.S. Computer Science 2014
APA, Harvard, Vancouver, ISO, and other styles
46

LEE, BO-RU, and 李柏儒. "Establishment of Data Mining System Based on Time Series - A Case Study of Machine Tool Industry Data." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/566h2r.

Full text
Abstract:
Master's thesis
Feng Chia University
Department of Industrial Engineering and Systems Management
106
A time series is a set of statistical data arranged in order of occurrence. There may be latent causality between time series, but that causality can be quite complex. If data mining can be used to clarify the causal relationships between time series, it may be possible to find usable prediction models. To let users explore the relationships between sequences quickly, this study uses Excel to create a database into which users can load the required time series data; a decision tree algorithm written in VBA then mines the sequences, allowing users to obtain results quickly. The study takes machine tools and economic indicators as an actual case, collecting various economic indicators together with import and export amounts and quantities as time series data in order to find the causal relationships between them.
APA, Harvard, Vancouver, ISO, and other styles
47

Zeng, Guan-Lun, and 曾冠倫. "Building an intelligent factory big data platform based on the Industry 4.0." Thesis, 2017. http://ndltd.ncl.edu.tw/handle/55839530487149056515.

Full text
Abstract:
Master's thesis
Chung Yuan Christian University
Graduate Institute of Information Management
105
In recent years, Industry 4.0 and its enabling technologies have become more and more popular. The Industry 4.0 concept was proposed mainly by Germany in 2012. This research integrates the Internet of Things (IoT) and Big Data to bring Industry 4.0 applications into traditional factories. Because of the lack of formalization and the sheer volume involved, manufacturing parameters usually are not recorded, or are deleted within a short time. This research therefore builds a big data analysis platform so that manufacturing parameters can be recorded and analyzed, with the aim of monitoring product production status. By visualizing factory information, staff members can quickly grasp project progress and identify problems.
APA, Harvard, Vancouver, ISO, and other styles
48

Lin, Ching-Yi, and 林靜怡. "Application of Big Data Analytics in Telecom Industry Based On BroadBand Log." Thesis, 2013. http://ndltd.ncl.edu.tw/handle/97914825195555360815.

Full text
Abstract:
Master's thesis
National Taiwan University of Science and Technology
Graduate Institute of Management
101
In recent years, big data has become a critical issue, driven by the popularity of smartphones and social media. Technology vendors have increased their investment in the big data analysis industry, and many companies have begun to invest money and human resources to estimate the benefits that big data can bring. By analyzing data on customer behavior, companies can improve the hit rate of cross-sell and up-sell campaigns and generate more profit; a company that understands its customers better holds an advantage over the competition. This research uses text mining to analyze unstructured data and combines it with internal structured data in order to infer the lifestyles and preferences of customers on a big data analysis platform. Understanding customers' lifestyles starts with data collection and data management, a step called "data processing and preparation". There are many sources of data, for example: billing records, application forms, telephone records, and mobile Internet browsing behavior. Next, the data must be analyzed and interpreted: the collected big data are converted into information and knowledge through data mining and text mining analysis. This research thus seeks to uncover business value through a big data analysis platform, showing that integrating internal and external data to discover customer preferences can successfully create new business income.
APA, Harvard, Vancouver, ISO, and other styles
49

楊能吉. "ISP10303-Based Configuration Management Data Modeling and Its Application in Electronic Industry." Thesis, 1998. http://ndltd.ncl.edu.tw/handle/85024044987112778399.

Full text
Abstract:
Master's thesis
National Tsing Hua University
Graduate Institute of Industrial Engineering
86
The electronics industry has been the fastest growing industry in this decade, and many small and medium-sized enterprises have formed to become an integral part of it. These companies rely heavily on electronic data interchange with their cooperating companies for quick response to market needs while keeping production costs low. This thesis presents a Configuration Management (CM) data model that complies with the international standard for product data exchange (ISO 10303). Furthermore, an algorithm for electronic assemblability analysis is developed, which verifies the CM data to ensure valid product revisions and engineering changes. This research focuses on providing a neutral, international data model that will help bring the global virtual enterprise to reality. Using the Common Gateway Interface (CGI), the system can be accessed from a remote PC to acquire the desired product data, lowering the costs that enterprises spend on communication and data interchange.
APA, Harvard, Vancouver, ISO, and other styles
50

Silva, João Pedro Gonçalves da. "A predictive maintenance approach based in big data analysis." Master's thesis, 2019. http://hdl.handle.net/10071/20241.

Full text
Abstract:
With the evolution of information systems, data flows have expanded to new boundaries, allowing enterprises to further develop their approach to important sectors such as production, logistics, IT, and especially maintenance. This last field has accompanied industrial development hand in hand through each of its four iterations. More specifically, the fourth iteration (Industry 4.0) brought the capability to connect machines and further enhance data extraction, which allowed companies to apply a new data-driven approach to their specific problems. Nevertheless, with a wider flow of data being generated, understanding data became a priority for maintenance-related decision-making processes. The correct elaboration of a roadmap for applying predictive maintenance (PM) is therefore a key step for companies: a roadmap allows a safe approach in which resources can be placed strategically, with low risk and high reward. By analysing multiple approaches to PM, a generic model containing an array of guidelines is proposed. This combination aims to assist maintenance departments that wish to understand the feasibility of implementing a predictive maintenance solution in their company. To assess the utility of the developed artefact, a practical application was conducted on a production line of HFA, a Portuguese small and medium-sized enterprise.
APA, Harvard, Vancouver, ISO, and other styles