A selection of scholarly literature on the topic "Extraction de graphes"


Consult the lists of relevant articles, books, dissertations, conference papers and other scholarly sources on the topic "Extraction de graphes".


Journal articles on the topic "Extraction de graphes":

1

Ahmad, Jawad, Abdur Rehman, Hafiz Tayyab Rauf, Kashif Javed, Maram Abdullah Alkhayyal, and Abeer Ali Alnuaim. "Service Recommendations Using a Hybrid Approach in Knowledge Graph with Keyword Acceptance Criteria." Applied Sciences 12, no. 7 (March 31, 2022): 3544. http://dx.doi.org/10.3390/app12073544.

Abstract:
Businesses are growing rapidly worldwide; people pursue businesses and startups in almost every field of life, whether industrial or academic. Businesses and services have multiple income streams with which they generate revenue. Most companies use different marketing and advertising strategies to engage their customers and spread their services worldwide. Service recommendation systems are gaining popularity as a way to recommend the best services and products to customers. In recent years, the development of service-oriented computing has had a significant impact on the growth of businesses. Knowledge graphs are data structures commonly used to describe the relations among data entities in recommendation systems. The domain-oriented user and service interaction knowledge graph (DUSKG) is a framework for keyword extraction in recommendation systems. This paper proposes a novel chunking-based keyword-extraction method for hybrid recommendation that extracts domain-specific keywords in DUSKG. We further show that the performance of the hybrid approach is better than that of other techniques. The proposed chunking method for keyword extraction outperforms the existing value feature entity extraction (VF2E) approach while extracting fewer keywords.
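As a rough sketch of what chunking-based keyword extraction means in practice — a generic noun-phrase chunker over pre-tagged tokens, not the authors' DUSKG or VF2E code — contiguous adjective/noun runs become candidate keywords:

```python
def chunk_keywords(tagged_tokens):
    """Collect chunks of the form ADJ* NOUN+ from (word, tag) pairs.

    Emitting only noun-phrase chunks yields far fewer (and more
    domain-specific) candidates than emitting every token.
    """
    keywords, chunk = [], []

    def flush():
        # keep a chunk only if it actually ends in a noun
        if chunk and chunk[-1][1] == "NOUN":
            keywords.append(" ".join(word for word, _ in chunk))
        chunk.clear()

    for word, tag in tagged_tokens:
        if tag in ("ADJ", "NOUN"):
            chunk.append((word, tag))
        else:
            flush()
    flush()
    return keywords

tagged = [("hybrid", "ADJ"), ("recommendation", "NOUN"), ("systems", "NOUN"),
          ("use", "VERB"), ("domain-specific", "ADJ"), ("keywords", "NOUN")]
print(chunk_keywords(tagged))
# ['hybrid recommendation systems', 'domain-specific keywords']
```

In a real pipeline the part-of-speech tags would come from a tagger; here they are supplied by hand to keep the sketch self-contained.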
2

Cramond, Fala, Alison O'Mara-Eves, Lee Doran-Constant, Andrew SC Rice, Malcolm Macleod, and James Thomas. "The development and evaluation of an online application to assist in the extraction of data from graphs for use in systematic reviews." Wellcome Open Research 3 (March 7, 2019): 157. http://dx.doi.org/10.12688/wellcomeopenres.14738.3.

Abstract:
Background: The extraction of data from the reports of primary studies, on which the results of systematic reviews depend, needs to be carried out accurately. To aid reliability, it is recommended that two researchers carry out data extraction independently. The extraction of statistical data from graphs in PDF files is particularly challenging, as the process is usually completely manual, and reviewers sometimes need to resort to holding a ruler against the page to read off values: an inherently time-consuming and error-prone process. Methods: To mitigate some of these problems we integrated and customised two existing JavaScript libraries to create a new web-based graphical data extraction tool to assist reviewers in extracting data from graphs. The tool aims to facilitate more accurate and timely data extraction through a user interface in which data are extracted with mouse clicks. We carried out a non-inferiority evaluation to examine its performance in comparison with participants' standard practice for extracting data from graphs in PDF documents. Results: We found that the customised graphical data extraction tool is not inferior to users' (N=10) prior standard practice. Our study was not designed to show superiority, but suggests that, on average, participants saved around 6 minutes per graph using the new tool, accompanied by a substantial increase in accuracy. Conclusions: Our study suggests that the incorporation of this type of tool in online systematic review software would be beneficial in facilitating the production of accurate and timely evidence synthesis to improve decision-making.
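The heart of any click-to-extract graph tool is the axis calibration that maps clicked pixel coordinates back to data values. A minimal one-axis sketch (the tool itself is built from JavaScript libraries, so this Python version is purely illustrative):

```python
def calibrate(pixel_a, pixel_b, value_a, value_b):
    """Build a pixel -> data-value mapping for one axis from two
    reference points clicked on the axis (linear scales only)."""
    scale = (value_b - value_a) / (pixel_b - pixel_a)
    return lambda pixel: value_a + (pixel - pixel_a) * scale

# Hypothetical y-axis: pixel row 400 is y=0, pixel row 100 is y=10
to_value = calibrate(400, 100, 0.0, 10.0)
print(to_value(250))  # a click halfway up the axis reads off 5.0
```

Each data point clicked by the reviewer is then converted with one such mapping per axis, replacing the ruler-against-the-page step the abstract describes.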
3

Hernández-García, Ángel, and Miguel Ángel Conde-González. "Bridging the Gap between LMS and Social Network Learning Analytics in Online Learning." Journal of Information Technology Research 9, no. 4 (October 2016): 1–15. http://dx.doi.org/10.4018/jitr.2016100101.

Abstract:
Despite the great potential of social network analysis (SNA) methods and visualizations for learning analytics in computer-supported collaborative learning (CSCL), these approaches have not been fully explored, owing to two important barriers: the scarcity and limited functionality of built-in tools in Learning Management Systems (LMS), and the difficulty of importing educational data from formal virtual learning environments into social network analysis programs. This study aims to bridge that gap by introducing GraphFES, an application and web service for the extraction of interaction data from Moodle message boards and the generation of the corresponding social graphs for later analysis in Gephi, a general-purpose SNA package. In addition, this paper briefly illustrates the potential of the combination of the three systems (Moodle, GraphFES and Gephi) for social learning analytics, using real data from a computer-supported collaborative learning course with a strong focus on teamwork and intensive use of forums.
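A minimal sketch of the kind of extraction such a tool performs — turning forum reply records into a weighted directed interaction graph — assuming a hypothetical (post_id, author, parent_id) schema; the real GraphFES reads Moodle's data through its web services and exports graphs for Gephi:

```python
# Hypothetical Moodle-style forum records: (post_id, author, parent_id)
posts = [
    (1, "alice", None),   # thread starter
    (2, "bob",   1),      # bob replies to alice
    (3, "carol", 1),
    (4, "alice", 2),      # alice replies to bob
]

def reply_graph(posts):
    """Directed weighted edges: replier -> author of the replied-to post."""
    author_of = {pid: author for pid, author, _ in posts}
    edges = {}
    for _pid, author, parent in posts:
        if parent is not None:
            key = (author, author_of[parent])
            edges[key] = edges.get(key, 0) + 1
    return edges

edges = reply_graph(posts)
print(edges)  # {('bob', 'alice'): 1, ('carol', 'alice'): 1, ('alice', 'bob'): 1}
```

The resulting edge list, written out in a format such as GEXF or CSV, is what a package like Gephi then lays out and analyses.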
4

Rebenciuc, Ioana, and Ovidiu Tița. "Influence of Pectolytic Enzymes on the Quality of Wine Maceration." Bulletin of University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca. Animal Science and Biotechnologies 75, no. 1 (May 19, 2018): 53. http://dx.doi.org/10.15835/buasvmcn-asb:000417.

Abstract:
Obtaining high-quality wines with a registered designation of origin means making the most of the specific features of the variety and of those imparted by the technology and the place of harvesting. The aroma is due to chemical compounds of terpenic nature that accumulate in the grapes (varietal flavors) and to the secondary aromas that form during alcoholic fermentation and the aging of the wines (Amrani and Glories, 1995). Wine-making from these varieties depends on appreciating the primary grape aromas, which is achieved by prefermentative maceration of the must by means of enzymes (Marin et al., 1998). The technology of aromatic wines has two fundamental objectives: extracting the primary aromas of the grapes (terpenols) and favoring the formation of secondary fermentation aromas. In order to obtain aromatic wines with varietal typicity, the prefermentative stage is decisive. Pectolytic enzyme preparations are used in oenology to accelerate and complete the extraction and clarification of the must, to extract and stabilize the color, to bring out the varietal aromatic potential, and to improve the filterability and maturation of the wines.
5

Cramond, Fala, Alison O'Mara-Eves, Lee Doran-Constant, Andrew SC Rice, Malcolm Macleod, and James Thomas. "The development and evaluation of an online application to assist in the extraction of data from graphs for use in systematic reviews." Wellcome Open Research 3 (December 10, 2018): 157. http://dx.doi.org/10.12688/wellcomeopenres.14738.1.

Abstract:
Background: The extraction of data from the reports of primary studies, on which the results of systematic reviews depend, needs to be carried out accurately. To aid reliability, it is recommended that two researchers carry out data extraction independently. The extraction of statistical data from graphs in PDF files is particularly challenging, as the process is usually completely manual, and reviewers sometimes need to resort to holding a ruler against the page to read off values: an inherently time-consuming and error-prone process. Methods: To mitigate some of these problems we developed a new web-based graphical data extraction tool to assist reviewers in extracting data from graphs. The tool aims to facilitate more accurate and timely data extraction through a user interface in which data are extracted with mouse clicks. We carried out a non-inferiority evaluation to examine its performance in comparison with standard practice. Results: We found that our new graphical data extraction tool is not inferior to users' previously preferred approaches. Our study was not designed to show superiority, but suggests that there may be a time saving of around 6 minutes per graph, accompanied by a substantial increase in accuracy. Conclusions: Our study suggests that the incorporation of this type of tool in online systematic review software would be beneficial in facilitating the production of accurate and timely evidence synthesis to improve decision-making.
6

Cramond, Fala, Alison O'Mara-Eves, Lee Doran-Constant, Andrew SC Rice, Malcolm Macleod, and James Thomas. "The development and evaluation of an online application to assist in the extraction of data from graphs for use in systematic reviews." Wellcome Open Research 3 (January 25, 2019): 157. http://dx.doi.org/10.12688/wellcomeopenres.14738.2.

Abstract:
Background: The extraction of data from the reports of primary studies, on which the results of systematic reviews depend, needs to be carried out accurately. To aid reliability, it is recommended that two researchers carry out data extraction independently. The extraction of statistical data from graphs in PDF files is particularly challenging, as the process is usually completely manual, and reviewers sometimes need to resort to holding a ruler against the page to read off values: an inherently time-consuming and error-prone process. Methods: To mitigate some of these problems we integrated and customised two existing JavaScript libraries to create a new web-based graphical data extraction tool to assist reviewers in extracting data from graphs. The tool aims to facilitate more accurate and timely data extraction through a user interface in which data are extracted with mouse clicks. We carried out a non-inferiority evaluation to examine its performance in comparison with standard practice. Results: We found that the customised graphical data extraction tool is not inferior to users' previously preferred approaches. Our study was not designed to show superiority, but suggests that there may be a time saving of around 6 minutes per graph, accompanied by a substantial increase in accuracy. Conclusions: Our study suggests that the incorporation of this type of tool in online systematic review software would be beneficial in facilitating the production of accurate and timely evidence synthesis to improve decision-making.
7

Taran, Nicolae, Boris Morari, and Olga Soldatenco. "Influence of Different Technological Processes on the Content of Biologically Active Substances at the Production of Dry Red Wine from the Cabernet Sauvignon Variety." Akademos 60, no. 1 (June 2021): 63–67. http://dx.doi.org/10.52673/18570461.21.1-60.08.

Abstract:
This research was focused on the influence of different fermentation-maceration processes for the optimization of the extraction of anthocyanins, tannins and biologically active substances from grapes of the Cabernet Sauvignon variety and their impact on the quality during dry red wine production. It was determined that increasing the duration of the fermentation-maceration process and extracting 20 % of the juice from the must allow the production of dry red wines with high proanthocyanidin content.
8

Czyrski, Andrzej, and Hubert Jarzębski. "Response Surface Methodology as a Useful Tool for Evaluation of the Recovery of the Fluoroquinolones from Plasma—The Study on Applicability of Box-Behnken Design, Central Composite Design and Doehlert Design." Processes 8, no. 4 (April 17, 2020): 473. http://dx.doi.org/10.3390/pr8040473.

Abstract:
The aim of this study was to find the design best suited to optimizing the recovery of representatives of the 2nd, 3rd and 4th generations of fluoroquinolones. The following designs were applied: Central Composite Design, Box–Behnken Design and Doehlert Design. The recovery, the dependent variable, was estimated for liquid–liquid extraction. The shaking time, pH, and volume of the extracting agent (dichloromethane) were the independent variables. All results underwent statistical analysis (ANOVA), which indicated Central Composite Design as the best model for evaluating the recovery. For each analyte, an equation was generated that enabled estimation of the theoretical recovery under the applied conditions. The graphs for these equations were provided by Response Surface Methodology. The statistical analysis also identified the most significant factors affecting the liquid–liquid extraction, which turned out to be pH for ciprofloxacin and moxifloxacin and the volume of the extracting solvent for levofloxacin.
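The core computation behind any of these designs is an ordinary least-squares fit of a full second-order (quadratic) model to the measured responses. A sketch with hypothetical recovery data (the paper's actual factor levels and responses are not reproduced here):

```python
import numpy as np

# Hypothetical CCD runs: coded factors x1 (pH), x2 (solvent volume),
# and the measured response y (recovery, %)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],             # factorial points
              [0, 0], [0, 0],                                 # centre replicates
              [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])  # axial points
y = np.array([62.0, 70.0, 65.0, 74.0, 80.0, 79.0, 72.0, 60.0, 71.0, 64.0])

def second_order_design(X):
    """Design matrix for y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares estimate of the six model coefficients
coeffs, *_ = np.linalg.lstsq(second_order_design(X), y, rcond=None)

def predict(point):
    return (second_order_design(np.atleast_2d(point)) @ coeffs)[0]
```

The fitted surface is what the ANOVA then assesses, and its stationary point gives the predicted optimum extraction conditions.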
9

Iglesias-Carres, Lisard, Anna Mas-Capdevila, Lucía Sancho-Pardo, Francisca Isabel Bravo, Miquel Mulero, Begoña Muguerza, and Anna Arola-Arnal. "Optimized Extraction by Response Surface Methodology Used for the Characterization and Quantification of Phenolic Compounds in Whole Red Grapes (Vitis vinifera)." Nutrients 10, no. 12 (December 5, 2018): 1931. http://dx.doi.org/10.3390/nu10121931.

Abstract:
Scientific research has focused on the characterization of bioactive polyphenols from grape seeds and skins, and the pulp has often been overlooked. However, since the beneficial properties of grapes are associated with the consumption of the whole fruit, full extraction and subsequent characterization of the phenolic compounds in whole grapes is required to identify the bioactive compounds involved. Such methodologies are not currently available for the whole edible parts of red grapes. This study aimed to determine the best polyphenol-extraction conditions for whole red grapes and to apply the method to characterize and quantify the polyphenol composition of three different grapes. The optimized conditions were 80 mL/g, 65% methanol (1% formic acid), 72 °C, and 100 minutes under agitation at 500 rpm. Methanol and ethanol were also compared as extraction solvents, and methanol achieved statistically higher extraction rates for anthocyanins. The results of this work suggest a higher quantification of phenolic compounds when red grapes are analyzed whole, including the seeds, pulp, and skin.
10

Zohdi, Zeynab, Mahdi Hashemi, Abdusalam Uheida, Mohammad Moein, and Mohamed Abdel-Rehim. "Graphene Oxide Tablets for Sample Preparation of Drugs in Biological Fluids: Determination of Omeprazole in Human Saliva for Liquid Chromatography Tandem Mass Spectrometry." Molecules 24, no. 7 (March 27, 2019): 1191. http://dx.doi.org/10.3390/molecules24071191.

Abstract:
In this study, a novel sample-preparation sorbent was developed by preparing thin-layer graphene oxide tablets (GO-Tabs), using a mixture of graphene oxide and polyethylene glycol on a polyethylene substrate. The GO-Tabs were used for the extraction and concentration of omeprazole (OME) in human saliva samples. The determination of OME was carried out using liquid chromatography–tandem mass spectrometry (LC–MS/MS) under gradient LC conditions in positive ion mode (ESI+), with mass transitions of m/z 346.3→198.0 for OME and m/z 369.98→252.0 for the internal standard. Standard calibration for the saliva samples was in the range of 2.0–2000 nmol L−1. The limits of detection and quantification were 0.05 and 2.0 nmol L−1, respectively. Method validation showed good accuracy and precision; inter-day precision values ranged from 5.7 to 8.3 (%RSD), and the accuracy of determinations varied from −11.8% to 13.3% (% deviation from nominal values). The extraction recovery was 60%, and GO-Tabs could be re-used for more than ten extractions without deterioration in recovery. The determination of OME in real human saliva samples using GO-Tab extraction was validated.

Dissertations on the topic "Extraction de graphes":

1

Haugeard, Jean-Emmanuel. "Extraction et reconnaissance de primitives dans les façades de Paris à l'aide d'appariement de graphes." Thesis, Cergy-Pontoise, 2010. http://www.theses.fr/2010CERG0497.

Abstract:
Over the last decade, 3D city modelling has become one of the challenges of multimedia retrieval research and an important focus in object recognition. In this thesis we are interested in locating various primitives, especially windows, in the facades of Paris. First, we present an analysis of facades and of the properties of windows, from which we derive an algorithm able to extract window hypotheses automatically. In a second part, we address the extraction and recognition of primitives using the matching of contour graphs. An edge image is readable by the human eye, which performs perceptual grouping and distinguishes the entities present in the scene; it is this mechanism that we seek to reproduce. The image is represented as an adjacency graph of contour segments, weighted by orientation and proximity information about the segments. For inexact graph matching, we propose several variants of a new similarity based on sets of paths traced on the graphs, able to group contours and robust to scale changes. The similarity between paths takes into account both the similarity of the sets of contour segments and the similarity of the regions defined by these paths. Images from a database containing a particular object are selected using an SVM or k-NN classifier, and objects are localised in the image by a voting scheme over the paths selected by the matching algorithm.
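The adjacency graph of contour segments described in the abstract can be sketched as follows; the gap threshold and attribute names are illustrative, not values taken from the thesis:

```python
import math

def orientation(segment):
    """Angle of a segment given as a pair of endpoints."""
    (x0, y0), (x1, y1) = segment
    return math.atan2(y1 - y0, x1 - x0)

def segment_adjacency_graph(segments, max_gap=5.0):
    """Add an edge between two contour segments when their closest
    endpoints are within max_gap pixels; each edge carries that
    proximity and the absolute orientation difference."""
    edges = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            gap = min(math.dist(p, q)
                      for p in segments[i] for q in segments[j])
            if gap <= max_gap:
                dtheta = abs(orientation(segments[i]) - orientation(segments[j]))
                edges.append((i, j, {"gap": gap, "dtheta": dtheta}))
    return edges

# Two segments meeting at a corner, plus one far away
segments = [((0, 0), (10, 0)), ((11, 0), (11, 10)), ((50, 50), (60, 50))]
graph = segment_adjacency_graph(segments)
print([(i, j) for i, j, _ in graph])  # only the corner pair: [(0, 1)]
```

Paths traced over such a graph are what the thesis's similarity measure compares between images.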
2

Haugeard, Jean-Emmanuel. "Extraction et reconnaissance de primitives dans les façades de Paris à l'aide de similarités de graphes." Phd thesis, Université de Cergy Pontoise, 2010. http://tel.archives-ouvertes.fr/tel-00593985.

Abstract:
Over the last decade, 3D city modelling has become one of the challenges of multimedia retrieval research and an important focus in object recognition. In this thesis we are interested in locating various primitives, especially windows, in the facades of Paris. First, we present an analysis of facades and of the properties of windows, from which we derive an algorithm able to extract window hypotheses automatically. In a second part, we address the extraction and recognition of primitives using the matching of contour graphs. An edge image is readable by the human eye, which performs perceptual grouping and distinguishes the entities present in the scene; it is this mechanism that we seek to reproduce. The image is represented as an adjacency graph of contour segments, weighted by orientation and proximity information about the segments. For inexact graph matching, we propose several variants of a new similarity based on sets of paths traced on the graphs, able to group contours and robust to scale changes. The similarity between paths takes into account both the similarity of the sets of contour segments and the similarity of the regions defined by these paths. Images from a database containing a particular object are selected using an SVM or k-NN classifier, and objects are localised in the image by a voting scheme over the paths selected by the matching algorithm.
3

Raveaux, Romain. "Fouille de graphes et classification de graphes : application à l'analyse de plans cadastraux." Phd thesis, Université de La Rochelle, 2010. http://tel.archives-ouvertes.fr/tel-00567218.

Abstract:
The work presented in this thesis addresses, from several angles, a broad and ambitious subject: the interpretation of colour cadastral maps. In this context, our approach lies at the confluence of several research fields, such as signal and image processing, pattern recognition, artificial intelligence and knowledge engineering. Although these scientific domains differ in their foundations, they are complementary, and their respective contributions are essential to the design of an interpretation system. The core of the work is the automatic processing of 19th-century cadastral documents. The problem is treated within a project bringing together historians, geomatics specialists and computer scientists. We considered the problem from a systemic angle, looking at every stage of the processing chain, while also aiming to develop methodologies applicable in other contexts. Cadastral documents have been the subject of many studies, but our work is original in focusing on document interpretation and in basing the study on graph-based models. Appropriate processing methods and methodologies are proposed, and the concern to bridge the semantic gap between the image and its interpretation is, in the case of the cadastral maps studied, answered.
4

Berger, Laurent. "Extraction de regions d'une image apres codage par quadtree en vue de la reconstitution de scenes tridimensionnelles par stereoscopie." Le Mans, 1992. http://www.theses.fr/1992LEMA1007.

Abstract:
This work deals with quadtree image coding using polynomial approximations, in order to segment a stereoscopic image pair for the stereo reconstruction of a scene. Depending on the order (0, 1 or 2) of the polynomial estimators, the images are locally modelled by horizontal planes, arbitrary planes or paraboloids. A compact formulation of the estimator allows its fast recursive computation in the least-squares sense. Image segmentation consists of two parts: region construction and region merging. Region construction takes into account the size of a block and the gradient of the estimator over it. Merging is then performed according to the grey levels of the regions, their respective areas and the quality of their boundaries. The segmentation algorithm is then applied to robotics, aerial and metallographic images.
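An order-0 (constant) version of the quadtree splitting described above can be sketched as follows; the thesis also uses order-1 and order-2 least-squares estimators, whereas this sketch splits on a simple maximum-deviation test:

```python
def quadtree(img, x, y, size, tol=10.0):
    """Recursively split a size x size block until the constant
    (order-0) model fits: a leaf is kept when no pixel deviates from
    the block mean by more than tol. Leaves are (x, y, size, mean)."""
    vals = [img[row][col]
            for row in range(y, y + size)
            for col in range(x, x + size)]
    mean = sum(vals) / len(vals)
    if size == 1 or max(abs(v - mean) for v in vals) <= tol:
        return [(x, y, size, mean)]
    half = size // 2
    return (quadtree(img, x, y, half, tol)
            + quadtree(img, x + half, y, half, tol)
            + quadtree(img, x, y + half, half, tol)
            + quadtree(img, x + half, y + half, half, tol))

# 4x4 image: dark left half, bright right half
img = [[0, 0, 200, 200] for _ in range(4)]
leaves = quadtree(img, 0, 0, 4)
print(len(leaves))  # the root splits once into four homogeneous quadrants: 4
```

Replacing the mean/deviation test with a least-squares plane or paraboloid fit gives the higher-order variants the thesis describes.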
5

Garnero, Valentin. "(Méta)-noyaux constructifs et linéaires dans les graphes peu denses." Thesis, Montpellier, 2016. http://www.theses.fr/2016MONTT328/document.

Abstract:
In algorithmics and complexity theory, much research rests on the assumption that P ≠ NP (Polynomial time and Non-deterministic Polynomial time), that is, that there are problems whose solutions can be verified, but not constructed, in polynomial time. Under this assumption many natural problems are not in P (that is, admit no efficient algorithm), which has led to the development of several branches of algorithmics. One of them is parameterized complexity. It provides exact algorithms whose complexity is analysed as a function of both the instance size and a parameter, allowing a finer-grained analysis of complexity. An algorithm is then considered efficient if it is fixed-parameter tractable (fpt), that is, if its complexity is exponential in the parameter but polynomial in the instance size. Problems solvable by such algorithms form the class FPT (Fixed-Parameter Tractable).
Kernelisation is a technique that yields fpt algorithms, among other things. It can be seen as a preprocessing of the instance with a guarantee on the compression of the data. More formally, a kernelisation is a polynomial-time reduction from a problem to itself, with the additional constraint that the size of the kernel (the reduced instance) is bounded by a function of the parameter. To obtain an fpt algorithm it then suffices to solve the problem on the kernel, for example by brute force (whose complexity is exponential in the parameter). The existence of a kernel therefore implies the existence of an fpt algorithm, and the converse also holds. However, the existence of an efficient fpt algorithm does not guarantee a small kernel, that is, a kernel of linear or polynomial size. Under certain hypotheses, some problems admit no kernel at all (that is, lie outside FPT), and some problems in FPT admit no polynomial kernel.
A major result in the field of kernelisation is the construction of a linear kernel for Dominating Set on planar graphs, by Alber, Fellows and Niedermeier. Their region-decomposition method has since been reused to build kernels for many variants of Dominating Set on planar graphs. The method, however, contained a number of inaccuracies that invalidated the proofs. In the first part of this thesis we present the method in a more rigorous form and illustrate it on two problems: Red-Blue Dominating Set and Total Dominating Set. The method has since been generalised, on the one hand to larger graph classes (bounded genus, minor-free, topological-minor-free), and on the other hand to a wider variety of problems. These meta-results prove the existence of linear or polynomial kernels for every problem satisfying certain generic conditions on a class of sparse graphs. The price of this generality is the loss of constructiveness: the proofs provide no constructive extraction algorithm, and the bound on the kernel size is not explicit. In the second part of this thesis we take a first step towards constructive meta-results: we propose a general framework for building linear kernels, inspired by the principles of dynamic programming and by a meta-result of Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh and Thilikos.
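As a concrete, minimal instance of kernelisation — the textbook Buss kernel for Vertex Cover, chosen here for brevity rather than the dominating-set kernels the thesis studies — a vertex of degree greater than k must be in every cover of size at most k, and once no such vertex remains, a yes-instance can have at most k² edges:

```python
def buss_kernel(edges, k):
    """Kernel for VERTEX COVER parameterised by k.

    Reduction rule: a vertex of degree > k must belong to every cover
    of size <= k, so take it and decrease k. Afterwards every vertex
    covers <= k edges, so more than k*k remaining edges means "no".
    Returns (reduced_edges, new_k, forced_vertices), or None for a
    provable no-instance.
    """
    edges = {frozenset(e) for e in edges}
    forced = []
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            if d > k:
                forced.append(v)
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k, forced

# A star with 5 leaves plus one extra edge, k = 2: the hub (degree 5) is forced
instance = [(0, i) for i in range(1, 6)] + [(6, 7)]
reduced, k_left, forced = buss_kernel(instance, 2)
print(forced, k_left, len(reduced))  # [0] 1 1
```

Solving the reduced instance by brute force then yields the fpt algorithm that the existence of the kernel guarantees.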
6

Raveaux, Romain. "Fouille de graphes et classification de graphes : application à l’analyse de plans cadastraux." Thesis, La Rochelle, 2010. http://www.theses.fr/2010LAROS311/document.

Abstract:
The work presented in this thesis addresses, from several very interesting angles, a vast and ambitious subject: the interpretation of colour cadastral maps. In this context, our approach lies at the confluence of several research fields such as signal and image processing, pattern recognition, artificial intelligence and knowledge engineering. Indeed, while these scientific domains differ in their foundations, they are complementary and their respective contributions are indispensable for designing an interpretation system. The core of the work is the automatic processing of 19th-century cadastral documents. The problem is addressed within a project bringing together historians, geomaticians and computer scientists. On the one hand, we considered the problem from a systemic angle, taking an interest in every step of the processing chain, but also with an evident concern to develop methodologies applicable in other contexts. Cadastral documents have been the subject of numerous studies, but we have shown a certain originality, putting the emphasis on document interpretation and basing our study on graph-based models. Proposals for appropriate processing steps and methodologies have been formulated. The concern of bridging the semantic gap between the image and its interpretation has, in the case of the cadastral maps studied, received an answer.
This thesis tackles the problem of technical document interpretation applied to ancient and colored cadastral maps. This subject is on the crossroad of different fields like signal or image processing, pattern recognition, artificial intelligence, man-machine interaction and knowledge engineering. Indeed, each of these different fields can contribute to build a reliable and efficient document interpretation device. This thesis points out the necessities and importance of dedicated services oriented to historical documents and a related project named ALPAGE. Subsequently, the main focus of this work, Content-Based Map Retrieval within an ancient collection of color cadastral maps, is introduced.
7

Rahmoun, Somia. "Extraction, caractérisation et mesure de courbes imparfaites en résolution limitée." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCK041.

Abstract:
In the molecular field, polymers are observed and studied by microscopy. The shapes obtained are often imprecise because of the convolution and diffraction effects of microscope acquisition. As a consequence, the polymer appears as a thick, noisy and blurred curve. To study the characteristics of a polymer chain, one possible approach is to reduce the shape acquired by the microscope to a minimal representation, namely a curve. This curve must represent the studied object as faithfully as possible despite the various difficulties encountered, such as the quality of the images or the inaccuracies due to discretisation. Moreover, a polymer adopts a "reptilian" motion and can form complex geometries such as closed curves or curves with loops. The subject of this thesis is therefore the extraction of curves aimed at providing a minimal representation of polymers for analysis purposes. The method we propose comprises two main steps: the extraction of geodesics and their fusion. The first step consists in computing a set of geodesics, each traversing a distinct part of the shape. These pieces of geodesics are merged in the second step in order to generate the complete curve. In order to represent the reptation, the geodesics must be merged in a precise order. We model this problem with graphs and search for the fusion order by traversing the graph. The fusion is performed along the optimal path minimising various constraints.
In the molecular field, polymers are observed and studied by microscopy. The shapes obtained are often inaccurate because of the convolution and diffraction effects of microscopy. Therefore, the polymer appears as a thick, noisy and fuzzy curve. In order to study a polymer chain, one of the possible approaches consists in reducing the acquired shape to a minimal representation, i.e. a curve. This curve must represent the studied object in the best way, despite the various difficulties encountered, such as the quality of the images or inaccuracies due to discretization. In addition, a polymer performs a "reptilian" movement and can form complex geometries such as closed and looped curves. The object of this thesis is, therefore, the extraction of curves aimed at providing a minimal representation of the polymers for analysis. The proposed method comprises two major steps: geodesics extraction and their fusion. The first step is to compute a set of geodesics, each one traversing a distinct part of the shape. These pieces of geodesics are fused in the second step to generate the complete curve. In order to represent the reptation, the geodesics have to be merged in a precise order. We model this problem by graphs and consider the fusion as a graph traversal problem. The fusion is performed according to the optimal path minimizing various constraints.
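As a toy illustration of the last step described above (choosing a fusion order as an optimal path in a graph), here is a minimal sketch that picks the cheapest order in which to merge geodesic fragments. The fragment names and pairwise fusion costs are hypothetical; this is not the thesis's actual algorithm, only a shortest-path analogy.

```python
import heapq

def cheapest_fusion_order(costs, start, goal):
    """Dijkstra over a graph whose nodes are geodesic fragments and whose
    edge weights are (hypothetical) fusion costs.
    Returns (total_cost, fragment order along the optimal path)."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in costs.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk the predecessor links back from the goal to recover the order.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return dist[goal], path[::-1]
```

With three fragments where merging a→b→c is cheaper than a→c directly, the function returns the order `['a', 'b', 'c']`.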
8

Viana, do Espírito Santo Ilísio. "Inspection automatisée d’assemblages mécaniques aéronautiques par vision artificielle : une approche exploitant le modèle CAO." Thesis, Ecole nationale des Mines d'Albi-Carmaux, 2016. http://www.theses.fr/2016EMAC0022/document.

Abstract:
The work presented in this manuscript falls within the context of the automated inspection of aeronautical mechanical assemblies by computer vision. The goal is to decide whether the mechanical assembly has been carried out correctly (compliant assembly). The work was conducted within two industrial projects: on the one hand the CAAMVis project, in which the inspection sensor consists of a double stereoscopic head carried by a robot, and on the other hand the Lynx© project, in which the inspection sensor is a Pan/Tilt/Zoom camera (monocular vision). These two projects share the desire to make the best possible use of the CAD model of the assembly (which provides the desired reference state) in the inspection task, which is based on the analysis of the 2D image or images provided by the sensor. The method developed consists in comparing a 2D image acquired by the sensor (referred to as the "real image") with a synthetic 2D image generated from the CAD model. The real and synthetic images are segmented and then decomposed into a set of 2D primitives. These primitives are then matched, exploiting concepts from graph theory, notably the use of a bipartite graph to ensure that the uniqueness constraint is respected in the matching process. The result of the matching makes it possible to decide whether the assembly is compliant or not. The proposed approach was validated both on simulation data and on real data acquired within the above-mentioned projects.
The work presented in this manuscript deals with automated inspection of aeronautical mechanical parts using computer vision. The goal is to decide whether a mechanical assembly has been assembled correctly, i.e. if it is compliant with the specifications. This work was conducted within two industrial projects. On the one hand the CAAMVis project, in which the inspection sensor consists of a dual stereoscopic head (stereovision) carried by a robot; on the other hand the Lynx© project, in which the inspection sensor is a single Pan/Tilt/Zoom camera (monocular vision). These two projects share the common objective of exploiting the CAD model of the assembly (which provides the desired reference state) as much as possible in the inspection task, which is based on the analysis of the 2D images provided by the sensor. The proposed method consists in comparing a 2D image acquired by the sensor (referred to as the "real image") with a synthetic 2D image generated from the CAD model. The real and synthetic images are segmented and then decomposed into a set of 2D primitives. These primitives are then matched by exploiting concepts from graph theory, namely the use of a bipartite graph to guarantee the respect of the uniqueness constraint required in such a matching process. The matching result makes it possible to decide whether the assembly has been assembled correctly or not. The proposed approach was validated on both simulation data and real data acquired within the above-mentioned projects.
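To make the bipartite-matching step concrete, here is a minimal sketch, purely illustrative: the primitives are reduced to hypothetical 2D centroids (the projects' real primitives and costs are far richer), and each real primitive is paired with exactly one synthetic primitive so that the total distance is minimal, which enforces the uniqueness constraint mentioned above.

```python
from itertools import permutations
import math

def match_primitives(real, synth):
    """Exhaustive one-to-one (bipartite) matching between two equally sized
    sets of 2D primitive centroids, minimising total Euclidean distance.
    Brute force over permutations is fine for the handful of primitives
    in this sketch; real systems would use the Hungarian algorithm."""
    best_cost, best = float("inf"), None
    for perm in permutations(range(len(synth))):
        cost = sum(math.dist(real[i], synth[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best = cost, list(enumerate(perm))
    return best_cost, best
```

Each pair in the result maps a real-primitive index to a synthetic-primitive index; unmatched or badly matched primitives would then flag a non-compliant assembly.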
9

Ayed, Rihab. "Recherche d’information agrégative dans des bases de graphes distribuées." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1305.

Abstract:
The subject of this thesis falls within the general framework of Information Retrieval and the management of massive, distributed data. Our problem concerns the evaluation and optimisation of aggregated queries (Aggregated Search). Aggregated Information Retrieval is a new paradigm providing access to massively distributed information. Its goal is to return to the user of an information retrieval system result objects that are rich and knowledge-bearing. These objects do not exist as such in the sources; they are built by assembling (or configuring, or aggregating) fragments coming from different sources. The sources may not be specified in the query expression but discovered dynamically during the search. We are particularly interested in exploiting data dependencies to optimise access to distributed sources. In this framework, we propose an approach for one of the sub-processes of aggregated-search systems, mainly the document indexing/organisation process. In this thesis we consider graph-oriented information retrieval systems (RDF graphs). Using the relations in the graphs, our work falls within Relational Aggregated Search, where relations are exploited to aggregate information fragments. We propose to optimise access to the information sources in an aggregated-search system. These sources contain information fragments that partially answer the query. The objective is to minimise the number of sources queried for each fragment of the query, and to maximise the fragment-aggregation operations performed within a single source. We propose to do this by reorganising the graph database(s) into several information clusters dedicated to aggregated queries. These clusters are obtained by semantic or structural clustering of the predicates of the RDF graphs. For structural clustering, we use frequent subgraph mining algorithms, and in this framework we carry out a comparative study of their performance. For semantic clustering, we use the descriptive metadata of the predicates, to which we apply semantic textual similarity tools. We define a query decomposition approach essentially based on the chosen clustering.
In this research, we are interested in investigating issues related to query evaluation and optimization in the framework of aggregated search. Aggregated search is a new paradigm to access massively distributed information. It aims to produce answers to queries by combining fragments of information from different sources. The queries search for objects (documents) that do not exist as such in the targeted sources, but are built from fragments extracted from the different sources. The sources might not be specified in the query expression; they are dynamically discovered at runtime. In our work, we consider data dependencies to propose a framework for optimizing query evaluation over distributed graph-oriented data sources. For this purpose, we propose an approach for the document indexing/organizing process of aggregated search systems. We consider information retrieval systems that are graph oriented (RDF graphs). Using graph relationships, our work falls within relational aggregated search, where relationships are used to aggregate fragments of information. Our goal is to optimize the access to sources of information in an aggregated search system. These sources contain fragments of information that are partially relevant to the query. We aim at minimizing the number of sources to query, and at maximizing the aggregation operations within a same source. For this, we propose to reorganize the graph database(s) in partitions dedicated to aggregated queries, using a semantic or structural clustering of RDF predicates. For structural clustering, we propose to use frequent subgraph mining algorithms, for which we performed a comparative study of their performances. For semantic clustering, we use the descriptive metadata of RDF predicates and apply semantic textual similarity methods to calculate their relatedness. Following the clustering, we define query decomposition rules based on the semantic/structural aspects of RDF predicates.
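A crude stand-in for the semantic clustering of predicates described above can be sketched as follows. The thesis uses proper semantic textual-similarity tools; token-overlap Jaccard similarity and the threshold here are assumptions for illustration only.

```python
def jaccard(a, b):
    """Token-level Jaccard similarity between two predicate labels."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def cluster_predicates(labels, threshold=0.5):
    """Greedy single-link clustering of RDF predicate labels by token
    overlap: a label joins the first cluster containing a sufficiently
    similar member, otherwise it starts a new cluster."""
    clusters = []
    for lab in labels:
        for c in clusters:
            if any(jaccard(lab, other) >= threshold for other in c):
                c.append(lab)
                break
        else:
            clusters.append([lab])
    return clusters
```

Labels such as "has author" and "has creator author" end up in the same cluster, while location-related predicates form their own groups; queries could then be decomposed along these clusters.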
10

Bui, Quang Anh. "Vers un système omni-langage de recherche de mots dans des bases de documents écrits homogènes." Thesis, La Rochelle, 2015. http://www.theses.fr/2015LAROS010/document.

Abstract:
The objective of our thesis is to build an omni-language word retrieval system for digitised documents. We place ourselves in the context where the content of the document is homogeneous (which is the case for ancient documents, where the writing is often carefully executed and single-writer) and where prior knowledge about the document (language, writer, writing style, stamp, etc.) is not available. Thanks to this system, the user can compose his or her query freely and intuitively, and can search for words in homogeneous documents of any language, without first having to locate an occurrence of the word to be searched. The key point of the system we propose is the invariants, which are the most frequent shapes in the document collection. For querying, the user can create the word to be searched using the invariants (query composition), through a visual interface. For word retrieval, the invariants can be used to build structural signatures representing the word images. In this thesis we present the method for automatically extracting the invariants from the document collection, the method for evaluating the quality of the invariants, and the applications of the invariants to word retrieval and query composition.
The objective of our thesis is to build an omni-language word retrieval system for scanned documents. We place ourselves in the context where the content of the documents is homogeneous and prior knowledge about the document (the language, the writer, the writing style, etc.) is not known. Thanks to this system, the user can freely and intuitively compose his/her query. With the query created by the user, he/she can retrieve words in homogeneous documents of any language, without first finding an occurrence of the word to search. The key of our proposed system is the invariants, which are writing pieces that appear frequently in the collection of documents. The invariants can be used in the query-making process, in which the user selects and composes appropriate invariants to make the query. They can also be used as structural descriptors to characterize word images in the retrieval process. We introduce in this thesis our method for automatically extracting invariants from a document collection, our method for evaluating the quality of invariants, and the invariants' applications in the query-making process as well as in the retrieval process.
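The idea of invariants as the most frequent pieces in a collection can be illustrated with a 1-D analogue (purely an illustration, not the thesis's image-based method): count all fixed-length windows over symbol sequences and keep the most frequent ones as "invariants".

```python
from collections import Counter

def frequent_invariants(sequences, length=3, top=2):
    """Count all fixed-length windows over the input sequences and keep
    the most frequent ones -- a 1-D analogue of extracting frequent
    writing pieces ('invariants') from a collection of word images."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - length + 1):
            counts[seq[i:i + length]] += 1
    return [pattern for pattern, _ in counts.most_common(top)]
```

The most frequent window plays the role of an invariant that could then be offered to the user for composing queries.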

Books on the topic "Extraction de graphes":

1

Lin, I.-Jong. Video object extraction and representation: Theory and applications. Boston, Mass: Kluwer Academic Publisher, 2000.

2

Shitov, Viktor. Information content development (by industry). ru: INFRA-M Academic Publishing LLC., 2022. http://dx.doi.org/10.12737/1853495.

Abstract:
The textbook describes methods of working with documentation, stages of development of technical specifications, annotation design, development of information content, development and layout of texts using specialized packages, standard presentation preparation packages, basic tools for working with raster and vector graphics, technologies for extracting information from text documents and databases, and much more. Meets the requirements of the federal state educational standards of secondary vocational education of the latest generation. For students of secondary vocational education institutions. It can be used when mastering the professional module "Administration of information resources" in the discipline "Development of information content (by industry)" for the specialty "Information systems and programming" when mastering the qualification "Information Resources Specialist".
3

Li, Ying. Video content analysis using multimodal information: For movie content extraction, indexing, and representation. Boston, MA: Kluwer Academic Publishers, 2003.

4

Bonneau, Georges-Pierre, Thomas Ertl, and Gregory M. Nielson, eds. Scientific Visualization: The Visual Extraction of Knowledge from Data (Mathematics and Visualization). Springer, 2005.

5

Eriksson, Olle, Anders Bergman, Lars Bergqvist, and Johan Hellsvik. Implementation. Oxford University Press, 2017. http://dx.doi.org/10.1093/oso/9780198788669.003.0007.

Abstract:
In this chapter, we will present the technical aspects of atomistic spin dynamics, in particular how the method can be implemented in actual computer software. This involves calculation of the effective field and creation of neighbour lists for setting up the geometry of the system of interest, as well as choosing a suitable integrator scheme for the SLL (or SLLG) equation. We also give examples of extraction and processing of relevant observables that are common output from simulations. Atomistic spin dynamics simulation can be a computationally heavy tool, but it is also very well adapted to modern computer architectures such as massively parallel computing and/or graphics processing units, and we provide examples of how to utilize these architectures in an efficient manner. We use our own developed software UppASD as an example, but the discussion can be applied to any other atomistic spin dynamics software.
6

Kung, S. Y., and I.-Jong Lin. Video Object Extraction and Representation: Theory and Applications (The Springer International Series in Engineering and Computer Science). Springer, 2000.

7

Kuo, C. C. Jay, and Ying Li. Video Content Analysis Using Multimodal Information: For Movie Content Extraction, Indexing and Representation. Springer, 2003.

8

Farrell, Edward J., Society of Photo-optical Instrumentation Engineers, SPSE--the Society for Imaging Science and Technology, Technical Association of the Graphic Arts, and SPIE/SPSE Symposium on Electronic Imaging Science and Technology (1990: Santa Clara, Calif.), eds. Extracting Meaning from Complex Data: Processing, Display, Interaction: 14-16 February 1990, Santa Clara, California. Bellingham, Wash., USA: SPIE, 1990.

9

Farrell, Edward J., Society of Photo-optical Instrumentation Engineers, IS&T--the Society for Imaging Science and Technology, and Rochester Institute of Technology, Center for Imaging Science, eds. Extracting Meaning from Complex Data: Processing, Display, Interaction II: 26-28 February 1991, San Jose, California. Bellingham, Wash., USA: SPIE, 1991.

10

Jähne, Bernd, Rudolf Mester, Erhardt Barth, and Hanno Scharr, eds. Complex Motion: First International Workshop, IWCM 2004, Günzburg, Germany, October 12-14, 2004, Revised Papers (Lecture Notes in Computer Science). Springer, 2007.


Book chapters on the topic "Extraction de graphes":

1

Carberry, Sandra, Stephanie Elzer, Richard Burns, Peng Wu, Daniel Chester, and Seniz Demir. "Information Graphics in Multimodal Documents." In Multimedia Information Extraction, 235–52. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2012. http://dx.doi.org/10.1002/9781118219546.ch15.

2

Carvalho, Danilo S., André Freitas, and João C. P. da Silva. "Graphia: Extracting Contextual Relation Graphs from Text." In Advanced Information Systems Engineering, 236–41. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41242-4_31.

3

Gibson, David, Ravi Kumar, Kevin S. McCurley, and Andrew Tomkins. "Dense Subgraph Extraction." In Mining Graph Data, 411–41. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2006. http://dx.doi.org/10.1002/9780470073049.ch16.

4

Kejriwal, Mayank. "Information Extraction." In Domain-Specific Knowledge Graph Construction, 9–31. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-12375-8_2.

5

Shi, Wei, Weiguo Zheng, Jeffrey Xu Yu, Hong Cheng, and Lei Zou. "Keyphrase Extraction Using Knowledge Graphs." In Web and Big Data, 132–48. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-63579-8_11.

6

Aggarwal, Charu C. "Information Extraction and Knowledge Graphs." In Machine Learning for Text, 419–63. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96623-2_13.

7

Pamula, Wiesław. "Feature Extraction Using Reconfigurable Hardware." In Computer Vision and Graphics, 158–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15907-7_20.

8

Martinez-Rodriguez, Jose L., Ivan Lopez-Arevalo, Ana B. Rios-Alvarado, Julio Hernandez, and Edwin Aldana-Bobadilla. "Extraction of RDF Statements from Text." In Knowledge Graphs and Semantic Web, 87–101. Cham: Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-21395-4_7.

9

van Bakel, Ruud, Teodor Aleksiev, Daniel Daza, Dimitrios Alivanistos, and Michael Cochez. "Approximate Knowledge Graph Query Answering: From Ranking to Binary Classification." In Lecture Notes in Computer Science, 107–24. Cham: Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-72308-8_8.

Abstract:
Large, heterogeneous datasets are characterized by missing or even erroneous information. This is more evident when they are the product of community effort or automatic fact extraction methods from external sources, such as text. A special case of the aforementioned phenomenon can be seen in knowledge graphs, where this mostly appears in the form of missing or incorrect edges and nodes. Structured querying on such incomplete graphs will result in incomplete sets of answers, even if the correct entities exist in the graph, since one or more edges needed to match the pattern are missing. To overcome this problem, several algorithms for approximate structured query answering have been proposed. Inspired by modern Information Retrieval metrics, these algorithms produce a ranking of all entities in the graph, and their performance is further evaluated based on how high in this ranking the correct answers appear. In this work we take a critical look at this way of evaluation. We argue that performing a ranking-based evaluation is not sufficient to assess methods for complex query answering. To solve this, we introduce Message Passing Query Boxes (MPQB), which takes binary classification metrics back into use and shows the effect this has on the recently proposed query embedding method MPQE.
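The contrast drawn in this abstract between ranking-style and classification-style evaluation can be sketched generically (these are standard metrics used as an illustration, not the MPQB metric itself): a ranking metric rewards the position of the first correct answer, while a classification metric thresholds scores into yes/no decisions and scores those decisions.

```python
def mean_reciprocal_rank(ranked, answers):
    """Ranking-style evaluation: reciprocal rank of the first correct
    answer in a ranked list of candidate entities."""
    for pos, entity in enumerate(ranked, start=1):
        if entity in answers:
            return 1.0 / pos
    return 0.0

def binary_f1(scores, answers, threshold):
    """Classification-style evaluation: threshold the per-entity scores
    into a predicted answer set, then score the decisions with F1."""
    predicted = {e for e, s in scores.items() if s >= threshold}
    tp = len(predicted & answers)
    if not predicted or not answers or tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(answers)
    return 2 * precision * recall / (precision + recall)
```

The same model output can look good under one metric and poor under the other, which is the paper's point about evaluation choices.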
10

Gerstner, Thomas, and Martin Rumpf. "Multi-Resolutional Parallel Isosurface Extraction based on Tetrahedral Bisection." In Volume Graphics, 267–78. London: Springer London, 2000. http://dx.doi.org/10.1007/978-1-4471-0737-8_17.


Conference papers on the topic "Extraction de graphes":

1

Molokwu, Bonaventure. "Event Prediction in Complex Social Graphs using One-Dimensional Convolutional Neural Network." In Twenty-Eighth International Joint Conference on Artificial Intelligence {IJCAI-19}. California: International Joint Conferences on Artificial Intelligence Organization, 2019. http://dx.doi.org/10.24963/ijcai.2019/914.

Abstract:
Social network graphs possess apparent and latent knowledge about their respective actors and links which may be exploited, using effective and efficient techniques, for predicting events within the social graphs. Understanding the intrinsic relationship patterns among spatial social actors and their respective properties is a crucial factor to be taken into consideration in event prediction within social networks. My research work proposes a unique approach for predicting events in social networks by learning the context of each actor/vertex using neighboring actors in a given social graph, with the goal of generating vector-space embeddings for each vertex. Our methodology introduces a pre-convolution layer, which is essentially a set of feature-extraction operations aimed at reducing the graph's dimensionality to aid knowledge extraction from its complex structure. Consequently, the low-dimensional node embeddings are introduced as input features to a one-dimensional ConvNet model for event prediction about the given social graph. Training and evaluation of this proposed approach have been done on datasets (compiled: November, 2017) extracted from real-world social networks with respect to 3 European countries. Each dataset comprises an average of 280,000 links and 48,000 actors.
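The core operation of the one-dimensional ConvNet applied to the node embeddings can be sketched in a few lines (valid-mode, single channel, fixed kernel; real frameworks add multiple channels, padding, pooling and learned kernels):

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1-D convolution (really cross-correlation, as in most
    deep-learning libraries) of a node-embedding vector with a kernel."""
    out = []
    k = len(kernel)
    for start in range(0, len(signal) - k + 1, stride):
        # Dot product of the kernel with the current window of the signal.
        out.append(sum(signal[start + i] * kernel[i] for i in range(k)))
    return out
```

Sliding an edge-detector-like kernel `[1, 0, -1]` over an embedding `[1, 2, 3, 4]` yields `[-2, -2]`, i.e. one response per window position.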
2

Yang, Huifan, Da-Wei Li, Zekun Li, Donglin Yang, and Bin Wu. "Open Relation Extraction with Non-existent and Multi-span Relationships." In 19th International Conference on Principles of Knowledge Representation and Reasoning {KR-2022}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/kr.2022/37.

Abstract:
Open relation extraction (ORE) aims to assign semantic relationships among arguments, essential to the automatic construction of knowledge graphs (KG). The previous ORE methods and some benchmark datasets consider a relation between two arguments as definitely existing and in a simple single-span form, neglecting possible non-existent relationships and flexible, expressive multi-span relations. However, detecting non-existent relations is necessary for a pipelined information extraction system (first performing named entity recognition then relation extraction), and multi-span relationships contribute to the diversity of connections in KGs. To fulfill the practical demands of ORE, we design a novel Query-based Multi-head Open Relation Extractor (QuORE) to extract single/multi-span relations and detect non-existent relationships effectively. Moreover, we re-construct some public datasets covering English and Chinese to derive augmented and multi-span relation tuples. Extensive experiment results show that our method outperforms the state-of-the-art ORE model LOREM in the extraction of existing single/multi-span relations and the overall performances on four datasets with non-existent relationships.
3

Lee, See Hian, Feng Ji, and Wee Peng Tay. "SGAT: Simplicial Graph Attention Network." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/443.

Abstract:
Heterogeneous graphs have multiple node and edge types and are semantically richer than homogeneous graphs. To learn such complex semantics, many graph neural network approaches for heterogeneous graphs use metapaths to capture multi-hop interactions between nodes. Typically, features from non-target nodes are not incorporated into the learning procedure. However, there can be nonlinear, high-order interactions involving multiple nodes or edges. In this paper, we present Simplicial Graph Attention Network (SGAT), a simplicial complex approach to represent such high-order interactions by placing features from non-target nodes on the simplices. We then use attention mechanisms and upper adjacencies to generate representations. We empirically demonstrate the efficacy of our approach with node classification tasks on heterogeneous graph datasets and further show SGAT's ability in extracting structural information by employing random node features. Numerical experiments indicate that SGAT performs better than other current state-of-the-art heterogeneous graph learning methods.
4

Fei, Hao, Jingye Li, Shengqiong Wu, Chenliang Li, Donghong Ji, and Fei Li. "Global Inference with Explicit Syntactic and Discourse Structures for Dialogue-Level Relation Extraction." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/570.

Abstract:
Recent research attention for relation extraction has been paid to the dialogue scenario, i.e., dialogue-level relation extraction (DiaRE). Existing DiaRE methods either simply concatenate the utterances in a dialogue into a long piece of text, or employ naive words, sentences or entities to build dialogue graphs, while the structural characteristics in dialogues have not been fully utilized. In this work, we investigate a novel dialogue-level mixed dependency graph (D2G) and an argument reasoning graph (ARG) for DiaRE with a global relation reasoning mechanism. First, we model the entire dialogue into a unified and coherent D2G by explicitly integrating both syntactic and discourse structures, which enables richer semantic and feature learning for relation extraction. Second, we stack an ARG graph on top of D2G to further focus on argument inter-dependency learning and argument representation refinement, for sufficient argument relation inference. In our global reasoning framework, D2G and ARG work collaboratively, iteratively performing lexical, syntactic and semantic information exchange and representation learning over the entire dialogue context. On two DiaRE benchmarks, our framework shows considerable improvements over the current state-of-the-art baselines. Further analyses show that the model effectively solves the long-range dependence issue, and meanwhile gives explainable predictions.
5

Faralli, Stefano, Irene Finocchi, Simone Paolo Ponzetto, and Paola Velardi. "Efficient Pruning of Large Knowledge Graphs." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. California: International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/564.

Abstract:
In this paper we present an efficient and highly accurate algorithm to prune noisy or over-ambiguous knowledge graphs, given as input an extensional definition of a domain of interest, namely a set of instances or concepts. Our method climbs the graph in a bottom-up fashion, iteratively layering the graph and pruning nodes and edges in each layer while not compromising the connectivity of the set of input nodes. Iterative layering and protection of pre-defined nodes allow us to extract semantically coherent DAG structures from noisy or over-ambiguous cyclic graphs, without loss of information and without incurring the computational bottlenecks that are the main problem of state-of-the-art methods for cleaning large, i.e., Web-scale, knowledge graphs. We apply our algorithm to the tasks of pruning automatically acquired taxonomies using benchmarking data from a SemEval evaluation exercise, as well as the extraction of a domain-adapted taxonomy from the Wikipedia category hierarchy. The results show the superiority of our approach over state-of-the-art algorithms in terms of both output quality and computational efficiency.
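A minimal sketch of the bottom-up climb the abstract describes: starting from the protected input nodes, walk hypernymy edges upward layer by layer and keep only nodes on some upward path from an input node; everything else is pruned, and a visit-once guard breaks cycles. This is a simplification of the paper's layering procedure, under assumed names.

```python
from collections import deque

def prune_upward(parents, seeds):
    """Keep only nodes reachable upward from the protected seed nodes.
    `parents` maps node -> iterable of parent nodes; cycles are broken
    by never revisiting a node."""
    kept, frontier = set(seeds), deque(seeds)
    while frontier:
        node = frontier.popleft()
        for parent in parents.get(node, ()):
            if parent not in kept:   # visit-once guard breaks cycles
                kept.add(parent)
                frontier.append(parent)
    return kept
```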
6

Cádiz, Rodrigo F., Lothar Droppelmann, Max Guzmán, and Cristian Tejos. "Auditory Graphs from Denoising Real Images Using Fully Symmetric Convolutional Neural Networks." In ICAD 2021: The 26th International Conference on Auditory Display. icad.org: International Community for Auditory Display, 2021. http://dx.doi.org/10.21785/icad2021.028.

Abstract:
Auditory graphs are a very useful way to deliver numerical information to visually impaired users. Several tools have been proposed for chart data sonification, including audible spreadsheets, custom interfaces, interactive tools and automatic models. In the case of the latter, most of these models are aimed towards the extraction of contextual information, and few solutions have been proposed for generating an auditory graph directly from the pixels of an image by automatically extracting the underlying data. These kinds of tools can dramatically augment the availability and usability of auditory graphs for the visually impaired community. We propose a deep learning-based approach for the automatic sonification of an image containing a bar or a line chart using only pixel information. In particular, we take a denoising approach to this problem, based on a fully symmetric convolutional neural network architecture. Our results show that this approach works as a basis for the automatic sonification of charts directly from the information contained in the pixels of an image.
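A toy sketch of the data-recovery and mapping step that would follow the denoising network: recover each column's bar height from a cleaned binary image, then map heights linearly onto a frequency range for sonification. The image encoding and frequency mapping are illustrative assumptions.

```python
def bar_heights(binary_image):
    """Given a denoised binary chart image (list of rows, 1 = ink),
    recover each column's bar height by counting filled pixels
    upward from the bottom row."""
    n_rows = len(binary_image)
    heights = []
    for col in range(len(binary_image[0])):
        h = 0
        for row in range(n_rows - 1, -1, -1):
            if binary_image[row][col]:
                h += 1
            else:
                break
        heights.append(h)
    return heights

def to_frequencies(heights, f_min=220.0, f_max=880.0):
    """Map recovered heights linearly onto a tone-frequency range."""
    top = max(heights) or 1
    return [f_min + (f_max - f_min) * h / top for h in heights]
```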
7

Gupta, Vishal, Manoj Chinnakotla, and Manish Shrivastava. "Retrieve and Re-rank: A Simple and Effective IR Approach to Simple Question Answering over Knowledge Graphs." In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER). Stroudsburg, PA, USA: Association for Computational Linguistics, 2018. http://dx.doi.org/10.18653/v1/w18-5504.

8

Molokwu, Bonaventure C., and Ziad Kobti. "Social Network Analysis using RLVECN: Representation Learning via Knowledge-Graph Embeddings and Convolutional Neural-Network." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/739.

Abstract:
Social Network Analysis (SNA) has become a very interesting research topic with regard to Artificial Intelligence (AI) because a wide range of activities, comprising animate and inanimate entities, can be examined by means of social graphs. Consequently, classification and prediction tasks in SNA remain open problems with respect to AI. Latent representations about social graphs can be effectively exploited for training AI models in a bid to detect clusters via classification of actors as well as predict ties with regard to a given social network. The inherent representations of a social graph are relevant to understanding the nature and dynamics of a given social network. Thus, our research work proposes a unique hybrid model: Representation Learning via Knowledge-Graph Embeddings and ConvNet (RLVECN). RLVECN is designed for studying and extracting meaningful representations from social graphs to aid in node classification, community detection, and link prediction problems. RLVECN utilizes an edge sampling approach for exploiting features of the social graph via learning the context of each actor with respect to its neighboring actors.
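A minimal sketch in the spirit of the edge-sampling step mentioned above: each observed (actor, neighbour) pair becomes a positive training example, paired with uniformly drawn non-edges as negatives. The sampling details are assumptions for illustration (and assume a sparse graph, so non-edges exist), not RLVECN's exact procedure.

```python
import random

def sample_edges(adj, n_neg, seed=0):
    """Return positive (actor, neighbour) pairs and `n_neg` uniformly
    sampled non-edges from an adjacency mapping node -> set of
    neighbours."""
    rng = random.Random(seed)
    nodes = sorted(adj)
    positives = [(u, v) for u in nodes for v in sorted(adj[u])]
    negatives = []
    while len(negatives) < n_neg:
        u, v = rng.choice(nodes), rng.choice(nodes)
        if u != v and v not in adj[u]:
            negatives.append((u, v))
    return positives, negatives
```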
9

Popova, Ekaterina, and Vladimir Spitsyn. "Sentiment Analysis of Short Russian Texts Using BERT and Word2Vec Embeddings." In 31th International Conference on Computer Graphics and Vision. Keldysh Institute of Applied Mathematics, 2021. http://dx.doi.org/10.20948/graphicon-2021-3027-1011-1016.

Abstract:
This article is devoted to modern approaches for sentiment analysis of short Russian texts from social networks using deep neural networks. Sentiment analysis is the process of detecting, extracting, and classifying opinions, sentiments, and attitudes concerning different topics expressed in texts. The importance of this topic is linked to the growth and popularity of social networks, online recommendation services, news portals, and blogs, all of which contain a significant number of people's opinions on a variety of topics. In this paper, we propose machine-learning techniques with BERT and Word2Vec embeddings for tweet sentiment analysis. Two approaches were explored: (a) extracting word embeddings and using a DNN classifier; (b) fine-tuning the pre-trained BERT model. As a result, the fine-tuned BERT model outperformed the embedding-based approach.
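A toy sketch of approach (a): average pretrained Word2Vec-style vectors into a document embedding, then classify by the nearest class centroid (a stand-in for the DNN classifier). Embeddings and centroids here are tiny hand-made assumptions, not the paper's models.

```python
import math

def doc_vector(tokens, embeddings):
    """Average pretrained word vectors into a single document
    embedding; out-of-vocabulary tokens are skipped."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    dim = len(next(iter(embeddings.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def classify(tokens, embeddings, centroids):
    """Assign the sentiment class whose centroid is closest in
    Euclidean distance to the document embedding."""
    d = doc_vector(tokens, embeddings)
    return min(centroids, key=lambda c: math.dist(d, centroids[c]))
```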
10

Shang, Yu-Ming, Heyan Huang, Xin Sun, Wei Wei, and Xian-Ling Mao. "Relational Triple Extraction: One Step is Enough." In Thirty-First International Joint Conference on Artificial Intelligence {IJCAI-22}. California: International Joint Conferences on Artificial Intelligence Organization, 2022. http://dx.doi.org/10.24963/ijcai.2022/605.

Abstract:
Extracting relational triples from unstructured text is an essential task in natural language processing and knowledge graph construction. Existing approaches usually contain two fundamental steps: (1) finding the boundary positions of head and tail entities; (2) concatenating specific tokens to form triples. However, nearly all previous methods suffer from the problem of error accumulation, i.e., the boundary recognition error of each entity in step (1) will be accumulated into the final combined triples. To solve the problem, in this paper, we introduce a fresh perspective to revisit the triple extraction task and propose a simple but effective model, named DirectRel. Specifically, the proposed model first generates candidate entities through enumerating token sequences in a sentence, and then transforms the triple extraction task into a linking problem on a "head -> tail" bipartite graph. By doing so, all triples can be directly extracted in only one step. Extensive experimental results on two widely used datasets demonstrate that the proposed model performs better than the state-of-the-art baselines.
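A toy sketch of the one-step formulation: enumerate contiguous token spans as candidate entities, then score every (head span, tail span) pair on a bipartite graph and emit triples directly. The `score` function stands in for the learned pair scorer; everything else is an illustrative assumption, not DirectRel's implementation.

```python
def candidate_spans(tokens, max_len=3):
    """Step 1: enumerate contiguous token spans (i, j), up to
    max_len tokens, as candidate entities."""
    return [(i, j) for i in range(len(tokens))
            for j in range(i + 1, min(i + 1 + max_len, len(tokens) + 1))]

def extract_triples(tokens, score, max_len=3):
    """Step 2, done in one shot: score every (head, tail) span pair on
    the head -> tail bipartite graph. `score` returns a relation label
    or None; triples are emitted directly, with no boundary step."""
    spans = candidate_spans(tokens, max_len)
    return [(h, r, t) for h in spans for t in spans if h != t
            for r in [score(h, t)] if r is not None]
```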

Organizational reports on the topic "Extraction de graphes":

1

Reedy, Geoffrey, Alex Bertels, and Asael Sorensen. Understanding Data Structures by Extracting Memory Access Graphs. Office of Scientific and Technical Information (OSTI), October 2017. http://dx.doi.org/10.2172/1813903.

2

Reisch, Bruce, Pinhas Spiegel-Roy, Norman Weeden, Gozal Ben-Hayyim, and Jacques Beckmann. Genetic Analysis in vitis Using Molecular Markers. United States Department of Agriculture, April 1995. http://dx.doi.org/10.32747/1995.7613014.bard.

Abstract:
Genetic analysis and mapping in grapes has been difficult because of the long generation period and paucity of genetic markers. In the present work, chromosome linkage maps were developed with RAPD, RFLP and isozyme loci in interspecific hybrid cultivars, and RAPD markers were produced in a V. vinifera population. In three cultivars, there were 19 linkage groups as expected for a species with 38 somatic chromosomes. These maps were used to locate chromosome regions with linkages to important genes, including those influencing powdery mildew and botrytis bunch rot resistance; flower sex; and berry shape. In V. vinifera, the occurrence of specific markers was correlated with seedlessness, muscat flavor and fruit color. Polymorphic RAPD bands included single copy as well as repetitive DNA. Mapping procedures were improved by optimizing PCR parameters with grape DNA; by the development of an efficient DNA extraction protocol; and with the use of long (17- to 24-mer) primers which amplify more polymorphic loci per primer. DNA fingerprint analysis with RAPD markers indicated that vinifera cultivars could be separated readily with RAPD profiles. Pinot gris, thought to be a sort of Pinot noir, differed by 12 bands from Pinot noir. This suggests that while Pinot gris may be related to Pinot noir, it is not likely to be a clone. The techniques developed in this project are now being further refined to use marker-assisted selection in breeding programs for the early selection of elite seedlings. Furthermore, the stage has been set for future attempts to clone genes from grapes based upon map locations.
