Journal articles on the topic "Semi-Semantic knowledge base"

To see other types of publications on this topic, follow the link: Semi-Semantic knowledge base.

Create a reference in APA, MLA, Chicago, Harvard, and other citation styles


Consult the top 30 journal articles for your research on the topic "Semi-Semantic knowledge base".

Next to every work in the list of references there is an "Add to bibliography" button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the scholarly publication as a .pdf file and read its abstract online, whenever these details are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Willmes, Christian, Finn Viehberg, Sarah Esteban Lopez, and Georg Bareth. "CRC806-KB: A Semantic MediaWiki Based Collaborative Knowledge Base for an Interdisciplinary Research Project." Data 3, no. 4 (October 25, 2018): 44. http://dx.doi.org/10.3390/data3040044.

Abstract:
In the frame of an interdisciplinary research project concerned with data from heterogeneous domains, such as archaeology, the cultural sciences, and the geosciences, a web-based Knowledge Base system was developed to facilitate and improve research collaboration between the project participants. The presented system is based on a Wiki enhanced with a semantic extension, which makes it possible to store and query structured data within the Wiki. Using an additional open-source tool for schema-driven development of the data model and of the Knowledge Base structure improved the collaborative data-model development process and allowed data imports and updates to be semi-automated. The paper presents the system architecture as well as some example applications of a collaborative Wiki-based Knowledge Base infrastructure.
2

Monteiro, Luciane Lena Pessanha, and Mark Douglas de Azevedo Jacyntho. "Use of Linked Data principles for semantic management of scanned documents." Transinformação 28, no. 2 (August 2016): 241–51. http://dx.doi.org/10.1590/2318-08892016000200010.

Abstract:
The study addresses the use of the Semantic Web and the Linked Data principles proposed by the World Wide Web Consortium for the development of a Web application for the semantic management of scanned documents. The main goal is to record scanned documents, describing them in a way that the machine is able to understand and process, filtering content and assisting us in searching for such documents when a decision-making process is under way. To this end, machine-understandable metadata, created through the use of reference Linked Data ontologies, are associated with the documents, creating a knowledge base. To further enrich the process, a (semi-)automatic mashup of these metadata with data from the new Web of Linked Data is carried out, considerably increasing the scope of the knowledge base and making it possible to extract new data related to the content of the stored documents from the Web and combine them, without the user making any effort or perceiving the complexity of the whole process.
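The core mechanism, attaching machine-understandable Linked Data metadata to a scanned document, can be sketched roughly as follows with rdflib; the namespace, document identifier, and property values are invented placeholders, not the ontologies used in the paper.

```python
# A minimal sketch (not the authors' implementation): describing a scanned
# document with Dublin Core terms in RDF using rdflib, so that the record
# becomes machine-processable and linkable to other Linked Data sources.
from rdflib import Graph, URIRef, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, FOAF

EX = Namespace("http://example.org/docs/")          # hypothetical namespace
g = Graph()
doc = EX["scan-0001"]

g.add((doc, RDF.type, FOAF.Document))
g.add((doc, DCTERMS.title, Literal("Purchase contract no. 42", lang="en")))
g.add((doc, DCTERMS.creator, Literal("Records Office")))
g.add((doc, DCTERMS.issued, Literal("2015-03-10")))
g.add((doc, DCTERMS.subject, URIRef("http://dbpedia.org/resource/Contract")))

print(g.serialize(format="turtle"))
```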
3

Juan and Faber. "Extraction of Terms Related to Named Rivers." Languages 4, no. 3 (June 27, 2019): 46. http://dx.doi.org/10.3390/languages4030046.

Abstract:
EcoLexicon is a terminological knowledge base on environmental science, whose design permits the geographic contextualization of data. For the geographic contextualization of landform concepts, this paper presents a semi-automatic method for extracting terms associated with named rivers (e.g., Mississippi River). Terms were extracted from a specialized corpus, where named rivers were automatically identified. Statistical procedures were applied for selecting both terms and rivers in distributional semantic models to construct the conceptual structures underlying the usage of named rivers. The rivers sharing associated terms were also clustered and represented in the same conceptual network. The results showed that the method successfully described the semantic frames of named rivers with explanatory adequacy, according to the premises of Frame-Based Terminology.
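As a rough illustration of the distributional-semantics step (not the EcoLexicon pipeline itself), a small word-embedding model can be trained and queried for the terms most associated with a named river; the toy corpus and parameters below are placeholders.

```python
# Illustrative sketch only: training a small distributional model and querying
# the terms most associated with a named river. The toy corpus and settings
# are placeholders, not the specialized corpus or method of the paper.
from gensim.models import Word2Vec

corpus = [
    ["mississippi_river", "floodplain", "levee", "sediment", "discharge"],
    ["mississippi_river", "delta", "sediment", "navigation"],
    ["ebro_river", "irrigation", "delta", "reservoir"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, seed=1)
for term, score in model.wv.most_similar("mississippi_river", topn=5):
    print(term, round(score, 3))
```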
4

Celik, Duygu, and Atilla Elci. "Semantic composition of business processes using Armstrong's Axioms." Knowledge Engineering Review 29, no. 2 (March 2014): 248–64. http://dx.doi.org/10.1017/s0269888914000083.

Abstract:
Lack of sufficient semantic description in the content of Web services makes it difficult to find and compose suitable Web services during analysis, search, and matching processes. Semantic Web Services are Web services that have been enhanced with a formal semantic description, which provides well-defined meaning. Thanks to these added semantics, user demands can be met through logical deductions that reach resolutions automatically. We have developed an inference-based semantic business process composition agent (SCA) that employs inference techniques. The semantic composition agent system is responsible for the synthesis of new services from existing ones in a semi-automatic fashion. The SCA System composes available Web Ontology Language for Web services atomic processes, utilizing Revised Armstrong's Axioms (RAAs) to infer functional dependencies. RAAs are embedded in the knowledge-base ontologies of the SCA System. Experiments show that the proposed SCA System produces process sequences as a composition plan that satisfies the user's requirements for a complex task. The novelty of the SCA System is that, for the first time, Armstrong's Axioms are revised and used for semantic-based planning and inferencing of Web services.
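The functional-dependency reasoning that Armstrong's Axioms license can be illustrated with the classical attribute-closure algorithm; the sketch below is generic and hypothetical, not the RAA-based planner of the SCA System.

```python
# Generic sketch of reasoning with functional dependencies (the classical
# attribute-closure algorithm that Armstrong's Axioms justify); it is not the
# RAA-based composition planner of the SCA System itself.
def closure(attrs, fds):
    """Return the closure of a set of attributes under the given FDs."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

# Hypothetical dependencies between service inputs/outputs: one service
# derives B from A, another derives C from A and B.
fds = [({"A"}, {"B"}), ({"A", "B"}, {"C"})]
print(closure({"A"}, fds))   # {'A', 'B', 'C'} -> a composition reaching C exists
```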
5

Rangel, Carlos Ramón, Junior Altamiranda, Mariela Cerrada, and Jose Aguilar. "Procedure Based on Semantic Similarity for Merging Ontologies by Non-Redundant Knowledge Enrichment." International Journal of Knowledge Management 14, no. 2 (April 2018): 16–36. http://dx.doi.org/10.4018/ijkm.2018040102.

Abstract:
The merging procedures of two ontologies are mostly related to the enrichment of one of the input ontologies, i.e., the knowledge of the aligned concepts from one ontology is copied into the other ontology. As a consequence, the resulting new ontology extends the original knowledge of the base ontology, but the unaligned concepts of the other ontology are not considered in the new extended ontology. On the other hand, there are expert-aided semi-automatic approaches to accomplish the task of including the knowledge that is left out of the resulting merged ontology and debugging the possible concept redundancy. To meet the need of including all the knowledge of the ontologies to be merged without redundancy, this article proposes an automatic approach for merging ontologies, which is based on semantic similarity measures and exhaustive searching for the closest concepts. The authors' approach was compared to other merging algorithms, and good results are obtained in terms of completeness, relationships, and properties, without creating redundancy.
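The general idea, copying a concept from the second ontology only when no sufficiently similar concept already exists, can be sketched as follows; the token-overlap measure and the 0.5 threshold are simple stand-ins for the paper's semantic similarity measures.

```python
# Minimal sketch of similarity-driven, non-redundant merging: concepts from the
# second ontology are copied only when no sufficiently similar concept already
# exists. Token-overlap (Jaccard) is a stand-in for the semantic measure.
def jaccard(a, b):
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def merge(base, other, threshold=0.5):
    merged = list(base)
    for concept in other:
        if max((jaccard(concept, c) for c in merged), default=0.0) < threshold:
            merged.append(concept)          # unaligned concept: enrich the base
    return merged

base_onto = ["water_pump", "electric_motor", "bearing"]
other_onto = ["pump_water", "rotor_shaft", "bearing"]
print(merge(base_onto, other_onto))   # adds only 'rotor_shaft'
```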
6

Zhou, Lu-jie, Zhi-peng Zhao, and Jian-wu Dang. "Combining BERT Model with Semi-Supervised Incremental Learning for Heterogeneous Knowledge Fusion of High-Speed Railway On-Board System." Computational Intelligence and Neuroscience 2022 (May 31, 2022): 1–15. http://dx.doi.org/10.1155/2022/9948218.

Abstract:
On-board system fault knowledge base (KB) is a collection of fault causes, maintenance methods, and interrelationships among on-board modules and components of high-speed railways, which plays a crucial role in knowledge-driven dynamic operation and maintenance (O&M) decisions for on-board systems. To solve the problem of multi-source heterogeneity of on-board system O&M data, an entity matching (EM) approach using the BERT model and semi-supervised incremental learning is proposed. The heterogeneous knowledge fusion task is formulated as a pairwise binary classification task of entities in the knowledge units. Firstly, the deep semantic features of fault knowledge units are obtained by BERT. We also investigate the effectiveness of knowledge unit features extracted from different hidden layers of the model on heterogeneous knowledge fusion during model fine-tuning. To further improve the utilization of unlabeled test samples, a semi-supervised incremental learning strategy based on pseudo labels is devised. By selecting entity pairs with high confidence to generate pseudo labels, the label sample set is expanded to realize incremental learning and enhance the knowledge fusion ability of the model. Furthermore, the model’s robustness is strengthened by embedding-based adversarial training in the fine-tuning stage. Based on the on-board system’s O&M data, this paper constructs the fault KB and compares the model with other solutions developed for related matching tasks, which verifies the effectiveness of this model in the heterogeneous knowledge fusion task of the on-board system.
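The pseudo-labelling loop can be illustrated independently of BERT; in the sketch below a scikit-learn classifier stands in for the fine-tuned encoder, and the data and the 0.95 confidence threshold are invented.

```python
# Sketch of the pseudo-labelling loop only (a linear classifier stands in for
# the fine-tuned BERT encoder; features, data and the 0.95 threshold are
# illustrative, not the paper's setup).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab, y_lab = rng.normal(size=(40, 8)), rng.integers(0, 2, 40)   # labelled pairs
X_unlab = rng.normal(size=(200, 8))                               # unlabelled pairs

clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
for _ in range(3):                                   # incremental rounds
    proba = clf.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= 0.95            # high-confidence entity pairs
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]                    # remove pseudo-labelled pairs
    clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
```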
7

Wang, Tiexin, Jingwen Cao, Chuanqi Tao, Zhibin Yang, Yi Wu, and Bohan Li. "A Configurable Semantic-Based Transformation Method towards Conceptual Models." Discrete Dynamics in Nature and Society 2020 (September 27, 2020): 1–14. http://dx.doi.org/10.1155/2020/6718087.

Abstract:
Conceptual models are built to depict and analyze complex systems. They are made of concepts and relationships among these concepts. In a particular domain, conceptual models help different stakeholders reach a clear and unified view of domain problems. However, the process of building conceptual models is time-consuming, tedious, and demanding of expertise. To improve the efficiency of the building process, this paper proposes a configurable semantic-based (semi-)automatic conceptual model transformation methodology (SbACMT) that tries to reuse existing conceptual models to generate new models. SbACMT contains three parts: (i) a configurable semantic relatedness computing method built on the structured linguistic knowledge base "ConceptNet" (SRCM-CNet), (ii) a specific meta-model, which follows the Ecore standard and defines the rules for applying SRCM-CNet to different conceptual models to automatically detect transformation mappings, and (iii) a multistep matching and transformation process that employs SRCM-CNet. A case study is carried out to detail the working mechanism of SbACMT. Furthermore, through a systematic analysis of this case study, we validate the performance of SbACMT. We prove that SbACMT can support the automatic transformation process of conceptual models (e.g., class diagrams). The scalability of SbACMT can be improved by adapting the meta-model and the predefined syntax transformation rules.
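A plausible, much-simplified way to obtain a ConceptNet-based relatedness score is to call the public ConceptNet web API; the endpoint and response shape below are assumptions about that service, and SRCM-CNet's actual computation may differ.

```python
# Rough sketch of looking up term relatedness against the public ConceptNet
# service (assumes the api.conceptnet.io/relatedness endpoint and its JSON
# shape; SRCM-CNet's actual computation over ConceptNet may differ).
import requests

def relatedness(term1, term2, lang="en"):
    url = "https://api.conceptnet.io/relatedness"
    params = {"node1": f"/c/{lang}/{term1}", "node2": f"/c/{lang}/{term2}"}
    resp = requests.get(url, params=params, timeout=10)
    resp.raise_for_status()
    return resp.json().get("value")

print(relatedness("class_diagram", "conceptual_model"))
```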
8

León-Paredes, Gabriel Alejandro, Liliana Ibeth Barbosa-Santillán, Juan Jaime Sánchez-Escobar, and Antonio Pareja-Lora. "Ship-SIBISCaS: A First Step towards the Identification of Potential Maritime Law Infringements by means of LSA-Based Image." Scientific Programming 2019 (March 3, 2019): 1–14. http://dx.doi.org/10.1155/2019/1371328.

Abstract:
Maritime safety and security are being constantly jeopardized. Therefore, identifying maritime flow irregularities (semi-)automatically may be crucial to ensure maritime security in the future. This paper presents a Ship Semantic Information-Based, Image Similarity Calculation System (Ship-SIBISCaS), which constitutes a first step towards the automatic identification of this kind of maritime irregularities. In particular, the main goal of Ship-SIBISCaS is to automatically identify the type of ship depicted in a given image (such as abandoned, cargo, container, hospital, passenger, pirate, submersible, three-decker, or warship) and, thus, classify it accordingly. This classification is achieved in Ship-SIBISCaS by finding out the similarity of the ship image and/or description with other ship images and descriptions included in its knowledge base. This similarity is calculated by means of an LSA algorithm implementation that is run on a parallel architecture consisting of CPUs and GPUs (i.e., a heterogeneous system). This implementation of the LSA algorithm has been trained with a collection of texts, extracted from Wikipedia, that associate some semantic information to ImageNet ship images. Thanks to its parallel architecture, the indexing process of this image retrieval system has been accelerated 10 times (for the 1261 ships included in ImageNet). The range of the precision of the image similarity method is 46% to 92% with 100% recall (that is, a 100% coverage of the database).
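The LSA similarity step can be sketched with scikit-learn (TF-IDF, truncated SVD, cosine similarity); the toy ship descriptions below are invented, and the paper's parallel CPU/GPU implementation is not reproduced.

```python
# Sketch of the LSA similarity step on toy ship descriptions (TF-IDF followed
# by truncated SVD and cosine similarity); the paper's parallel CPU/GPU
# implementation and Wikipedia training data are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "cargo ship carrying containers across the ocean",
    "container vessel loaded with freight containers",
    "hospital ship providing medical care at sea",
]

tfidf = TfidfVectorizer().fit_transform(docs)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
print(cosine_similarity(lsa[:1], lsa))   # query doc 0 against all descriptions
```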
9

Laukaitis, Algirdas, and Neda Laukaitytė. "SEMI-AUTOMATIC ONTOLOGICAL ALIGNMENT OF DIGITIZED BOOKS PARALLEL CORPORA." Mokslas - Lietuvos ateitis 13 (July 2, 2021): 1–8. http://dx.doi.org/10.3846/mla.2021.15034.

Abstract:
In this paper, we present a method for integrating general ontology management with the alignment of a digitized-book paraphrase corpus compiled from a bilingual parallel corpus. We show that our method can improve ontology development and consistency checking when semantic parsing and machine translation are added to the process of general knowledge management. Additionally, we argue that the focus on one's favorite books adds a gamification factor to the knowledge management process. A new formalism of semantic-parsing ontological alignments is introduced, and its use for ontology development and consistency checking is discussed. It is shown that existing general ontologies require many more axioms than are currently available in order to explain the unaligned content of books. A proactive learning approach is suggested as part of the solution to improve the development of ontology predicates and axioms. The WordNet, FrameNet and SUMO ontologies are used as the starting knowledge base of the paraphrase-corpus semantic alignment method.
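One small building block, scoring candidate alignments with WordNet similarity, can be sketched with NLTK as follows (the wordnet corpus must be downloaded once); the SUMO and FrameNet integration of the paper is not shown.

```python
# Tiny sketch of one building block only: scoring candidate word alignments
# with WordNet similarity via NLTK (run nltk.download("wordnet") once first).
from nltk.corpus import wordnet as wn

def best_path_similarity(word1, word2):
    scores = [
        s1.path_similarity(s2)
        for s1 in wn.synsets(word1)
        for s2 in wn.synsets(word2)
    ]
    scores = [s for s in scores if s is not None]
    return max(scores, default=0.0)

print(best_path_similarity("journey", "voyage"))
print(best_path_similarity("journey", "table"))
```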
10

García-Manotas, Ignacio, Eduardo Lupiani, Francisco García-Sánchez, and Rafael Valencia-García. "Populating Knowledge Based Decision Support Systems." International Journal of Decision Support System Technology 2, no. 1 (January 2010): 1–20. http://dx.doi.org/10.4018/jdsst.2010101601.

Abstract:
Knowledge-based decision support systems (KBDSS) support business and organizational decision-making activities on the basis of the knowledge available concerning the domain in question. One of the main problems with knowledge bases is that their construction is a time-consuming task. A number of methodologies have been proposed in the context of the Semantic Web to assist in the development of ontology-based knowledge bases. In this paper, we present a technique for populating knowledge bases from semi-structured text which takes advantage of the semantic underpinnings provided by ontologies. This technique has been tested and evaluated in the financial domain.
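A schematic sketch of the idea, mapping fields of a semi-structured record to ontology-typed triples, is given below with rdflib; the ontology URIs and record layout are invented and are not the financial ontology used in the evaluation.

```python
# Schematic sketch of populating an ontology-backed knowledge base from a
# semi-structured record; the ontology URIs and record layout are invented
# for illustration and are not the paper's financial ontology.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

FIN = Namespace("http://example.org/finance#")
g = Graph()

record = "company: Acme Corp | sector: Energy | revenue: 1200000"
fields = dict(part.split(": ", 1) for part in record.split(" | "))

company = FIN[fields["company"].replace(" ", "_")]
g.add((company, RDF.type, FIN.Company))
g.add((company, FIN.sector, Literal(fields["sector"])))
g.add((company, FIN.revenue, Literal(int(fields["revenue"]), datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```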
11

Zhu, Hong Mei, Yong Quan Liang, Qi Jia Tian, and Shu Juan Ji. "Agricultural Policy-Oriented Ontology-Based Semantic Information Retrieval." Key Engineering Materials 439-440 (June 2010): 572–76. http://dx.doi.org/10.4028/www.scientific.net/kem.439-440.572.

Abstract:
Research on an architecture for ontology-based semantic representation and retrieval of information is presented. As a case study, a prototype agricultural policy-oriented, ontology-based semantic information retrieval system (APOSIRS) is established. The ontology provides a shared terminology and supports the retrieval process. The architecture allows APOSIRS-based applications to perform automatic semantic information retrieval over agricultural policy texts: automatic and dynamic semantic annotation of unstructured and semi-structured content; semantically enabled information extraction, indexing, and retrieval; as well as ontology management, such as querying and modifying the underlying ontology and knowledge bases. The main components of this architecture have been implemented, and their results are reported.
12

PASCHKE, ADRIAN, and HAROLD BOLEY. "RULE RESPONDER: RULE-BASED AGENTS FOR THE SEMANTIC-PRAGMATIC WEB." International Journal on Artificial Intelligence Tools 20, no. 06 (December 2011): 1043–81. http://dx.doi.org/10.1142/s0218213011000528.

Abstract:
Rule Responder is a Pragmatic Web infrastructure for distributed rule-based event processing multi-agent eco-systems. This allows specifying virtual organizations — with their shared and individual (semantic and pragmatic) contexts, decisions, and actions/events for rule-based collaboration between the distributed members. The (semi-)autonomous agents use rule engines and Semantic Web rules to describe and execute derivation and reaction logic which declaratively implements the organizational semiotics and the different distributed system/agent topologies with their negotiation/coordination mechanisms. They employ ontologies in their knowledge bases to represent semantic domain vocabularies, normative pragmatics and pragmatic context of event-based conversations and actions.
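The derivation side of such rule-based agents can be illustrated with a tiny forward-chaining loop; the vocabulary and rule below are invented, and real Rule Responder agents rely on dedicated Semantic Web rule engines rather than this toy evaluator.

```python
# Minimal forward-chaining sketch of derivation rules of the kind such agents
# evaluate; the vocabulary and rule are invented, and production deployments
# use dedicated Semantic Web rule engines rather than this toy loop.
facts = {("memberOf", "alice", "steering_committee")}
rules = [
    # if X is a member of the steering committee, then X may approve requests
    (("memberOf", "?x", "steering_committee"), ("mayApprove", "?x", "requests")),
]

changed = True
while changed:
    changed = False
    for (p, s, o), (cp, cs, co) in rules:
        for fp, fs, fo in list(facts):
            if fp == p and fo == o:                     # condition matches a fact
                derived = (cp, fs if cs == "?x" else cs, co)
                if derived not in facts:
                    facts.add(derived)
                    changed = True
print(facts)
```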
13

Wu, Li Qun, Ming Zhao, and Juan Liu. "Study on OWL-Based Construction and Inference Method of Vegetable Products Ontology." Advanced Materials Research 121-122 (June 2010): 523–27. http://dx.doi.org/10.4028/www.scientific.net/amr.121-122.523.

Abstract:
With the rapid development of information technology, ontologies are increasingly used in knowledge engineering and information science. An agricultural ontology is mainly composed of the concepts of agricultural knowledge and the relationships between those concepts, expressed in a formal description language that a computer can process. The goal of building an agricultural ontology is to form a common understanding of how agricultural information is organized and to represent and analyze knowledge in the field of agriculture, thereby laying the foundation for an agricultural semantic web. In this paper, we present design principles and an implementation process for a vegetable products domain ontology. Through the collection and analysis of information in the vegetable products field, and with the use of an inference engine, the vegetable domain ontology was built with OWL. In an information extraction prototype system, we achieved semi-automatic population of the vegetable products domain ontology with instances. A comparison of the results before and after the use of the inference engine shows that this semi-automatic method of building the vegetable domain ontology is feasible.
14

Gillani, Saira, and Andrea Ko. "Incremental Ontology Population and Enrichment through Semantic-based Text Mining." International Journal on Semantic Web and Information Systems 11, no. 3 (July 2015): 44–66. http://dx.doi.org/10.4018/ijswis.2015070103.

Abstract:
Higher education and professional training often apply innovative e-learning systems, where ontologies are used for structuring domain knowledge. To provide up-to-date knowledge for students, the ontology has to be maintained regularly. This is especially true for the IT audit and security domain, because the technology changes fast. However, manual ontology population and enrichment is a complex task that requires professional experience and considerable effort. The authors' paper deals with the challenges and possible solutions for semi-automatic ontology enrichment and population. ProMine has two main contributions: one is the semantic-based text mining approach for automatically identifying domain-specific knowledge elements; the other is the automatic categorization of these extracted knowledge elements by using Wiktionary. The ProMine ontology enrichment solution was applied in the IT audit domain of an e-learning system. After ten application cycles of ProMine, the number of automatically identified new concepts tripled, and ProMine categorized the new concepts with high precision and recall.
15

Alaa, Rana, Mariam Gawish, and Manuel Fernández-Veiga. "Improving Recommendations for Online Retail Markets Based on Ontology Evolution." Electronics 10, no. 14 (July 11, 2021): 1650. http://dx.doi.org/10.3390/electronics10141650.

Abstract:
The semantic web is considered to be an extension of the present web. In the semantic web, information is given well-defined meaning, which helps people worldwide to cooperate and exchange knowledge. The semantic web plays a significant role in describing contents and services in a machine-readable form. It has been developed on the basis of ontologies, which are deemed the backbone of the semantic web. Ontologies are a key technique with which semantics are annotated, and they provide a common, comprehensible foundation for resources on the semantic web. The use of semantics and artificial intelligence leads to what is known as the "Smarter Web", where it will be easy to retrieve what customers want to see on e-commerce platforms, thus helping users save time and enhancing their search for the products they need. The semantic web is also used in Web 3.0, which helps enhance system performance. Previous personalized recommendation methods based on ontologies identify users' preferences by means of static snapshots of purchase data. However, as user preferences evolve with time, one-shot ontology construction is too constrained to capture individually diverse opinions and the evolution of users' preferences over time. This paper presents a novel recommendation system architecture based on ontology evolution, together with the proposed subsystem architecture for ontology evolution. Furthermore, the paper proposes an ontology building methodology based on a semi-automatic technique, as well as the development of an online retail ontology. Additionally, a recommendation method based on ontology reasoning is proposed. Based on the proposed method, e-retailers can develop a more convenient product recommendation system to support consumers' purchase decisions.
16

Delazari, Luciene Stamato, Leonardo Ercolin Filho, and Ana Luiza Stamato Delazari Skroch. "UFPR CampusMap: a laboratory for a Smart City developments." Abstracts of the ICA 1 (July 15, 2019): 1–2. http://dx.doi.org/10.5194/ica-abs-1-57-2019.

Abstract:
Abstract. A Smart City is based on intelligent exchanges of information that flow between its many different subsystems. This flow of information is analyzed and translated into citizen and commercial services. The city will act on this information flow to make its wider ecosystem more resource-efficient and sustainable. The information exchange is based on a smart governance operating framework designed to make cities sustainable.

The public administration needs updated and reliable geospatial data which depict the urban environment. These data can be obtained through smart devices (smartphones, for instance), human agents (collaborative mapping) and remote sensing technologies, such as UAVs (Unmanned Aerial Vehicles). According to some authors, there are four dimensions in a Smart City. The first dimension concerns the application of a wide range of electronic and digital technologies to create a cyber, digital, wired, informational or knowledge-based city; the second is the use of information technology to transform life and work; the third is to embed ICT (Information and Communication Technology) in the city infrastructure; the fourth is to bring ICT and people together to enhance innovation, learning, and knowledge. Analyzing these dimensions, it is possible to say that geospatial information is crucial to all of them; without it, none of them are possible. Considering these aspects, this research uses the Smart City concept as a methodological approach, with UFPR (Federal University of Paraná) as the target of a case study.

UFPR has 26 campuses in different cities of Paraná State, in the south of Brazil. Its structure has 14 institutes. It comprises 11 million square meters of area, 500,000 square meters of constructed area and 316 buildings. There are more than 6,300 employees (staff and administration), 50,000 undergraduate students and 10,000 graduate students. Besides these figures, there are external people who need access to the UFPR facilities, such as deliveries, service providers and the community in general.

The lack of knowledge about the space and its characteristics has a direct impact on issues such as resource management (human and material), campus infrastructure (outside and inside the buildings), security and other activities which can be supported by an up-to-date geospatial database. In 2014, the UFPR CampusMap project was started with indoor mapping as the main goal. However, a base map of the campus was needed to support the indoor mapping, and the available one had been produced in 2000. Since then, the Centro Politécnico campus (located in the city of Curitiba) has been used as a case study to develop methodologies for creating a geospatial database that allows different users to know and manage the space.

According to Gruen (2013), a Smart City must have spatial intelligence. Moreover, the establishment of a database, in particular a geospatial database, is necessary. Knowledge of the space where events happen is a key element in this context. This author also states that the following items are necessary to achieve this objective:
- Automatic or semi-automated Digital Surface Model (DSM) generation from satellite, aerial and terrestrial images and/or LiDAR data;
- Further development of the semi-automated techniques onto a higher level of automation;
- Integrated automated and semi-automated processing of LiDAR point clouds and images, both from aerial and terrestrial platforms;
- Streamlining the processing pipeline for UAV image data projects;
- Set-up of GIS with 3D/4D capabilities;
- Change detection and database updating;
- Handling of dynamic and semantic aspects of city modeling and simulation, leading to 4D city models;
- LBS (Location Based Services) system investigations (PDAs, mobiles); and
- Establishment of a powerful visualization and interaction platform.

Some of these aspects are addressed in this research. The first is the integration of indoor/outdoor data to help manage the space and provide a tool for navigation between spaces. The base map was updated through stereo mapping compilation from images collected with a DJI Phantom 4 UAV (https://www.dji.com/phantom-4). This technology for data acquisition is not only faster but also cheaper than the traditional photogrammetric method. Besides the quality of the images (in this case a GSD, Ground Sample Distance, of 2.5 cm), it can be used in urban areas as a rapid response in emergency situations.

To georeference the image block, 50 control points collected by GNSS (Global Navigation Satellite System) were used, and the software Agisoft PhotoScan (http://www.agisoft.com/) performed the bundle block adjustment with self-calibration. After the processing, the exterior orientation parameters of the image block and the three-dimensional coordinates of each tie point were calculated simultaneously with the determination of the interior orientation parameters: focal length (f), principal point coordinates (x0, y0), radial symmetric distortion (k1, k2, k3) and decentering distortion coefficients (p1, p2).

In the mapping production step, the features were extracted through stereo mapping compilation according to the standards defined by the Brazilian Mapping Agency. The several layers were edited in GIS software (QGIS) and then the topology was built. Afterwards, a spatial database was created using PostgreSQL/PostGIS. The dense point cloud was also generated using SfM (Structure from Motion) algorithms in order to generate the digital surface model and orthomosaics.

Meanwhile, a website using HTML5+CSS3 and JavaScript technologies was developed to publish the results and the first applications (www.campusmap.ufpr.br). The architecture of this application uses JavaScript, Leaflet, the pgRouting library (to calculate the routes between points of interest), files in GeoJSON format and custom applications. The indoor database comprises the data about the interior of the buildings and provides the user with functionalities such as searching for rooms, laboratories and buildings; routes between points (inside and outside the buildings); and floor changes. Some web applications were also developed to demonstrate the capabilities of geospatial information in an environment very similar to a city and its problems, e.g., parking management, security, logistics and resource inventory, among others.

A mobile application was developed to provide indoor user positioning through Wi-Fi (Wireless Fidelity) networks. This, combined with the indoor mapping, will allow users to navigate in real time inside the buildings. Using the data from the point cloud and the CityGML standard, a 3D model of some buildings was developed. An application to report crime occurrences (such as robbery and assaults) was also developed, so that these occurrences can be mapped and the administration can increase the security of the campus. The next steps are to:
a) design an interface with functionalities to integrate all applications which are currently presented on individual web pages;
b) develop a visualization tool for 3D models using CityGML;
c) evaluate the potential of UAV images for different applications in urban scenarios;
d) develop an interface for collaborative database updates;
e) expand the database to other UFPR campuses and develop new functionalities for different users.

The "smart city" concept allows the development of an optimized system that uses geospatial data to understand the complexity of urban environments. The use of geospatial data can improve efficiency and security in managing urban aspects like infrastructure, buildings and public spaces, the natural environment, urban services, health and education. This concept can also support city management agents during the design, realization and evaluation of urban projects.

In the present project, we believe these are the first steps to build a connected environment and apply the "smart city" concept to the university administration, in order to make sustainable use of resources, and it could serve as an example for some existing problems in public administrations.
17

Huang, Yubo, and Zhong Xiang. "RPDNet: Automatic Fabric Defect Detection Based on a Convolutional Neural Network and Repeated Pattern Analysis." Sensors 22, no. 16 (August 19, 2022): 6226. http://dx.doi.org/10.3390/s22166226.

Abstract:
On a global scale, automatic defect detection represents a critical stage of quality control in the textile industries. In this paper, a semantic segmentation network using a repeated pattern analysis algorithm is proposed for pixel-level detection of fabric defects, termed RPDNet (repeated pattern defect network). Specifically, we utilize a repeated pattern detector based on a convolutional neural network (CNN) to detect periodic patterns in fabric images. Through the acquired repeated pattern information and proper guidance of the network in a high-level semantic space, the ability to understand periodic feature knowledge and emphasize potential defect areas is realized. Concurrently, we propose a semi-supervised learning scheme to inject the periodic knowledge into the model separately, which enables the model to function independently of further pre-calculation during detection, so no additional network capacity is required and no loss in detection speed is caused. In addition, the model integrates two advanced architectures, DeeplabV3+ and GhostNet, to effectively implement lightweight fabric defect detection. The comparative experiments on repeated pattern fabric images highlight the potential of the algorithm to deliver competitive detection results without incurring further computational cost.
18

Mendes, David, Irene Pimenta Rodrigues, Carlos Rodriguez-Solano, and Carlos Fernandes Baeta. "Enrichment/Population of Customized CPR (Computer-Based Patient Record) Ontology from Free-Text Reports for CSI (Computer Semantic Interoperability)." Journal of Information Technology Research 7, no. 1 (January 2014): 1–11. http://dx.doi.org/10.4018/jitr.2014010101.

Abstract:
CSI (Computer Semantic Interoperability) is a very important issue in healthcare. Ways for heterogeneous computer systems to “understand” important facts from the clinical process for clinical decision support are now beginning to be addressed. The authors present here comprehensive contributions to achieve CSI. EHR (Electronic Health Record) systems provide a way to extract reports of the clinicians activity. In order to formalize an automated acquisition from semi-structured, free-form, natural language texts in Portuguese into a Clinical Practice Ontology an important step is to develop the ability of decoding all the nicknames, acronyms and short-hand forms that each clinician tend to write down in their reports. The authors present the steps to develop clinical vocabularies extracting directly from clinical reports in Portuguese available in the SAM (Sistema de Apoio ao Médico) system. The presented techniques are easily further developed for any other natural language or knowledge representation framework with due adaptations.
19

Енкарнацьйон Санчеc Аренас and Ессам Басем. "Cognitive Exploration of ‘Traveling’ in the Poetry of Widad Benmoussa." East European Journal of Psycholinguistics 5, no. 2 (December 28, 2018): 6–15. http://dx.doi.org/10.29038/eejpl.2018.5.2.are.

Abstract:
The concept of motion is central to human cognition and is universally studied in cognitive linguistics. This research paper investigates the concept of motion, with special reference to traveling, in the poetry of Widad Benmoussa. It mainly focuses on the cognitive dimensions underlying the metaphorical representation of traveling. To this end, the research conducts a semi-automated analysis of a corpus representing Widad's poetic collections. MetaNet's physical path is mainly used to reveal the cognitive respects of traveling. The personae the poetess assigns are found to pursue a dynamic goal through the activation of several physical paths. During the unstable romantic relations, several travel impediments are met. Travel stops and detours, travel companions, paths in the journey, as well as changing travel destinations are the most stressed elements of the 'Traveling' respects. With such a high frequency of sudden departures and hopping, the male persona the poetess assigns evinces typical features of 'wanderlust', or dromomania.
20

Li, Yanling, Chuansheng Wang, Qi Wang, Jieling Dai, and Yushan Zhao. "Secure Multi-User k-Means Clustering Based on Encrypted IoT Data." Computer and Information Science 12, no. 2 (March 25, 2019): 35. http://dx.doi.org/10.5539/cis.v12n2p35.

Abstract:
IoT technology collects information from a lot of clients, which may relate to personal privacy. To protect the privacy, the clients would like to encrypt the raw data with their own keys before uploading. However, to make use of the information, the data mining technology with cloud computing is used for the knowledge discovery. Hence, it is an emergent issue of how to effectively performing data mining algorithm on the encrypted data. In this paper, we present a k-means clustering scheme with multi-user based on the IoT data. Although, there are many privacy-preserving k-means clustering protocols, they rarely focus on the situation of encrypting with different public keys. Besides, the existing works are inefficient and impractical. The scheme we propose in this paper not only solves the problem of evaluation on the encrypted data under different public keys but also improves the efficiency of the algorithm. It is semantic security under the semi-honest model according to our theoretical analysis. At last, we evaluate the experiment based on a real dataset, and comparing with previous works, the result shows that our scheme is more efficient and practical.
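One primitive behind such protocols is additively homomorphic encryption, which lets an untrusted party aggregate encrypted values (for example, cluster coordinates during a centroid update) without decrypting them. The sketch below uses the python-paillier (phe) library and a single key pair, so it does not capture the paper's multi-user, multi-key setting.

```python
# Sketch of the underlying primitive only: Paillier encryption (python-paillier)
# is additively homomorphic, so an untrusted server can sum encrypted values,
# e.g. a cluster's coordinates, without decrypting. The paper's multi-key,
# semi-honest protocol is far more involved than this single-key toy.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

readings = [3.2, 4.1, 2.7]                        # one client's IoT values
encrypted = [public_key.encrypt(x) for x in readings]

encrypted_sum = sum(encrypted[1:], encrypted[0])  # summed while still encrypted
print(private_key.decrypt(encrypted_sum))         # ~10.0, visible only to the key owner
```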
21

Xia, Lixin, Zhongyi Wang, Chen Chen, and Shanshan Zhai. "Research on feature-based opinion mining using topic maps." Electronic Library 34, no. 3 (June 6, 2016): 435–56. http://dx.doi.org/10.1108/el-11-2014-0197.

Abstract:
Purpose Opinion mining (OM), also known as “sentiment classification”, which aims to discover common patterns of user opinions from their textual statements automatically or semi-automatically, is not only useful for customers, but also for manufacturers. However, because of the complexity of natural language, there are still some problems, such as domain dependence of sentiment words, extraction of implicit features and others. The purpose of this paper is to propose an OM method based on topic maps to solve these problems. Design/methodology/approach Domain-specific knowledge is key to solve problems in feature-based OM. On the one hand, topic maps, as an ontology framework, are composed of topics, associations, occurrences and scopes, and can represent a class of knowledge representation schemes. On the other hand, compared with ontology, topic maps have many advantages. Thus, it is better to integrate domain-specific knowledge into OM based on topic maps. This method can make full use of the semantic relationships among feature words and sentiment words. Findings In feature-level OM, most of the existing research associate product features and opinions by their explicit co-occurrence, or use syntax parsing to judge the modification relationship between opinion words and product features within a review unit. They are mostly based on the structure of language units without considering domain knowledge. Only few methods based on ontology incorporate domain knowledge into feature-based OM, but they only use the “is-a” relation between concepts. Therefore, this paper proposes feature-based OM using topic maps. The experimental results revealed that this method can improve the accuracy of the OM. The findings of this study not only advance the state of OM research but also shed light on future research directions. Research limitations/implications To demonstrate the “feature-based OM using topic maps” applications, this work implements a prototype that helps users to find their new washing machines. Originality/value This paper presents a new method of feature-based OM using topic maps, which can integrate domain-specific knowledge into feature-based OM effectively. This method can improve the accuracy of the OM greatly. The proposed method can be applied across various application domains, such as e-commerce and e-government.
22

Redmond, Alan, Roger West, and Alan Hore. "Designing a Framework for Exchanging Partial Sets of BIM Information on a Cloud-Based Service." International Journal of 3-D Information Modeling 2, no. 4 (October 2013): 12–24. http://dx.doi.org/10.4018/ij3dim.2013100102.

Abstract:
This paper reviews the rationale for using a partial data set in Building Information Modeling (BIM) exchanges, influenced by the recognized difficulty of exchanging data at element or object level which depends on the information requiring compatible hardware and software, in order for the data to be read and transferred freely between applications. The solution was not to introduce a new schema in contrast to the industry's existing open exchange model ‘Industry Foundation Classes' which has been in existence since the 1980's, but for the authors to re-engineer an existing Simplified Markup Language ‘BIM XML' into subsets via XML Style Sheet Transition. The language of XML was chosen because Web services, which are developed from XML data representation format and Hypertext Transfer Protocol (HTTP) communication protocol, are platform neutral, widely accepted and utilized and come with a wide range of useful technologies. Furthermore, they support Service Oriented Architecture (SOA) – the internet platform that enables interoperability between different software programs. The methodology involved developing a full hybrid research model based on mixed methods, ‘quantitative and qualitative', interlaced into two main phases. The first phase comprised of a main survey questionnaire, focus groups, two Delphi questionnaires, semi-structured interviews and a case study. The final phase, ‘product design and testing', used semantic methods and tools, such as Business Process Management Notation. The final case study (a prototype test) successfully itemized the potential of combining three applications asynchronously in real-time. The interoperable capabilities of Web services APIs for exchanging partial sets of BIM data enabled assumptions with a higher amount of detail to be reviewed at the feasibility design stage. Future services will be built upon existing Web Ontology languages such as SPARQL descriptions to be used in conjunction with several web services connecting together on a Cloud platform to produce a knowledge ‘Semantic Web'.
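The re-engineering of an XML schema into subsets via XSLT can be sketched with lxml; the element names and stylesheet below are invented stand-ins rather than the actual BIM XML schema or the authors' style sheets.

```python
# Sketch of producing a partial data set with an XSLT transform via lxml; the
# element names and stylesheet are invented stand-ins, not the actual BIM XML
# schema or the authors' style sheets.
from lxml import etree

source = etree.XML(
    "<building><wall id='w1'><cost>100</cost></wall>"
    "<wall id='w2'><cost>150</cost></wall></building>"
)
stylesheet = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/building">
    <!-- keep only the wall identifiers: a 'partial set' without cost data -->
    <walls>
      <xsl:for-each select="wall">
        <wall id="{@id}"/>
      </xsl:for-each>
    </walls>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(stylesheet)
print(etree.tostring(transform(source), pretty_print=True).decode())
```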
23

Constantopoulos, Panos. "Leveraging Digital Cultural Memories." Digital Presentation and Preservation of Cultural and Scientific Heritage 6 (September 30, 2016): 37–42. http://dx.doi.org/10.55630/dipp.2016.6.3.

Abstract:
The penetration of ICT in the management and study of material culture and the emergence of digital cultural repositories and linked cultural data in particular are expected to enable new paths in humanities research and new approaches to cultural heritage. Success is contingent upon securing information trustworthiness, long-term preservation, and the ability to re-use, re-combine and re-interpret digital content. In this perspective, we review the use in the cultural heritage domain of digital curation and curation-aware repository systems; achieving semantic interoperability through ontologies; explicitly addressing contextual issues of cultural heritage and humanities information; and the services of digital research infrastructures. The last two decades have witnessed an increasing penetration of ICT in the management and study of material culture, as well as in the Humanities at large. From collections management, to object documentation and domain modelling, to supporting the creative synthesis and re-interpretation of data, significant progress has been achieved in the development of relevant knowledge structures and software tools. As a consequence of this progress, digital repositories are being created that aim at serving as digital cultural memories, while a process of convergence among the different kinds of memory institutions, i.e., museums, archives, and libraries, in what concerns their information functions is already evolving. Yet the advantages offered by information management technology, mass storage, copying, and the ease of searching and quantitative analysis, are not enough to ensure the usefulness of those digital cultural memories unless information trustworthiness, long-term preservation, and the ability to re-use, re-combine and re-interpret digital content are ensured. Furthermore, the widely encountered need for integrating heterogeneous information becomes all the more pressing in the case of cultural heritage due to the specific traits of information in this domain. In view of the above fundamental requirements, in this presentation we briefly review the leveraging power of certain practices and approaches in realizing the potential of digital cultural memories. In particular, we review the use of digital curation and curation-aware repository systems; achieving semantic interoperability through ontologies; explicitly addressing contextual issues of cultural heritage and humanities information; and the services of digital research infrastructures. Digital curation is an interdisciplinary field of enquiry and practice, which brings together disciplinary traditions and practices from computer science, information science, and disciplines practicing collections-based or data-intensive research, such as history of art, archaeology, biology, space and earth sciences, and application areas 38 such as e-science repositories, organizational records management, and memory institutions (Constantopoulos and Dallas 2008). Digital curation aims at ensuring adequate representation of and long-term access to digital information as its context of use changes, and at mitigating the risk of repositories becoming “data mortuaries”. 
To this end a lifecycle approach to the representation of curated information objects is adopted; event-centric representations are used to capture information “life events”; the class of agents involved is extended to include knowledge producers and communicators in addition to information custodians; and context-specificity is explicitly addressed. Cultural heritage information comprises representations of actual cultural objects (texts, artefacts, historical records, etc.), their histories, agents (persons and organizations) operating on such objects, and their relationships. It also includes interpretations of and opinions about such objects. The recording of this knowledge is characterized by disciplinary diversity, representational complexity and heterogeneity, historical orientation, and textual bias. These characteristics of information are in line with the character of humanities research: hermeneutic and intertextual, rather than experimental; narrative, rather than formal; idiographic rather than nomothetic; and, conformant to a realist rather than positivist account of episteme (Dallas 1999). The primary use of this information has been to support knowledge-based access, while now it is gradually also being targeted at various synthetic and creative uses. A rich semantic structure, including subsumption, meronymic, temporal, spatial, and various other semantic relations, is inherent to cultural information. Complexity is compounded by terminological inconsistency, subjectivity, multiplicity of interpretation and missing information. From an information lifecycle perspective, digital curation involves a number of distinct processes: appraisal; ingesting; classification, indexing and cataloguing; knowledge enhancement; presentation, publication and dissemination; user experience; repository management; and preservation. These processes rely on three supporting processes, namely, goal and usage modelling, domain modelling and authority management. These processes effectively capture the context of digital curation and produce valuable resources which can themselves be seen as curated digital assets (Constantopoulos and Dallas 2008; Constantopoulos et al. 2009). The field of cultural information presents itself as a privileged domain for digital curation. There is a relatively long history of developing library systems and museum systems, along with recent intense activity on interoperable, semantically rich cultural information systems, boosted by two important developments: the emergence of the CIDOC CRM (ISO 21127) 1 standard ontology for cultural documentation; and the movement for convergence of museum, library and archive systems, one manifestation of which is the CIDOC CRM compatible FRBR-oo model 2 . Advances such as those outlined above allow addressing old research questions in new ways, as well as putting new questions that were very hard or impossible to tackle without the means of digital technologies. Significant enablers towards this direc- 1 http://www.cidoc-crm.org/ 2 http://www.cidoc-crm.org/frbr_inro.html 39 tion are the so-called digital research infrastructures, which bear the promise of facilitating research through sharing tools and data. Several trends can be identified in the development of research infrastructures, which follow two main approaches: a) The normative approach, whereby normalized collections of data and tools are developed as common resources and managed centrally by the infrastructure. 
b) The regulative approach, whereby resources reside with individual organizations willing to contribute them, under specific terms, to the community. A set of interoperability conditions and mechanisms provide a regulatory function that lies at the heart of the infrastructure. Both approaches are being pursued in all disciplines, but the mix differs: in hard sciences building common normalized infrastructures appears to be a necessity, with a complementary, yet significant role to be played by a network of interoperable, disparate sources. In the humanities, on the other hand, long scholarly traditions have produced a formidable variety of information collections and formats, mostly offering interpreted, rather than raw material for publication and sharing. These conditions favour the development of regulated networks of interoperable sources, with centralized, normative infrastructures in a complementary capacity. By way of example, a recent such infrastructure is DARIAH- GR / ΔΥΑΣ 3 , one of the national constituents of DARIAH-EU 4 , the Europe-wide digital infrastructure in the arts and humanities. DARIAH- GR / ΔΥΑΣ is a hybrid -virtual distributed infrastructure, bringing together the strengths and capacities of leading research, academic, and collection custodian institutions through a carefully defined, lightweight layer of services, tools and activities complementing, rather than attempting to replicate, prior investments and capabilities. Arts and humanities data and content resources are as a rule thematically organized, widely distributed, under the custodianship and curation of diverse institutions, including government agencies and departments, public and private museums, archives and special libraries, as well as academic and research units, associations, research projects, and other actors, and displaying a diverse degree of digitization. The mission of the infrastructure is then to provide the research communities with effective, comprehensive and sustainable capability to discover, access, integrate, analyze, process, curate and disseminate arts and humanities data and information resources, through a concerted plan of virtual services and tools, and hybrid (combined virtual and physical) activities, integrating and running on top of existing primary information systems and leveraging integration and synergies with DARIAH- EU and other related infrastructures and aggregators (e.g. ARIADNE 5 , CARARE 6 , LoCloud 7 ). In its first stage of development, the DARIAH- GR / ΔΥΑΣ Research Infrastructure has offered the following groups of services: 3 http://www.dyas-net.gr/ 4 http://www.dariah.eu/ 5 http://www.ariadne-infrastructure.eu/ 6 http://www.carare.eu/ 7 http://www.locloud.eu/ 40 • Data sharing : comprehensive registries of digital resources; • Supporting the development of digital resources : tools and best practice guidelines for the development of digital resources; • Capacity building: workshops and training activities; and • Digital Humanities Observatory : evidence-based research on digital humanities, monitoring, outreach and dissemination activities. Key factor in the development of DARIAH- GR / ΔΥΑΣ, ARIADNE, CARARE and LoCloud resources alike has been a curation-oriented aggregator, the Metadata and Object Repository - MORe 8 (Gavrilis, Angelis & Dallas 2013; Gavrilis et al. 2013). 
This system supports the aggregation of metadata from multiple sources (OAI-PMH, Archive, SIP, Omeka, MINT) and heterogeneous systems in a single repository, the creation of unified indexes of normalized and enriched metadata, the creation of RDF databases, and the publication of aggregated records to multiple recipients (OAI- PMH, Archive, Elastic Search, RDF Stores). It enables the dynamic definition of validation and enrichment plans, supported by a number of micro-services, as well as the measurement of metadata quality. MORe can incorporate any XML/RDF metadata schema and can support several intermediate schemas in parallel. Its architecture is based on micro-services, a software development model according to which a complex application is composed of small, independent services communicating via a language-agnostic API, thus being highly reusable. MORe currently maintains access to 30 SKOS-encoded thesauri, totaling several hundred thousands of terms, as well as to copies of the Geo-names and Perio.do services, thus offering information enrichment on the basis of a wide array of sources. Metadata enrichment is a process of automatic generation of metadata through the linking of metadata elements with data sources and/or vocabularies. The enrichment process increases the volume of metadata, but it also considerably enhances their precision, therefore their quality. Performing metadata aggregation and enrichment carries several benefits: increase of repository / site traffic, better retrieval precision, concentration of indexes in one system, better performance of user services. To date MORe is used by 110 content provider institutions, and accommodates 23 different metadata schemas and about 20,800,000 records. The advent of digital infrastructures for arts and humanities research calls for a deeper understanding of how humanists work with digital resources, tools and services as they engage with different aspects of research activity: from capturing, encoding, and publishing scholarly data to analyzing, visualizing, interpreting and communicating data and research argumentation to co-workers and readers. Digitally enabled scholarly work and the integration of digital content, tools and methods present not only commonalities but also differences across disciplines, methodological traditions, and communities of researchers. A significant challenge in providing integrated access to disparate digital humanities resources and, more broadly, in supporting digitally-enabled humanities research, lies in empirically capturing the context of use of digital content, methods and tools. 8 http://more.dcu.gr/ 41 Several attempts have been made to develop a conceptual framework for DH in practice. In 2008, the AHRC ICT Methods Network 9 developed a taxonomy of digital methods in the arts and humanities. This was the basis for the classification of over 200 digital humanities projects funded by the U.K. Arts and Humanities Research Council in the online resource arts-humanities.net, as well as for the subsequent Digital Humanities at Oxford 10 taxonomy. Other initiatives to build a taxonomy of Digital Humanities include TADIRAH 11 and DH Commons 12 . From 2011 to 2015 the Network for Digital Methods in the Arts and Humanities 13 (NeDiMAH) ran over 40 activities structured around key methodological areas in the humanities (digital representations of space and time; visualisation; linked data; creating and using large scale corpora; and creating editions). 
Through these activities, NeDiMAH gathered a snapshot of the practice of digital humanities in Europe, and of the impact of digital methods on research. A key output of NeDiMAH is NeMO (http://nemo.dcu.gr/): the NeDiMAH Ontology of Digital Methods in the Arts and Humanities. This ontology of digital methods in the humanities has been built as a framework for understanding not just the use of digital methods, but also their relationship to digital content and tools. The development of an ontology, rather than a taxonomy, stands in recognition of the complexity of the digital humanities landscape, the interdisciplinarity of the field, and the dependencies that impact the use of digital methods in research. NeMO provides a conceptual framework capable of representing scholarly work in the humanities, addressing aspects of intentionality and capturing the diverse associations between research actors and their goals, activities undertaken, methods employed, resources and tools used, and outputs produced, with the aim of obtaining semantically rich structured representations of scholarly work (Angelis et al. 2015; Hughes, Constantopoulos & Dallas 2016). It is grounded in earlier empirical research through semi-structured interviews with scholars from across Europe, which focused on analysing their research practices and capturing the resulting information requirements for research infrastructures (Benardou, Constantopoulos & Dallas 2013). The relevance of NeMO to the DH community was validated in a series of workshops through use cases contributed by researchers. A variety of complex associative queries, articulated by researchers and encoded in SPARQL, demonstrated the potential of NeMO as an effective mechanism for information extraction and reasoning with regard to the use of digital resources in scholarly work, and as a knowledge base schema for documenting scholarly practices. In a recent workshop at DH2016, researchers created their own NeMO-based descriptions of projects with an easy-to-use tool (Constantopoulos et al. 2016).
Knowledge bases documenting scholarly practice through NeMO can be useful to researchers by (a) helping them find information on earlier work relevant for their own research; (b) supporting goal-oriented organization of research work; (c) facilitating the discovery of new paths with regard to resources, tools and methods; and (d) promoting networking among researchers with common interests. In addition, research groups can get support for better project planning by explicitly exposing links between goals, actors, activities, methods, resources and tools, as well as assistance in discovering methodological trends, future directions and promising research ideas. Funding agencies, on the other hand, could benefit from the kind of systematic documentation and comparative overview of project work enabled by the ontology.
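To make the SPARQL-based associative querying mentioned above concrete, here is a minimal sketch using the rdflib Python library (assumed to be installed); the namespace and property names are invented placeholders and do not reproduce NeMO's actual vocabulary.

from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/scholarly/")

g = Graph()
project = EX["project/topic-modelling-study"]
g.add((project, EX.usesMethod, Literal("topic modelling")))
g.add((project, EX.usesTool, Literal("MALLET")))
g.add((project, EX.hasActor, Literal("Researcher A")))

# "Which projects used a given method, and with which tools?" -- the kind of
# associative question a NeMO-style knowledge base is meant to answer.
query = """
PREFIX ex: <http://example.org/scholarly/>
SELECT ?project ?tool WHERE {
    ?project ex:usesMethod "topic modelling" ;
             ex:usesTool ?tool .
}
"""
for row in g.query(query):
    print(row.project, row.tool)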
24

Yuan, Yuan, Lei Lin, Jingbo Chen, Hichem Sahli, Yixiang Chen, Chengyi Wang, and Bin Wu. "A New Framework for Modelling and Monitoring the Conversion of Cultivated Land to Built-up Land Based on a Hierarchical Hidden Semi-Markov Model Using Satellite Image Time Series." Remote Sensing 11, no. 2 (January 21, 2019): 210. http://dx.doi.org/10.3390/rs11020210.

Abstract:
The large-scale loss of farmland caused by urban expansion has become a severe global environmental problem. Therefore, monitoring urban encroachment upon farmland is a global issue. In this study, we propose a novel framework for modelling and monitoring the conversion of cultivated land to built-up land using a satellite image time series (SITS). The land-cover change process is modelled by a two-level hierarchical hidden semi-Markov model, which is composed of two Markov chains with hierarchical relationships. The upper chain represents annual land-cover dynamics, and the lower chain encodes the vegetation phenological patterns of each land-cover type. This kind of architecture enables us to represent the multilevel semantic information of SITS at different time scales. Specifically, intra-annual series reflect phenological differences and inter-annual series reflect land-cover dynamics. In this way, we can take advantage of the temporal information contained in the entire time series as well as the prior knowledge of land-cover conversion to identify where and when changes occur. As a case study, we applied the proposed method to map annual, long-term urban-induced farmland loss from Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) time series in the Jing-Jin-Tang district, China, from 2001 to 2010. The accuracy assessment showed that the proposed method was accurate for detecting conversions from cultivated land to built-up land, with an overall accuracy of 97.72% in the spatial domain and a temporal accuracy of 74.60%. The experimental results demonstrated the superiority of the proposed method in comparison with other state-of-the-art algorithms. In addition, the spatial-temporal patterns of urban expansion revealed in this study are consistent with the findings of previous studies, which also confirms the effectiveness of the proposed method.
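The paper's hierarchical hidden semi-Markov model is considerably richer than what can be shown here; the sketch below is only a plain two-state hidden Markov model with Viterbi decoding over a quantized NDVI sequence, to illustrate the general idea of recovering hidden land-cover states (and a change point) from an observation series. All probabilities and the toy series are invented.

import numpy as np

states = ["cultivated", "built-up"]
obs_symbols = ["low_ndvi", "high_ndvi"]          # quantized NDVI levels

pi = np.array([0.9, 0.1])                        # initial state probabilities
A = np.array([[0.95, 0.05],                      # cultivated -> cultivated / built-up
              [0.00, 1.00]])                     # built-up treated as absorbing (illustrative assumption)
B = np.array([[0.3, 0.7],                        # P(obs | cultivated): mostly high NDVI
              [0.8, 0.2]])                       # P(obs | built-up): mostly low NDVI

def viterbi(obs, pi, A, B):
    """Return the most likely state sequence for a list of observation indices."""
    n_states, T = A.shape[0], len(obs)
    delta = np.zeros((T, n_states))
    psi = np.zeros((T, n_states), dtype=int)
    delta[0] = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] + np.log(A[:, j] + 1e-12)
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] + np.log(B[j, obs[t]] + 1e-12)
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return [states[s] for s in reversed(path)]

# A toy inter-annual series: high NDVI for a few years, then persistently low NDVI.
series = [1, 1, 1, 0, 0, 0]                      # indices into obs_symbols
print(viterbi(series, pi, A, B))                 # the switch marks the detected conversion year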
25

Janda, Gwen Eva, Axel Wisiorek, and Stefanie Eckmann. "Reference tracking mechanisms and automatic annotation based on Ob-Ugric information structure." Suomalais-Ugrilaisen Seuran Aikakauskirja 2017, no. 96 (January 1, 2017). http://dx.doi.org/10.33340/susa.70251.

Abstract:
This paper is concerned with information structure in the Ob-Ugric languages and its manifestation in reference tracking and its mechanisms. We show how knowledge of both information structure and reference tracking mechanisms can be used to develop a system for the (semi-)automatic annotation of syntactic, semantic and pragmatic functions. We assume that the principles of information structure, i.e., the balancing of the content of an utterance, are indicated by the use of anaphoric devices to mark participants in an on-going discourse. This process, in which participants are encoded by the speaker and decoded by the hearer, is called reference tracking. Our model distinguishes four important factors that play a role in reference tracking: inherent (linguistic) features of a referent, information structure, referential devices and referential strategies. The interaction between these factors is what we call reference tracking mechanisms. Here, the passive voice and the dative shift are used to exemplify this complex interaction system. Drawing on this, rules are developed to annotate the syntactic, semantic and pragmatic roles of discourse participants (semi-)automatically.
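The article's actual annotation rules are not reproduced in the abstract; the toy function below only illustrates the general shape such a rule-based (semi-)automatic annotator could take, using voice and dative shift as cues. The role labels, rule set and example clause are invented for illustration.

# Highly simplified, invented rule set (not the authors' actual rules): it maps
# arguments of a clause to syntactic, semantic and pragmatic roles from two cues.
def annotate_roles(clause):
    """clause: dict with 'voice' ('active'|'passive'), 'dative_shift' (bool) and argument slots."""
    roles = {}
    if clause["voice"] == "passive":
        # Passive promotes the patient to subject and topic; the agent is demoted to an oblique.
        roles[clause["subject"]] = {"syntactic": "subject", "semantic": "patient", "pragmatic": "topic"}
        if clause.get("oblique"):
            roles[clause["oblique"]] = {"syntactic": "oblique", "semantic": "agent", "pragmatic": "non-topic"}
    else:
        roles[clause["subject"]] = {"syntactic": "subject", "semantic": "agent", "pragmatic": "topic"}
        if clause["dative_shift"] and clause.get("object"):
            # Dative shift promotes the recipient into direct-object position.
            roles[clause["object"]] = {"syntactic": "object", "semantic": "recipient", "pragmatic": "secondary topic"}
        elif clause.get("object"):
            roles[clause["object"]] = {"syntactic": "object", "semantic": "patient", "pragmatic": "focus"}
    return roles

example = {"voice": "passive", "dative_shift": False, "subject": "the reindeer", "oblique": "by the hunter"}
print(annotate_roles(example))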
26

Liu, Chang, and Meihua Chen. "A genre-based approach in the secondary school English writing class: Voices from student-teachers in the teaching practicum." Frontiers in Psychology 13 (September 6, 2022). http://dx.doi.org/10.3389/fpsyg.2022.992360.

Abstract:
While the genre-based approach (GBA) has assumed increasing prominence in discussions of writing pedagogy for diverse classrooms, little is known about how secondary school student-teachers understand and adopt genre pedagogies in the English as a foreign language (EFL) writing class. Based on the data from semi-structured interviews and teaching materials, this study examined Chinese EFL student-teachers’ knowledge and use of genre-based writing instruction (GBWI) during the teaching practicum and explored the challenges they encountered in enacting it. The findings demonstrated that teacher informants showed some familiarity with genre pedagogies, especially in terms of scaffolding the linguistic features and semantic patterns in the focused genres. However, they were generally confused over the connection between language, content, and context, and their GBWI practice scarcely involved the explicit teaching of the linguistic and semantic choices for a specific audience and context, which gave rise to some perceived tensions in the teaching reality. Further probing has revealed the complex interplay between Chinese EFL student-teachers’ professional knowledge, perceived difficulties, and genre instructional practice in the secondary school writing class. The study concludes with practical implications for the student-teachers’ professional development of effective GBA.
27

Hamroun, Mohamed, Karim Tamine, and Benoît Crespin. "Multimodal Video Indexing (MVI): A New Method Based on Machine Learning and Semi-Automatic Annotation on Large Video Collections." International Journal of Image and Graphics, June 19, 2021, 2250022. http://dx.doi.org/10.1142/s021946782250022x.

Abstract:
Indexing video by concept is one of the most appropriate solutions to the problem of retrieving content from large video collections. It is based on an association between a concept and its corresponding visual, sound, or textual features. Establishing this kind of association is not a trivial task: it requires knowledge about the concept and its context. In this paper, we investigate a new concept detection approach to improve the performance of content-based multimedia document retrieval systems. To achieve this goal, we tackle the problem from different angles and make four contributions at various stages of the indexing process. We propose a new method for multimodal indexing based on (i) a new weakly supervised, semi-automatic method based on a genetic algorithm; (ii) the detection of concepts from the text in the videos; and (iii) the enrichment of the basic concepts using our DCM method. The semantic, enriched concepts then allow better multimodal indexing and the construction of an ontology. Finally, the different contributions are tested and evaluated on a large dataset (TRECVID 2015).
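The abstract does not spell out the genetic algorithm, so the sketch below is a generic illustration rather than the authors' method: individuals are binary masks selecting which low-level features are associated with a concept, and the fitness function is a toy stand-in for detector performance on annotated shots.

import random

random.seed(0)
N_FEATURES = 12
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1]   # pretend "ideal" association, unknown in practice

def fitness(mask):
    # Toy fitness: agreement with the hidden target; a real system would instead score
    # retrieval/detection performance on a set of annotated shots.
    return sum(m == t for m, t in zip(mask, TARGET)) / N_FEATURES

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.05):
    return [1 - g if random.random() < rate else g for g in mask]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # simple truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best mask:", best, "fitness:", round(fitness(best), 2))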
28

Alégroth, Emil, Kristian Karl, Helena Rosshagen, Tomas Helmfridsson, and Nils Olsson. "Practitioners’ best practices to Adopt, Use or Abandon Model-based Testing with Graphical models for Software-intensive Systems." Empirical Software Engineering 27, no. 5 (May 30, 2022). http://dx.doi.org/10.1007/s10664-022-10145-2.

Abstract:
Model-based testing (MBT) has been extensively researched for software-intensive systems but, despite the academic interest, adoption of the technique in industry has been sparse. This phenomenon has been observed by our industrial partners for MBT with graphical models. They perceive one cause to be a lack of evidence-based MBT guidelines that, in addition to technical guidelines, also take non-technical aspects into account. This hypothesis is supported by a lack of such guidelines in the literature. Objective: The objective of this study is to elicit, and synthesize, MBT experts’ best practices for MBT with graphical models. The results aim to give guidance to practitioners and to give researchers new insights to inspire future research. Method: An interview survey is conducted using deep, semi-structured interviews with an international sample of 17 MBT experts, in different roles, from the software industry. Interview results are synthesised through semantic equivalence analysis and verified by MBT experts from industrial practice. Results: 13 synthesised conclusions are drawn, from which 23 best-practice guidelines are derived for the adoption, use and abandonment of the technique. In addition, observations and expert insights are discussed that help explain the lack of widespread adoption of MBT with graphical models in industrial practice. Conclusions: The results cover several technical aspects of MBT as well as conclusions about process and organizational factors. These factors relate to the mindset, knowledge, organization, mandate and resources that enable the technique to be used effectively within an organization. The guidelines presented in this work complement existing knowledge and, as a primary objective, provide guidance for industrial practitioners to better succeed with MBT with graphical models.
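For readers unfamiliar with the technique the guidelines concern, the sketch below shows the core idea of MBT with graphical models in miniature; it is not any particular tool's API, and the login-flow model and function names are invented: a directed graph of application states and actions, plus a random walk that emits abstract test steps until every edge of the model has been covered.

import random

random.seed(1)

# Edges: state -> list of (action, next_state). The model below is a made-up login flow.
MODEL = {
    "LoggedOut": [("enter_valid_credentials", "LoggedIn"), ("enter_bad_credentials", "LoggedOut")],
    "LoggedIn":  [("open_settings", "Settings"), ("log_out", "LoggedOut")],
    "Settings":  [("go_back", "LoggedIn")],
}

def generate_tests(model, start, max_steps=200):
    """Walk the model randomly, returning the action sequence once all edges have been visited."""
    remaining = {(s, a) for s, edges in model.items() for a, _ in edges}
    state, path = start, []
    for _ in range(max_steps):
        if not remaining:
            break
        action, nxt = random.choice(model[state])
        path.append(action)
        remaining.discard((state, action))
        state = nxt
    return path

print(generate_tests(MODEL, "LoggedOut"))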
29

Marín, Milagros, Francisco J. Esteban, Hilario Ramírez-Rodrigo, Eduardo Ros, and María José Sáez-Lara. "An integrative methodology based on protein-protein interaction networks for identification and functional annotation of disease-relevant genes applied to channelopathies." BMC Bioinformatics 20, no. 1 (November 12, 2019). http://dx.doi.org/10.1186/s12859-019-3162-1.

Abstract:
Background: Biologically data-driven networks have become powerful analytical tools that handle massive, heterogeneous datasets generated from biomedical fields. Protein-protein interaction networks can identify the most relevant structures directly tied to biological functions. Functional enrichments can then be performed based on these structural aspects of gene relationships for the study of channelopathies. Channelopathies refer to a complex group of disorders resulting from dysfunctional ion channels with distinct polygenic manifestations. This study presents a semi-automatic workflow using protein-protein interaction networks that can identify the most relevant genes and their biological processes and pathways in channelopathies to better understand their etiopathogenesis. In addition, the clinical manifestations that are strongly associated with these genes are also identified as the most characteristic in this complex group of diseases. Results: In particular, a set of nine representative disease-related genes was detected, these being the most significant genes in relation to their roles in channelopathies. In this way we attested the implication of some voltage-gated sodium (SCN1A, SCN2A, SCN4A, SCN4B, SCN5A, SCN9A) and potassium (KCNQ2, KCNH2) channels in cardiovascular diseases, epilepsies, febrile seizures, headache disorders, neuromuscular and neurodegenerative diseases, or neurobehavioral manifestations. We also revealed the role of Ankyrin-G (ANK3) in neurodegenerative and neurobehavioral disorders, as well as the implication of these genes in other systems, such as the immunological or endocrine systems. Conclusions: This research provides a systems biology approach to extract information from interaction networks of gene expression. We show how large-scale computational integration of heterogeneous datasets, PPI network analyses, functional databases and published literature may support the detection and assessment of possible therapeutic targets in the disease. Applying our workflow makes it feasible to spot the most relevant genes and unknown relationships in channelopathies and shows its potential as a first-step approach to identify both genes and functional interactions in clinical-knowledge scenarios of target diseases. Methods: An initial gene pool is first defined by searching general databases under a specific semantic framework. From the resulting interaction network, a subset of genes is identified as the most relevant through a workflow that includes centrality measures and other filtering and enrichment databases.
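A minimal sketch of the kind of network-centrality filtering step the workflow describes, using the networkx library (assumed to be installed); the gene symbols are taken from the abstract, but the interaction edges and the cutoff rule are invented for illustration.

import networkx as nx

edges = [
    ("SCN1A", "SCN2A"), ("SCN2A", "ANK3"), ("SCN5A", "ANK3"),
    ("SCN4A", "SCN4B"), ("KCNQ2", "SCN1A"), ("KCNH2", "SCN5A"),
    ("SCN9A", "SCN1A"),
]
g = nx.Graph(edges)                         # toy PPI graph, not real interaction data

degree = nx.degree_centrality(g)
betweenness = nx.betweenness_centrality(g)

# Keep the genes at or above the median betweenness as candidate "most relevant" nodes.
cutoff = sorted(betweenness.values())[len(betweenness) // 2]
candidates = sorted((n for n, b in betweenness.items() if b >= cutoff),
                    key=lambda n: betweenness[n], reverse=True)
print(candidates[:5])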
30

Richardson, Sarah Catherine. "“Old Father, Old Artificer”: Queering Suspicion in Alison Bechdel’s Fun Home." M/C Journal 15, no. 1 (February 17, 2012). http://dx.doi.org/10.5204/mcj.396.

Abstract:
Halfway through the 2006 memoir comic Fun Home, the reader encounters a photograph that the book’s author, Alison Bechdel, found in a box of family snapshots shortly after her father’s death. The picture—“literally the core of the book, the centrefold” (Bechdel qtd. in Chute “Interview” 1006)—of Alison’s teenaged babysitter, Roy, erotically reclining on a bed in only his underwear, is the most tangible and direct evidence of her father’s sexual affairs with teenage boys, more confronting than his own earlier confession. Through this image, and a rich archive of familial texts, Bechdel chronicles her father’s thwarted desires and ambitions, probable suicide, and her own sexual and artistic coming of age.Bruce Bechdel, a married school teacher and part-time funeral director, was also an avid amateur historical restorer and connoisseur of modernist literature. Shortly after Alison came out to her parents at nineteen, Bruce was hit by a truck in what his daughter believes was an act of suicide. In Fun Home, Bechdel reads her family history suspiciously, plumbing family snapshots, letters, and favoured novels, interpreting against the grain, to trace her queer genealogy. Ultimately, she inverts this suspicious and interrogative reading, using the evidence she has gathered in order to read her father’s sexuality positively and embrace her queer and artistic inheritance from him. In The New York Times Magazine, in 2004, Charles McGrath made the suggestion that comics were “the new literary form” (24). Although comics have not yet reached widespread mainstream acceptance as a medium of merit, the burgeoning field of comics scholarship over the last fifteen years, the 2007 adaptation of Marjane Satrapi’s Persepolis into a feature film, and the addition of comics to the Best American series all testify to the widening popularity and status of the form. Memoir comics have established themselves, as Hillary Chute notes, as “the dominant mode of contemporary work” (Graphic 17). Many of these autobiographical works, including Fun Home, recount traumatic histories, employing the medium’s unique capacity to evoke the fractured and repetitive experience of the traumatised through panel structure and use of images. Comics articulate “what wasn’t permitted to be said or imagined, defying the ordinary processes of thought” (Said qtd. in Whitlock 967). The hand-drawn nature of comics emphasises the subjectivity of perception and memory, making it a particularly powerful medium for personal histories. The clear mediation of a history by the artist’s hand complicates truth claims. Comics open up avenues for both suspicious and restorative readings because their form suggests that history is always constructed and therefore not able to be confirmed as “ultimately truthful,” but also that there is no ultimate truth to be unveiled. No narrative is unmediated; a timeline is not more “pure” than a fleshed out narrative text. All narratives exclude information in order to craft a comprehensible series of events. Bechdel’s role as a suspicious reader of her father and of her own history resonates through her role as a historian and her interrogation of the ethical concerns of referential writing.Eve Kosofsky Sedgwick’s Touching Feeling: Affect, Pedagogy, Performativity critiques the hermeneutics of suspicion from a queer theory perspective, instead advocating reparative reading as a critical strategy. 
The hermeneutics of suspicion describes “the well-oiled machine of ideology critique” that has become the primary mode of critical reading over the last thirty or so years, suspiciously interpreting texts to uncover their hidden ideological biases (Felski, Uses 1). Reparative reading, on the other hand, moves away from this paranoid mode, instead valuing pleasure and “positive affects like joy and excitement” (Vincent). Sedgwick does not wholly reject suspicious reading, suggesting that it “represent[s] a way, among other ways, of seeking, finding, and organizing knowledge. Paranoia knows some things well and others poorly” (Touching 129). Felski, paraphrasing Ricoeur, notes that the hermeneutics of suspicion “adopts an adversarial sensibility to probe for concealed, repressed, or disavowed meanings” (“Suspicious” 216). In this fashion, Bechdel employs suspicious strategies to reveal her father’s hidden desires and transgressions that were obscured in the standard version of her family narrative, but ultimately moves away from such techniques to joyfully embrace her inheritance from him. Sedgwick notes that paranoid readings may only reveal that which is already known: “While there is plenty of hidden violence that requires exposure there is also, and increasingly, an ethos where forms of violence that are hypervisible from the start may be offered as an exemplary spectacle rather than remain to be unveiled as a scandalous secret” (Touching 139). This is contrary to suspicious reading’s assumption that violence is culturally shunned, hidden, and in need of “unveiling” in contemporary Western culture. It would be too obvious for Bechdel to condemn her father: gay men have been unfairly misrepresented in the American popular imagination for decades, if not longer. Through her reparative reading of him, she rejects this single-minded reduction of people to one negative type. She accepts both her father’s weaknesses and her debts to him. A reading which only sought to publicise Bruce’s homosexual affairs would lack the great depth that Bechdel finds in the slippage between her father’s identity and her own. Bechdel’s embrace of Bruce’s failings as a father, a husband, and an artist, her revisioning of his death as a positive, creative act full of agency, and her characterisation of him as a supportive forerunner, “there to catch [Alison] as [she] leapt,” (Bechdel 232) moves his story away from archetypal narratives of homosexual tragedy. Bechdel’s memoir ends with (and enacts through its virtuoso execution) her own success, and the support of those who came before her. This move mirrors Joseph Litvak’s suggestion that “the importance of ‘mistakes’ in queer reading and writing […] has a lot to do with loosening the traumatic, inevitable-seeming connection between mistakes and humiliation […] Doesn’t reading queer mean learning, among other things, that mistakes can be good rather than bad surprises?” (Sedgwick Touching 146–7). Fun Home is saturated with intertextual references and archival materials that attempt to piece together the memoir’s fractured and hidden histories. The construction of this personal history works by including familial and historical records to register the trauma of the Bechdels’ personal tragedy. The archival texts are meticulously hand-drawn, their time-worn and ragged physicality maintained to emphasise the referentiality of these documents.
Bechdel’s use of realistically drawn family photographs, complete with photo corners, suggests a family photograph album, although rather than establishing a censored and idealistic narrative, as most family albums do, the photographs are read and reproduced for their suppressed and destabilising content. Bechdel describes them as “particularly mythic” (Chute “Interview” 1009), and she plunders this symbolic richness to rewrite her family history. The archival documents function as primary texts, which stand in opposition to the deadly secrecy of her childhood home: they are concrete and evidentiary. Bechdel reads her father’s letters and photographs (and their gothic revival house) for sexual and artistic evidence, “read[ing] the text against the grain in order to draw out what it refuses to own up to” (Felski “Suspicious” 23). She interprets his letters’ baroque lyrical flourishes as indications both of his semi-repressed homosexuality and of the artistic sensibility that she would inherit and refine. Suspicion of the entire historical project marks the memoir. Philippe Lejeune describes the “Autobiographical Pact” as “a contract of identity that is sealed by the proper name” of the author (19). Bechdel does not challenge this pact fundamentally—the authoritative narrative voice of her book structures it to be read as historically truthful—but she does challenge and complicate the apparent simplicity of this referential model. Bechdel’s discussion of the referential failings of her childhood diary making—“the troubled gap between word and meaning”—casts a suspicious eye over the rest of the memoir’s historical project (Bechdel 143). She asks how language can adequately articulate experience or refer to the external world in an environment defined by secrets and silence. At the time of her childhood, it cannot—the claim to full disclosure that the memoir ultimately makes is predicated on distance and time. Bechdel simultaneously makes a claim for the historical veracity of her narrative and destabilises our assumptions around the idea of factual and retrospective truth: “When I was ten, I was obsessed with making sure my diary entries bore no false witness. But as I aged, hard facts gave way to vagaries of emotion and opinion. False humility, overwrought penmanship, and self-disgust began to cloud my testimony […] until […] the truth is barely perceptible behind a hedge of qualifiers, encryption, and stray punctuation” (Bechdel 169). That which is “unrepresentable” is simultaneously represented and denied. The comics medium itself, with its simultaneous graphic and textual representation, suggests the unreliability of any one means of representation. Of Bechdel’s diaries, Jared Gardner notes, “what develops over the course of her diary […] is an increasing sense that text and image are each alone inadequate to the task, and that some merger of the two is required to tell the story of the truth, and the truth of the story” (“Archives” 3). As the boyishly dressed Alison urges her father, applying scare-quoted “bronzer,” to hurry up, Bechdel narrates, “my father began to seem morally suspect to me long before I knew that he actually had a dark secret” (16). Alison is presented as her father’s binary opposite, “butch to his nelly. Utilitarian to his aesthete,” (15) and, as a teenager, frames his love of art and extravagance as debauched. This clear distinction soon becomes blurred, as Alison and Bruce’s similarities begin to overwhelm their differences.
The huge drawn hand shown holding the photograph of Roy, in the memoir’s “centrefold,” more than twice life-size, reproduces the reader’s hand holding the book. We are placed in Bechdel’s, and by extension her father’s, role, as the illicit and transgressive voyeurs of the erotic spectacle of Roy’s body, and as the possessors and consumers of hidden, troubling texts. At this point, Bechdel begins to take her queer reading of this family archive and use it to establish a strong connection between her initially unsympathetic father and herself. Despite his neglect of his children, and his self-involvement, Bechdel claims him as her spiritual and creative father, as well as her biological one. This reparative embrace moves Bruce from the role of criticised outsider in Alison’s world to one of queer predecessor. Bechdel figures herself and her father as doubled aesthetic and erotic observers and appreciators. Ann Cvetkovich suggests that “mimicking her father as witness to the image, Alison is brought closer to him only at the risk of replicating his illicit sexual desires” (118). For Alison, consuming her father’s texts connects her with him in a positive yet troubling way: “My father’s end was my beginning. Or more precisely, […] the end of his lie coincided with the beginning of my truth” (Bechdel 116–17). The final panel of the same chapter depicts Alison’s hands holding drawn photos of herself at twenty-one and Bruce at twenty-two. The snapshots overlap, and Bechdel lists the similarities between the photographs, concluding, “it’s about as close as a translation can get” (120). Through the “vast network of transversals” (102) that is their life together, Alison and Bruce are, paradoxically, twinned “inversions of one another” (98). Sedgwick suggests that “inversion models […] locate gay people—whether biologically or culturally—at the threshold between genders” (Epistemology 88). Bechdel’s focus on Proust’s “antiquated clinical term” both neatly fits her thematic expression of Alison and Bruce’s relationship as doubles (“Not only were we inverts. We were inversions of one another”) and situates them in a space of possibility and liminality (97-98).Bechdel rejects a wholly suspicious approach by maintaining and embracing the aporia in her and her father’s story, an essential element of memory. According to Chute, Fun Home shows “that the form of comics crucially retains the insolvable gaps of family history” (Graphic 175). Rejecting suspicion involves embracing ambiguity and unresolvability. It concedes that there is no one authentic truth to be neatly revealed and resolved. Fun Home’s “spatial and semantic gaps […] express a critical unknowability or undecidability” (Chute Graphic 182). Bechdel allows the gaps in her narrative to remain, refusing to “pretend to know” Bruce’s “erotic truth” (230), an act to which suspicious reading is diametrically opposed. Suspicious reading wishes to close all gaps, to articulate silences and literalise mysteries, and Bechdel’s narrative progressively moves away from this mode. The medium of comics uses words and images together, simultaneously separate and united. Similarly, Alison and Bruce are presented as opposites: butch/sissy, artist/dilettante. Yet the memoir’s conclusion presents Alison and Bruce in a loving, reciprocal relationship. The final page of the book has two frames: one of Bruce’s perspective in the moment before his death, and one showing him contentedly playing with a young Alison in a swimming pool—death contrasted with life. 
The gaps in the narrative are not closed but embraced. Bechdel’s “tricky reverse narration” (232) suggests a complex mode of reading that allows both Bechdel and the reader to perceive Bruce as a positive forebear. Comics as a medium pay particular visual attention to absence and silence. The gutter, the space between panels, functions in a way that is not quite paralleled by silence in speech and music, and spaces and line breaks in text—after all, there are still blank spaces between words and elements of the image within the comics panel. The gutter is the space where closure occurs, allowing readers to infer causality and often the passing of time (McCloud 5). The gutters in this book echo the many gaps in knowledge and presence that mark the narrative. Fun Home is impelled by absence on a practical level: the absence of the dead parent, the absence of a past that was unspoken of and yet informed every element of Alison’s childhood. Bechdel’s hyper-literate narration steers the reader through the memoir and acknowledges its own aporia. Fun Home “does not seek to preserve the past as it was, as its archival obsession might suggest, but rather to circulate ideas about the past with gaps fully intact” (Chute Graphic 180). Bechdel, while making her own interpretation of her father’s death clear, does not insist on her reading. While Bruce attempted to restore his home into a perfect, hermetically sealed simulacrum of nineteenth-century domestic glamour, Bechdel creates a postmodern text that slips easily between a multiplicity of time periods, opening up the absences, failures, and humiliations of her story. Chute argues: “Bruce Bechdel wants the past to be whole; Alison Bechdel makes it free-floating […] She animates the past in a book that is […] a counterarchitecture to the stifling, shame-filled house in which she grew up: she animates and releases its histories, circulating them and giving them life even when they devolve on death” (Graphic 216). Bechdel employs a literary process of detection in the revelation of both of their sexualities. Her archive is constructed like an evidence file; through layered tableaux of letters, novels and photographs, we see how Bruce’s obsessive love of avant-garde literature functions as an emblem of his hidden desire; Alison discovers her sexuality through the memoirs of Colette and the seminal gay pride manifestos of the late 1970s. Watson suggests that the “panels, gutters, and page, as bounded and delimited visual space, allow texturing of the two-dimensional image through collage, counterpoint, the superimposition of multiple media, and self-referential gestures […] Bechdel's rich exploitation of visual possibilities places Fun Home at an autobiographical interface where disparate modes of self-inscription intersect and comment upon one another” (32). Alison’s role as a literary and literal detective of concealed sexualities and of texts is particularly evident in the scene when she realises that she is gay. Wearing a plaid trench coat with the collar turned up like a private eye, she stands in the campus bookshop reading a copy of Word is Out, with a shadowy figure in the background (one whose silhouette resembles her father’s teenaged lover, Roy), and a speech bubble with a single exclamation mark articulating her realisation.
While “the classic detective novel […] depends on […] a double plot, telling the story of a crime via the story of its investigation” (Felski “Suspicious” 225), Fun Home tells the story of Alison’s coming out and genesis as an artist through the story of her father’s brief life and thwarted desires. On the memoir’s final page, revisioning the artifactual photograph that begins her final chapter, Bechdel reclaims her father from what a cool reading of the historical record (adultery with adolescents, verbally abusive, emotionally distant) might encourage readers to superficially assume. Cvetkovich articulates the way Fun Home uses: “Ordinary experience as an opening onto revisionist histories that avoid the emotional simplifications that can sometimes accompany representations of even the most unassimilable historical traumas […] Bechdel refuses easy distinctions between heroes and perpetrators, but doing so via a figure who represents a highly stigmatised sexuality is a bold move” (125). Rejecting paranoid strategies, Bechdel is less interested in classification and condemnation of her father than she is in her own tangled relation to him. She adopts a reparative strategy by focusing on the strands of joy and identification in her history with her father, rather than simply making a paranoid attack on his character. She occludes the negative possibilities and connotations of her father’s story to end on a largely positive note: “But in the tricky reverse narration that impels our entwined stories, he was there to catch me when I leapt” (232). In the final moment of her text Bechdel moves away from the memoir’s earlier destabilising actions, which forced the reader to regard Bruce with suspicion, as the keeper of destructive secrets and as a menacing presence in the Bechdels’ family life. The final image is of complete trust and support. His death is rendered not as chaotic and violent as it historically was, but calm, controlled, beneficent. Bechdel has commented, “I think it’s part of my father’s brilliance, the fact that his death was so ambiguous […] The idea that he could pull that off. That it was his last great wheeze. I want to believe that he went out triumphantly” (qtd. in Burkeman). The revisioning of Bruce’s death as a suicide and the reverse narration which establishes the accomplished artist and writer Bechdel’s creative and literary debt to him function as a redemption. Bechdel queers her suspicious reading of her family history in order to reparatively reclaim her father’s historical and personal connection with herself. The narrative testifies to Bruce’s failings as a father and husband, and confesses to Alison’s own complicity in her father’s transgressive desires and artistic interest, and to her inability to represent the past authoritatively and with complete accuracy. Bechdel both engages in and ultimately rejects a suspicious interpretation of her family and personal history. As Gardner notes, “only by allowing the past to bleed into history, fact to bleed into fiction, image into text, might we begin to allow our own pain to bleed into the other, and more urgently, the pain of the other to bleed into ourselves” (“Autobiography’s” 23). Suspicion itself is queered in the reparative revisioning of Bruce’s life and death, and in the “tricky reverse narration” (232) of the künstlerroman’s joyful conclusion.
References
Bechdel, Alison. Fun Home: A Family Tragicomic. New York: Mariner Books, 2007.
Burkeman, Oliver. “A life stripped bare.” The Guardian 16 Oct. 2006: G2 16.
Cvetkovich, Ann. “Drawing the Archive in Alison Bechdel’s Fun Home.” Women’s Studies Quarterly 36.1/2 (2008): 111–29.
Chute, Hillary L. Graphic Women: Life Narrative and Contemporary Comics. New York: Columbia UP, 2010.
---. “Interview with Alison Bechdel.” MFS Modern Fiction Studies 52.4 (2006): 1004–13.
Felski, Rita. Uses of Literature. Malden: Blackwell Publishing, 2008.
---. “Suspicious Minds.” Poetics Today 32.3 (2011): 215–34.
Gardner, Jared. “Archives, Collectors, and the New Media Work of Comics.” MFS Modern Fiction Studies 52.4 (2006): 787–806.
---. “Autobiography’s Biography 1972-2007.” Biography 31.1 (2008): 1–26.
Lejeune, Philippe. On Autobiography. Ed. Paul John Eakin. Trans. Katherine Leary. Minneapolis: University of Minnesota Press, 1989.
McCloud, Scott. Understanding Comics: The Invisible Art. New York: HarperPerennial, 1994.
McGrath, Charles. “Not Funnies.” New York Times Magazine 11 Jul. 2004: 24–56.
Sedgwick, Eve Kosofsky. Epistemology of the Closet. Berkeley: University of California Press, 2008.
---. Touching Feeling. Durham: Duke University Press, 2003.
Vincent, J. Keith. “Affect and Reparative Reading.” Honoring Eve. Ed. J. Keith Vincent. Boston University College of Arts and Sciences. October 31 2009. 25 May 2011. ‹http://www.bu.edu/honoringeve/panels/affect-and-reparative-reading/?›.
Watson, Julia. “Autographic disclosures and genealogies of desire in Alison Bechdel’s Fun Home.” Biography 31.1 (2008): 27–59.
Whitlock, Gillian. “Autographics: The Seeing ‘I’ of the Comics.” Modern Fiction Studies 52.4 (2006): 965–79.
