Academic literature on the topic 'Identification du script'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Identification du script.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Identification du script"

1

Ubul, Kurban, Gulzira Tursun, and Alim Aysa. "Recent Advances in Script Identification." Applied Mechanics and Materials 610 (August 2014): 734–40. http://dx.doi.org/10.4028/www.scientific.net/amm.610.734.

Full text
Abstract:
There are a variety of different scripts in the world. Almost every country has its own languages and scripts, which can be distinguished from one another in different respects. It is therefore essential to identify the different scripts present in multi-lingual, multi-script documents. In recent years, different kinds of approaches have been developed for script identification and have achieved promising results. In this paper, an overview of script identification is given under different categories: script systems, extracted features and classification methods. Earlier research and the future prospects of this field are discussed. It is clear that research in this area is not yet satisfactory and that more work remains to be done.
APA, Harvard, Vancouver, ISO, and other styles
2

Singh, Pawan Kumar, Ram Sarkar, and Mita Nasipuri. "Word-Level Script Identification Using Texture Based Features." International Journal of System Dynamics Applications 4, no. 2 (April 2015): 74–94. http://dx.doi.org/10.4018/ijsda.2015040105.

Full text
Abstract:
Script identification has been an appealing research interest in the field of document image analysis during the last few decades. Accurate recognition of the script is paramount to many post-processing steps such as automated document sorting, machine translation and searching for text written in a particular script in a multilingual environment. For automatic processing of such documents through Optical Character Recognition (OCR) software, it is necessary to identify the script of the different words of the documents before feeding them to the OCR for the individual scripts. In this paper, a robust word-level handwritten script identification technique is proposed using texture-based features to identify words written in any of seven popular scripts, namely Bangla, Devanagari, Gurumukhi, Malayalam, Oriya, Telugu, and Roman. The texture-based features comprise a combination of Histograms of Oriented Gradients (HOG) and moment invariants. The technique has been tested on 7000 handwritten text words, with each script contributing 1000 words. Based on the identification accuracies and statistical significance testing of seven well-known classifiers, the Multi-Layer Perceptron (MLP) was chosen as the final classifier and then tested comprehensively using different folds and different epoch sizes. The overall accuracy of the system is found to be 94.7% using a 5-fold cross-validation scheme, which is quite impressive considering the complexities and shape variations of the said scripts. This is an extended version of the paper described in (Singh et al., 2014).
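The pipeline sketched below is a minimal, hedged illustration of the kind of word-level approach the abstract describes: HOG texture features fed to an MLP and evaluated with 5-fold cross-validation. The parameter values, image size and helper names are illustrative assumptions, not the authors' settings, and the moment-invariant part of the paper's feature set is omitted.

```python
# Minimal sketch (assumed parameters): HOG features + MLP for word-level
# handwritten script identification, evaluated with 5-fold cross-validation.
# The moment-invariant features of the paper are not reproduced here.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def hog_features(word_img, size=(64, 128)):
    """Resize a grayscale word image and compute a HOG descriptor."""
    word_img = resize(word_img, size, anti_aliasing=True)
    return hog(word_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def evaluate(word_images, script_labels):
    """word_images: list of 2-D grayscale arrays; script_labels: script name per word."""
    X = np.array([hog_features(im) for im in word_images])
    clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500, random_state=0)
    return cross_val_score(clf, X, script_labels, cv=5).mean()
```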
APA, Harvard, Vancouver, ISO, and other styles
3

Obaidullah, Sk Md, Chitrita Goswami, K. C. Santosh, Nibaran Das, Chayan Halder, and Kaushik Roy. "Separating Indic Scripts with matra for Effective Handwritten Script Identification in Multi-Script Documents." International Journal of Pattern Recognition and Artificial Intelligence 31, no. 05 (February 27, 2017): 1753003. http://dx.doi.org/10.1142/s0218001417530032.

Full text
Abstract:
We present a novel approach for separating Indic scripts with ‘matra’, which is used as a precursor to advance and/or ease subsequent handwritten script identification in multi-script documents. In our study, among state-of-the-art features and classifiers, an optimized fractal geometry analysis and a random forest are found to be the best performers in distinguishing scripts with ‘matra’ from their counterparts. For validation, a total of 1204 document images are used, where two scripts with ‘matra’, Bangla and Devanagari, are considered as positive samples and two other scripts, Roman and Urdu, as negative samples. With this precursor, overall script identification performance can be improved by more than 5.13% in accuracy and made 1.17 times faster in processing time compared to a conventional system.
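As a rough companion to the abstract above, the sketch below pairs a single box-counting fractal-dimension feature with a random forest to separate ‘matra’ scripts (Bangla, Devanagari) from non-‘matra’ scripts (Roman, Urdu). It is a hedged stand-in only: the optimized fractal geometry analysis and tuning of the paper are not reproduced, and the box sizes are assumptions.

```python
# Hedged sketch: box-counting fractal dimension + random forest for the
# matra / non-matra precursor classification. Box sizes are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def box_counting_dimension(binary_img):
    """Estimate the fractal dimension of a binarized document/word image."""
    sizes = [2, 4, 8, 16, 32]
    counts = []
    for s in sizes:
        h, w = binary_img.shape
        boxes = 0
        for i in range(0, h, s):          # count boxes containing foreground pixels
            for j in range(0, w, s):
                if binary_img[i:i + s, j:j + s].any():
                    boxes += 1
        counts.append(max(boxes, 1))      # guard against log(0)
    # Slope of log(count) vs. log(1/box size) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# features: one fractal-dimension (or richer) row per image;
# labels: 1 = script with matra (Bangla/Devanagari), 0 = Roman/Urdu.
matra_classifier = RandomForestClassifier(n_estimators=100, random_state=0)
```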
APA, Harvard, Vancouver, ISO, and other styles
4

Mahajan, Shilpa, and Rajneesh Rani. "Word Level Script Identification Using Convolutional Neural Network Enhancement for Scenic Images." ACM Transactions on Asian and Low-Resource Language Information Processing 21, no. 4 (July 31, 2022): 1–29. http://dx.doi.org/10.1145/3506699.

Full text
Abstract:
Script identification from complex and colorful images is an integral part of text recognition and classification systems. Such images pose a twofold challenge: (1) challenges related to the camera, such as blurring, non-uniform illumination and noisy backgrounds, and (2) challenges related to text shape, orientation and size. Existing work in this area is largely focused on non-Indian scripts, whereas the Gurumukhi, Hindi, and English scripts play a vital role in communication among Indians and foreigners. In this article, we address the above challenges in the field of script identification. Additionally, we introduce a new dataset that contains Hindi, Gurumukhi, and English scripts from scenic images collected from different sources. We also propose a CNN-based model that is capable of distinguishing between the scripts with good accuracy. The performance of the method has been evaluated on our own dataset, NITJDATASET, and on other benchmark datasets available for Indian scripts, namely CVSI-2015 (Task 1 and Task 4) and ILST. This work is an extension towards finding the script from a strict text background.
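For readers who want a concrete picture, the snippet below sketches a small CNN that maps a grayscale word crop to scores over three scripts (Hindi, Gurumukhi, English). The architecture and input size are illustrative assumptions, not the authors' network; the adaptive pooling layer is simply a convenient way to tolerate variable crop sizes.

```python
# Hedged sketch (assumed architecture): a compact CNN for three-way script
# classification of cropped word images from scene text.
import torch
import torch.nn as nn

class ScriptCNN(nn.Module):
    def __init__(self, num_scripts=3):        # Hindi, Gurumukhi, English
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),           # tolerates variable word-crop sizes
        )
        self.classifier = nn.Linear(128, num_scripts)

    def forward(self, x):                      # x: (batch, 1, H, W) grayscale crops
        return self.classifier(self.features(x).flatten(1))

logits = ScriptCNN()(torch.randn(4, 1, 64, 128))   # -> (4, 3) per-script scores
```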
APA, Harvard, Vancouver, ISO, and other styles
5

Singh, Pawan Kumar, Supratim Das, Ram Sarkar, and Mita Nasipuri. "Line Parameter based Word-Level Indic Script Identification System." International Journal of Computer Vision and Image Processing 6, no. 2 (July 2016): 18–41. http://dx.doi.org/10.4018/ijcvip.2016070102.

Full text
Abstract:
Since Optical Character Recognition (OCR) engines are usually script-dependent, automatic text recognition in a multi-script environment requires a pre-processing module that identifies the script before passing the text to the respective OCR engine. The task becomes more challenging when it deals with handwritten documents, which remain a less explored research area. In this paper, a line parameter based approach is presented to identify handwritten words written in eight popular scripts, namely Bangla, Devanagari, Gujarati, Gurumukhi, Manipuri, Oriya, Urdu, and Roman. A combination of the Hough transform (HT) and the Distance transform (DT) is used to extract directional spatial features based on the line parameter. Experiments are performed at word level using multiple classifiers on a dataset of 12000 handwritten word images, and the Multi Layer Perceptron (MLP) classifier is found to be the best performing classifier, showing an identification accuracy of 95.28%. The performance of the present technique is also compared with those of other state-of-the-art script identification methods on the same database.
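The fragment below is a hedged sketch of how Hough-transform and distance-transform statistics might be pulled from a binarized word image with OpenCV; the actual feature definitions of the paper are not reproduced, and the histogram binning and Hough thresholds are assumptions.

```python
# Hedged sketch: line-orientation (Hough) and distance-transform statistics
# as word-level features. Thresholds and bin counts are assumptions.
import cv2
import numpy as np

def line_parameter_features(binary_word):
    """binary_word: uint8 image, strokes = 255, background = 0."""
    lines = cv2.HoughLinesP(binary_word, 1, np.pi / 180, threshold=20,
                            minLineLength=10, maxLineGap=3)
    angles = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
    angle_hist, _ = np.histogram(angles, bins=8, range=(-90, 90))
    # Distance transform of the background measures spatial spread around strokes.
    dist = cv2.distanceTransform(255 - binary_word, cv2.DIST_L2, 3)
    return np.concatenate([angle_hist, [dist.mean(), dist.std(), dist.max()]])
```

The resulting vectors can then be fed to any of the classifiers the paper compares, e.g. an MLP.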
APA, Harvard, Vancouver, ISO, and other styles
6

Busch, A., W. W. Boles, and S. Sridharan. "Texture for script identification." IEEE Transactions on Pattern Analysis and Machine Intelligence 27, no. 11 (November 2005): 1720–32. http://dx.doi.org/10.1109/tpami.2005.227.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Behrad, Alireza, Malike Khoddami, and Mehdi Salehpour. "A novel framework for Farsi and latin script identification and Farsi handwritten digit recognition." Journal of Automatic Control 20, no. 1 (2010): 17–25. http://dx.doi.org/10.2298/jac1001017b.

Full text
Abstract:
Optical character recognition is an important task for converting handwritten and printed documents to digital format. In multilingual systems, a necessary step before the OCR algorithm is script identification. In this paper, novel methods for script language identification and for the recognition of Farsi handwritten digits are proposed. Our method for script identification is based on curvature scale space features. The proposed features are rotation and scale invariant and can be used to identify scripts with different fonts. We assume that bilingual documents may contain Farsi and English words and characters together; therefore the algorithm is designed to recognize scripts at the connected component level. The output of the recognition is then generalized to the word, line and page levels. We use a cluster-based weighted support vector machine for the classification and recognition of Farsi handwritten digits that is reasonably robust against rotation and scaling. The algorithm extracts the required features using principal component analysis (PCA) and linear discriminant analysis (LDA). The extracted features are then classified using a new classification algorithm called cluster-based weighted SVM (CBWSVM). The experimental results show the promise of the algorithms.
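The pipeline below is a minimal sketch of the PCA + LDA feature-reduction stage feeding a classifier for the digit-recognition part; a plain SVM stands in for the cluster-based weighted SVM (CBWSVM), which is not implemented here, and the component counts are assumptions.

```python
# Hedged sketch: PCA + LDA + SVM for Farsi handwritten digit recognition.
# A standard SVC stands in for the paper's cluster-based weighted SVM.
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

digit_pipeline = make_pipeline(
    PCA(n_components=50),                          # principal component analysis
    LinearDiscriminantAnalysis(n_components=9),    # at most n_classes - 1 = 9 for ten digits
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
# digit_pipeline.fit(X_train, y_train)
# predictions = digit_pipeline.predict(X_test)
```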
APA, Harvard, Vancouver, ISO, and other styles
8

Lu, Liqiong, Dong Wu, Ziwei Tang, Yaohua Yi, and Faliang Huang. "Mining discriminative patches for script identification in natural scene images." Journal of Intelligent & Fuzzy Systems 40, no. 1 (January 4, 2021): 551–63. http://dx.doi.org/10.3233/jifs-200260.

Full text
Abstract:
This paper focuses on script identification in natural scene images. Traditional CNNs (Convolutional Neural Networks) cannot solve this problem perfectly, for two reasons: one is the arbitrary aspect ratios of scene images, which cause much difficulty for traditional CNNs that take a fixed-size image as input; the other is that some scripts with minor differences are easily confused because they share a subset of characters with the same shapes. We propose a novel approach combining a Score CNN, an Attention CNN and patches. The Attention CNN is used to determine whether a patch is a discriminative patch and to calculate the contribution weight of the discriminative patch to the script identification of the whole image. The Score CNN takes a discriminative patch as input and predicts the score of each script type. First, patches of the same size are extracted from the scene images. Second, these patches are used as inputs to the Score CNN and the Attention CNN to train two patch-level classifiers. Finally, the results of multiple discriminative patches extracted from the same image via the above two classifiers are fused to obtain the script type of the image. Using patches of the same size as inputs to a CNN avoids the problems caused by the arbitrary aspect ratios of scene images, and the trained classifiers can mine discriminative patches to accurately identify some confusing scripts. The experimental results show the good performance of our approach on four public datasets.
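The fusion step described above (per-patch script scores weighted by per-patch attention) can be written compactly; the sketch below assumes both networks are already trained and returns the fused script index. Softmax normalization of the attention weights is an assumption.

```python
# Hedged sketch: fuse Score-CNN outputs with Attention-CNN weights over the
# discriminative patches of one image.
import numpy as np

def fuse_patch_predictions(patch_scores, patch_weights):
    """patch_scores: (n_patches, n_scripts) Score-CNN outputs.
    patch_weights: (n_patches,) Attention-CNN contribution weights."""
    w = np.exp(patch_weights - patch_weights.max())   # softmax-normalize (assumption)
    w /= w.sum()
    image_score = (w[:, None] * patch_scores).sum(axis=0)
    return int(np.argmax(image_score))                # predicted script index
```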
APA, Harvard, Vancouver, ISO, and other styles
9

Schellens, Tammy, Hilde Van Keer, Bram De Wever, and Martin Valcke. "The effects of two computer-supported collaborative learning (CSCL) scripts on university students' critical thinking." Psicologia Escolar e Educacional 11, spe (December 2007): 83–92. http://dx.doi.org/10.1590/s1413-85572007000300008.

Full text
Abstract:
The present study focuses on the use of two different types of scripts as possible ways to structure university students' discourse in asynchronous discussion groups and consequently promote their learning. More specifically, the aim of the study is to determine how requiring students to label their contributions by means of De Bono's Thinking Hats (script 1) and Weinberger's script for the construction of argumentation sequences (script 2) affects the ongoing critical thinking processes reflected in the discussion. The results suggest that both scripts successfully facilitated critical thinking. The results showed that the labeling condition (script 1) surpasses the argumentation script (script 2) with regard to the overall depth of critical thinking in the discussion, and the critical thinking processes during the stages of problem identification and problem integration in particular. Further, it can be argued that students in the labeling condition are engaged in more focused, more critical, and more practically-oriented discussions.
APA, Harvard, Vancouver, ISO, and other styles
10

Pal, U., and B. B. Chaudhuri. "Identification of different script lines from multi-script documents." Image and Vision Computing 20, no. 13-14 (December 2002): 945–54. http://dx.doi.org/10.1016/s0262-8856(02)00101-4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Dissertations / Theses on the topic "Identification du script"

1

Khlif, Wafa. "Multi-lingual scene text detection based on convolutional neural networks." Thesis, La Rochelle, 2022. http://www.theses.fr/2022LAROS022.

Full text
Abstract:
This dissertation explores text detection approaches via deep learning techniques towards the goal of mining and retrieving weakly structured contents in scene images. First, it presents a method for detecting text in scene images based on multi-level connected component (CC) analysis and learning text component features via a convolutional neural network (CNN), followed by a graph-based grouping of overlapping text boxes. The features of the resulting raw text/non-text components at different granularity levels are learned via a CNN. The second contribution is inspired by the YOLO real-time object detection system and performs text detection and script identification simultaneously, casting the multi-script text detection task as an object detection problem in which the object is the script of the text. Joint text detection and script identification is realized in a holistic approach using a single convolutional neural network whose input is the full image and whose outputs are the text bounding boxes and their script; textual feature extraction and script classification are performed jointly via the CNN. The experimental evaluation of these methods is performed on the Multi-Lingual Text (MLT) dataset, to whose construction we contributed. It consists of natural scene images with embedded text, such as street signs, advertisement boards, passing vehicles and user photos from microblogs; this kind of image is one of the most frequently encountered image types on the internet, namely images with embedded text in social media.
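To make the 'script as object class' idea concrete, the sketch below decodes hypothetical detector outputs into boxes labelled with a script; the detector itself (a YOLO-like CNN) is abstracted away, and the class list, threshold and field names are illustrative assumptions, not the thesis implementation.

```python
# Hedged sketch: multi-script text detection cast as object detection, where
# each detected box carries a script class. Names and classes are illustrative.
from dataclasses import dataclass
from typing import Iterable, List

SCRIPTS = ["Arabic", "Latin", "Chinese", "Bangla"]   # example MLT-style script classes

@dataclass
class TextDetection:
    x: float
    y: float
    w: float
    h: float
    script: str
    confidence: float

def decode_predictions(raw_boxes: Iterable) -> List[TextDetection]:
    """raw_boxes: iterable of (x, y, w, h, class_id, confidence) tuples from the network."""
    return [TextDetection(x, y, w, h, SCRIPTS[int(c)], conf)
            for x, y, w, h, c, conf in raw_boxes if conf > 0.5]
```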
APA, Harvard, Vancouver, ISO, and other styles
2

Wahlberg, Fredrik. "Interpreting the Script : Image Analysis and Machine Learning for Quantitative Studies of Pre-modern Manuscripts." Doctoral thesis, Uppsala universitet, Avdelningen för visuell information och interaktion, 2017. http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-314211.

Full text
Abstract:
The humanities have for a long time been a collection of fields that have not gained from the advancements in computational power predicted by Moore's law. Fields like medicine, biology, physics, chemistry, geology and economics have all developed quantitative tools that take advantage of the exponential increase in processing power over time. Recent advances in computerized pattern recognition, in combination with a rapid digitization of historical document collections around the world, are about to change this. The first part of this dissertation focuses on constructing a full system for finding handwritten words in historical manuscripts. A novel segmentation algorithm is presented, capable of finding and separating text lines in pre-modern manuscripts. Text recognition is performed by translating the image data of the text lines into sequences of numbers, called features. Commonly used features are analysed and evaluated on manuscript sources from the Uppsala University library Carolina Rediviva and the US Library of Congress. Decoding the text in the vast number of photographed manuscripts from our libraries makes computational linguistics and social network analysis directly applicable to historical sources. Hence, text recognition is considered a key technology for the future of computerized research methods in the humanities. The second part of this thesis addresses digital palaeography, using a computer's superior capacity for endlessly performing measurements on ink stroke shapes. Objective criteria of character shapes only partly catch what a palaeographer uses for assessing similarity: the palaeographer often gets a feel for the scribe's style, which is, however, hard to quantify. A method for identifying the scribal hands of a pre-modern copy of the revelations of Saint Bridget of Sweden, using semi-supervised learning, is presented. Methods for production year estimation are presented and evaluated on a collection of close to 11,000 medieval charters. The production dates are estimated using a Gaussian process, where the uncertainty is inferred together with the most likely production year. In summary, this dissertation presents several novel methods related to image analysis and machine learning. In combination with recent advances in the field, they enable efficient computational analysis of very large collections of historical documents.
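For the dating part described above, a Gaussian process regressor naturally returns a most likely production year together with an uncertainty; the sketch below shows that idea with scikit-learn, with an assumed kernel and unspecified handwriting features.

```python
# Hedged sketch: production-year estimation with Gaussian process regression,
# returning a date estimate plus its uncertainty. Kernel choice is an assumption.
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
# gp.fit(X_charters, known_years)                            # dated training charters
# year_mean, year_std = gp.predict(X_new, return_std=True)   # estimate +/- uncertainty
```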
APA, Harvard, Vancouver, ISO, and other styles
3

Rask, Ulf, and Pontus Mannestig. "Improvement of hardware basic testing : Identification and development of a scripted automation tool that will support hardware basic testing." Thesis, Blekinge Tekniska Högskola, Institutionen för programvaruteknik och datavetenskap, 2002. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-3392.

Full text
Abstract:
In the ever-increasing development pace, circuits and hardware are no exception. Hardware designs grow and circuits get more complex at the same time as market pressure lowers the expected time-to-market. In this rush, verification methods often lag behind. Hardware manufacturers must be aware of the importance of total verification if they want to avoid quality flaws and broken deadlines, which in the long run lead to delayed time-to-market, bad publicity and a decreasing market share. This paper discusses how a basic testing team may use an automated test environment in order to establish intellectual control over the testing and verification in a large hardware project. Company-specific factors that influence the design of an automated test environment are analyzed and a suitable environment is suggested. A prototype of the environment is constructed so that the project results may be evaluated in the real world. The thesis supports the academic field in stating that large chips are hard to verify and that script-automation tools are one way to make verification of larger chips possible. Hardware verification should be done without complicated and untested software so that the debugging process only has the hardware to deal with. The thesis also indicates that an automated test tool increases the test rate, provides better test coverage and makes regression testing feasible.
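As a purely illustrative aside, a scripted regression runner of the kind argued for can be very small; the sketch below is not the thesis' tool (whose design is company-specific), and the test names and commands are placeholders.

```python
# Hedged sketch: a minimal scripted regression-test runner for hardware basic
# testing. Commands and suite names are placeholders, not the thesis' environment.
import subprocess

TESTS = {
    "register_access": ["./hw_test", "--suite", "registers"],
    "memory_interface": ["./hw_test", "--suite", "memory"],
}

def run_regression():
    results = {}
    for name, cmd in TESTS.items():
        proc = subprocess.run(cmd, capture_output=True, text=True)
        results[name] = (proc.returncode == 0)   # pass if the test binary exits cleanly
    return results
```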
APA, Harvard, Vancouver, ISO, and other styles
4

Benjelil, Mohamed. "Analyse d'images de documents complexes et identification de scripts : cas des documents administratifs." La Rochelle, 2010. http://www.theses.fr/2010LAROS299.

Full text
Abstract:
This thesis describes our work in the field of multilingual, multi-script complex document image segmentation, in the case of official documents, using a texture-based approach. Two different subjects are presented: (1) document image segmentation; (2) Arabic and Latin script identification for printed and/or handwritten text. The developed approaches deal with incoming document flows that do not obey a specific model. Chapter 1 presents the problem and the state of the art of complex document image segmentation and script identification. The work described in Chapter 2 aims at finding new models for complex multilingual multi-script document image segmentation: algorithms have been developed for segmenting document images into homogeneous regions, identifying the script of the textual blocks contained in a document image, and segmenting out a particular object in an image. The approach is based on classifying text and non-text regions by means of steerable pyramid features. Chapter 3 describes our work on official document image segmentation based on steerable pyramid features. Chapter 4 describes our work on Arabic and Latin script identification for printed and/or handwritten text. Experimental results show that the proposed approaches perform consistently well on large sets of complex document images. Examples of application, performance tests and comparative studies are also presented.
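The sketch below illustrates the idea of multi-orientation texture features for text/non-text region classification; an oriented Gabor filter bank is used as a simple stand-in for the steerable pyramid decomposition of the thesis, so it is analogous rather than equivalent, and the frequencies are assumptions.

```python
# Hedged sketch: oriented texture features for region classification. A Gabor
# filter bank stands in for the thesis' steerable pyramid decomposition.
import numpy as np
from skimage.filters import gabor

def oriented_texture_features(region, frequencies=(0.1, 0.2), n_orientations=4):
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(region, frequency=f, theta=theta)
            mag = np.hypot(real, imag)                # response magnitude per pixel
            feats.extend([mag.mean(), mag.var()])     # summary statistics as features
    return np.array(feats)   # feed to any region classifier (e.g. an SVM)
```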
APA, Harvard, Vancouver, ISO, and other styles
5

Pruss, Nicole. "The effects of using a scripted or unscripted interview in forensic interviews with interpreters." To access this resource online via ProQuest Dissertations and Theses @ UTEP, 2008. http://0-proquest.umi.com.lib.utep.edu/login?COPT=REJTPTU0YmImSU5UPTAmVkVSPTI=&clientId=2515.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Lao, Weng Chon. "Degradation of scrap tyres by bacillus sp.–optimization of major environmental parameters and identification of potential growth substrates." Thesis, University of Macau, 2018. http://umaclib3.umac.mo/record=b3950673.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Dodier, Olivier. "Les adolescents en situation de témoignage oculaire : d’observations de terrain à l’étude d’un protocole d’audition judiciaire en laboratoire." Thesis, Université Clermont Auvergne‎ (2017-2020), 2017. http://www.theses.fr/2017CLFAL019/document.

Full text
Abstract:
The goal of this thesis was to provide recommendations to practitioners involved in the justice system for interviewing adolescent witnesses and/or victims, a population little studied in laboratory analogue contexts. To do so, five studies were conducted. The first two studies aimed at establishing an inventory of French investigators' witness interview practices with young people. We observed that adolescents are a specific population, in particular regarding the use of suggestive questions: this type of question occurred more often right after the adolescent had just developed a statement, which was not the case with younger children. This result might reveal that, during investigative interviews with children and adolescents, investigators have different aims depending on the age of the young witness (Study 1). However, international recommendations strongly discourage the use of suggestions because of the immediate and delayed memory biases that may occur. Secondly, we showed that adolescents represent most of the underage witnesses and/or victims in French cases, and that investigators generally perceive them as liars and as easily ashamed (Study 2). A study conducted with military police officers trained in the use of structured interview techniques (vs. untrained officers; Study 3) showed that their use of suggestive questions was related to the belief that suggestive prompts could help the young witness and/or victim retrieve and recall information, but also (and most importantly) allow the investigation to move forward. This was especially observed with untrained military police officers. To address these inappropriate practices, we investigated the efficiency of two modified versions of the cognitive interview (MCI). This interview protocol is based on an open (rather than closed or suggestive) questioning style and proposes effective retrieval strategies. By relying on techniques that promote memory retrieval and the recall of information, it could enhance the reliability of adolescents' statements so that they can be used during the investigation. We therefore tested a mnemonic called 'guided peripheral focus' (Study 4), which showed its benefits: we observed an increase in the recall of correct information (vs. a structured interview; SI), although this was accompanied by an increase in errors. A similar pattern was observed with a shortened version of the MCI (vs. SI) used for repeated events (vs. a single event; Study 5). In addition, this last study showed an increase in confabulations with the MCI (compared to an SI, and irrespective of the frequency of the event), but also in confusions between the different events experienced by some of the adolescents. However, these increases in erroneous details did not lead to a drop in the accuracy rate in either study. The results of the five studies are discussed with regard to the scientific literature, and recommendations to help justice practitioners conduct their adolescent witness and/or victim interviews as appropriately as possible are provided.
APA, Harvard, Vancouver, ISO, and other styles
8

Colomb, Cindy. "L’entretien cognitif sous influence : Du développement d’un protocole modifié à son étude en interaction avec trois variables sociales." Thesis, Clermont-Ferrand 2, 2011. http://www.theses.fr/2011CLF20012.

Full text
Abstract:
Despite considerable advances in the analysis of physical evidence and the development of forensic science in recent years, eyewitness testimony still plays a decisive role in decisions of justice. Nevertheless, such testimony is fallible: numerous factors, at the crossroads of memory and/or cognitive processes and of social and/or socio-cognitive processes, can affect it in an irreversible manner. In this context, and in order to better understand some of the variables responsible for the fragility of eyewitness accounts, we carried out the seven experiments presented in this dissertation. More precisely, the first three studies, presented in the first part of this work, deal with an effective technique for interviewing eyewitnesses called the Cognitive Interview. Our purpose was to develop and evaluate, in the lab and in the field, a modified version of the Cognitive Interview based on the principle of multiplying free recall attempts. In this dissertation, however, we chose to adopt a more dynamic and situational approach than the one encountered in the literature until now. Therefore, in a second part, we examined the efficacy of this protocol in interaction with three estimator variables that are inseparable from the context of hearing witnesses and that can, in real life, strongly and negatively affect the quality of their accounts. These variables are: (a) the scripts shared by individuals about criminal events, (b) the discussions between witnesses, and (c) the stereotypes associated with witnesses through the social groups they belong to. Several results were shown. First, we confirmed the efficacy of a modified version of the Cognitive Interview (MCI). More precisely, a protocol composed of two free recall attempts, including the report-all and context-reinstatement instructions as well as a new technique designed to enhance memories, the guided peripheral focus, increased in all the studies the richness of participants' recall without impairing its accuracy. Its efficacy was shown both in the lab and in the field. Moreover, this protocol includes the most effective cognitive instructions and omits the less beneficial ones. In parallel, we confirmed that scripts and discussions among witnesses have a detrimental impact on eyewitness testimony. Some effects of the stereotypes linked to the group membership of the witness were also suggested. Finally, concerning the efficacy of the Cognitive Interview, and more precisely of the modified protocol, some negative effects of this protocol and of the instructions it includes were observed in interaction with the three estimator variables considered. However, several interesting benefits were also revealed in this context. These results are discussed with regard to the literature available to date, and applied recommendations are also made.
APA, Harvard, Vancouver, ISO, and other styles
9

Bunz, Svenja-Catharina [Verfasser], Gerhard K. E. [Akademischer Betreuer] Scriba, Christian [Akademischer Betreuer] Neusüß, and Govert [Akademischer Betreuer] Somsen. "Capillary electrophoresis-mass spectrometry for the identification of aminopyrene trisulfonic acid labeled glycans / Svenja-Catharina Bunz. Gutachter: Gerhard K. E. Scriba ; Christian Neusüß ; Govert Somsen." Jena : Thüringer Universitäts- und Landesbibliothek Jena, 2014. http://d-nb.info/105097803X/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Hong, Chen-Yu, and 洪承榆. "The Design of Personal Information De-identification Service Using Google Apps Script." Thesis, 2019. http://ndltd.ncl.edu.tw/cgi-bin/gs32/gsweb.cgi/login?o=dnclcdr&s=id=%22107NCHU5396004%22.&searchmode=basic.

Full text
Abstract:
Master's thesis
National Chung Hsing University
Department of Information Management
Academic year 107 (ROC calendar)
In recent years, international regulations on personal privacy have become more and more rigorous, and our country revised its Personal Data Protection Act in 2010. However, since the implementation of the act, incidents of data disclosure have continued to occur, and human negligence accounts for a high proportion of them. This mainly happens because the average person does not understand the concept of personal identification, which affects data processing flows and the security of anonymization. When personal data need to be released, for example as a medical research object or in a document delivery process, they are easily re-identified. Nowadays, the de-identification tools available on the Internet are built for experts, and it is difficult for users without such background knowledge to use them. Thus, our study designs a low-cost, low-threshold de-identification tool. We organize and identify basic techniques from the existing literature and tools, and then develop an easy-to-use tool with Google Apps Script that runs on Google Drive. This tool is not only easy to build and port, but also does not require professional background knowledge to operate. Compared with existing tools, it provides an effective personal data de-identification solution for small and medium-sized organizations or schools.
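The thesis' tool itself is written in Google Apps Script and runs on Google Drive; the snippet below only illustrates the underlying rule-based masking idea in Python, and the patterns and placeholders are illustrative assumptions rather than the tool's actual rules.

```python
# Hedged sketch: rule-based de-identification by masking direct identifiers.
# Patterns are illustrative only (an ID-like code, a phone number, an e-mail).
import re

RULES = [
    (re.compile(r"[A-Z][0-9]{9}"), "<ID>"),               # ID-like code: letter + 9 digits
    (re.compile(r"09[0-9]{2}-?[0-9]{6}"), "<PHONE>"),      # mobile-phone-like number
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),   # e-mail address
]

def deidentify(text: str) -> str:
    for pattern, placeholder in RULES:
        text = pattern.sub(placeholder, text)
    return text

print(deidentify("Contact A123456789 at test@example.com or 0912-345678"))
# -> "Contact <ID> at <EMAIL> or <PHONE>"
```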
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Identification du script"

1

Riley, W. D. Spectral characteristics of grinding sparks used for identification of scrap metals. [Avondale, Md.]: U.S. Dept. of the Interior, Bureau of Mines, 1985.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Cantrell, Jaime. Coming Out and Tutor-Text Performance in Jane Chambers’s Lesbi-Dramas. University of Illinois Press, 2017. http://dx.doi.org/10.5406/illinois/9780252039805.003.0009.

Full text
Abstract:
This chapter traces the phenomenon of feminist drama through three overtly lesbian plays by Jane Chambers: A Late Snow (1974), Last Summer at Bluefish Cove (1980), and My Blue Heaven (1981). Through a combination of surface and close readings, it argues that these plays function as especially explicit tutor-texts, because instances of lesbian hypervisibility in these works, are, in fact, performed. In this way, the plays concretize visually and aurally what the script conveys, and, in so doing, they require the audience to process and understand codes and meanings at a moment's notice—while, perhaps, calling into question the theatergoer's beliefs or values. Through the performance of these plays a sort of visual imaginary is communicated to the audience: discourses within the scripts advocate for lesbian social justice at the national level, intersecting with social politics and public identification.
APA, Harvard, Vancouver, ISO, and other styles
3

Polis, Stéphane. The Scribal Repertoire of Amennakhte Son of Ipuy. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198768104.003.0005.

Full text
Abstract:
This chapter investigates linguistic variation in the texts written by the Deir el-Medina scribe Amennakhte son of Ipuy in New Kingdom Egypt (Twentieth Dynasty; c. 1150 BCE). After a discussion of the challenge posed by the identification of scribes and authors in this sociocultural setting, I provide an overview of the corpus of texts that can tentatively be linked to this individual and justify the selection that has been made for the present study. The core of this paper is then devoted to a multidimensional analysis of Amennakhte’s linguistic registers. By combining the results of this section with a description of Amennakhte’s scribal habits—both at the graphemo-morphological and constructional levels—I test the possibility of using ‘idiolectal’ features to identify the scribe (or the author) of other texts stemming from the community of Deir el-Medina and closely related to Amennakhte.
APA, Harvard, Vancouver, ISO, and other styles
4

Conoley, Collie W., and Michael J. Scheel. Goal Focused Positive Psychotherapy. Oxford University Press, 2017. http://dx.doi.org/10.1093/med-psych/9780190681722.001.0001.

Full text
Abstract:
Goal Focused Positive Psychotherapy presents the first comprehensive positive psychology psychotherapy model that optimizes well-being and thereby diminishes psychological distress. The theory of change is the Broaden-and-Build Theory of positive emotions. The therapeutic process promotes client strengths, hope, positive emotions, and goals. The book provides the foundational premises, empirical support, theory, therapeutic techniques and interventions, a training model, case examples, and future directions. A three-year study is presented that reveals that Goal Focused Positive Psychotherapy (GFPP) was as effective as cognitive-behavioral therapy and short-term psychodynamic therapies, which fits the meta-analyses of therapy outcome studies that no bona fide psychotherapy achieves superior outcome. However, GFPP was significantly more attractive to the clients. Descriptions are provided of the Broaden-and-Build Theory, therapy goals based upon clients’ values and personal meaning (i.e., approach goals and intrinsic goals), identification and use of clients’ personal strengths (including client culture), centrality of hope and hope theory, the implicit theory of personal change or the growth mindset, and finally Self-Determination Theory. The techniques and interventions of GFPP as well as the importance of the therapist’s intentions during therapy are presented. GFPP focuses upon the client and relationship while not viewing psychotherapy as a set of potent scripted treatments that acts upon the client. Goal Focused Positive Supervision is presented as a new model that supports the supervisee’s strength-based self-definition rather than a pathological one or deficit orientation. Training that includes the experiential learning of GFPP principles is underscored.
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Identification du script"

1

Lu, Tong, Shivakumara Palaiahnakote, Chew Lim Tan, and Wenyin Liu. "Script Identification." In Video Text Detection, 195–219. London: Springer London, 2014. http://dx.doi.org/10.1007/978-1-4471-6515-6_8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Joshi, Gopal Datt, Saurabh Garg, and Jayanthi Sivaswamy. "Script Identification from Indian Documents." In Document Analysis Systems VII, 255–67. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11669487_23.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hangarge, Mallikarjun, and K. C. Santosh. "Word-Level Handwritten Script Identification from Multi-script Documents." In Recent Advances in Information Technology, 49–55. New Delhi: Springer India, 2014. http://dx.doi.org/10.1007/978-81-322-1856-2_6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Pawan Kumar, Arafat Mondal, Showmik Bhowmik, Ram Sarkar, and Mita Nasipuri. "Word-Level Script Identification from Handwritten Multi-script Documents." In Advances in Intelligent Systems and Computing, 551–58. Cham: Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-11933-5_62.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Obaidullah, Sk Md, Chitrita Goswami, K. C. Santosh, Chayan Halder, Nibaran Das, and Kaushik Roy. "Separating Indic Scripts with ‘matra’—A Precursor to Script Identification in Multi-script Documents." In Advances in Intelligent Systems and Computing, 205–14. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-2104-6_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Jajoo, Madhuram, Neelotpal Chakraborty, Ayatullah Faruk Mollah, Subhadip Basu, and Ram Sarkar. "Script Identification from Camera-Captured Multi-script Scene Text Components." In Advances in Intelligent Systems and Computing, 159–66. Singapore: Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-1280-9_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Mijit, Buvajar, Alimjan Aysa, Nurbiya Yadikar, Xing-kun Han, and Kurban Ubul. "Script Identification Based on HSV Features." In Communications in Computer and Information Science, 588–97. Singapore: Springer Singapore, 2016. http://dx.doi.org/10.1007/978-981-10-3005-5_48.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Dhanya, D., and A. G. Ramakrishnan. "Script Identification in Printed Bilingual Documents." In Lecture Notes in Computer Science, 13–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-45869-7_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Jetley, Saumya, Kapil Mehrotra, Atish Vaze, and Swapnil Belhe. "Multi-script Identification from Printed Words." In Lecture Notes in Computer Science, 359–68. Cham: Springer International Publishing, 2014. http://dx.doi.org/10.1007/978-3-319-11758-4_39.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Halder, Chayan, Kishore Thakur, Santanu Phadikar, and Kaushik Roy. "Writer Identification from Handwritten Devanagari Script." In Advances in Intelligent Systems and Computing, 497–505. New Delhi: Springer India, 2015. http://dx.doi.org/10.1007/978-81-322-2247-7_51.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Identification du script"

1

Singh, Pawan Kumar, Ram Sarkar, Mita Nasipuri, and David Doermann. "Word-level script identification for handwritten Indic scripts." In 2015 13th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2015. http://dx.doi.org/10.1109/icdar.2015.7333932.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Hassan, Ehtesham, Ritu Garg, Santanu Chaudhury, and M. Gopal. "Script based text identification." In the 2011 Joint Workshop. New York, New York, USA: ACM Press, 2011. http://dx.doi.org/10.1145/2034617.2034630.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Chanda, Sukalpa, Umapada Pal, Katrin Franke, and Fumitaka Kimura. "Script Identification – A Han and Roman Script Perspective." In 2010 20th International Conference on Pattern Recognition (ICPR). IEEE, 2010. http://dx.doi.org/10.1109/icpr.2010.1127.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Singh, Pawan Kumar, Santu Kumar Dalal, Ram Sarkar, and Mita Nasipuri. "Page-level script identification from multi-script handwritten documents." In 2015 3rd International Conference on Computer, Communication, Control and Information Technology (C3IT). IEEE, 2015. http://dx.doi.org/10.1109/c3it.2015.7060113.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Nicolaou, Anguelos, Andrew D. Bagdanov, Lluis Gomez, and Dimosthenis Karatzas. "Visual Script and Language Identification." In 2016 12th IAPR Workshop on Document Analysis Systems (DAS). IEEE, 2016. http://dx.doi.org/10.1109/das.2016.63.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Sristy, Nagesh Bhattu, N. Satya Krishna, B. Shiva Krishna, and Vadlamani Ravi. "Language Identification in Mixed Script." In FIRE'17: Forum for Information Retrieval Evaluation. New York, NY, USA: ACM, 2017. http://dx.doi.org/10.1145/3158354.3158357.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Roy, K., S. Kundu Das, and Sk Md Obaidullah. "Script Identification from Handwritten Document." In 2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG). IEEE, 2011. http://dx.doi.org/10.1109/ncvpripg.2011.22.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Bertolini, Diego, Luiz S. Oliveira, and Robert Sabourin. "Multi-script writer identification using dissimilarity." In 2016 23rd International Conference on Pattern Recognition (ICPR). IEEE, 2016. http://dx.doi.org/10.1109/icpr.2016.7900098.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pan, W. M., C. Y. Suen, and T. D. Bui. "Script identification using steerable Gabor filters." In Eighth International Conference on Document Analysis and Recognition (ICDAR'05). IEEE, 2005. http://dx.doi.org/10.1109/icdar.2005.206.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ferrer, Miguel A., Aythami Morales, and Umapada Pal. "LBP Based Line-Wise Script Identification." In 2013 12th International Conference on Document Analysis and Recognition (ICDAR). IEEE, 2013. http://dx.doi.org/10.1109/icdar.2013.81.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Identification du script"

1

Hochberg, J., M. Cannon, P. Kelly, and J. White. Page segmentation using script identification vectors: A first look. Office of Scientific and Technical Information (OSTI), July 1997. http://dx.doi.org/10.2172/495845.

Full text
APA, Harvard, Vancouver, ISO, and other styles
