Journal articles on the topic 'Metadata editor'

Consult the top 50 journal articles for your research on the topic 'Metadata editor.'

1

Gennaro, Claudio. "Regia: a metadata editor for audiovisual documents." Multimedia Tools and Applications 36, no. 3 (2007): 185–201. http://dx.doi.org/10.1007/s11042-007-0129-4.

2

Wu, Ke He, Bo Hao Cheng, Jin Shui Wu, and Yue Yuan. "Design and Implementation of an SVG Editor for Power System." Advanced Materials Research 756-759 (September 2013): 972–76. http://dx.doi.org/10.4028/www.scientific.net/amr.756-759.972.

Abstract:
A power-system graphical editor must be able to draw rapidly and precisely, assign configuration parameters to power components, and reserve an operation interface. Elements in this editor use the SVG format, with metadata tags defined according to the CIM specification and carried inside the SVG document, so that graphics and data are linked in a unified canonical form. The editor can edit, store, and export a wiring diagram; import a wiring diagram from other systems; automatically extract and store layers and elements during import; and create and maintain user-defined primitive or layer libraries and bind associated data. This SVG graphics editor for power systems has met users' functional requirements and has been applied in several practical projects.
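The coupling of graphics and data described in this abstract (CIM-based metadata tags carried inside the SVG document) can be sketched with Python's standard library. This is an illustrative reconstruction: the Breaker element, the namespace URI, and the shape are assumptions, not the editor's actual schema.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
# Hypothetical namespace for CIM-derived tags; the paper's actual schema is not shown.
CIM_NS = "urn:example:cim"

def make_breaker_symbol(mrid: str, name: str) -> ET.Element:
    """Build an SVG document containing a breaker symbol with embedded CIM metadata."""
    svg = ET.Element(f"{{{SVG_NS}}}svg")
    group = ET.SubElement(svg, f"{{{SVG_NS}}}g", {"id": mrid})
    # SVG's <metadata> element is allowed to carry foreign-namespace content,
    # which is what lets graphics and power-system data travel in one file.
    meta = ET.SubElement(group, f"{{{SVG_NS}}}metadata")
    breaker = ET.SubElement(meta, f"{{{CIM_NS}}}Breaker")
    ET.SubElement(breaker, f"{{{CIM_NS}}}IdentifiedObject.name").text = name
    # The graphic shape for the component sits next to its metadata.
    ET.SubElement(group, f"{{{SVG_NS}}}rect",
                  {"x": "0", "y": "0", "width": "20", "height": "10"})
    return svg

doc = make_breaker_symbol("breaker-42", "Feeder breaker 42")
xml_text = ET.tostring(doc, encoding="unicode")
```

Because the foreign-namespace metadata is ignored by SVG renderers, the same file stays a valid drawing while remaining machine-readable for the editor.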
3

Peixoto, Douglas Alves, Lucas Francisco da Matta Vegi, and Jugurta Lisboa-Filho. "Um Editor de Metadados para Documentar Padrões de Análise em uma Infraestrutura de Reuso." iSys - Brazilian Journal of Information Systems 7, no. 4 (2014): 23–42. http://dx.doi.org/10.5753/isys.2014.266.

Abstract:
The software development process often faces obstacles to reusing analysis patterns because these computational artifacts are hard to access. The lack of a tool that eases the documentation of analysis patterns, and of a digital repository for storing them, hinders their retrieval and reuse. This paper presents the DC2AP Metadata Editor, a metadata editor for analysis patterns based on the Dublin Core Application Profile for Analysis Patterns (DC2AP). To organize the documentation of analysis patterns and ease their retrieval, the DC2AP Metadata Editor provides documented analysis patterns as Linked Data, allowing the knowledge stored in these artifacts to be shared and automatically interpreted by software.
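A rough sense of what a Dublin Core-based pattern record published as Linked Data might look like can be given with a small JSON-LD sketch. The Dublin Core terms namespace is real, but the specific fields and values are invented for illustration and are not the actual DC2AP profile.

```python
import json

# Minimal JSON-LD sketch of an analysis-pattern record using Dublin Core terms.
# The real DC2AP profile defines many more elements; this is a small,
# illustrative subset with an invented identifier and values.
record = {
    "@context": {"dct": "http://purl.org/dc/terms/"},
    "@id": "http://example.org/patterns/account-management",
    "dct:title": "Account Management",
    "dct:creator": "Jane Analyst",
    "dct:abstract": "An analysis pattern for recurring account concepts.",
    "dct:type": "AnalysisPattern",
}

serialized = json.dumps(record, indent=2)
```

Serializing the record this way is what makes it "automatically interpreted by software": any JSON-LD-aware consumer can expand `dct:title` to the full Dublin Core term URI.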
4

Jantz, Ronald. "Letter to the Editor: Re: Authentic Digital Objects." International Journal of Digital Curation 4, no. 2 (2009): 8–11. http://dx.doi.org/10.2218/ijdc.v4i2.101.

Abstract:
This letter responds to Andrew Wilson’s concerns regarding my article in IJDC 4(1) entitled “An Institutional Framework for Creating Authentic Digital Objects”. This response clears up some of the issues regarding my assertions about digital certificates, metadata, and the roles of librarians in the digital environment.
5

Micsik, András, Sándor Turbucz, and Zoltán Tóth. "Exploring publication metadata graphs with the LODmilla browser and editor." International Journal on Digital Libraries 16, no. 1 (2014): 15–24. http://dx.doi.org/10.1007/s00799-014-0130-2.

6

Krichel, Thomas, and Nisa Bakkalbasi. "Metadata characteristics as predictors for editor selectivity in a current awareness service." Proceedings of the American Society for Information Science and Technology 42, no. 1 (2006): n/a. http://dx.doi.org/10.1002/meet.14504201132.

7

Medeiros, Norm. "A craftsman and his tool: Andy Powell and the DC‐dot metadata editor." OCLC Systems & Services: International digital library perspectives 17, no. 2 (2001): 60–64. http://dx.doi.org/10.1108/10650750110391939.

8

Minegar, Ben. "Forging a Balanced Presumption in Favor of Metadata Disclosure Under the Freedom of Information Act." Pittsburgh Journal of Technology Law and Policy 16, no. 1 (2016): 23–57. http://dx.doi.org/10.5195/tlp.2015.177.

Abstract:
Law Clerk to Chief Judge Joy Flowers Conti, United States District Court for the Western District of Pennsylvania; J.D. magna cum laude 2015, University of Pittsburgh (Lead Executive Editor, University of Pittsburgh Law Review); B.A. 2009, University of North Florida. Thank you Professor Rhonda Wasserman for your advice and assistance on this paper and for an enlightening class on electronic discovery. Faculty for the University of Pittsburgh School of Law awarded this paper the William H. Eckert Prize.
9

Rasmussen, Karsten Boye. "Metadata is key - the most important data after data." IASSIST Quarterly 42, no. 2 (2018): 1. http://dx.doi.org/10.29173/iq922.

Abstract:
Welcome to the second issue of volume 42 of the IASSIST Quarterly (IQ 42:2, 2018).
 The IASSIST Quarterly has had several papers on many different aspects of the Data Documentation Initiative - for a long time better known by its acronym DDI, without any further explanation. DDI is a brand. The IASSIST Quarterly has also included special issues of collections of papers concerning DDI.
 Among staff at data archives and data libraries, as well as the users of these facilities, I think we can agree that it is the data that comes first. However, fundamental to all uses of data is the documentation describing the data, without which the data are useless. Therefore, it comes as no surprise that the IASSIST Quarterly is devoted partly to the presentation of papers related to documentation. The question of documentation or data resembles the question of the chicken or the egg. Don't mistake the keys for your car. The metadata and the data belong together and should not be separated.
 DDI now is a standard, but as with other standards it continues to evolve. The argument about why standards are good comes to mind: 'The nice thing about standards is that you have so many to choose from!'. DDI is the de facto standard for most social science data at data archives and university data libraries.
 The first paper demonstrates a way to tackle the heterogeneous character of the usage of the DDI. The approach is able to support collaborative questionnaire development as well as export in several formats including the metadata as DDI. The second paper shows how an institutionalized and more general metadata standard - in this case the Belgian Encoded Archival Description (EAD) - is supported by a developed crosswalk from DDI to EAD. However, IQ 42:2 is not a DDI special issue, and the third paper presents an open-source research data management platform called Dendro and a laboratory notebook called LabTablet without mentioning DDI. However, the paper certainly does mention metadata - it is the key to all data. 
 The winner of the paper competition of the IASSIST 2017 conference is presented in this issue. 'Flexible DDI Storage' is authored by Oliver Hopt, Claus-Peter Klas, Alexander Mühlbauer, all affiliated with GESIS - the Leibniz-Institute for the Social Sciences in Germany. The authors argue that the current usage of DDI is heterogeneous and that this results in complex database models for each developed application. The paper shows a new binding of DDI to applications that works independently of most version changes and interpretative differences, thus avoiding continuous reimplementation. The work is based upon their developed DDI-FlatDB approach, which they showed at the European DDI conferences in 2015 and 2016, and which is also described in the paper. Furthermore, a web-based questionnaire editor and application supports large DDI structures and collaborative questionnaire development as well as production of structured metadata for survey institutes and data archives. The paper describes the questionnaire workflow from the start to the export of questionnaire, DDI XML, and SPSS. The development is continuing and it will be published as open source. 
 The second paper is also focused on DDI, now in relation to a new data archive. 'Elaborating a Crosswalk Between Data Documentation Initiative (DDI) and Encoded Archival Description (EAD) for an Emerging Data Archive Service Provider' is by Benjamin Peuch who is a researcher at the State Archives of Belgium. It is expected that the future Belgian data archive will be part of the State Archives, and because DDI is the most widespread metadata standard in the social sciences, the State Archives have developed a DDI-to-EAD crosswalk in order to re-use their EAD infrastructure. The paper shows the conceptual differences between DDI and EAD - both XML based - and how these can be reconciled or avoided for the purpose of a data archive for the social sciences. The author also foresees a fruitful collaboration between traditional archivists and social scientists.
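 The kind of element-level crosswalk this paper describes can be sketched as a simple mapping table. The DDI and EAD paths below are plausible pairings chosen for illustration only; the State Archives' actual crosswalk is far more detailed.

```python
# Illustrative element-level crosswalk from DDI Codebook paths to EAD paths.
# The tag pairs are simplified assumptions, not the published crosswalk.
DDI_TO_EAD = {
    "stdyDscr/citation/titlStmt/titl": "archdesc/did/unittitle",
    "stdyDscr/citation/rspStmt/AuthEnty": "archdesc/did/origination",
    "stdyDscr/stdyInfo/abstract": "archdesc/scopecontent",
    "stdyDscr/citation/distStmt/distDate": "archdesc/did/unitdate",
}

def crosswalk(ddi_record: dict) -> dict:
    """Map a flat DDI record (path -> value) onto EAD paths,
    dropping elements with no defined counterpart."""
    return {
        DDI_TO_EAD[path]: value
        for path, value in ddi_record.items()
        if path in DDI_TO_EAD
    }

ead = crosswalk({
    "stdyDscr/citation/titlStmt/titl": "Belgian Household Survey 2020",
    "stdyDscr/stdyInfo/abstract": "Annual survey of household composition.",
    "stdyDscr/method/dataColl/sources": "not mapped",
})
```

The dropped third element illustrates the paper's point that some conceptual differences between the two standards cannot be reconciled and must simply be avoided.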
 The third paper is by a group of scholars connected to the Informatics Engineering Department of the University of Porto and INESC TEC in Portugal. Cristina Ribeiro, João Rocha da Silva, João Aguiar Castro, Ricardo Carvalho Amorim, João Correia Lopes, and Gabriel David are the authors of 'Research Data Management Tools and Workflows: Experimental Work at the University of Porto'. The authors start with the statement that 'Research datasets include all kinds of objects, from web pages to sensor data, and originate in every domain'. The task is to make these data visible, described, preserved, and searchable. The focus is on data preparation, dataset organization and metadata creation. Some groups were offered Dendro, an open-source research data management platform developed by the team, and a laboratory notebook called LabTablet, while other groups that required a domain-specific approach got specially developed models and applications. All development and metadata modelling are done with metadata dissemination in mind.
 Submissions of papers for the IASSIST Quarterly are always very welcome. We welcome input from IASSIST conferences or other conferences and workshops, from local presentations or papers especially written for the IQ. When you are preparing such a presentation, give a thought to turning your one-time presentation into a lasting contribution. Doing that after the event also gives you the opportunity of improving your work after feedback. We encourage you to login or create an author login to https://www.iassistquarterly.com (our Open Journal System application). We permit authors 'deep links' into the IQ as well as deposition of the paper in your local repository. Chairing a conference session with the purpose of aggregating and integrating papers for a special issue IQ is also much appreciated as the information reaches many more people than the limited number of session participants and will be readily available on the IASSIST Quarterly website at https://www.iassistquarterly.com. Authors are very welcome to take a look at the instructions and layout:
 https://www.iassistquarterly.com/index.php/iassist/about/submissions
 Authors can also contact me directly via e-mail: kbr@sam.sdu.dk. Should you be interested in compiling a special issue for the IQ as guest editor(s) I will also be delighted to hear from you.
 Karsten Boye Rasmussen - June, 2018
10

Reese, Terry, and Wendy Robertson. "A Beginners Guide to MarcEdit and Beyond the Editor: Advanced Tools and Techniques for Working with Metadata." Serials Librarian 74, no. 1-4 (2018): 3–8. http://dx.doi.org/10.1080/0361526x.2018.1439247.

11

Crandell, Adam. "MerMEId: Metadata Editor and Repository for MEI Data by The National Library, Danish Centre for Music Publication." Notes 71, no. 3 (2015): 543–44. http://dx.doi.org/10.1353/not.2015.0037.

12

Hiekata, Kazuo, Hiroyuki Yamato, and Piroon Rojanakamolsan. "Ship Design Educational Framework Using ShareFast: A Case Study of Teaching Ship Design With CAD Software." Journal of Ship Production 23, no. 04 (2007): 202–9. http://dx.doi.org/10.5957/jsp.2007.23.4.202.

Abstract:
This paper proposes a ship design educational framework using ShareFast, an open-source client/server document-management application based on workflow. The association between design documents and workflow is described by metadata based on Semantic Web technology. The client software offers a workflow editor to create and edit workflows and upload them to the server; it also allows users to browse workflows and their associated documents. Additionally, the software offers a function for instructors to monitor students' behavior so that they can analyze it to improve class efficiency. The system was used in an experimental investigation with university students. The results showed that learning ship design through workflows helped the students understand the design process more easily, and that the system can shorten the students' learning time.
13

Young, Renee. "The Alert Collector: Listen Up: Best Practices for Audiobooks in Libraries." Reference & User Services Quarterly 58, no. 4 (2019): 210. http://dx.doi.org/10.5860/rusq.58.4.7146.

Abstract:
This issue’s Alert Collector offering on audiobooks is a departure from the usual subject-based column. With the wide availability of downloadable audiobooks, there is a huge opportunity for libraries to serve readers who would rather listen on their mobile devices. Renee Young, a Metadata Librarian III with EBSCO, offers some great advice for any librarian trying to build or improve their audiobook collection. She also suggests ways to promote your collection and help those you serve find great new “reads” in audiobook format. Young is a former reviewer of audiobooks for Booklist, served as member and chair of Listen List Council of the Collection Development and Evaluation Section (CODES) of the Reference and User Services Association (RUSA), and has presented on listener’s advisory at national conferences. Her “listening” skills go back to before becoming a librarian: she served in the US Army as a cryptologic linguist, which involved listening to and translating radio transmissions.—Editor
14

Turner, Chris, and Ian Gill. "Developing a Data Management Platform for the Ocean Science Community." Marine Technology Society Journal 52, no. 3 (2018): 28–32. http://dx.doi.org/10.4031/mtsj.52.3.8.

Abstract:
The management of oceanographic data is particularly challenging given the variety of protocols for the analysis of data collection and model output, the vast range of environmental conditions studied, and the potentially enormous extent and volume of the resulting data sets and model results. Here, we describe the Research Workspace (the Workspace), a web platform designed around data management best practices to meet the challenges of managing oceanographic data throughout the research life cycle. The Workspace features secure user accounts and automatic file versioning to assist with the early stages of project planning and data collection. Jupyter Notebooks have been integrated into the Workspace to support reproducible numerical analysis and data visualization while making use of high-performance computer resources collocated with data assets. An ISO-compliant metadata editor has also been integrated into the Workspace to support data synthesis, publication, and reuse. The Workspace currently supports stakeholders across the ocean science community, from funding agencies to individual investigators, by providing a data management platform to meet the needs of big ocean data.
15

Gourkova, Helen. Review of International Yearbook of Library and Information Management 2003‐2004: Metadata Applications and Management, associate editor Daniel G. Dorner (Metuchen, NJ: Scarecrow Press, 2004. 359 pp., ISBN: 0810849801). Library Management 26, no. 6/7 (2005): 413–15. http://dx.doi.org/10.1108/01435120410609815.

16

Engelhardt, Michael, Arne Hildebrand, Dagmar Lange, and Thomas C. Schmidt. "Semantic overlays in educational content networks – the hylOs approach." Campus-Wide Information Systems 23, no. 4 (2006): 254–67. http://dx.doi.org/10.1108/10650740610704126.

Abstract:
Purpose: The paper aims to introduce an educational content management system, Hypermedia Learning Objects System (hylOs), which is fully compliant with the IEEE LOM eLearning object metadata standard. Enabled through an advanced authoring toolset, hylOs allows the definition of instructional overlays of a given eLearning object mesh.
Design/methodology/approach: In educational content management, simple file distribution is considered insufficient. Instead, IEEE LOM standardised eLearning objects are well established as the basic building blocks for educational online content. They are well suited to self‐explorative learning approaches within adaptive hypermedia applications. Even though eLearning objects typically reside within content repositories, they may propagate metadata relations beyond repository limits. Given the explicit meaning of these interobject references, a semantic net of content strings can be knotted, overlaying the repository infrastructure.
Findings: Based on a newly introduced ontological evaluation layer, meaningful overlay relations between knowledge objects are shown to derive autonomously. A technology framework to extend the resulting semantic nets beyond repository limits is also presented.
Research limitations/implications: This paper provides proof of concept for the derivation and use of semantic content networks in educational hypermedia. It thereby opens up new directions for future eLearning technologies and pedagogical adoption.
Practical implications: The paper illustrates the capabilities of hylOs eLearning content management. hylOs is built upon the more general Media Information Repository (MIR) and its linking extension, the MIR Adaptive Context Linking Environment (MIRaCLE). MIR is an open system supporting the standards XML, CORBA and JNDI. hylOs benefits from manageable information structures, sophisticated access logic and high‐level authoring tools like the eLO editor, responsible for semi‐manual creation of metadata and WYSIWYG‐like XML content editing, allowing for rapid distributed content development.
Originality/value: Over the last few years, networking technologies and distributed information systems have moved up the OSI layers and are well established within application‐centric middleware. Most recently, content overlay networks have matured, incorporating the semantics of data files into their self‐organisational structure with the aim of optimising data‐centric distributed indexing and retrieval. This paper elaborates a corresponding concept of semantic structuring for educational content objects. It introduces and analyses the autonomous generation and educational exploitation of semantic content nets, providing proof of concept by a full‐featured implementation within the hylOs educational content management system.
17

Silva, Marcel Santos, and Silvana Aparecida Borsetti Gregorio Vidotti. "Arquitetura para integração de bibliotecas digitais geográficas por meio de mecanismos de geoprocessamento no contexto da ciência da informação." Encontros Bibli: revista eletrônica de biblioteconomia e ciência da informação 25 (September 2, 2020): 01–19. http://dx.doi.org/10.5007/1518-2924.2020.e70807.

Abstract:
Objective: To build a conceptual architecture, with elements for creating a Geographic Digital Library, using the standards and concepts of Information Science together with geoprocessing.
Method: Through a theoretical, exploratory, and bibliographic study in the areas of Information Science and geoprocessing, a conceptual architecture model for the Geographic Digital Library was developed. The proposal is structured in three layers: the Client layer, responsible for visualization; the Application layer, which holds the management and analysis processes; and the Data layer, which covers data Web services focused on metadata retrieval via the PMH (Protocol Metadata Harvesting) protocol.
Results: The conceptual architecture met the requirements for information representation and for communication with the protocol for harvesting metadata and digital objects, thus enabling the sharing of geographic information collections distributed across different Geographic Digital Libraries around the world. The informational elements of geoprocessing, combined with Information Science's forms of thematic and descriptive representation, organization, and information retrieval, confirmed the potential for reciprocal, shared use of concepts and tools from the two areas.
Conclusions: The main findings were: with the implementation of the three layers and four processes, geographic information systems and user-interface applications can be used to ease information sharing and retrieval; and the use of the metadata manager and metadata standard provides precise information retrieval, together with a single geo-ontology editor for all participating libraries.
18

Fourie, Ina. Review of International Yearbook of Library and Information Management 2003‐2004: Metadata Applications and Management, edited by G.E. Gorman, associate editor Daniel G. Dorner (London: Facet, 2003. 384 pp., ISBN: 1‐85604‐474‐2, £60.00 hardback). Electronic Library 22, no. 4 (2004): 362–63. http://dx.doi.org/10.1108/02640470410553018.

19

Tolwinska, Anna. "Participation Reports help Crossref members drive research further." Science Editing 8, no. 2 (2021): 180–85. http://dx.doi.org/10.6087/kcse.253.

Abstract:
This article explains the key metadata elements listed in Participation Reports, why it's important to check them regularly, and how Crossref members can improve their scores. Crossref members register a lot of metadata with Crossref. That metadata is machine-readable, standardized, and shared across discovery services and author tools. This matters because richer metadata makes content more discoverable and useful to the scholarly community. It's not always easy to know what metadata Crossref members register, which is why Crossref created an easy-to-use tool called Participation Reports to show editors and researchers the key metadata elements members register to make their content more useful. The key metadata elements include references and whether they are set to open, ORCID iDs, funding information, Crossmark metadata, licenses, full-text URLs for text mining, Similarity Check indexing, and abstracts. ROR IDs (Research Organization Registry identifiers), which identify institutions, will be added in the future. This data has always been available through Crossref's REST API (Representational State Transfer Application Programming Interface) but is now visualized in Participation Reports. To improve scores, editors should encourage authors to submit ORCID iDs in their manuscripts, and publishers should register as much metadata as possible to help drive research further.
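The coverage score a Participation Report visualizes can be approximated offline as the share of a member's registered works that carry each metadata element. The sample records and field names below are invented for illustration and do not reflect Crossref's actual API schema.

```python
# Invented sample of a member's registered works, flattened to booleans
# indicating whether each key metadata element is present.
works = [
    {"doi": "10.1234/a", "has_references": True,  "has_orcid": True,  "has_abstract": False},
    {"doi": "10.1234/b", "has_references": True,  "has_orcid": False, "has_abstract": True},
    {"doi": "10.1234/c", "has_references": False, "has_orcid": False, "has_abstract": True},
    {"doi": "10.1234/d", "has_references": True,  "has_orcid": True,  "has_abstract": True},
]

def coverage(works: list[dict], field: str) -> float:
    """Percentage of works with the given metadata element present."""
    return 100.0 * sum(w[field] for w in works) / len(works)

scores = {f: coverage(works, f)
          for f in ("has_references", "has_orcid", "has_abstract")}
```

Each percentage corresponds to one bar in a report: a member registering references on three of four works would see 75% coverage for that element.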
20

Krasnov, Fedor, Mikhail Shvartsman, and Alexander Dimentov. "Comparative Analysis of Scientific Journals Collections." SPIIRAS Proceedings 18, no. 3 (2019): 767–93. http://dx.doi.org/10.15622/sp.2019.18.3.766-792.

Abstract:
The authors developed an approach to the comparative analysis of scientific journal collections based on analysis of the co-authorship graph and a text model. Time series of co-authorship graph metrics allowed the authors to analyze trends in the development of journal authorship. The text model was built using machine learning techniques, and the journals' content was classified with it to determine the degree of authenticity of different journals and of different issues of a single journal. The authors developed a Content Authenticity Ratio metric, which quantifies the authenticity of journal collections in comparison. Comparative thematic analysis of the journal collections was carried out using a topic model with additive regularization; based on this model, the authors constructed thematic profiles of the journals' archives on a single thematic basis. The approach was applied to the archives of two rheumatology journals for the period 2000–2018. As a benchmark for the co-authorship metrics, public data sets from the SNAP research laboratory at Stanford University were used. As a result, the authors adapted existing examples of effectively functioning author collaborations to improve the work of journal editorial staffs. A quantitative comparison of large volumes of texts and metadata of scientific articles was carried out. The experiment showed that the content authenticity of the selected journals is 89%, and that co-authorship in one of the journals has a pronounced centrality, a distinctive feature of its editorial policy. The clarity and consistency of the results confirm the effectiveness of the proposed approach. The Python code developed in the course of the experiment can be used for comparative analysis of other collections of Russian-language journals.
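One of the co-authorship graph metrics such an analysis relies on, degree centrality, can be computed from article author lists with only the standard library. The author names below are invented.

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: each article contributes a clique of co-author edges.
articles = [
    ["Ivanova", "Petrov"],
    ["Ivanova", "Sidorov", "Petrov"],
    ["Sidorov", "Kuznetsov"],
]

# Build the co-authorship graph as an adjacency set.
neighbours: dict[str, set] = defaultdict(set)
for authors in articles:
    for a, b in combinations(authors, 2):
        neighbours[a].add(b)
        neighbours[b].add(a)

n = len(neighbours)
# Normalized degree centrality: distinct co-authors / (n - 1).
centrality = {a: len(nb) / (n - 1) for a, nb in neighbours.items()}
```

Computing this per year, rather than once, yields the time series of metrics the paper uses to track how a journal's collaboration structure evolves.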
21

Kobayashi, Ichiro. "Special Issue on Language-Based Human Intelligence and Personalization." Journal of Advanced Computational Intelligence and Intelligent Informatics 10, no. 6 (2006): 771–72. http://dx.doi.org/10.20965/jaciii.2006.p0771.

Abstract:
At the annual conference of the Japan Society for Artificial Intelligence (JSAI), a special survival session called "Challenge for Realizing Early Profits (CREP)" is organized to support and promote excellent ideas in new AI technologies expected to be realized and contributed to society within five years. Every year at the session, researchers propose their ideas and compete in being evaluated by conference participants. The Everyday Language Computing (ELC) project, started in 2000 at the Brain Science Institute, RIKEN, and ended in 2005, participated in the CREP program in 2001 to have their project evaluated by third parties and held an organized session every year in which those interested in language-based intelligence and personalization participate. They competed with other candidates, survived the session, and achieved the session's final goal to survive for five years. Papers in this special issue selected for presentation at the session include the following: The first article, "Everyday-Language Computing Project Overview," by Ichiro Kobayashi et al., gives an overview and the basic technologies of the ELC Project. The second to sixth papers are related to the ELC Project. The second article, "Computational Models of Language Within Context and Context-Sensitive Language Understanding," by Noriko Ito et al., proposes a new database, called the "semiotic base," that compiles linguistic resources with contextual information and an algorithm for achieving natural language understanding with the semiotic base. The third article, "Systemic-Functional Context-Sensitive Text Generation in the Framework of Everyday Language Computing," by Yusuke Takahashi et al., proposes an algorithm to generate texts with the semiotic base. The fourth article, "Natural Language-Mediated Software Agentification," by Michiaki Iwazume et al., proposes a method for agentifying and verbalizing existing software applications, together with a scheme for operating/running them. 
 The fifth article, "Smart Help for Novice Users Based on Application Software Manuals," by Shino Iwashita et al., proposes a new framework for reusing electronic software manuals equipped with application software to provide tailor-made operation instructions to users. The sixth article, "Programming in Everyday Language: A Case for Email Management," by Toru Sugimoto et al., describes making a computer program written in natural language; rhetorical structure analysis is used to translate the natural language command structure into the program structure. The seventh article, "Application of Paraphrasing to Programming with Linguistic Expressions," by Nozomu Kaneko et al., proposes a method for translating natural language commands into a computer program through a natural language paraphrasing mechanism. The eighth article, "A Human Interface Based on Linguistic Metaphor and Intention Reasoning," by Koichi Yamada et al., proposes a new human interface paradigm called Push Like Talking (PLT), which enables people to operate machines as they talk. The ninth article, "Automatic Metadata Annotation Based on User Preference Evaluation Patterns," by Mari Saito, proposes effective automatic metadata annotation for content recommendations matched to user preference. The tenth article, "Dynamic Sense Representation Using Conceptual Fuzzy Sets," by Hiroshi Sekiya et al., proposes a method to represent word senses, which vary dynamically depending on context, using conceptual fuzzy sets. The eleventh article, "Common Sense from the Web? Naturalness of Everyday Knowledge Retrieved from WWW," by Rafal Rzepka et al., is a challenging attempt to acquire common-sense knowledge from information on the Web. The twelfth article, "Semantic Representation for Understanding Meaning Based on Correspondence Between Meanings," by Akira Takagi et al., proposes a new semantic representation to deal with the Japanese language in natural language processing.
I thank the reviewers and contributors for their time and effort in making this special issue possible, and I wish to thank the JACIII editorial board, especially Professors Kaoru Hirota and Toshio Fukuda, the Editors-in-Chief, for inviting me to serve as Guest Editor of this Journal. Thanks also go to Kazuki Ohmori and Kenta Uchino of Fuji Technology Press for their sincere support.
APA, Harvard, Vancouver, ISO, and other styles
22

Grant, Rebecca, Graham Smith, and Iain Hrynaszkiewicz. "Assessing Metadata and Curation Quality." International Journal of Digital Curation 14, no. 1 (2020): 238–49. http://dx.doi.org/10.2218/ijdc.v14i1.599.

Full text
Abstract:
Since 2017, the publisher Springer Nature has provided an optional Research Data Support service to help researchers deposit and curate data that support their peer-reviewed publications. This service builds on a Research Data Helpdesk, which since 2016 has provided support to authors and editors who need advice on the options available for sharing their research data. In this paper, we describe a short project which aimed to facilitate an objective assessment of metadata quality, undertaken during the development of a third-party curation service for researchers (Research Data Support). We provide details on the single-blind user-testing that was undertaken, and the results gathered during this experiment. We also briefly describe the curation services which have been developed and introduced following an initial period of testing and piloting.
APA, Harvard, Vancouver, ISO, and other styles
23

Chbeir, Richard, Harald Kosch, Frederic Andres, and Hiroshi Ishikawa. "Guest Editors' Introduction: Multimedia Metadata and Semantic Management." IEEE Multimedia 16, no. 4 (2009): 8–11. http://dx.doi.org/10.1109/mmul.2009.101.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Baum, B., J. Christoph, I. Engel, et al. "Integrated Data Repository Toolkit (IDRT)." Methods of Information in Medicine 55, no. 02 (2016): 125–35. http://dx.doi.org/10.3414/me15-01-0082.

Full text
Abstract:
Background: In recent years, research data warehouses moved increasingly into the focus of interest of medical research. Nevertheless, there are only a few center-independent infrastructure solutions available. They aim to provide a consolidated view on medical data from various sources such as clinical trials, electronic health records, epidemiological registries or longitudinal cohorts. The i2b2 framework is a well-established solution for such repositories, but it lacks support for importing and integrating clinical data and metadata. Objectives: The goal of this project was to develop a platform for easy integration and administration of data from heterogeneous sources, to provide capabilities for linking them to medical terminologies and to allow for transforming and mapping of data streams for user-specific views. Methods: A suite of three tools has been developed: the i2b2 Wizard for simplifying administration of i2b2, the IDRT Import and Mapping Tool for loading clinical data from various formats like CSV, SQL, CDISC ODM or biobanks, and the IDRT i2b2 Web Client Plugin for advanced export options. The Import and Mapping Tool also includes an ontology editor for rearranging and mapping patient data and structures as well as annotating clinical data with medical terminologies, primarily those used in Germany (ICD-10-GM, OPS, ICD-O, etc.). Results: With the three tools functional, new i2b2-based research projects can be created, populated and customized to researcher’s needs in a few hours. Amalgamating data and metadata from different databases can be managed easily. With regard to data privacy, a pseudonymization service can be plugged in. Using common ontologies and reference terminologies rather than project-specific ones leads to a consistent understanding of the data semantics. Conclusions: i2b2’s promise is to enable clinical researchers to devise and test new hypotheses even without a deep knowledge of statistical programming.
The approach presented here has been tested in a number of scenarios with millions of observations and tens of thousands of patients. Initially mostly observant, trained researchers were able to construct new analyses on their own. Early feedback indicates that timely and extensive access to their “own” data is appreciated most, but it is also lowering the barrier for other tasks, for instance checking data quality and completeness (missing data, wrong coding).
APA, Harvard, Vancouver, ISO, and other styles
25

Strobel, Jochen. "Performanz in der Briefkommunikation und ihre editorische Repräsentation." Editio 33, no. 1 (2019): 129–40. http://dx.doi.org/10.1515/editio-2019-0009.

Full text
Abstract:
Letters as a form of communication gain their meaning not least through performative practices, traces of which are clearly visible in their materiality. Vice versa, the paper discusses the possibilities of performing and filming correspondences, using the example of the correspondence of Paul Celan and Ingeborg Bachmann (published 2008) and its film version Die Geträumten (‘Dreamed people’, 2016). The performative aspects of letters may also be represented by audio books or picture books. Yet especially digital letter editions should examine the phatic and conative functions of letter communication by supplying specially designed kinds of metadata and their visualizations. The ‚Jenaer Romantikertreffen‘ (‘Jena Meeting of Romanticists’) of 1799 serves as a concluding historical example.
APA, Harvard, Vancouver, ISO, and other styles
26

Lapeña, José Florencio F. "A Dozen Years, A Dozen Roses." Philippine Journal of Otolaryngology-Head and Neck Surgery 33, no. 2 (2018): 4–5. http://dx.doi.org/10.32412/pjohns.v33i2.293.

Full text
Abstract:
Twelve years have passed since my first editorial for the Philippine Journal of Otolaryngology Head and Neck Surgery, on the occasion of the silver anniversary of our journal and the golden anniversary of the Philippine Society of Otolaryngology – Head and Neck Surgery (PSO-HNS).1 Special editorials have similarly marked our thirtieth (pearl)2 and thirty-fifth (coral or jade)3 journal anniversaries, punctuating editorials on a variety of themes in between. Whether they were a commentary on issues and events in the PSO-HNS or Philippine Society, or on matters pertaining to medical research and writing, publication and peer review, I have often wondered whether my words fell on deaf ears. But write, must I-- despite my writer’s doubt.
 
 What then, do a dozen years symbolize? As a baby boomer, I am all too familiar with what “cheaper by the dozen” meant in daily life, outwardly displayed in the matching attire my siblings and I wore on special occasions -- such as Yuletide when we would sing the carol “twelve days of Christmas.”4 We read the comedy “Twelfth Night”5 in school, although I admittedly enjoyed “The Dirty Dozen”6 more than Shakespeare. College ROTC introduced me to the “Daily Dozen” and the grueling Navy count- 1,2,3, ONE! One, two, three, TWO! (One, two, three, four! I love the Marine Corps!) And that is as far as my list of memorable dozens goes, covering five dozen years of life.
 
 Of these, one fifth or 20% of my life has been devoted to our journal. From that perspective, I cannot help but wonder whether, or how it mattered. After 12 years, the day-to-day routine has hardly changed; neither have the periodic problems that precede the birth of each issue. I still find it difficult to solicit and follow-up reviews, and I still burn the midnight oil on weekends and holidays, patiently guiding authors in revising their manuscripts. Nevertheless, our journal has come a long way from where it was when we started (although it has not reached as far and as quickly as I would have wanted it to). Much depends on our authors and the caliber of their contributions, and our reviewers and the quality and timeliness of their reviews. However, despite our efforts to conduct education and training sessions on Medical Writing and Peer Review, the new batch of submissions and reviews each year evinces the need to repeat these regularly. In this regard, the increasing response-ability of our associate editors and continuing support of our society are needed to ensure our progress.
 
 This year, we welcome Dr. Eris Llanes as our new Managing Editor as we thank and congratulate Dr. Tony Chua (who retains his position as Associate Editor) for serving in that role for the past 12 years. We have finally migrated from our previous platform to the Public Knowledge Project’s Open Journal Systems (PKP-OJS), available at https://pjohns.pso-hns.org/index.php/pjohns/index. The PSO-HNS has become a member of the Publishers International Linking Association (PILA), which manages and maintains, deposits and retrieves metadata and digital identifiers, inclusive of associated software and know-how. This will enable us to register Digital Object Identifiers (DOIs) for all our content using the Crossref® system (https://www.crossref.org/about/), making our “research outputs easy to find, cite, link, and assess.”7 We are also subscribing to the Crossref® Similarity Check plagiarism detection service powered by iThenticate® (https://www.crossref.org/services/similarity-check/)7 and are exploring ways and means of converting all our articles to eXtensible Markup Language (XML) format. These steps reflect our continuing efforts to comply with the requirements for indexing in the Directory of Open Access Journals (DOAJ)8 and our re-application for indexing in Scopus®.9 They would not have been possible without the full support of the PSO-HNS Board of Trustees under the leadership of our President, Dr. Aggie Remulla, for which we are truly grateful.
 
 Indeed, the past 12 years may represent a complete cycle (such as 12 hours on a clock, or months in a year, or 12 signs of the zodiac), the first steps in the rebirth of our journal. Although they may not count among the “memorable dozens” of my life, each of these years may be likened to a rose (with its attendant thorns) – a bouquet of a dozen roses that I offer to all of you.
 “for there’s no rose without a thorn,
 no night without the morn,
 no gain without some meaningful loss …”10
APA, Harvard, Vancouver, ISO, and other styles
27

Song, Insun, and Jongho Nang. "Design of a Video Metadata Schema and Implementation of an Authoring Tool for User Edited Contents Creation." Journal of KIISE 42, no. 3 (2015): 413–18. http://dx.doi.org/10.5626/jok.2015.42.3.413.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Rahmanzadeh Heravi, Bahareh, and Jarred McGinnis. "Introducing Social Semantic Journalism." Journal of Media Innovations 2, no. 1 (2015): 131–40. http://dx.doi.org/10.5617/jmi.v2i1.868.

Full text
Abstract:
In the event of breaking news, a wealth of crowd-sourced data, in the form of text, video and image, becomes available on the Social Web. In order to incorporate this data into a news story, the journalist must process, compile and verify content within a very short timespan. Currently this is done manually and is a time-consuming and labour-intensive process for media organisations. This paper proposes Social Semantic Journalism as a solution to help those journalists and editors. Semantic metadata, natural language processing (NLP) and other technologies will provide the framework for Social Semantic Journalism to help journalists navigate the overwhelming amount of UGC for detecting known and unknown news events, verifying information and its sources, identifying eyewitnesses and contextualising the event and news coverage. Journalists will be able to bring their professional expertise to this increasingly overwhelming information environment. This paper describes a framework of technologies that can be employed by journalists and editors to realise Social Semantic Journalism.
APA, Harvard, Vancouver, ISO, and other styles
29

D.P., Gangwar, Anju Pathania, Anand -, and Shivanshu -. "AUTHENTICATION OF DIGITAL MP4 VIDEO RECORDINGS USING FILE CONTAINERS AND METADATA PROPERTIES." International Journal of Computer Science Engineering 10, no. 2 (2021): 28–38. http://dx.doi.org/10.21817/ijcsenet/2021/v10i2/211002004.

Full text
Abstract:
The authentication of digital video recordings plays a very important role in forensic science as well as in other crime investigations. The field of forensic examination of digital video continuously faces new challenges. At present, the authentication of video is carried out on the basis of pixel-based analysis. Due to changes in technology, it was felt that a new approach is required for the authentication of digital video recordings. In the present work a new approach, i.e. analysis of media information and structural analysis of video containers (boxes/atoms) of the MP4 file format, has been applied for the identification of original and edited videos. This work is limited to the MP4 file format because this compressed format is widely used in most mobile phones for video recording and transmission purposes. For this purpose, we recorded more than 200 video samples using more than 20 mobile phones of different makes and models, and edited them with more than 12 freely available video editors. The original and edited MP4 video files were analyzed for their metadata and the structural contents of their file containers (boxes/atoms) using different freeware tools. The details of the work are described below.
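The structural analysis the abstract describes walks the sequence of boxes/atoms that make up an MP4 container (each box is a 4-byte big-endian size followed by a 4-byte type code). As an illustration only, not the authors' tooling, a minimal Python sketch of such a top-level box walk might look like:

```python
import io
import struct

def iter_mp4_boxes(stream):
    """Yield (box_type, size, offset) for each top-level box in an MP4 stream."""
    offset = 0
    while True:
        header = stream.read(8)
        if len(header) < 8:
            break
        size, box_type = struct.unpack(">I4s", header)
        if size == 1:  # 64-bit extended size is stored right after the type field
            size = struct.unpack(">Q", stream.read(8))[0]
        yield box_type.decode("ascii", "replace"), size, offset
        if size == 0:  # a size of 0 means the box extends to the end of the file
            break
        stream.seek(offset + size)
        offset += size

# Synthetic example: a 16-byte 'ftyp' box followed by an empty 8-byte 'moov' box.
sample = (
    struct.pack(">I4s", 16, b"ftyp") + b"mp42" + struct.pack(">I", 0)
    + struct.pack(">I4s", 8, b"moov")
)
boxes = [(t, s) for t, s, _ in iter_mp4_boxes(io.BytesIO(sample))]
# boxes -> [("ftyp", 16), ("moov", 8)]
```

Comparing the order, count and sizes of such boxes between a camera-original file and a re-exported one is one way the kind of container-level inconsistency the paper targets can surface.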
APA, Harvard, Vancouver, ISO, and other styles
30

Senchyne, Jonathan. "The Digital Afterlife of Nineteenth-Century Black Writing: Response to Genealogies of Black Modernity." American Literary History 32, no. 4 (2020): 797–803. http://dx.doi.org/10.1093/alh/ajaa034.

Full text
Abstract:
This response to the forum on the nineteenth-century genealogies of Black modernity explores how nineteenth-century racial and publication contexts continue to shape the digital circulation and digital afterlife of authors. A born-digital publication of a work by George Moses Horton reveals how nineteenth-century racial paratexts re-emerge in metadata and algorithmic capitalism. When Horton is hailed by a data marketing company seeking to claim him as an asset, when his writing is routed through or claimed by white editors and (re)printers (myself included), and when his work is circulated in networks and spaces far beyond his own, the nineteenth-century conditions that structured Horton’s life and writing are revealed to shape his digital afterlife.
APA, Harvard, Vancouver, ISO, and other styles
31

Goldenholz, Daniel M., Shira R. Goldenholz, Kaarkuzhali B. Krishnamurthy, et al. "Using mobile location data in biomedical research while preserving privacy." Journal of the American Medical Informatics Association 25, no. 10 (2018): 1402–6. http://dx.doi.org/10.1093/jamia/ocy071.

Full text
Abstract:
Location data are becoming easier to obtain and are now bundled with other metadata in a variety of biomedical research applications. At the same time, the level of sophistication required to protect patient privacy is also increasing. In this article, we provide guidance for institutional review boards (IRBs) to make informed decisions about privacy protections in protocols involving location data. We provide an overview of some of the major categories of technical algorithms and medical–legal tools at the disposal of investigators, as well as the shortcomings of each. Although there is no “one size fits all” approach to privacy protection, this article attempts to describe a set of practical considerations that can be used by investigators, journal editors, and IRBs.
APA, Harvard, Vancouver, ISO, and other styles
32

Charbonneau, Deborah H., and Joan E. Beaudoin. "State of Data Guidance in Journal Policies: A Case Study in Oncology." International Journal of Digital Curation 10, no. 2 (2016): 136–56. http://dx.doi.org/10.2218/ijdc.v10i2.375.

Full text
Abstract:
This article reports the results of a study examining the state of data guidance provided to authors by 50 oncology journals. The purpose of the study was the identification of data practices addressed in the journals’ policies. While a number of studies have examined data sharing practices among researchers, little is known about how journals address data sharing. Thus, what was discovered through this study has practical implications for journal publishers, editors, and researchers. The findings indicate that journal publishers should provide more meaningful and comprehensive data guidance to prospective authors. More specifically, journal policies requiring data sharing should direct researchers to relevant data repositories and offer better metadata consultation to strengthen existing journal policies. By providing adequate guidance for authors, and helping investigators to meet data sharing mandates, scholarly journal publishers can play a vital role in advancing access to research data.
APA, Harvard, Vancouver, ISO, and other styles
33

Beißwenger, Michael, Wolfgang Imo, Marcel Fladrich, and Evelyn Ziegler. "https://www.mocoda2.de: a database and web-based editing environment for collecting and refining a corpus of mobile messaging interactions." European Journal of Applied Linguistics 7, no. 2 (2019): 333–44. http://dx.doi.org/10.1515/eujal-2019-0004.

Full text
Abstract:
This paper reports on findings from the MoCoDa2 project, which is creating a corpus of private CMC interactions from smartphone apps based on donations by their users. Unlike other projects in the field, the project involves users not only as donors but also as editors of their data: in a web-based editing environment which provides users with access to their raw data, they are supported in pseudonymising their data and enhancing them with rich metadata on the interactional context, on the interlocutors and their relations, and on embedded media files. The resulting corpus will be a useful resource not only for quantitative but also for qualitative CMC research. For representation and annotation of the data, the project builds on best practices from previous projects in the field and cooperates with a language technology partner.
APA, Harvard, Vancouver, ISO, and other styles
34

Menychtas, Andreas, David Tomás, Marco Tiemann, et al. "Dynamic Social and Media Content Syndication for Second Screen." International Journal of Virtual Communities and Social Networking 7, no. 2 (2015): 50–69. http://dx.doi.org/10.4018/ijvcsn.2015040103.

Full text
Abstract:
Social networking apps, sites and technologies offer a wide range of opportunities for businesses and developers to exploit the vast amount of information and user-generated content produced through social networking. In addition, the notion of second screen TV usage appears more influential than ever, with viewers continuously seeking further information and deeper engagement while watching their favourite movies or TV shows. In this work, the authors present SAM, an innovative platform that combines social media, content syndication and targets second screen usage to enhance media content provisioning, renovate the interaction with end-users and enrich their experience. SAM incorporates modern technologies and novel features in the areas of content management, dynamic social media, social mining, semantic annotation and multi-device representation to facilitate an advanced business environment for broadcasters, content and metadata providers, and editors to better exploit their assets and increase their revenues.
APA, Harvard, Vancouver, ISO, and other styles
35

Stenzel, Alexandra, and Florian Rommel. "Prototyping of creation, implementation and visualization of correlation rules and microDocs." SHS Web of Conferences 102 (2021): 02006. http://dx.doi.org/10.1051/shsconf/202110202006.

Full text
Abstract:
Semantic Correlation Rules (SCR) and microDocs are new concepts in the field of content delivery. SCR allow relationships between information units to be defined based on their metadata and therefore allow for the dynamic aggregation of microDocs. The creation of SCR relies heavily on the capabilities of modern content management systems (CMS) or ontology editors. The evaluation and visualization of the emerging microDocs, on the other hand, relies on the capabilities of content delivery portals (CDP). At present, most software solutions in use offer only partial support for both concepts. This paper aims to demonstrate how these limitations can be overcome, to reveal important factors to be considered, and to showcase future possibilities of the aforementioned concepts. For this purpose, we developed a series of prototypes and conceptual visuals regarding the creation of SCR, the aggregation of microDocs, and their visual appearance, taking human perception into account.
APA, Harvard, Vancouver, ISO, and other styles
36

Lapeña, José Florencio. "Open Access: DOAJ and Plan S, Digitization and Disruption." Philippine Journal of Otolaryngology Head and Neck Surgery 34, no. 2 (2019): 4–6. http://dx.doi.org/10.32412/pjohns.v34i2.1111.

Full text
Abstract:
“Those with access to these resources — students, librarians, scientists — 
 you have been given a privilege. You get to feed at this banquet of knowledge 
 while the rest of the world is locked out. But you need not — indeed, morally, 
 you cannot — keep this privilege for yourselves. You have a duty to share it 
 with the world.”
 - Aaron Swartz1 (who killed himself at the age of 26,
 facing a felony conviction and prison sentence
 for downloading millions of academic journal articles)
 
 The Philippine Journal of Otolaryngology Head and Neck Surgery was accepted into the Directory of Open Access Journals (DOAJ) on October 9, 2019. The DOAJ is “a community-curated online directory that indexes and provides access to high quality, open access, peer-reviewed journals”2 and is often cited as a source of quality open access journals in research and scholarly publishing circles that has been considered a sort of “whitelist” as opposed to the now-defunct Beall’s (black) Lists.3
 As of this writing, the DOAJ includes 13,912 journals with 10,983 searchable at article level, from 130 countries with a total of 4,410,788 articles.2 Our article metadata is automatically supplied to, and all our articles are searchable on DOAJ. Because it is OpenURL compliant, once an article is on DOAJ, it is automatically harvestable. This is important for increasing the visibility of our journal, as there are more than 900,000 page views and 300,000 unique visitors a month to DOAJ from all over the world.2 Moreover, many aggregators, databases, libraries, publishers and search portals (e.g. Scopus, Serial Solutions and EBSCO) collect DOAJ free metadata and include it in their products. The DOAJ is also Open Archives Initiative (OAI) compliant, and once an article is in DOAJ, it is automatically linkable.4 
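The OAI compliance the editorial mentions means that DOAJ metadata can be harvested programmatically over OAI-PMH. As a hedged sketch only (the endpoint path and date window here are illustrative assumptions, not taken from the editorial), building a standard ListRecords request might look like:

```python
from urllib.parse import urlencode

# Assumed OAI-PMH endpoint path -- confirm against the harvesting
# target's documentation before real use.
base_url = "https://doaj.org/oai"
params = {
    "verb": "ListRecords",       # OAI-PMH verb for bulk metadata harvesting
    "metadataPrefix": "oai_dc",  # Dublin Core, the format every OAI server must support
    "from": "2019-01-01",        # optional selective-harvesting window (illustrative)
}
request_url = base_url + "?" + urlencode(params)
```

A harvester would fetch `request_url`, parse the returned XML, and keep following `resumptionToken` elements until the record list is exhausted; this is the mechanism that lets aggregators pick up DOAJ article metadata automatically.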
 Being indexed in DOAJ affirms that we are a legitimate open access journal, and enhances our compliance with Plan S.5 The Plan S initiative for Open Access publishing launched in September 2018 requires that from 2021, “all scholarly publications on the results from research funded by public or private grants provided by national, regional, and international research councils and funding bodies, must be published in Open Access Journals, on Open Access Platforms, or made immediately available through Open Access Repositories without embargo.”5 Such open access journals must be listed in DOAJ and identified as Plan S compliant.
 There are mixed reactions to Plan S. A recent editorial observes that subscription and hybrid journals (including such major highly-reputable journals as the New England Journal of Medicine, JAMA, Science and Nature) will be excluded,6 quoting the COAlition S argument that “there is no valid reason to maintain any kind of subscription-based business model for scientific publishing in the digital world.”5 As Gee and Talley put it, “will the rise of open access journals spell the end of the subscription model?”6
 If full open access will be unsustainable for such a leading hybrid medical journal as the Medical Journal of Australia,6 what will happen to the many smaller, low- and middle-income country (southern) journals that cannot sustain a fully open-access model? For instance, challenges facing Philippine journals have been previously described.7 
 According to Tecson-Mendoza, “these challenges relate to (1) the proliferation of journals and related problems, such as competition for papers and sub-par journals; (2) journal funding and operation; (3) getting listed or accredited in major citation databases; (4) competition for papers; (5) reaching a wider and bigger readership and paper contribution from outside the country; and (6) meeting international standards for academic journal publications.”7 Her 2015 study listed 777 Philippine scholarly journals, of which eight were listed in both the (then) Thomson Reuters (TR) and Scopus master lists, while an additional eight were listed in TR alone and a further twelve were listed in Scopus alone.7 To date, there are 11,207 confirmed Philippine periodicals listed on the International Standard Serial Number (ISSN) Portal,8 but these include non-scientific and non-scholarly publications like magazines, newsletters, song hits, and annual reports. What does the future have in store for small scientific publications from the global south?
 I previously shared my insights from the Asia Pacific Association of Medical Journal Editors (APAME) 2019 Convention (http://apame2019.whocc.org.cn) on the World Association of Medical Editors (WAME) Newsletter, a private Listserve for WAME members only.9 These reflections on transformation pressures journals are experiencing were the subject of long and meaningful conversations with the editor of the Philippine Journal of Pathology, Dr. Amado Tandoc III during the APAME 2019 Convention in Xi’an China from September 3-5, 2019. Here are three main points:
 
 the real need for and possibility of joining forces- for instance, the Journal of the ASEAN Federation of Endocrinology Societies (JAFES) currently based in the Philippines has fully absorbed previous national endocrinology journals of Malaysia and the Philippines, which have ceased to exist. While this merger has resulted in a much stronger regional journal, it would be worthwhile to consider featuring the logos and linking the archives of the discontinued journals on the JAFES website. Should the Philippine Journal of Otolaryngology Head and Neck Surgery consider exploring a similar model for the ASEAN Otorhinolaryngological – Head and Neck Federation? Or should individual specialty journals in the Philippines merge under a unified Philippine Medical Association Journal or the National Health Science Journal Acta Medica Philippina? Such mergers would dramatically increase the pool of authors, reviewers and editors and provide a sufficient number of higher-quality articles to publish monthly (or even fortnightly) and ensure indexing in MEDLINE (PubMed).
 the migration from cover-to-cover traditional journals (contents, editorial, sections, etc.) to publishing platforms (e.g. should learned Philippine societies and institutions consider establishing a single platform instead of trying to sustain their individual journals)? Although many scholarly Philippine journals have a long and respectable history, a majority were established after 2000,7 possibly reflecting compliance with requirements of the Commission on Higher Education (CHED) for increased research publications. Many universities, constituent colleges, hospitals, and even academic and clinical departments strove to start their own journals. The resulting journal population explosion could hardly be sustained by the same pool of contributors and reviewers.
 
 In our field for example, faculty members of departments of otorhinolaryngology who submitted papers to their departmental journals were unaware that simultaneously submitting these manuscripts to their hospital and/or university journals was a form of misconduct. Moreover, they were not happy when our specialty journal refused to publish their papers as this would constitute duplicate publication. The problem stemmed from their being required to submit papers for publication in department, hospital and/or university journals instead of crediting their submissions to our pre-existing specialty journal. This escalated the tension on all sides, to the detriment of the new journals (some department journals ceased publication after one or two issues) and authors (whose articles in these defunct journals are effectively lost).
 The older specialty journals are also suffering from the increased number of players with many failing to publish their usual number of issues or to publish them on time. But how many (if any at all) of these journals (especially specialty journals) would agree to yield to a merger with others (necessitating the end of their individual journal)? Would a common platform (rather than a common journal) provide a solution?
 
 more radically, the individual journal as we know it today (including the big northern journals) will cease to exist, as individual OA articles (including preprints) and open (including post-publication) review become freely available and accessible to all. However proud editors may be of the journals they design and develop from cover to cover, with all the special sections and touches that make their “babies” unique, readers access and download individual articles rather than entire journals. A similar fate befell the music industry a decade ago. From the heyday of vinyl (33 and 78 rpm long-playing albums and 45 rpm singles) and 8-tracks, to cassettes, then compact disks (CD’s) and videos, the US recorded music industry was down 63% in 2009 from its peak in the late 70’s, and down 45% from where it was in 1973.10 In 2011, DeGusta observed that “somewhat unsurprisingly, the recording industry makes almost all their money from full-length albums” but “equally unsurprising, no one is buying full albums anymore,” concluding that “digital really does appear to have brought about the era of the single.”10 As McDowell opines, “In the end, the digital transforms not only the ability to disrupt standard publishing practices but instead it has already disrupted and continues to break these practices open for consideration and transformation.”11
 
 Where to then, scientific journals? Without endorsing either, will Sci-Hub (https://sci-hub.se) be to scholarly publishing what Spotify (https://www.spotify.com) is to the music industry? A sobering thought that behooves action.
APA, Harvard, Vancouver, ISO, and other styles
37

Seo, Sunkyung, and Jihyun Kim. "Data journals: types of peer review, review criteria, and editorial committee members’ positions." Science Editing 7, no. 2 (2020): 130–35. http://dx.doi.org/10.6087/kcse.207.

Full text
Abstract:
Purpose: This study analyzed the peer review systems, criteria, and editorial committee structures of data journals, aiming to determine the current state of data peer review and to offer suggestions. Methods: We analyzed the peer review systems and criteria for peer review in nine data journals indexed by Web of Science, as well as the positions of the editorial committee members of the journals. Each data journal’s website was initially surveyed, and the editors-in-chief were queried via email about any information not found on the websites. The peer review criteria of the journals were analyzed in terms of data quality, metadata quality, and general quality. Results: Seven of the nine data journals adopted single-blind and open peer review methods. The remaining two implemented modified models, such as interactive and community review. In the peer review criteria, there was a shared emphasis on the appropriateness of data production methodology and detailed descriptions. The editorial committees of the journals tended to have subject editors or subject advisory boards, while a few journals included positions with the responsibility of evaluating the technical quality of data. Conclusion: Creating a community of subject experts and securing various editorial positions for peer review are necessary for data journals to achieve data quality assurance and to promote reuse. New practices will emerge in terms of data peer review models, criteria, and editorial positions, and further research needs to be conducted.
APA, Harvard, Vancouver, ISO, and other styles
38

Nogueras-Iso, Javier, Miguel Ángel Latre, Rubén Béjar, Pedro R. Muro-Medrano, and F. Javier Zarazaga-Soria. "A model driven approach for the development of metadata editors, applicability to the annotation of geographic information resources." Data & Knowledge Engineering 81-82 (November 2012): 118–39. http://dx.doi.org/10.1016/j.datak.2012.09.001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Derfert-Wolf, Lidia. "Bazy bibliograficzne a POL-index. Plusy i minusy, szanse i zagrożenia na podstawie doświadczeń BazTech." Studia o Książce i Informacji (dawniej: Bibliotekoznawstwo) 35 (July 12, 2017): 11–28. http://dx.doi.org/10.19195/2300-7729.35.1.

Full text
Abstract:
Bibliographic databases and POL-index — strengths, weaknesses, opportunities and threats — based on BazTech experience. The article focuses on the creation of a Polish citation database, POL-index, in the context of cooperation between the various data providers: the editors/publishers of journals and bibliographic databases. It presents the concept of the POL-index and the Polish Impact Coefficient as well as the related legal acts, POL-index management and its contents. It also describes the experiences of bibliographic database creators associated with the transmission of article metadata from scientific journals to POL-index. The article points to the best practices of the BazTech database in these activities, especially the various aspects of cooperation with the journal editors. The aim of the article is to analyze the strengths and weaknesses of cooperation between the different partners creating POL-index, and the opportunities and threats for further development of the system itself as well as of the bibliographic databases. Analysis of the literature, primary sources and statistical data was used. Conclusions — The most important strengths of the cooperation in the POL-index are the benefits for the databases, including greater timeliness and completeness of data as well as an increase in the importance of these services. The weaknesses are: inconsistencies in the law, lack of satisfactory information among publishers, diversity of data formats and frequent changes of POL-index operators. The key opportunity is the potential of the bibliographic databases, which may constitute the core of POL-index, and the threat — the lower rate of completeness of the citations in the POL-index.
APA, Harvard, Vancouver, ISO, and other styles
40

Feng, Xiao, Daniel S. Park, Cassondra Walker, A. Townsend Peterson, Cory Merow, and Monica Papeş. "A checklist for maximizing reproducibility of ecological niche models." Nature Ecology & Evolution 3, no. 10 (2019): 1382–95. http://dx.doi.org/10.1038/s41559-019-0972-5.

Full text
Abstract:
Reporting specific modelling methods and metadata is essential to the reproducibility of ecological studies, yet guidelines rarely exist regarding what information should be noted. Here, we address this issue for ecological niche modelling or species distribution modelling, a rapidly developing toolset in ecology used across many aspects of biodiversity science. Our quantitative review of the recent literature reveals a general lack of sufficient information to fully reproduce the work. Over two-thirds of the examined studies neglected to report the version or access date of the underlying data, and only half reported model parameters. To address this problem, we propose adopting a checklist to guide studies in reporting at least the minimum information necessary for ecological niche modelling reproducibility, offering a straightforward way to balance efficiency and accuracy. We encourage the ecological niche modelling community, as well as journal reviewers and editors, to utilize and further develop this framework to facilitate and improve the reproducibility of future work. The proposed checklist framework is generalizable to other areas of ecology, especially those utilizing biodiversity data, environmental data and statistical modelling, and could also be adopted by a broader array of disciplines.
APA, Harvard, Vancouver, ISO, and other styles
41

Mrowinski, Maciej J., Agata Fronczak, Piotr Fronczak, Olgica Nedic, and Aleksandar Dekanski. "The hurdles of academic publishing from the perspective of journal editors: a case study." Scientometrics 125, no. 1 (2020): 115–33. http://dx.doi.org/10.1007/s11192-020-03619-x.

Full text
Abstract:
In this paper, we provide insight into the editorial process as seen from the perspective of journal editors. We study a dataset obtained from the Journal of the Serbian Chemical Society, which contains information about submitted and rejected manuscripts, in order to find differences between local (Serbian) and external (non-Serbian) submissions. We show that external submissions (mainly from India, Iran and China) constitute the majority of all submissions, while local submissions are in the minority. Most submissions are rejected for technical reasons (e.g. wrong manuscript formatting or problems with images) and many users resubmit the same paper without making the necessary corrections. Manuscripts with just one author are less likely to pass the technical check, which can be attributed to missing metadata. Articles from local authors are better prepared and require fewer resubmissions on average before they are accepted for peer review. The peer review process for local submissions takes less time than for external papers, and local submissions are more likely to be accepted for publication. Also, while there are more men than women among external users, this trend is reversed for local users. In the combined group of local and external users, articles submitted by women are more likely to be published than articles submitted by men.
APA, Harvard, Vancouver, ISO, and other styles
42

Rasmussen, Karsten Boye. "As open as possible and as closed as needed." IASSIST Quarterly 43, no. 3 (2019): 1–2. http://dx.doi.org/10.29173/iq965.

Full text
Abstract:
Welcome to the third issue of volume 43 of the IASSIST Quarterly (IQ 43:3, 2019).
 Yes, we are open! Open data is good. Just a click away. Downloadable 24/7 for everybody. An open government would make the decision-makers’ data open to the public and the opposition. As an example, communal data on bicycle paths could be open, so more navigation apps would flourish and embed the information in maps, which could suggest safer bicycle routes. However, as demonstrated by all three articles in this IQ issue, very often research data include information that requires restrictions concerning data access. The second paper states that data should be ‘as open as possible and as closed as needed’. This phrase originates from a European Union Horizon 2020 project called the Open Research Data Pilot, in ‘Guidelines on FAIR Data Management in Horizon 2020’ (July 2016). Some data need to be closed and not freely available. So once more it shows that a simple solution of total openness and one-size-fits-all is not possible. We have to deal with more complicated schemes depending on the content of data. Luckily, experienced people at data institutions are capable of producing adapted solutions.
 The first article ‘Restricting data’s use: A spectrum of concerns in need of flexible approaches’ describes how data producers have legitimate needs for restricting data access for users. This understanding is quite important as some users might have an automatic objection towards all restrictions on use of data. The authors Dharma Akmon and Susan Jekielek are at ICPSR at the University of Michigan. ICPSR has been the U.S. research archive since 1962, so they have much practice in long-term storage of digital information. From a short-term perspective you might think that their primary task is to get the data in use and thus would be opposed to any kind of access restrictions. However, both producers and custodians of data are very well aware of their responsibility for determining restrictions and access. The caveat concerns the potential harm through disclosure, often exemplified by personal data of identifiable individuals. The article explains how dissemination options differ in where data are accessed and what is required for access. If you are new to IASSIST, the article also gives an excellent short introduction to ICPSR and how this institution guards itself and its users against the hazards of data sharing.
 In the second article ‘Managing data in cross-institutional projects’, the reader gains insight into how FAIR data usage benefits a cross-institutional project. The starting point for the authors - Zaza Nadja Lee Hansen, Filip Kruse, and Jesper Boserup Thestrup – is the FAIR principles that data should be: findable, accessible, interoperable, and re-useable. The authors state that this implies that the data should be as open as possible. However, as expressed in the ICPSR article above, data should at the same time be as closed as needed. Within the EU, the mention of GDPR (General Data Protection Regulation) will always catch the attention of the economical responsible at any institution because data breaches can now be very severely fined. The authors share their experience with implementation of the FAIR principles with data from several cross-institutional projects. The key is to ensure that from the beginning there is agreement on following the specific guidelines, standards and formats throughout the project. The issues to agree on are, among other things, storage and sharing of data and metadata, responsibilities for updating data, and deciding which data format to use. The benefits of FAIR data usage are summarized, and the article also describes the cross-institutional projects. The authors work as a senior consultant/project manager at the Danish National Archives, senior advisor at The Royal Danish Library, and communications officer at The Royal Danish Library. The cross-institutional projects mentioned here stretch from Kierkegaard’s writings to wind energy.
 While this issue started by mentioning that ICPSR was founded in 1962, we end with a more recent addition to the archive world, established at Qatar University’s Social and Economic Survey Research Institute (SESRI) in 2017. The paper ‘Data archiving for dissemination within a Gulf nation’ addresses the experience of this new institution in an environment of cultural and political sensitivity. With a positive view you can regard the benefits as expanding. The start is that archive staff get experience concerning policies for data selection, restrictions, security and metadata. This generates benefits and expands to the broader group of research staff where awareness and improvements relate to issues like design, collection and documentation of studies. Furthermore, data sharing can be seen as expanding in the Middle East and North Africa region and generating a general improvement in the relevance and credibility of statistics generated in the region. Again, the FAIR principles of findable, accessible, interoperable, and re-useable are gaining momentum and being adopted by government offices and data collection agencies. In the article, the story of SESRI at Qatar University is described ahead of sections concerning data sharing culture and challenges as well as issues of staff recruitment, architecture and workflow. Many of the observations and considerations in the article will be of value to staff at both older and infant archives. The authors of the paper are the senior researcher and lead archivist at the archive of the Qatar University Brian W. Mandikiana, and Lois Timms-Ferrara and Marc Maynard – CEO and director of technology at Data Independence (Connecticut, USA). 
 Submissions of papers for the IASSIST Quarterly are always very welcome. We welcome input from IASSIST conferences or other conferences and workshops, from local presentations or papers especially written for the IQ. When you are preparing such a presentation, give a thought to turning your one-time presentation into a lasting contribution. Doing that after the event also gives you the opportunity of improving your work after feedback. We encourage you to login or create an author login to https://www.iassistquarterly.com (our Open Journal System application). We permit authors 'deep links' into the IQ as well as deposition of the paper in your local repository. Chairing a conference session with the purpose of aggregating and integrating papers for a special issue IQ is also much appreciated as the information reaches many more people than the limited number of session participants and will be readily available on the IASSIST Quarterly website at https://www.iassistquarterly.com. Authors are very welcome to take a look at the instructions and layout:
 https://www.iassistquarterly.com/index.php/iassist/about/submissions
 Authors can also contact me directly via e-mail: kbr@sam.sdu.dk. Should you be interested in compiling a special issue for the IQ as guest editor(s) I will also be delighted to hear from you.
 Karsten Boye Rasmussen - September 2019
APA, Harvard, Vancouver, ISO, and other styles
43

Tonin, Fernanda S., Ariane G. Araujo, Mariana M. Fachi, Vinicius L. Ferreira, Roberto Pontarolo, and Fernando Fernandez-Llimos. "Lag times in the publication of network meta-analyses: a survey." BMJ Open 11, no. 9 (2021): e048581. http://dx.doi.org/10.1136/bmjopen-2020-048581.

Full text
Abstract:
Objective: We assessed the extent of lag times in the publication and indexing of network meta-analyses (NMAs). Study design: This was a survey of published NMAs on drug interventions. Setting: NMAs indexed in PubMed (searches updated in May 2020). Primary and secondary outcome measures: Lag times were measured as the time between the last systematic search and the article submission, acceptance, online publication, indexing and Medical Subject Headings (MeSH) allocation dates. Time-to-event analyses were performed considering independent variables (geographical origin, Journal Impact Factor, Scopus CiteScore, open access status) (SPSS V.24, R/RStudio). Results: We included 1245 NMAs. The median time from last search to article submission was 6.8 months (204 days (IQR 95–381)), and to publication was 11.6 months. Only 5% of authors updated their search after first submission. There is a very slightly decreasing historical trend of acceptance (rho=−0.087; p=0.010), online publication (rho=−0.080; p=0.008) and indexing (rho=−0.080; p=0.007) lag times. Journal Impact Factor influenced the MeSH allocation process, but not the other lag times. The comparison between open access versus subscription journals confirmed meaningless differences in acceptance, online publication and indexing lag times. Conclusion: Efforts by authors to update their search before submission are needed to reduce evidence production time. Peer reviewers and editors should ensure authors’ compliance with NMA standards. The accuracy of these findings depends on the accuracy of the metadata used; as we evaluated only NMA on drug interventions, results may not be generalisable to all types of studies.
APA, Harvard, Vancouver, ISO, and other styles
44

Bodard, Gabriel, and Polina Yordanova. "Publication, Testing and Visualization with EFES: A tool for all stages of the EpiDoc XML editing process." Studia Universitatis Babeș-Bolyai Digitalia 65, no. 1 (2020): 17–35. http://dx.doi.org/10.24193/subbdigitalia.2020.1.02.

Full text
Abstract:
"EpiDoc is a set of recommendations, schema and other tools for the encoding of ancient texts, especially inscriptions and papyri, in TEI XML, that is now used by upwards of a hundred projects around the world, and large numbers of scholars seek training in EpiDoc encoding every year. The EpiDoc Front-End Services tool (EFES) was designed to fill the important need for a publication solution for researchers and editors who have produced EpiDoc encoded texts but do not have access to digital humanities support or a well-funded IT service to produce a publication for them. This paper will discuss the use of EFES not only for final publication, but as a tool in the editing and publication workflow, by editors of inscriptions, papyri and similar texts including those on coins and seals. The edition visualisations, indexes and search interface produced by EFES are able to serve as part of the validation, correction and research apparatus for the author of an epigraphic corpus, iteratively improving the editions long before final publication. As we will argue, this research process is a key component of epigraphic and papyrological editing practice, and studying these needs will help us to further enhance the effectiveness of EFES as a tool. To this end we also plan to add three major functionalities to the EFES toolbox: (1) date visualisation and filter—building on the existing “date slider,” and inspired by partner projects such as Pelagios and Godot; (2) geographic visualization features, again building on Pelagios code, allowing the display of locations within a corpus or from a specific set of search results in a map; (3) export of information and metadata from the corpus as Linked Open Data, following the recommendations of projects such as the Linked Places format, SNAP, Chronontology and Epigraphy.info, to enable the semantic sharing of data within and beyond the field of classical and historical editions. 
Finally, we will discuss the kinds of collaboration that will be required to bring about desired enhancements to the EFES toolset, especially in this age of research-focussed, short-term funding. Embedding essential infrastructure work of this kind in research applications for specific research and publication projects will almost certainly need to be part of the solution. Keywords: Text Encoding, Ancient Texts, Epigraphy, Papyrology, Digital Publication, Linked Open Data, Extensible Stylesheet Language Transformations"
APA, Harvard, Vancouver, ISO, and other styles
45

Boyer, Doug M., Gregg F. Gunnell, Seth Kaufman, and Timothy M. McGeary. "MORPHOSOURCE: ARCHIVING AND SHARING 3-D DIGITAL SPECIMEN DATA." Paleontological Society Papers 22 (September 2016): 157–81. http://dx.doi.org/10.1017/scs.2017.13.

Full text
Abstract:
Advancement of understanding in paleontology and biology has always been hindered by difficulty in accessing comparative data. With current and burgeoning technology, the severity of this hindrance can be substantially reduced. Researchers and museum personnel generating three-dimensional (3-D) digital models of museum specimens can archive them using internet repositories that can then be explored and utilized by other researchers and private individuals without a museum trip. We focus on MorphoSource, the largest web archive for 3-D museum data at present. We describe the site, how to use it most effectively in its current form, and best practices for file formats and metadata inclusion to aid the growing community wishing to utilize it for distributing 3-D digital data. The potential rewards of successfully crowd sourcing the digitization of museum collections from the research community are great, as it should ensure rapid availability of the most important datasets. Challenges include long-term governance (i.e., maintaining site functionality, supporting large amounts of digital storage, and monitoring/updating files to prevent bit rot, which is the slow and random corruption of electronic data over time, and data format obsolescence, which is the problem of data becoming unreadable or ineffective because of the loss of functional software necessary for access), and utilization by the community (i.e., detecting and minimizing user error in creating data records, incentivizing data sharing by researchers and institutions alike, and protecting stakeholder rights to data, while maximizing accessibility and discoverability). MorphoSource serves as a proof-of-concept of how these kinds of challenges can be met. Accordingly, it is generally recognized as the most appropriate repository for large, raw datasets of fossil organisms and/or comparative samples. 
Its existence has begun to transform data transparency standards because journal reviewers, editors, and grant officers now often suggest or require that 3-D data be made available through this site.
APA, Harvard, Vancouver, ISO, and other styles
46

Smith, Arfon M., Kyle E. Niemeyer, Daniel S. Katz, et al. "Journal of Open Source Software (JOSS): design and first-year review." PeerJ Computer Science 4 (February 12, 2018): e147. http://dx.doi.org/10.7717/peerj-cs.147.

Full text
Abstract:
This article describes the motivation, design, and progress of the Journal of Open Source Software (JOSS). JOSS is a free and open-access journal that publishes articles describing research software. It has the dual goals of improving the quality of the software submitted and providing a mechanism for research software developers to receive credit. While designed to work within the current merit system of science, JOSS addresses the dearth of rewards for key contributions to science made in the form of software. JOSS publishes articles that encapsulate scholarship contained in the software itself, and its rigorous peer review targets the software components: functionality, documentation, tests, continuous integration, and the license. A JOSS article contains an abstract describing the purpose and functionality of the software, references, and a link to the software archive. The article is the entry point of a JOSS submission, which encompasses the full set of software artifacts. Submission and review proceed in the open, on GitHub. Editors, reviewers, and authors work collaboratively and openly. Unlike other journals, JOSS does not reject articles requiring major revision; while not yet accepted, articles remain visible and under review until the authors make adequate changes (or withdraw, if unable to meet requirements). Once an article is accepted, JOSS gives it a digital object identifier (DOI), deposits its metadata in Crossref, and the article can begin collecting citations on indexers like Google Scholar and other services. Authors retain copyright of their JOSS article, releasing it under a Creative Commons Attribution 4.0 International License. In its first year, starting in May 2016, JOSS published 111 articles, with more than 40 additional articles under review. JOSS is a sponsored project of the nonprofit organization NumFOCUS and is an affiliate of the Open Source Initiative (OSI).
APA, Harvard, Vancouver, ISO, and other styles
47

Gouripeddi, Ram, Danielle Groat, Samir E. Abdelrahman, et al. "3339 Development of a Competency-based Informatics Course for Translational Researchers." Journal of Clinical and Translational Science 3, s1 (2019): 66–67. http://dx.doi.org/10.1017/cts.2019.156.

Full text
Abstract:
OBJECTIVES/SPECIFIC AIMS: Translational researchers often require the use of informatics methods in their work. Lack of an understanding of key informatics principles and methods limits the abilities of translational researchers to successfully implement Findable, Accessible, Interoperable, Reusable (FAIR) principles in grant proposal submissions and performed studies. In this study we describe our work in addressing this limitation in the workforce by developing a competency-based, modular course in informatics to meet the needs of diverse translational researchers. METHODS/STUDY POPULATION: We established a Translational Research Informatics Education Collaborative (TRIEC) consisting of faculty at the University of Utah (UU) with different primary expertise in informatics methods, and working in different tiers of the translational spectrum. The TRIEC, in collaboration with the Foundation of Workforce Development of the Utah Center for Clinical and Translational Science (CCTS), gathered informatics needs of early investigators by consolidating requests for informatics services, assistance provided in grant writing, and consultations. We then reviewed existing courses and literature for informatics courses that focused on clinical and translational researchers [3–9]. Using the structure and content of the identified courses, we developed an initial draft of a syllabus for a Translational Research Informatics (TRI) course which included key informatics topics to be covered and learning activities, and iteratively refined it through discussions. The course was approved by the UU Department of Biomedical Informatics, UU Graduate School and the CCTS. RESULTS/ANTICIPATED RESULTS: The TRI course introduces informatics PhD students, clinicians, and public health practitioners who have a demonstrated interest in research, to fundamental principles and tools of informatics. 
At the completion of the course, students will be able to describe and identify informatics tools and methods relevant to translational research and demonstrate inter-professional collaboration in the development of a research proposal addressing a relevant translational science question that utilizes the state-of-the-art in informatics. TRI covers a diverse set of informatics content presented as modules: genomics and bioinformatics, electronic health records, exposomics, microbiomics, molecular methods, data integration and fusion, metadata management, semantics, software architectures, mobile computing, sensors, recruitment, community engagement, secure computing environments, data mining, machine learning, deep learning, artificial intelligence and data science, open source informatics tools and platforms, research reproducibility, and uncertainty quantification. The teaching methods for TRI include (1) modular didactic learning consisting of presentations and readings and face-to-face discussions of the content, (2) student presentations of informatics literature relevant to their final project, and (3) a final project consisting of the development, critique and chalk talk and formal presentations of informatics methods and/or aims of an National Institutes of Health style K or R grant proposal. For (3), the student presents their translational research proposal concept at the beginning of the course, and works with members of the TRIEC with corresponding expertise. The final course grade is a combination of the final project, paper presentations and class participation. We offered TRI to a first cohort of students in the Fall semester of 2018. DISCUSSION/SIGNIFICANCE OF IMPACT: Translational research informatics is a sub-domain of biomedical informatics that applies and develops informatics theory and methods for translational research. TRI covers a diverse set of informatics topics that are applicable across the translational spectrum. 
It covers both didactic material and hands-on experience in using the material in grant proposals and research studies. TRI’s course content, teaching methodology and learning activities enable students to initially learn factual informatics knowledge and skills for translational research, corresponding to the ‘Remember, Understand, and Apply’ levels of Bloom’s taxonomy [10]. The final project provides the opportunity for applying these informatics concepts, corresponding to the ‘Analyze, Evaluate, and Create’ levels of Bloom’s taxonomy [10]. This inter-professional, competency-based, modular course will develop an informatics-enabled workforce trained in using state-of-the-art informatics solutions, increasing the effectiveness of translational science and precision medicine, and promoting FAIR principles in research data management and processes. Future work includes opening the course to all Clinical and Translational Science Award hubs and publishing the course material as a reference book. While student evaluations for the first cohort will be available at the end of the semester, the true evaluation of TRI will be the number of trainees taking the course and successful grant proposal submissions. References: 1. Wilkinson MD, Dumontier M, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016 Mar 15. 2. National Center for Advancing Translational Sciences. Translational Science Spectrum. National Center for Advancing Translational Sciences. 2015 [cited 2018 Nov 15]. Available from: https://ncats.nih.gov/translation/spectrum 3. Hu H, Mural RJ, Liebman MN. Biomedical Informatics in Translational Research. 1 edition. Boston: Artech House; 2008. 264 p. 4. Payne PRO, Embi PJ, Niland J. Foundational biomedical informatics research in the clinical and translational science era: a call to action. J Am Med Inform Assoc JAMIA. 2010;17(6):615–6. 5. Payne PRO, Embi PJ, editors. 
Translational Informatics: Realizing the Promise of Knowledge-Driven Healthcare. Softcover reprint of the original 1st ed. 2015 edition. Springer; 2016. 196 p. 6. Richesson R, Andrews J, editors. Clinical Research Informatics. 2nd ed. Springer International Publishing; 2019. (Health Informatics). 7. Robertson D, MD GHW, editors. Clinical and Translational Science: Principles of Human Research. 2 edition. Amsterdam: Academic Press; 2017. 808 p. 8. Shen B, Tang H, Jiang X, editors. Translational Biomedical Informatics: A Precision Medicine Perspective. Softcover reprint of the original 1st ed. 2016 edition. S.l.: Springer; 2018. 340 p. 9. Valenta AL, Meagher EA, Tachinardi U, Starren J. Core informatics competencies for clinical and translational scientists: what do our customers and collaborators need to know? J Am Med Inform Assoc. 2016 Jul 1;23(4):835–9. 10. Anderson LW, Krathwohl DR, Airasian PW, Cruikshank KA, Mayer RE, Pintrich PR, Raths J, Wittrock MC. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives, Abridged Edition. 1 edition. New York: Pearson; 2000.
APA, Harvard, Vancouver, ISO, and other styles
48

Iglesias-Osores, Sebastian. "Posicionamiento en buscadores para la difusión digital de artículos científicos." Revista Experiencia en Medicina del Hospital Regional Lambayeque 5, no. 3 (2019): 160–61. http://dx.doi.org/10.37065/rem.v5i3.370.

Full text
Abstract:
Dear Editor: With the advent of the World Wide Web, the "search" for information has become a major pillar of a competitive, commercial, globalized landscape. Online scientific electronic libraries are one player in this market; other stakeholders include, among others, publishers, online content integrators, and Internet search engines (Summann & Lossau, 2004). Search engine optimization (SEO) is the process of improving the position of web content so that the page appears in the results of the major search engines. Every search engine has its own way of ranking the importance of a website: some focus on content, while others examine meta tags to identify who a website belongs to and what its business is (Lawrence & Giles, 1999).
 The academic invisible Web consists of all the databases and collections that are relevant to academia but that general-purpose Internet search engines cannot search. Indexing this part of the invisible Web, which contains many researchers' articles, is fundamental for scientific search engines (Lewandowski & Mayr, 2006). Scientific publications on the web can adopt SEO criteria to obtain better rankings and appear in the first positions of search result lists, which gives them an advantage when it comes to being cited. It is therefore important to observe readers and their usage behavior closely and apply a kind of academic search engine optimization, which involves creating, publishing, and modifying scholarly literature in a way that makes it easy for academic search engines to crawl and index it (Beel, Gipp, & Wilde, 2010), for example by adding metadata for the title and abstract. Such metadata is especially useful because it allows search engines to attribute semantic values to the content of the web page and rank it appropriately in search results, achieving up to 90% indexing (Onaifo & Rasmussen, 2013).
 Scientific articles should use keywords from DeCS BIREME (http://decs.bvs.br/) and MeSH (https://meshb.nlm.nih.gov), and the title and body should include terms related to the topic that rank in the top positions of the Google Keyword Planner (https://ads.google.com/aw/keywordplanner/home) and Google Trends (https://trends.google.es/trends/). Together with good writing and adherence to the norms of scientific writing, this will give the article better visibility and positioning in web search engines. These tools should also be applied when building journal websites and repositories, so that the articles published there gain greater visibility. The divulgation and dissemination of science are the goals of every new piece of research, goals that are often not fully met; by using these tools we help research reach a wider audience.
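The article-level metadata the letter recommends is typically embedded as Highwire Press-style `<meta>` tags, the scheme Google Scholar's inclusion guidelines describe for indexing scholarly pages. A minimal sketch in Python of generating such tags (the helper function and the sample record, taken from an article cited above, are illustrative):

```python
from html import escape

def citation_meta_tags(meta: dict) -> str:
    """Render Highwire Press-style <meta> tags from a dict of
    article metadata; list values become repeated tags."""
    lines = []
    for name, value in meta.items():
        values = value if isinstance(value, list) else [value]
        for v in values:  # e.g. one citation_author tag per author
            lines.append(
                f'<meta name="{escape(name)}" content="{escape(str(v))}">'
            )
    return "\n".join(lines)

article = {
    "citation_title": "Regia: a metadata editor for audiovisual documents",
    "citation_author": ["Gennaro, Claudio"],
    "citation_journal_title": "Multimedia Tools and Applications",
    "citation_publication_date": "2007",
}
print(citation_meta_tags(article))
```

Placing this block in the page's `<head>` lets academic crawlers read the title, authors, journal, and date without parsing the article text.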
APA, Harvard, Vancouver, ISO, and other styles
49

Syed, Sana, Marium Naveed Khan, Alexis Catalano, et al. "3165 Diseased and Healthy Gastrointestinal Tissue Data Mining requires an Engaged Transdisciplinary team." Journal of Clinical and Translational Science 3, s1 (2019): 131–32. http://dx.doi.org/10.1017/cts.2019.299.

Full text
Abstract:
OBJECTIVES/SPECIFIC AIMS: To establish an effective team of researchers working towards developing and validating prognostic models that employ image analyses and other numerical metadata to better understand pediatric undernutrition, and to learn how different approaches can be brought together collaboratively and efficiently. METHODS/STUDY POPULATION: Over the past 18 months we have established a transdisciplinary team spanning three countries and the Schools of Medicine, Engineering, Data Science and Global Health. We first identified two team leaders, specifically a pediatric physician scientist (SS) and a data scientist/engineer (DB). The leaders worked together to recruit team members, with the understanding that different ideas are encouraged and will be used collaboratively to tackle the problem of pediatric undernutrition. The final data analytic and interpretative core team consisted of four data science students, two PhD students, an undergraduate biology major, a recent medical graduate, and a PhD research scientist. Additional collaborative members included faculty from Biomedical Engineering and the School of Medicine (Pediatrics and Pathology), along with international Global Health faculty from Pakistan and Zambia. We learned early on that it was important to understand each member's motivation for contributing to the project and to align that motivation with the overall goals of the team. This helped us prioritize team member tasks and streamline ideas. We also incorporated weekly meetings (monthly/bimonthly for global partners) with informal oral presentations covering each member's current progress, thoughts and concerns, and next experimental goals. This method gave team leaders a 360° feedback mechanism. Overall, we assessed the effectiveness of our team by two mechanisms: 1) ongoing team member feedback, including from team leaders, and 2) progress of the research project.
RESULTS/ANTICIPATED RESULTS: Our feedback showed that during the initial development of the team there was hesitance in communication due to the diverse backgrounds of our members along with different cultural/social expectations. We used ice-breaking methods such as dedicated time for brief introductions, career directions, and life goals for each team member. We subsequently found that, with one exception, all team members considered our working environment professional and conducive to productivity. We also learned from our ongoing feedback that, because of the complexity of the different disciplines, some information was at times lost due to differences in educational background. We have since employed new methods to relay information more effectively: not just sharing literature but also explaining its content. The progress of our research project has varied over the past 4-6 months. There was a steep learning curve for almost every member; for example, none of the data science students had ever studied anything related to medicine, and they had minimal if any exposure to the ethics of medical research. Conversely, team members with medical/biology backgrounds had little prior exposure to computational modeling, computer engineering, or the verbiage of communicating mathematical algorithms. While this may have slowed our progress, we learned that by asking questions and engaging every member it was easier to delegate tasks effectively. Once our team reached an overall understanding of each member's goals, there was steady progress in the project, with new results and new methods of analysis being tested every week. DISCUSSION/SIGNIFICANCE OF IMPACT: We expect that our ongoing collaboration will result in the development of new and novel modalities to understand and diagnose pediatric undernutrition, and that it can serve as a model for tackling several other problems.
As with many team science projects, credit and authorship are challenges for which we are outlining creative strategies, as suggested by the International Committee of Medical Journal Editors (ICMJE) and other literature.
APA, Harvard, Vancouver, ISO, and other styles
50

Corrêa, André Garcia, and Daniel Ribeiro Silva Mill. "Hierarquia social dos objetos: o capital científico das tecnologias digitais de informação e comunicação no campo da educação (Social Hierarchy of objects: The scientific capital of the Digital Information and Communication Technologies in the field of Education)." Revista Eletrônica de Educação 14 (July 28, 2020): 3756106. http://dx.doi.org/10.14244/198271993756.

Full text
Abstract:
This research empirically tests a concept from Bourdieu's sociology of science: the social hierarchy of objects. Looking at the specific field of Education, the research sought to measure the position of Digital Information and Communication Technologies (DICT) within this hierarchy. To this end, we collected metadata of theses defended between 1996 and 2016 in Brazilian postgraduate programs in Education rated five or higher. The data indicated production by HEI and by geography. The keywords were also analyzed as a network, and centrality and density indicators were used to map the hierarchy of objects and, consequently, the distribution of scientific capital among them in the field. Empirical tests showed that DICT, and distance education as a modality necessarily mediated by a technology, have relevance within the field as a concentration of symbolic capital. The analysis showed that the hierarchy formed by the technologies segment was denser than the total network and that there was considerable weight for the distance education modality. Regarding the hierarchy of objects in the DE subfield, a certain autonomy was observed in relation to the complete field, since its objects turned to student-related subjects of greater importance to the modality. Finally, statements from other DE studies were compared with the research data, showing some quantitative divergences but qualitative convergences that emphasize the same observed trends and corroborate the analyses of this research.
Keywords: Education, Sociology of science, Scientific production, Digital information and communication technologies.
References:
ÁVILA, Patrícia. A distribuição do capital científico: diversidade interna e permeabilidade externa no campo científico. Sociologia – problemas e práticas, Lisboa, n. 25, p. 9-49, 1997.
BLONDEL, Vincent D.; GUILLAUME, Jean-Loup; LAMBIOTTE, Renaud; LEFEBVRE, Etienne. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, n. 10, p. 1000, 2008.
BORGATTI, Stephen P.; EVERETT, Martin G.; JOHNSON, Jeffrey C. Analyzing Social Networks. London: SAGE, 2013.
BOURDIEU, Pierre. Método científico e hierarquia social dos objetos. In: NOGUEIRA, Maria Alice; CATANI, Afrânio (Org.). Escritos de educação. Petrópolis: Editora Vozes, 2007. p. 33-38.
BOURDIEU, Pierre. Os usos sociais da ciência: por uma sociologia clínica do campo científico. 1. ed. São Paulo: Editora UNESP, 2004. 88 p.
CABRAL, Ana Lúcia Tinoco; TARCIA, Rita Maria Lino. O novo papel do professor na EaD. In: LITTO, Fredric Michael; FORMIGA, Marcos. Educação a distância: o estado da arte. v. 2. São Paulo: Pearson Education do Brasil, 2011. 443 p.
CORRÊA, André Garcia; MILL, Daniel Ribeiro Silva. Análise da percepção do docente virtual no ensino de música pela educação a distância. Acta Scientiarum. Education, Maringá, v. 38, n. 4, p. 425-436, out.-dez. 2016.
COSTA, Larissa et al. Redes: uma introdução às dinâmicas da conectividade e da auto-organização. 1. ed. Brasília: WWF-Brasil, 2003. 91 p.
KENSKI, Vani Moreira; MEDEIROS, Rosângela de Araújo; ORDÉAS, Jean. Grupos que pesquisam Educação a Distância no Brasil: primeiras aproximações. In: MILL, Daniel Ribeiro Silva et al. (Orgs.). Educação a distância: dimensões da pesquisa, da mediação e da formação. 1. ed. São Paulo: Artesanato Educacional, 2018. 194 p.
MILL, Daniel Ribeiro Silva; OLIVEIRA, Márcia Rozenfeld Gomes. A Educação a distância em pesquisas acadêmicas: uma análise bibliométrica em teses do campo educacional. Educar em Revista, Curitiba, n. especial 4, p. 15-36, 2014.
MOORE, Michael G.; KEARSLEY, Greg. Educação a distância: uma visão integrada. São Paulo: Cengage Learning, 2010. 398 p.
PELLEGRINI, Thalita de Oliveira; SILVA, Sheila Serafim da; FERREIRA, Maxwel de Azevedo. O perfil da pesquisa acadêmica sobre educação a distância no Brasil e no mundo. REAd, Porto Alegre, v. 23, n. especial, p. 371-393, dez. 2017.
SANTOS, Elaine Maria dos et al. Educação a distância no Brasil: evolução da produção científica. In: CONGRESSO INTERNACIONAL DE EDUCAÇÃO A DISTÂNCIA, 13., 2007, Curitiba. Anais [...]. São Paulo: ABED, 2007.
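The network indicators the abstract relies on are straightforward to compute on a keyword co-occurrence graph. A minimal pure-Python sketch (the toy keyword graph below is invented for illustration; density is 2E/(N(N-1)) and normalized degree centrality is deg(v)/(N-1), as in standard social network analysis):

```python
# Toy co-occurrence graph: each edge links two keywords that
# appeared together in at least one thesis (illustrative data).
edges = {
    frozenset(p) for p in [
        ("educacao a distancia", "tecnologias digitais"),
        ("educacao a distancia", "aluno"),
        ("tecnologias digitais", "producao cientifica"),
        ("educacao a distancia", "producao cientifica"),
    ]
}
nodes = sorted({k for e in edges for k in e})

n, m = len(nodes), len(edges)
density = 2 * m / (n * (n - 1))  # fraction of possible ties present

degree = {v: sum(v in e for e in edges) for v in nodes}
centrality = {v: degree[v] / (n - 1) for v in nodes}  # normalized degree

print(f"density = {density:.2f}")
for v, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{v}: {c:.2f}")
```

Comparing the density of a subgraph (e.g. only technology-related keywords) against the full network's density is exactly the kind of comparison the abstract reports when it finds the technologies segment denser than the total network.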
APA, Harvard, Vancouver, ISO, and other styles