
Journal articles on the topic 'Tokenisation'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 journal articles for your research on the topic 'Tokenisation'.

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of each publication as a PDF and read its abstract online whenever these are available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Lima, Gonçalo, Robert Barnes, and Charles Kerrigan. "The benefits of asset tokenisation within securitisation." Journal of Securities Operations & Custody 16, no. 4 (September 1, 2024): 366. http://dx.doi.org/10.69554/jetd8525.

Full text
Abstract:
Securitisation has allowed banks to move from an originate-to-hold to an originate-to-distribute model. While it is widely accepted that this helped banks to achieve higher profitability and diversification, it is also regarded as the main cause of the 2007–08 global financial crisis. The lack of transparency between the securities issued and the performance of the underlying loans led to extreme risk-taking and amplified the impact once the loans started to underperform. This paper explores asset tokenisation, which can bring benefits similar to those of securitisation while enabling more effective management of the risks, thanks to the traceability and immutability of distributed ledger technology (DLT). The paper argues that tokenisation of assets has now progressed beyond the experimentation phase and is being adopted by major commercial banks, central banks and financial market infrastructures (FMIs). In addition, it describes the regulatory tailwinds encouraging market participants to deploy and use the technology that makes tokenisation possible. While tokens and securities are both claims on assets, tokenisation's additional capabilities of traceability and programmability enable the terms of a claim to be modified programmatically under specific circumstances, for example through a smart contract. A further positive attribute of tokenisation is that it can significantly improve and compress the workflow of existing and new securities, bringing considerable benefits from both operational and cost perspectives. The paper goes on to argue that generalised adoption of DLT, together with harmonised standards, interoperability and integration for tokenisation, is among the key requirements on which market participants and technology providers are actively working.
Finally, the paper makes the point that cryptographically proven data also acts as a stepping-stone for high-quality artificial intelligence (AI) implementations, which can continue to expand productivity and profitability for regulated financial institutions.
APA, Harvard, Vancouver, ISO, and other styles
2

B, SONIKA. "Financial and Operational Impacts of Tokenization in Enterprise Environments – A Case Study of Palo Alto Networks." International Scientific Journal of Engineering and Management 04, no. 05 (May 16, 2025): 1–9. https://doi.org/10.55041/isjem03654.

Full text
Abstract:
With increasing dependency on digital technology, financial systems are becoming ever more susceptible to cyberattacks, data theft and fraud. As online banking, e-commerce and mobile wallets rapidly expand, secure and sustainable digital infrastructures have never been more crucial. One of the most important technological innovations for protecting sensitive financial data is tokenisation, a method that substitutes sensitive information, such as credit card numbers and personally identifiable information (PII), with surrogate identifiers called tokens. These tokens are worthless to an attacker, so even if systems are breached the original sensitive information remains safe. This report investigates the financial and operational impacts of tokenisation in enterprise settings, using Palo Alto Networks, a leading cybersecurity company, as a case study. In contrast to conventional research that mainly addresses the technical or cybersecurity implications of tokenisation, this research focuses on its financial and G&A operational impacts. It examines how tokenisation affects routine operations such as financial reporting, compliance management, cost control and audit readiness. The author's internship experience with Palo Alto Networks, assisting IT and G&A teams with month-end activities and financial reporting, brought to the fore the tangible fiscal benefits of tokenisation: it not only lowers auditing expenses but also minimises the fiscal liability of data breaches and simplifies compliance with rules such as PCI-DSS, GDPR and SOX. With the growing relevance of tokenisation in regulatory as well as operational finance, financial professionals must appreciate the cost, compliance and risk implications of security technologies.
Hence, this report aims to bridge the gap between technology implementation and financial operations, highlighting the interdependencies between security investments and enterprise financial performance. Through primary research, including surveys and interviews with Palo Alto Networks' finance and IT personnel, it draws on first-hand data about practical uses of tokenisation. The results will show how financial operations evolve in response to changing cybersecurity protocols and how technologies such as tokenisation are incorporated into financial planning and strategy. At a time when digital transformation is redefining industries, tokenisation presents a scalable and efficient way to secure financial information. This study shows how tokenisation, as a critical security control, offers concrete financial and operational benefits to organisations, assisting finance teams in aligning strategies with risk management and security protocols.
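The substitution mechanism this abstract describes, replacing a card number with a surrogate that is worthless to an attacker while the real value stays in a secured mapping, can be sketched as follows. This is a minimal illustration under invented names (`TokenVault`, `tokenise`), not Palo Alto Networks' setup or a PCI-certified implementation:

```python
import secrets

class TokenVault:
    """Minimal token vault: maps card numbers (PANs) to random surrogate
    tokens and back. Illustrative only; a real vault adds encryption at
    rest, access control and audit logging."""

    def __init__(self):
        self._token_to_pan = {}
        self._pan_to_token = {}

    def tokenise(self, pan: str) -> str:
        if pan in self._pan_to_token:   # idempotent: same PAN, same token
            return self._pan_to_token[pan]
        # Random digits of the same length, so downstream systems that
        # expect a 16-digit field keep working unchanged.
        token = "".join(secrets.choice("0123456789") for _ in pan)
        while token in self._token_to_pan or token == pan:
            token = "".join(secrets.choice("0123456789") for _ in pan)
        self._token_to_pan[token] = pan
        self._pan_to_token[pan] = token
        return token

    def detokenise(self, token: str) -> str:
        return self._token_to_pan[token]

vault = TokenVault()
t = vault.tokenise("4111111111111111")
assert t != "4111111111111111" and len(t) == 16
assert vault.detokenise(t) == "4111111111111111"
```

A breached database of such tokens reveals nothing about the underlying PANs, which is the property the abstract's compliance and audit arguments rest on.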
3

Sengounder, Mr Nallasivam. "Modern Real Estate Investment Methods for Public Accessibility and the Global Rise of Real Estate Tokenization: A Case Study on Brazil and Thailand." International Journal for Research in Applied Science and Engineering Technology 13, no. 3 (March 31, 2025): 635–38. https://doi.org/10.22214/ijraset.2025.67332.

Full text
Abstract:
The real estate industry is undergoing a profound transformation with the integration of digital investment methods, particularly real estate tokenisation. This modern investment approach leverages blockchain technology to divide property ownership into digital tokens, making real estate assets more accessible to a broader range of investors. Tokenisation facilitates fractional ownership by reducing the financial barriers typically associated with property investments, allowing individuals to acquire portions of valuable real estate assets rather than whole properties. This study investigates the impact of tokenisation on public access to real estate investments in Brazil and Thailand, highlighting its effects on market growth, investor participation and economic sustainability. The case study examines the regulatory frameworks, emerging market trends and technological advances in these regions, comparing their adoption, progress and attendant challenges. The researchers seek to determine how tokenisation has influenced real estate investment strategies, reshaped traditional ownership models and fostered economic inclusion. Additionally, the study evaluates the broader implications of tokenisation for the global real estate sector, offering insights into its future trajectory.
4

Pretorius, Laurette, Biffie Viljoen, Ansu Berg, and Rigardt Pretorius. "Tswana finite state tokenisation." Language Resources and Evaluation 49, no. 4 (December 24, 2014): 831–56. http://dx.doi.org/10.1007/s10579-014-9292-1.

Full text
5

Tulchynska, Svitlana, and Mykyta Falko. "Management of the Activation of an Enterprise's Financial Potential Based on Tokenisation" [in Ukrainian]. Herald of Khmelnytskyi National University. Economic Sciences 334, no. 5 (September 26, 2024): 143–50. https://doi.org/10.31891/2307-5740-2024-334-19.

Full text
Abstract:
The article examines the current state of managing the activation of the financial potential of enterprises in Ukraine, based on an analysis of the total liquidity and the structure of current assets and liabilities of economic entities. The authors formulate strategies for managing the activation of financial potential (activation of assets; activation of equity and liabilities; parallel activation of assets and liabilities) and their main components. Approaches to implementing the proposed strategies are defined and divided into two main types: classical and alternative. The first is based on traditional methods of financial management, while the second, alternative, approach proposes introducing the latest digital technologies, such as blockchain, and using online platforms to improve the efficiency of financial resource management. The article focuses on the model of the alternative approach, in particular the process of asset tokenisation, which is considered an innovative way of managing the activation of the financial potential of enterprises. The advantages of asset tokenisation are considered: it allows enterprises to increase their liquidity, attract additional financial resources, and ensure greater transparency and security of operations through blockchain technology. The authors argue for the feasibility of applying securities tokenisation in Ukraine, given the low level of development of the country's stock market, the high level of acceptance of cryptocurrency, the developed regulatory framework and the technical feasibility of tokenisation. The article provides a detailed algorithm for the securities tokenisation process, which can be used as a basis for further development of financial technologies in Ukraine.
6

Fam, Rashel, and Yves Lepage. "A Study of Analogical Density in Various Corpora at Various Granularity." Information 12, no. 8 (August 5, 2021): 314. http://dx.doi.org/10.3390/info12080314.

Full text
Abstract:
In this paper, we inspect the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level, based on the level of form rather than the level of semantics. Experiments are carried out on two different corpora in six European languages known to have various levels of morphological richness. Corpora are tokenised using several tokenisation schemes: character, sub-word and word. For the sub-word tokenisation scheme, we employ two popular sub-word models: the unigram language model and byte-pair encoding. The results show that the corpus with a higher type-token ratio tends to have higher analogical density. We also observe that masking the tokens based on their frequency helps to increase the analogical density. As for the tokenisation scheme, the results show that analogical density decreases from the character level to the word level. However, this is not true when tokens are masked based on their frequencies. We find that tokenising the sentences using sub-word models and masking the least frequent tokens increases analogical density.
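Of the two sub-word models compared above, byte-pair encoding is the easier to sketch: starting from characters, it repeatedly merges the most frequent adjacent symbol pair. A bare-bones version (illustrative only; the study's experiments would use full toolkit implementations) might look like:

```python
from collections import Counter

def byte_pair_encoding(words, num_merges):
    """Learn BPE merges from a list of words. Each word starts as a
    sequence of characters; the most frequent adjacent symbol pair is
    merged into a new symbol, num_merges times."""
    vocab = {tuple(w): f for w, f in Counter(words).items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

merges = byte_pair_encoding(["low"] * 5 + ["lower"] * 2 + ["lowest"] * 3, 2)
# merges == [('l', 'o'), ('lo', 'w')]
```

The granularity of the resulting tokens, and hence the analogical density measured over them, is controlled by the number of merges: few merges keep the corpus close to the character level, many merges approach the word level.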
7

Milkau, Udo. "Will tokenisation deliver efficiency? And what kind?" Journal of Securities Operations & Custody 17, no. 1 (December 1, 2024): 79. http://dx.doi.org/10.69554/gcpr5193.

Full text
Abstract:
This paper attempts to provide a careful and balanced look at some of the benefits and challenges of the tokenisation of securities. A fundamental problem is the lack of consistency in how 'tokenisation' should be defined. According to a report by McKinsey & Company in 2023, 'Tokenization adoption was poised for success six years ago, but progress was limited … the path could be different this time'. In the past, tokenisation was: (1) limited to a process of creating a representation of financial, intellectual or physical assets on a blockchain (ie distributed ledger technology [DLT]); and (2) discussed as a narrative of disintermediation and programmability as a basis for efficiency gains. As it became clear that DLT, with its basic game-theoretical approach, comes with high costs and opaque governance, highly efficient traditional platforms such as the European TARGET2-Securities (T2S), with atomic settlement and delivery-versus-payment (DvP), have emerged as blueprints for efficiency. A proposal of the Bank for International Settlements (BIS) for a unified ledger and Project Guardian of the Monetary Authority of Singapore (MAS), both in 2023, together with a 2024 announcement by the U.S. Securities Industry and Financial Markets Association (SIFMA) about 'settlement on a common regulated venue … [of] tokenized assets', can be regarded as paradigms for a new and pragmatic approach, with coordination and synchronisation as key objectives in the context of financial market infrastructures.
8

Martin, Luther. "Protecting credit card information: encryption vs tokenisation." Network Security 2010, no. 6 (June 2010): 17–19. http://dx.doi.org/10.1016/s1353-4858(10)70084-2.

Full text
9

Haggett, Shawn, and Greg Knowles. "Tokenisation and compression of Java class files." Journal of Systems Architecture 58, no. 1 (January 2012): 1–12. http://dx.doi.org/10.1016/j.sysarc.2011.09.002.

Full text
10

Barbaroux, Nicolas, and Jean Barthélemy. "Quel support monétaire pour accompagner la « tokenisation » financière ?" Revue française d'économie Vol XXXVIII, no. 4 (May 2, 2024): 113–48. http://dx.doi.org/10.3917/rfe.238.0113.

Full text
Abstract:
Financial tokenisation invites reflection on new monetary instruments. It consists of recording and transferring rights and value within a digital, distributed and decentralised system. It promises efficiency gains in financial exchanges and fertile ground for financial innovation. To bear full fruit, however, this innovation requires compatible monetary instruments: 'tokenised money'. Three candidates are often mentioned: stablecoins, tokenised bank deposits and interbank central bank digital currency (CBDC). This article assesses these candidates against three criteria: the degree of monetary fragmentation, the financing of the economy and the ethics of money. In light of these three criteria, the article argues for a dual approach based on the coexistence of at least two forms of monetary instrument: private monetary instruments, for example tokenised deposits, and a public monetary instrument, an interbank CBDC.
11

Ortiz-Yepes, Diego. "A critical review of the EMV payment tokenisation specification." Computer Fraud & Security 2014, no. 10 (October 2014): 5–12. http://dx.doi.org/10.1016/s1361-3723(14)70539-1.

Full text
12

Lochlainn, Mícheál Mac. "Sintéiseoir 1.0: a multidialectical TTS application for Irish." ReCALL 22, no. 2 (May 2010): 152–71. http://dx.doi.org/10.1017/s0958344010000054.

Full text
Abstract:
This paper details the development of a multidialectical text-to-speech (TTS) application, Sintéiseoir, for the Irish language. This work is being carried out in the context of Irish as a lesser-used language, where learners and other L2 speakers have limited direct exposure to L1 speakers and speech communities, and where native sound systems and vocabularies can be seen to be receding even among L1 speakers, particularly the young.

Sintéiseoir essentially implements the diphone concatenation model, albeit augmented to include phones, half-phones and, potentially, other phonic units. It is based on a platform-independent framework comprising a user interface, a set of dialect-specific tokenisation engines, a concatenation engine and a playback device. The tokenisation strategy is entirely rule-based and does not refer to dictionary look-ups. Provision has been made for prosodic processing in the framework but has not yet been implemented. Concatenation units are stored in the form of WAV files on the local file system.

Sintéiseoir's user interface (UI) provides a text field that allows the user to submit a grapheme string for synthesis and a prompt to select a dialect. It also filters input to reject graphotactically invalid strings, restrict input to alphabetic characters and certain punctuation marks found in Irish orthography, and ensure that a dialect has, indeed, been selected. The UI forwards the filtered grapheme string to the appropriate tokenisation engine. This searches for specified substrings and maps them to corresponding tokens that themselves correspond to concatenation units. The resultant token string is then forwarded to the concatenation engine, which retrieves the relevant concatenation units, extracts their audio data and combines them in a new unit. This is then forwarded to the playback device.

The terms of reference for the initial development of Sintéiseoir specified that it should be capable of uttering, individually, the 99 most common Irish lemmata in the dialects of An Spidéal, Músgraí Uí Fhloínn and Gort a' Choirce, which are internally consistent dialects within the Connacht, Munster and Ulster regions, respectively, of the dialect continuum. Audio assets to satisfy this requirement have already been prepared, and have been found to produce reasonably accurate output. The tokenisation engine is, however, capable of processing a wider range of input strings and, when required concatenation units are found to be unavailable, returns a report via the user interface.
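A rule-based tokenisation engine of the kind described, which maps grapheme substrings onto concatenation units without dictionary look-ups, can be sketched as a greedy longest-match segmenter. The function name and the tiny unit inventory below are invented for illustration; the real dialect-specific engines are considerably more elaborate:

```python
def longest_match_tokenise(text, units):
    """Greedy longest-match segmentation of a grapheme string into known
    units (an illustrative sketch of a rule-based, dictionary-free mapper,
    not the actual Sintéiseoir engine). Raises ValueError when no unit
    matches, mirroring the engine's report of unavailable units."""
    tokens, i = [], 0
    max_len = max(map(len, units))
    while i < len(text):
        # Try the longest candidate substring first, then shorter ones.
        for n in range(min(max_len, len(text) - i), 0, -1):
            if text[i:i + n] in units:
                tokens.append(text[i:i + n])
                i += n
                break
        else:
            raise ValueError(f"no unit matches at position {i}: {text[i:]!r}")
    return tokens

# Multi-character units win over single characters where they match:
print(longest_match_tokenise("scoil", {"s", "c", "o", "i", "l", "sc", "oi"}))
# → ['sc', 'oi', 'l']
```

Each emitted token would then index a stored WAV concatenation unit for the selected dialect.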
13

Chawda, Krishi, Kush Mehta, Tanishka Borkar, and Sarita Ambadekar. "NFTs and property tokenisation: advancements and applications in blockchain technology." International Journal of Blockchains and Cryptocurrencies 5, no. 1 (2024): 44–66. http://dx.doi.org/10.1504/ijbc.2024.140167.

Full text
14

Saario, Lassi, Tanja Säily, Samuli Kaislaniemi, and Terttu Nevalainen. "The burden of legacy: Producing the Tagged Corpus of Early English Correspondence Extension (TCEECE)." Research in Corpus Linguistics 9, no. 1 (2021): 104–31. http://dx.doi.org/10.32714/ricl.09.01.07.

Full text
Abstract:
This paper discusses the process of part-of-speech tagging the Corpus of Early English Correspondence Extension (CEECE), as well as the end result. The process involved normalisation of historical spelling variation, conversion from a legacy format into TEI-XML, and finally, tokenisation and tagging by the CLAWS software. At each stage, we had to face and work around problems such as whether to retain original spelling variants in corpus markup, how to implement overlapping hierarchies in XML, and how to calculate the accuracy of tagging in a way that acknowledges errors in tokenisation. The final tagged corpus is estimated to have an accuracy of 94.5 per cent (in the C7 tagset), which is circa two percentage points (pp) lower than that of present-day corpora but respectable for Late Modern English. The most accurate tag groups include pronouns and numerals, whereas adjectives and adverbs are among the least accurate. Normalisation increased the overall accuracy of tagging by circa 3.7 pp. The combination of POS tagging and social metadata will make the corpus attractive to linguists interested in the interplay between language-internal and -external factors affecting variation and change.
15

Melnyk, Andriy. "CHALLENGES AND OPPORTUNITIES IN IMPLEMENTING WEB 3.0 STRATEGIES." Three Seas Economic Journal 5, no. 3 (November 15, 2024): 43–50. http://dx.doi.org/10.30525/2661-5150/2024-3-6.

Full text
Abstract:
This article examines the problems and opportunities that organisations face in implementing a Web 3.0 strategy. It explores the decentralised and transparent characteristics of Web 3.0 technologies, including blockchain, smart contracts and decentralised apps (dApps), and their future impact on business operations. Methodology. The article applies a qualitative methodology using case studies of Fortune Global 500 companies to offer practical advice on Web 3.0 adoption, the obstacles these companies face and recommendations for overcoming them. Results. Research shows that while the adoption of Web 3.0 offers significant promise for improving operational efficiency, transparency and customer trust, significant obstacles such as regulatory uncertainty, integration costs and technological complexity still need to be overcome. Overcoming industry-specific challenges and implementing customised solutions is crucial to the effective use of Web 3.0 technologies. Practical implications and value. The article offers practical solutions for businesses to effectively integrate Web 3.0 technologies, which will help increase operational efficiency, strengthen trust with consumers and open up new revenue opportunities through tokenisation and decentralised finance.
16

Bouraga, Sarah. "Tokenisation approaches on blockchain: state-of-the-art and classification framework." International Journal of Information Technology and Management 24, no. 1/2 (2025): 145–61. https://doi.org/10.1504/ijitm.2025.144144.

Full text
17

Santos, Igor, Carlos Laorden, Borja Sanz, and Pablo G. Bringas. "Reversing the effects of tokenisation attacks against content-based spam filters." International Journal of Security and Networks 8, no. 2 (2013): 106. http://dx.doi.org/10.1504/ijsn.2013.055944.

Full text
18

Corcoran, Padraig, Geraint Palmer, Laura Arman, Dawn Knight, and Irena Spasić. "Creating Welsh Language Word Embeddings." Applied Sciences 11, no. 15 (July 27, 2021): 6896. http://dx.doi.org/10.3390/app11156896.

Full text
Abstract:
Word embeddings are representations of words in a vector space that models semantic relationships between words by means of distance and direction. In this study, we adapted two existing methods, word2vec and fastText, to automatically learn Welsh word embeddings taking into account syntactic and morphological idiosyncrasies of this language. These methods exploit the principles of distributional semantics and, therefore, require a large corpus to be trained on. However, Welsh is a minoritised language, hence significantly less Welsh language data are publicly available in comparison to English. Consequently, assembling a sufficiently large text corpus is not a straightforward endeavour. Nonetheless, we compiled a corpus of 92,963,671 words from 11 sources, which represents the largest corpus of Welsh. The relative complexity of Welsh punctuation made the tokenisation of this corpus relatively challenging, as punctuation could not be used for boundary detection. We considered several tokenisation methods, including one designed specifically for Welsh. To account for rich inflection, we used a method for learning word embeddings that is based on subwords and, therefore, can more effectively relate different surface forms during the training phase. We conducted both qualitative and quantitative evaluation of the resulting word embeddings, which outperformed previously described word embeddings in Welsh produced as part of a larger study including 157 languages. Our study was the first to focus specifically on Welsh word embeddings.
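The subword-based method mentioned above (fastText-style) represents a word through its character n-grams, so inflected surface forms that share a stem also share most of their vector components. The extraction step can be sketched as follows; the function name is invented and the real model hashes these n-grams into embedding buckets:

```python
def subword_ngrams(word, n_min=3, n_max=5):
    """Character n-grams in the style of fastText subword models. The word
    is wrapped in boundary markers so that prefixes and suffixes are
    distinguishable from word-internal sequences."""
    wrapped = f"<{word}>"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(wrapped) - n + 1):
            grams.append(wrapped[i:i + n])
    grams.append(wrapped)  # the full word is also kept as its own token
    return grams

# Two inflected forms of the Welsh stem 'cath' (cat / cats) share
# most of their n-grams, so their vectors are related by construction:
shared = set(subword_ngrams("cathod")) & set(subword_ngrams("cath"))
```

A word's embedding is then composed from the vectors of its n-grams, which is what lets rare or unseen inflected forms receive sensible representations.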
19

Pirgmann, Michael. "Impact of Tokenisation on Economics Demographics and Economics of Selected Crowdfunding Investments." ACTA VŠFS 17, no. 1 (June 30, 2023): 7–22. http://dx.doi.org/10.37355/acta-2023/1-01.

Full text
Abstract:
The text analyses data from two German crowdfunding investments issued by one initiator in the real estate industry, involving a total of 1,692 investors. One fund has a classical investment structure, while in the second the initiator used an innovative approach with a tokenised investment product based on the Ethereum blockchain. This paper tries to find the impact on the demographics of the funds and the profitability for the initiator when making a structural change from a classical investment with a minimum investment size of EUR 1,000 to a tokenised investment structure with a minimum investment of EUR 1. The results show no impact from tokenisation on the gender structure of the investors and only a minor impact on their age. But the findings highlight that lowering the minimum investment to EUR 1 via tokenisation to attract more investors diminishes the profit margin for the initiator substantially. Because every investor comes at an acquisition cost, only a certain number of loss-making small investments relative to profitable large investments can be compensated. Since it is not possible to precisely determine the density distribution of investment sizes in advance of placing a new investment, it is recommended to give up the minimum investment size of EUR 1 to avoid the risk of attracting too many investors at a deficit, hence diminishing the profit margin of the funds.
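The cross-subsidy argument above, that each investor carries a fixed acquisition cost while the initiator's revenue scales with the amount invested, can be made concrete with a toy calculation. All figures here are hypothetical, not the paper's data:

```python
def fund_margin(investments, acquisition_cost, margin_rate):
    """Toy break-even illustration: the initiator earns margin_rate on
    each invested amount but pays a fixed acquisition_cost per investor,
    so a EUR 1 ticket is loss-making and must be cross-subsidised by
    large tickets. Hypothetical numbers, not the paper's data."""
    revenue = margin_rate * sum(investments)
    costs = acquisition_cost * len(investments)
    return revenue - costs

# 100 one-euro investors cost more to acquire than they earn;
# adding 20 EUR 1,000 investors turns the fund profitable overall.
small_only = fund_margin([1.0] * 100, acquisition_cost=5.0, margin_rate=0.05)
mixed = fund_margin([1.0] * 100 + [1000.0] * 20, acquisition_cost=5.0, margin_rate=0.05)
assert small_only < 0 < mixed
```

Because the size distribution of future investors is unknown in advance, the share of loss-making tickets cannot be capped at issuance, which is the paper's argument against the EUR 1 minimum.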
20

Basu, Paritosh. "Digital Transformation with Digital Assets - Tokenisation and Management by Technology Driven Processes." Management Accountant Journal 57, no. 6 (June 30, 2022): 86. http://dx.doi.org/10.33516/maj.v57i6.86-93p.

Full text
21

GAJDOŠ, Ľuboš. "Chinese legal texts – Quantitative Description." Acta Linguistica Asiatica 7, no. 1 (June 28, 2017): 77–87. http://dx.doi.org/10.4312/ala.7.1.77-87.

Full text
Abstract:
The aim of the paper is to provide a quantitative description of legal Chinese. The study adopts a corpus-based approach and presents basic statistical parameters of legal texts in Chinese, namely sentence length, the proportions of parts of speech, etc. The research is conducted on the Chinese monolingual corpus Hanku. The paper also discusses issues of statistical data processing from various corpora, e.g. tokenisation and part-of-speech tagging, and their relevance to the study of register variation.
22

Kharitonova, Yulia S. "Utilitarian Non-Replaceable Tokens in the Russian Law Enforcement: Problems of Legalisation." Zakon 20 (September 2023): 58–67. http://dx.doi.org/10.37239/0869-4400-2023-20-9-58-67.

Full text
Abstract:
The rise of distributed ledger technology and the growth of the crypto-economy have led to the evolution of non-fungible tokens from the authentication of digital image rights to utility solutions that allow consumers to benefit from holding rights in a community. The legal analysis of utilitarian non-fungible tokens leads to the need to apply the provisions of the law on utilitarian digital rights to tokenisation services. However, while possessing the features of digital rights, utilitarian NFTs cannot always be an object of investment, which requires the exclusion of such tokens from the scope of regulation of the law on crowdfunding.
23

Davidová, Marie, Shanu Sharma, Dermott McMeel, and Fernando Loizides. "Co-De|GT: The Gamification and Tokenisation of More-Than-Human Qualities and Values." Sustainability 14, no. 7 (March 23, 2022): 3787. http://dx.doi.org/10.3390/su14073787.

Full text
Abstract:
The article explores how the quality of life within a deprived urban environment might be improved through the 'gamification' of, and interaction with, more-than-human elements within the environment. It argues that such quality may be achieved through the community's multicentered value from the bottom up. This is shown through the case study of the Co-De|GT urban mobile application, which was developed in the Synergetic Landscapes unit through real-life research-by-design experimental studio teaching. Complementary experimentation took place during the Relating Systems Thinking and Design 10 symposium in the Co-De|BP workshop, where experts could be collocated for interactive real-time data gathering. The application addresses the need for collective action towards more-than-human synergy across an urban ecosystem through gamification, community collaboration and DIY culture. It intends to generate a sustainable, scalable token economy where humans and non-humans play equal roles, earning, trading and being paid for goods and services, to test such potentials for future economies underpinned by blockchain. This work diverges from dominant economic models that do not recognise the performance of, and the limits to, material extraction from the ecosystem. The current economic model has led to the global financial crisis (GFC). Furthermore, it is based on the unsustainable perpetual consumption of services and goods, which may lead to the untangling and critical failure of the market system globally. Therefore, this work investigates how gamification and tokenisation may support a complementary and parallel economic market that sustains and grows urban ecosystems. While the research does not speculate on policy implications, it posits how such markets may ameliorate some of the brittleness apparent in the global economic model. It demonstrates a systemic approach to urban ecosystem performance for future post-Anthropocene communities and economies.
24

Basu, Paritosh. "Web3 and Tokenisation for Asset Creation by Common People with Fragmented Ownership through BTaaS." Management Accountant Journal 58, no. 9 (September 1, 2023): 63. http://dx.doi.org/10.33516/maj.v58i9.63-69p.

Full text
25

VILARES, DAVID, MIGUEL A. ALONSO, and CARLOS GÓMEZ-RODRÍGUEZ. "A syntactic approach for opinion mining on Spanish reviews." Natural Language Engineering 21, no. 1 (August 9, 2013): 139–63. http://dx.doi.org/10.1017/s1351324913000181.

Full text
Abstract:
We describe an opinion mining system which classifies the polarity of Spanish texts. We propose an NLP approach that undertakes pre-processing, tokenisation and POS tagging of texts, then obtains the syntactic structure of sentences by means of a dependency parser. This structure is used to address three of the most significant linguistic constructions for the purpose in question: intensification, subordinate adversative clauses and negation. We also propose a semi-automatic domain adaptation method to improve the accuracy of our system in specific application domains, by enriching semantic dictionaries using machine learning methods in order to adapt the semantic orientation of their words to a particular field. Experimental results are promising in both general and specific domains.
APA, Harvard, Vancouver, ISO, and other styles
26

Sannikova, Larisa. "Blockchain technology and corporate law." Law & Digital Technologies 2, no. 2 (2022): 3. http://dx.doi.org/10.18254/s278229070024057-8.

Full text
Abstract:
The article analyses legislative initiatives aimed at regulating the use of blockchain in corporate governance. The tokenisation of equity opens up new opportunities for companies to attract investment. As a result, many traditional companies are interested in converting traditional securities into security tokens. Countries aspiring to lead the blockchain industry are seeking to establish a legal framework for security tokens and a blockchain-based registration system for them. The use of blockchain brings with it not only the digital transformation of companies, but also the emergence of a new type of organization: the decentralized autonomous organization (DAO). Existing legal forms are not appropriate for the DAO, which requires the creation of a new type of legal entity. Changes to corporate law that address these trends will eliminate legal risk and drive the digital transformation of companies.
APA, Harvard, Vancouver, ISO, and other styles
27

Daraselia, Sophiko. "Issues in training the TreeTagger for Georgian." Corpora 19, no. 3 (November 2024): 317–32. http://dx.doi.org/10.3366/cor.2024.0316.

Full text
Abstract:
The paper describes the process of retraining the TreeTagger program ( Schmid, 1994 ) for the Georgian language. This includes considering some general procedures such as designing a training corpus, creating a tagging lexicon, and training the TreeTagger on Georgian texts. I use a novel katag tagset and enclitic tokenisation approach in part-of-speech tagging. The katag tagset is based on a new morphosyntactic language model (Daraselia and Hardie, forthcoming). In this paper, I address some major disambiguation considerations that were revealed when training the TreeTagger on Georgian texts. I discuss some ways to get around these matters, such as implementing a workaround to the tagging lexicon. I report on the performance of the TreeTagger program and compare how different parameters such as the size of the training lexicon or context and affix lengths affect the Tagger’s performance.
APA, Harvard, Vancouver, ISO, and other styles
28

Karyan, Ara, and Vladimir Karyan. "THE CONCEPT, GOAL, AND SOLUTIONS OF BUSINESS DIGITAL TOKENISATION IN DEVELOPING COUNTRIES, ON THE EXAMPLES OF ARMENIA." Bulletin of Yerevan University G: Economics 15, no. 2 (44) (January 30, 2025): 131–46. https://doi.org/10.46991/bysu.g/2024.15.2.131.

Full text
Abstract:
The idea of a business model based on tokenized assets is used in the article to describe and build the concept, main approaches, criteria, and tools of business digitalization. For competitiveness and high-quality, inclusive growth of the real sector of the economy, the Republic of Armenia must focus on the paradigm of the digital economy with a matrix of information technology plus an exact industry of the real sector (IT + separate industry sectors). The benefits and advantages of the token-based business model toolset are outlined in detail in aspects of decentralisation, innovative responsiveness, immutability of entered information, cryptographic security, transparency, the ability to carry out peer-to-peer transactions without the need for verification and regulation by a central authority, and ultimately increasing the level of governance and efficiency, liquidity, and attraction of alternative investment vehicles. The systems of business decentralisation at the organisational and managerial level and various channels for the exchange and sale of tokens are interpreted. A concept has been put forward to solve the problems of digitalization in Armenia simultaneously: a) in the ICT sector, b) in the real sector of the economy, c) in the financial sector; and, of course, in the educational field. The concept of goal setting is built on the axis of convergent development, which, in turn, will bring a synergy result: a new quality of competitiveness for the beneficiaries of all the mentioned sectors as well as for consumers. It is emphasised that an in-depth understanding of the legal regulation of business digitalization is more important for the real sector of the economy, since it is aimed at ensuring: promotion of the generation of innovations; business management efficiency; reduction of product and service costs and increased productivity and efficiency of management; and disclosure of alternative investment channels for businesses.
APA, Harvard, Vancouver, ISO, and other styles
29

Hardie, Andrew. "Part-of-speech ratios in English corpora." International Journal of Corpus Linguistics 12, no. 1 (March 16, 2007): 55–81. http://dx.doi.org/10.1075/ijcl.12.1.05har.

Full text
Abstract:
Using part-of-speech (POS) tagged corpora, Hudson (1994) reports that approximately 37% of English tokens are nouns, where ‘noun’ is a superordinate category including nouns, pronouns and other word-classes. It is argued here that difficulties relating to the boundaries of Hudson’s ‘noun’ category demonstrate that there is no uncontroversial way to derive such a superordinate category from POS tagging. Decisions regarding the boundary of the ‘noun’ category have small but statistically significant effects on the ratio that emerges for ‘nouns’ as a whole. Tokenisation and categorisation differences between tagging schemes make it problematic to compare the ratio of ‘nouns’ across different tagsets. The precise figures for POS ratios are therefore effectively artefacts of the tagset. However, these objections to the use of POS ratios do not apply to their use as a metric of variation for comparing datasets tagged with the same tagging scheme.
APA, Harvard, Vancouver, ISO, and other styles
30

Davidová, Marie, and Kateřina Zímová. "COLreg: The Tokenised Cross-Species Multicentred Regenerative Region Co-Creation." Sustainability 13, no. 12 (June 10, 2021): 6638. http://dx.doi.org/10.3390/su13126638.

Full text
Abstract:
This article argues that whilst our recent economic models are dependent on the overall ecosystem, they do not reflect this fact. As a result of this, we are facing the Anthropocene mass extinction. The paper presents a collaborative regenerative region (COLreg) co-creation and tokenisation, involving multiple human and non-human, living and non-living stakeholders. It unfolds different stages of multicentred, systemic co-design via collaborative gigamapping. In the first steps, certain stakeholders are present and certain are represented, whilst in the final stages of generative development, all stakeholders, even those who were previously just potential stakeholders, take an active role. The ‘COLreg’ project represents a holistic approach that reflects today’s most burning issues, such as biodiversity decrease, unsustainable food production, unsustainable economic models, and social systems. It combines top-down and bottom-up co-creation approaches to achieve regional social and environmental justice for the coming symbiotic post-Anthropocene era.
APA, Harvard, Vancouver, ISO, and other styles
31

Burilov, Vlad. "Regulation of Crypto Tokens and Initial Coin Offerings in the EU." European Journal of Comparative Law and Governance 6, no. 2 (June 3, 2019): 146–86. http://dx.doi.org/10.1163/22134514-00602003.

Full text
Abstract:
Much like initial public offerings produce publicly traded securities, Initial Coin Offerings (ICOs) produce crypto tokens tradeable on crypto exchanges. Despite an apparent need for investor protection, the ICO and tokenisation phenomena have yet to be addressed by legislative action at the EU level. The paper studies the suitability of the EU regulatory framework to capture tokenised financial instruments and utility tokens based on the views of the EU supervisory and national competent authorities. It is argued that EU regulators shall first ensure legal certainty by defining the scope of tokenised financial instruments subject to MiFID. Further, authorisation and ongoing requirements shall be adapted to address the risks posed by distributed technology and direct global access of investors to crypto markets. Finally, there is no immediate need for a bespoke EU-wide regime governing utility tokens; fragmentation of the market is a positive development providing a testing field for future supranational initiatives.
APA, Harvard, Vancouver, ISO, and other styles
32

Ko, Ju Hee, and Hyun Woo Nam. "A Study on the Framework of the Real-Related Asset NFT Art System." Korea Institute of Design Research Society 10, no. 1 (March 31, 2025): 358–71. https://doi.org/10.46248/kidrs.2025.1.358.

Full text
Abstract:
The purpose of this study is to propose a framework for an NFT system linked to real-world art assets (RWA). The results of this research are: First, the asset price, volatility, fungibility, issuance purpose, legal issues, and core functions of RWAs and NFTs are compared and analysed, and a linkage plan is proposed for art tokenisation, NFT-based ownership authentication, DeFi and lending, marketplace construction, insurance and security, etc. Second, the RWA-NFT art system should be designed in the form of partial or complete segregation based on art value. Third, the system architecture of the RWA-based art platform should be designed in three layers, with a total of six systems, including art appraisal, KYC, NFT ownership management, DeFi trading and liquidity supply, insurance, and royalties. Through this research, the reliability and transparency issues of NFTs can be addressed with physically linked art assets, and a service and academic database for the RWA-NFT system has been established.
APA, Harvard, Vancouver, ISO, and other styles
33

Pardi, Anabele-Linda, and Mario Paolucci. "A Chemical Analysis of Hybrid Economic Systems—Tokens and Money." Mathematics 9, no. 20 (October 16, 2021): 2607. http://dx.doi.org/10.3390/math9202607.

Full text
Abstract:
With the influence of digital technology in our daily lives continuously growing, we investigate methods with the purpose of assessing the stability, sustainability, and design of systems of token economies that include tokens and conventional currencies. Based on a chemical approach, we model markets with a minimum number of variables and compare the transaction rates, stability, and token design properties at different levels of tokenisation. The kinetic study reveals that in certain conditions, if the price of a product contains both conventional money and tokens, one can treat this combination as one composite currency. The dynamic behaviour of the analysed systems is proven to be dynamically stable for the chosen models. Moreover, by applying the supply and demand law to recalculate the prices of products, the necessity of previous knowledge of certain token attributes—token divisibility and token–money exchange rates—emerges. The chemical framework, along with the analytic methods that we propose, is flexible enough to be adjusted to a variety of conditions and offer valuable information about economic systems.
APA, Harvard, Vancouver, ISO, and other styles
34

Fuensanta, Galindo-Reyes, and De la Varga-Salto José María. "Business model innovation in real estate: Applying the business model navigator to conventional and Tokenised models." Edelweiss Applied Science and Technology 8, no. 6 (October 31, 2024): 3465–74. http://dx.doi.org/10.55214/25768484.v8i6.2737.

Full text
Abstract:
This study explores the application of the Business Model Navigator framework to enhance business models within the real estate sector, particularly in response to recent technological advancements and shifting market dynamics. By analyzing two case studies—a traditional luxury property developer and a blockchain-based tokenisation platform—the research demonstrates how selected BMN patterns can drive value creation, improve operational efficiency, and enhance scalability. The traditional luxury real estate model, while excelling in delivering high-end properties and tailored customer experiences, faces significant scalability challenges due to its capital-intensive nature. In contrast, the blockchain-based tokenised model offers improved liquidity and broader market accessibility by enabling fractional ownership, though it encounters obstacles related to regulatory compliance and market acceptance of blockchain technology. This research concludes that both models can significantly benefit from the strategic incorporation of BMN patterns, which enhance adaptability and competitiveness in an evolving real estate landscape. The findings offer strategic recommendations for both traditional and innovative real estate companies seeking to optimise their business models through the use of BMN patterns.
APA, Harvard, Vancouver, ISO, and other styles
35

BY, TOMAS. "Some notes on the PARC 700 Dependency Bank." Natural Language Engineering 13, no. 3 (June 11, 2007): 261–82. http://dx.doi.org/10.1017/s1351324907004548.

Full text
Abstract:
The PARC 700 dependency bank is a potentially very useful resource for parser evaluation that has, so to speak, a high barrier to entry, because of tokenisation that is quite different from the source of the data, the Penn Treebank, and because there is no representation of word order, producing an uncertainty factor of some 15%. There is also a small, but perhaps not insignificant, number of errors. When using the dependency bank for evaluation, it seems likely that these things will cause inflated counts for mismatches, so to obtain more accurate measurements, it is desirable to eliminate them. The work reported here consists of an automatic conversion of the dependency bank into a Prolog representation where the word order is explicit, as well as graphical representations of the dependency trees for all 700 sentences, automatically generated from the Prolog data. As a side effect of the transformation, errors were detected and corrected. It is hoped that this work will lead to more widespread use of the PARC 700 dependency bank for parser evaluation.
APA, Harvard, Vancouver, ISO, and other styles
36

Sadeghi, Mohammad, and Jesús Vegas. "How well does Google work with Persian documents?" Journal of Information Science 43, no. 3 (March 1, 2016): 316–27. http://dx.doi.org/10.1177/0165551516640437.

Full text
Abstract:
The performance evaluation of an information retrieval system is a decisive aspect of measuring improvements in search technology. The Google search engine, as a tool for retrieving information on the Web, is used by almost 92% of Iranian users. The purpose of this paper is to study Google’s performance in retrieving relevant information from Persian documents. The information retrieval effectiveness is based on precision measures of searches run against a website that we built from the documents of a TREC standard corpus. We queried Google with 100 topics available in the corpus and compared the retrieved webpages with the relevant documents. The obtained results indicated that the morphological analysis of the Persian language is not fully taken into account by the Google search engine. Incorrect text tokenisation, treating stop words as the content keywords of a document, and the wrong ‘variants encountered’ of words found by Google are the main reasons affecting the relevance of Persian information retrieval on the Web for this search engine.
APA, Harvard, Vancouver, ISO, and other styles
37

Serrano, Will. "Smart or Intelligent Assets or Infrastructure: Technology with a Purpose." Buildings 13, no. 1 (January 4, 2023): 131. http://dx.doi.org/10.3390/buildings13010131.

Full text
Abstract:
Smart or intelligent built assets including infrastructure, buildings, real estate, and cities provide enhanced functionality to their different users such as occupiers, passengers, consumers, patients, managers or operators. This enhanced functionality enabled by the Internet of Things (IoT), Artificial Intelligence (AI), Big Data, Mobile Apps, Virtual Reality (VR) and 5G not only translates into a superior user experience; technology also supports sustainability and energy consumption to meet regulation (ESG, NZC) while optimising asset management and operations for enhanced business economic performance. The main peculiarity is that technology is standardised, ubiquitous and independent from the physical built assets whereas asset users including humans, machines and devices are also common to different assets. This article analyses the atomic differences between built assets and proposes an asset omni-management model based on micro-management of services that will support the macro-functionality of the asset. The proposed key concept is based on the standardisation of different assets based on common and specific functionality and services delivered by the technology stack that is already supporting the transition to Industry 5.0 based on Web 3.0 and Tokenisation.
APA, Harvard, Vancouver, ISO, and other styles
38

Lutfiatun Qoriah, Ira Safitri, Layla Husna Nur Chifdzi, Latifatun Nisak, and Pungky Lela Saputri. "Peran Fintech dalam Mendorong Transaksi Berkelanjutan dan Investasi Hijau Global." CEMERLANG : Jurnal Manajemen dan Ekonomi Bisnis 5, no. 1 (January 2, 2025): 80–90. https://doi.org/10.55606/cemerlang.v5i1.3436.

Full text
Abstract:
Fintech has emerged as a key driving force in fostering sustainable transactions and green investments at the global level. Through product and service innovation, fintech is making access to sustainable financial products easier, encouraging wider participation in the green economy. The role of fintech in driving global sustainable transactions and green investments is increasingly significant as the need for environmentally friendly financial solutions grows. Financial technology (fintech) facilitates wider access to green finance through inclusive and efficient digital platforms. Fintech contributes to carbon footprint reduction by adopting digital-based technologies to replace traditional resource-intensive financial processes. In addition, fintech also encourages green investment by providing services such as crowdlending for renewable energy projects, green asset tokenisation, and sustainability-based portfolio monitoring applications. Collaboration between fintech, regulators, and the traditional financial sector is key in creating an ecosystem that supports the transition to a low-carbon economy. This article discusses the strategic role of fintech in driving sustainable transactions and global green investments, the challenges faced, as well as the future potential to achieve sustainable development goals (SDGs).
APA, Harvard, Vancouver, ISO, and other styles
39

R. Cooper, Christopher. "The Construction Complexity Calculator (ConPlex): A tool for calculating Nelson’s (2024) construction-based complexity measure." Research in Corpus Linguistics 13, no. 2 (2025): 124–43. https://doi.org/10.32714/ricl.13.02.05.

Full text
Abstract:
The current study aims to increase the accessibility of Nelson’s (2024) recently suggested construction-based complexity measure by providing a tool that can calculate the measure for single or multiple texts. To validate the tool, complexity scores for the International Corpus Network of Asian Learners of English corpus (ICNALE) were compared with Nelson’s (2024) results. In addition, complexity scores were calculated for a new dataset, the Common European Framework of Reference English Listening Corpus (CEFR), along with the MERLIN corpus, which includes learner writing samples from learners of Czech, German, and Italian. Complexity scores generally increased across CEFR levels in all of the datasets. However, the complexity scores in the current study tend to be higher than the original study due to differences in the sentence splitting approach. The sentence tokenisation method used is deemed to be more appropriate, and it may be concluded that the Construction Complexity Calculator (ConPlex) tool accurately calculates Nelson’s measure. It is hoped that the tool will allow researchers to calculate the complexity of constructions at the text level for a wide range of research purposes.
APA, Harvard, Vancouver, ISO, and other styles
41

Esch, Maria, Jinbo Chen, Stephan Weise, Keywan Hassani-Pak, Uwe Scholz, and Matthias Lange. "A Query Suggestion Workflow for Life Science IR-Systems." Journal of Integrative Bioinformatics 11, no. 2 (June 1, 2014): 15–26. http://dx.doi.org/10.1515/jib-2014-237.

Full text
Abstract:
Information Retrieval (IR) plays a central role in the exploration and interpretation of integrated biological datasets that represent the heterogeneous ecosystem of the life sciences. Here, keyword-based query systems are popular user interfaces. In turn, to a large extent, the query phrases used determine the quality of the search result and the effort a scientist has to invest in query refinement. In this context, computer-aided query expansion and suggestion is one of the most challenging tasks for life science information systems. Existing query front-ends support aspects like spelling correction, query refinement or query expansion. However, the majority of front-ends make only limited use of enhanced IR algorithms to implement comprehensive, computer-aided query refinement workflows. In this work, we present the design of a multi-stage query suggestion workflow and its implementation in the life science IR system LAILAPS. The presented workflow includes enhanced tokenisation, word breaking, spelling correction, query expansion and query suggestion ranking. A spelling correction benchmark with 5,401 queries and manually selected use cases for query expansion demonstrate the performance of the implemented workflow and its advantages compared with state-of-the-art systems.
APA, Harvard, Vancouver, ISO, and other styles
42

Gong, Hui, and Han Wang. "The new era of Bitcoin: Profits, progress and potential in the inscription-driven crypto industry." Journal of Digital Banking 9, no. 2 (September 1, 2024): 150. http://dx.doi.org/10.69554/vvok4884.

Full text
Abstract:
The introduction of the Ordinals Protocol and BRC-20 tokens, featuring the pioneering concept of Inscriptions, signifies a pivotal evolution in the Bitcoin network. Officially launched on 21st January, 2023, this innovation embeds a unique blend of tokenisation and non-fungible attributes directly into Bitcoin’s base layer, akin to engraving precious metals. This paper thoroughly examines the technical underpinnings and market implications of BRC-20 tokens, focusing on the stark contrasts with traditional ERC-20 tokens and initial coin offerings. Unlike ERC-20, Inscriptions integrate seamlessly into Bitcoin’s existing structure, offering a decentralised model that strengthens the principles of a true fair launch in decentralised finance. The study critically assesses the reception within the Bitcoin community, balancing the potential enhancements in Blockchain functionality against concerns regarding foundational principles. Furthermore, it explores the profound implications for stakeholders such as miners, exchanges and developers and evaluates the transformative impact on the cryptocurrency ecosystem at large. Ultimately, this paper elucidates the role of Inscriptions in redefining the trajectory of Bitcoin and shaping the future landscape of Blockchain technologies, emphasising their potential to foster equity and innovation in token distribution.
APA, Harvard, Vancouver, ISO, and other styles
43

Sudarman, Haris, and Y. Yulhendri. "Analisis dan Deteksi Kemiripan Teks Berbasis Python dengan Algoritma Levenshtein Distance." Jurasik (Jurnal Riset Sistem Informasi dan Teknik Informatika) 10, no. 1 (February 28, 2025): 257. https://doi.org/10.30645/jurasik.v10i1.869.

Full text
Abstract:
Improvements in information technology have complicated the issue of plagiarism in academia, particularly in higher education. This project intends to create a plagiarism detection tool that examines the similarity of PDF files to established references utilizing the Levenshtein Distance method. The suggested system can effectively and precisely identify plagiarism through a series of procedures, such as text extraction, linguistic preprocessing (tokenisation and stopword removal), and calculating the degree of similarity using the Levenshtein Distance method. Testing was carried out on various scenarios, including variations in document size and plagiarism levels. The experimental results show that the higher the level of similarity between the document and the reference, the longer the computing time required. However, this system can detect plagiarism with a fairly good success rate, even in documents with a low level of similarity. Black box testing confirms that this application works according to the expected specifications, namely inputting PDF documents, detecting plagiarism, and providing accurate similarity percentage results. This research contributes a plagiarism detection tool that can help maintain academic integrity, with the possibility of further development through integration with machine learning and user interface improvements.
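As a rough sketch of the core measure described above (not the authors' tool), the Levenshtein edit distance can be computed with dynamic programming and turned into a similarity percentage:

```python
# Levenshtein edit distance via dynamic programming, keeping only one
# row of the DP table at a time, plus a percentage similarity score.

def levenshtein(a: str, b: str) -> int:
    # prev[j] = distance from the current prefix of a to b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def similarity_pct(doc: str, ref: str) -> float:
    if not doc and not ref:
        return 100.0
    dist = levenshtein(doc, ref)
    return 100.0 * (1 - dist / max(len(doc), len(ref)))
```

The quadratic cost of the inner loop is consistent with the abstract's observation that computing time grows with document size and similarity.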
APA, Harvard, Vancouver, ISO, and other styles
44

Tkachenko, Kostiantyn. "Using of NLP Methods in Intelligent Educational Systems." Digital Platform: Information Technologies in Sociocultural Sphere 7, no. 1 (June 10, 2024): 80–96. http://dx.doi.org/10.31866/2617-796x.7.1.2024.307009.

Full text
Abstract:
For the effective organisation of educational processes supported by relevant intelligent learning systems, it is important to choose the right technologies that would ensure individualisation of learning, adequate perception of learning content, and the so-called “understanding” of texts in Ukrainian provided by students (description of the solution to a task, answers provided in their own words, not selected from the test answer options, questions to the system, etc.), prototyping, constant iteration during natural language text recognition and processing, and maximum reliability and efficiency of learning processes. The purpose of the article is to study and analyse various methods of natural language processing, and the concept of NLP, and to consider common problems and prospects for developing a software product for processing Ukrainian-language text in online courses that support intelligent learning systems based on it. The research methods are the main methodological approaches and technological tools for analysing natural language texts in intelligent educational systems and developing a system for supporting NLP (Natural Language Processing) technology in the linguistic analysis of texts in Ukrainian. Such methods include, in particular: systemic and comparative analyses to identify the features of intelligence and information (with elements of intellectualisation) systems; the method of expert evaluation, which involves the study of literary sources and information resources, interviews and surveys of experts, as well as the processes of developing and testing intelligent and information systems. 
The novelty of the study is the analysis of modern technologies for the development of online educational process support systems through the organisation of processes of perception of information provided by students in natural language, the results of which can be used in the development of their software product to support the educational process in Ukrainian, ensuring the improvement of learning efficiency through the use of NLP technology in the process of studying the relevant academic content. Conclusions. The paper analyses modern NLP methods. The analysis has led to the selection of tokenisation, normalisation, stemming and lemmatisation methods for use in intelligent learning systems in the linguistic analysis of the so-called “free” communication in the natural (Ukrainian) language of students in the process of studying the educational content of online courses. During the tokenisation of Ukrainian-language texts, we solved such problems as eliminating so-called “merged” tokens, correcting spelling mistakes, identifying common prefixes in compound words and their impact on the semantics of the corresponding lexemes, identifying common prefixes in abbreviations, and bringing words to their normal form. Lemmatisation is especially important for the Ukrainian language (with its large number of cases of nouns, adjectives, word forms, etc.) and it requires the use of specially compiled dictionaries of the subject area under consideration. In these dictionaries, word forms are presented in the forms of lemmas (i.e., nouns are presented in the nominative case).
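The dictionary-based lemmatisation step described above can be sketched as a simple lexicon lookup; the two Ukrainian entries below are illustrative examples, not the paper's dictionary.

```python
# Dictionary-based lemmatisation sketch: inflected word forms are mapped
# to their lemma (nouns to the nominative case) via a hand-built lexicon.
# The entries are illustrative examples only.
LEMMA_DICT = {
    "книги": "книга",   # genitive singular -> nominative
    "книгою": "книга",  # instrumental singular -> nominative
}

def lemmatise(tokens):
    # fall back to the surface form when the lexicon has no entry
    return [LEMMA_DICT.get(t.lower(), t) for t in tokens]
```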
APA, Harvard, Vancouver, ISO, and other styles
45

Hariguna, Taqwa, and Vera Rachmawati. "Community Opinion Sentiment Analysis on Social Media Using Naive Bayes Algorithm Methods." IJIIS: International Journal of Informatics and Information Systems 2, no. 1 (March 1, 2019): 33–38. http://dx.doi.org/10.47738/ijiis.v2i1.11.

Full text
Abstract:
The gubernatorial election selects a Regional Head for the future of the region and the country. The Central Java Governor election was held on 27 June 2018 and was contested by 2 candidate pairs. It drew many responses from the public through Twitter, surfacing a range of opinions. Sentiment analysis was performed on the 2 candidates for Central Java Governor 2018 with a total of 400 tweets, 200 per candidate. The tweets are divided into 3 classes: positive, neutral and negative. In this study the classification process used the Naive Bayes Classifier (NBC) method, while data preprocessing used Cleansing, Punctuation Removal, Stopword Removal, and Tokenisation; the sentiment class was determined with the Lexicon Based method. The highest accuracy was obtained on the Ganjar Pranowo dataset, with an accuracy of 87.9545%, a precision of 0.891, a recall of 0.88 and an F-Measure of 0.851, while the Sudirman Said dataset had an accuracy of 84.322%, a precision of 0.867, a recall of 0.843 and an F-Measure of 0.815. From these results, we can conclude that performance on the Ganjar Pranowo dataset was higher than on Sudirman Said's dataset.
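A minimal sketch of the kind of pipeline the abstract outlines: simple preprocessing (lowercasing, punctuation removal, tokenisation, stopword removal) followed by a multinomial Naive Bayes classifier with Laplace smoothing. The stoplist and training snippets are invented; this is not the study's implementation.

```python
# Tiny Naive Bayes text classifier with Laplace smoothing, preceded by
# a simple preprocessing step.  Stoplist entries are illustrative only.
import math
import string
from collections import Counter, defaultdict

STOPWORDS = {"yang", "di", "dan", "the", "a"}

def preprocess(text):
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [t for t in text.split() if t not in STOPWORDS]

def train(docs):
    """docs: list of (text, label) pairs."""
    priors, counts, vocab = Counter(), defaultdict(Counter), set()
    for text, label in docs:
        priors[label] += 1
        toks = preprocess(text)
        counts[label].update(toks)
        vocab.update(toks)
    return priors, counts, vocab

def classify(text, model):
    priors, counts, vocab = model
    total = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / total)             # log prior
        denom = sum(counts[label].values()) + len(vocab)  # Laplace denominator
        for tok in preprocess(text):
            lp += math.log((counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best
```

Usage: `classify("great movie", train([("good great movie", "pos"), ("bad awful movie", "neg")]))` picks the label whose smoothed log-likelihood is highest.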
APA, Harvard, Vancouver, ISO, and other styles
46

Scalabrini, Elaine, Jessica Ferreira, Paula Odete Fernandes, and Thiago Moraes. "From reviews to emotions: Analysing Bragança’s tourism attractions on TripAdvisor." European Journal of Tourism, Hospitality and Recreation 14, no. 2 (December 1, 2024): 299–311. https://doi.org/10.2478/ejthr-2024-0022.

Full text
Abstract:
Over the past decade, sentiment analysis has emerged as a pivotal tool in tourism-related texts, driven by the sheer volume of tourist attractions and the wealth of online information. Tourists increasingly turn to travel websites to access specific information that often eludes standard evaluations of tourist attractions. Forums particularly illuminate specific information needs and their ties to potential destinations. Among these platforms, TripAdvisor has become a favoured choice for posting reviews and ratings and for making online bookings. In this context, this study aims to analyse and assess sentiment in reviews sourced from the online platform TripAdvisor, focusing on tourist attractions in the northern Portuguese destination of Bragança. The research spotlights the disparity between qualitative and quantitative rankings. The study also underscores the importance of data pre-processing, including removing irrelevant information and stop words. Pre-processing was crucial in refining sentiment prediction accuracy, highlighting the differentiated roles of these words in context and meaning. Despite utilising advanced techniques such as tokenisation, TF-IDF weighting, logistic regression, and n-grams, the study’s models encountered challenges in achieving high accuracy in sentiment prediction. Even the incorporation of bigrams did not yield substantial performance improvements, with the models frequently inclined to overestimate negative and positive sentiments.
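Of the techniques this abstract lists, the n-gram extraction and TF-IDF weighting steps are easy to sketch without any library; the toy reviews below are invented, and the logistic-regression stage is omitted.

```python
import math
from collections import Counter

def ngrams(tokens: list[str], n: int) -> list[str]:
    """All contiguous n-grams, joined with '_' (n=2 gives bigrams)."""
    return ["_".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf(docs: list[list[str]]) -> list[dict[str, float]]:
    """Plain TF-IDF: tf = raw term count, idf = log(N / df)."""
    n_docs = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    return [{t: c * math.log(n_docs / df[t]) for t, c in Counter(doc).items()}
            for doc in docs]
```

A term that appears in every review, such as the attraction's own name, receives a zero weight under this scheme, which is one reason the pre-processing step mattered so much in the study.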
APA, Harvard, Vancouver, ISO, and other styles
47

Exter, Evgeny, and Milan Radosavljevic. "Blockchain Technology in Commercial Real Estate: Developing a Conceptual Design for Smart Contracts." Communication of the Association for Information Systems 55 (2024): 434–58. http://dx.doi.org/10.17705/1cais.05517.

Full text
Abstract:
Commercial real estate transaction processes are inherently complex and multi-faceted due to multiple intermediaries, a high volume of signed documentation, high costs, and the illiquid nature of these transactions. An emerging area in blockchain technology is smart contracts, which hold much potential to transform the commercial real estate industry through the digitalization of decentralized business models that ensure transaction transparency and validity. Despite the potential benefits of smart contracts, their use in the commercial real estate industry is at a nascent stage. In response to this gap, this study proposes a novel approach using Ethereum blockchain technology to enable intermediaries to transact in an informationally symmetrical way within an open real estate market in Switzerland. The proposed conceptual model employs tokenisation on the blockchain and is developed using the action design science research methodology. The model undergoes several stages of evolution from pre-design, which is validated through expert interviews, to arrive at the final conceptual design. The study identifies key factors that influence the application of blockchain in real estate transactions, namely adoption, governance and compliance, transaction costs, transparency and immutability, security, and scalability. The results indicate that smart contracts have the potential to significantly reduce transaction costs and improve efficiency in the commercial real estate industry.
APA, Harvard, Vancouver, ISO, and other styles
48

Tkachenko, Kostiantyn. "Using of NLP Methods in Intelligent Educational Systems." Digital Platform: Information Technologies in Sociocultural Sphere 7, no. 1 (June 10, 2024): 80–96. https://doi.org/10.31866/2617-796X.7.1.2024.307009.

Full text
Abstract:
For the effective organisation of educational processes supported by relevant intelligent learning systems, it is important to choose the right technologies that would ensure individualisation of learning, adequate perception of learning content, and the so-called “understanding” of texts in Ukrainian provided by students (description of the solution to a task, answers provided in their own words, not selected from the test answer options, questions to the system, etc.), prototyping, constant iteration during natural language text recognition and processing, and maximum reliability and efficiency of learning processes. The purpose of the article is to study and analyse various methods of natural language processing, and the concept of NLP, and to consider common problems and prospects for developing a software product for processing Ukrainian-language text in online courses that support intelligent learning systems based on it. The research methods are the main methodological approaches and technological tools for analysing natural language texts in intelligent educational systems and developing a system for supporting NLP (Natural Language Processing) technology in the linguistic analysis of texts in Ukrainian. Such methods include, in particular: systemic and comparative analyses to identify the features of intelligence and information (with elements of intellectualisation) systems; the method of expert evaluation, which involves the study of literary sources and information resources, interviews and surveys of experts, as well as the processes of developing and testing intelligent and information systems.
The novelty of the study is the analysis of modern technologies for the development of online educational process support systems through the organisation of processes of perception of information provided by students in natural language, the results of which can be used in the development of their software product to support the educational process in Ukrainian, ensuring the improvement of learning efficiency through the use of NLP technology in the process of studying the relevant academic content. Conclusions. The paper analyses modern NLP methods. The analysis has led to the selection of tokenisation, normalisation, stemming and lemmatisation methods for use in intelligent learning systems in the linguistic analysis of the so-called “free” communication in the natural (Ukrainian) language of students in the process of studying the educational content of online courses. During the tokenisation of Ukrainian-language texts, we solved such problems as eliminating so-called “merged” tokens, correcting spelling mistakes, identifying common prefixes in compound words and their impact on the semantics of the corresponding lexemes, identifying common prefixes in abbreviations, and bringing words to their normal form. Lemmatisation is especially important for the Ukrainian language (with its large number of cases of nouns, adjectives, word forms, etc.) and it requires the use of specially compiled dictionaries of the subject area under consideration. In these dictionaries, word forms are presented in the forms of lemmas (i.e., nouns are presented in the nominative case).
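A heavily simplified sketch of the dictionary-based lemmatisation described in the conclusions; the mini-dictionary below is a hypothetical stand-in for the specially compiled subject-area dictionaries, and the tokeniser ignores the merged-token and spelling-correction problems the authors actually solve.

```python
import re

# Hypothetical mini-dictionary mapping Ukrainian case forms to lemmas
# (nouns in the nominative case); the paper's domain dictionaries are
# not published in the abstract.
LEMMA_DICT = {
    "студента": "студент",    # genitive -> nominative
    "студентові": "студент",  # dative -> nominative
    "книги": "книга",
    "книгою": "книга",
}

def tokenise(text: str) -> list[str]:
    """Naive tokenisation on word characters (covers Cyrillic letters)."""
    return re.findall(r"\w+", text.lower())

def lemmatise(tokens: list[str]) -> list[str]:
    """Look each token up in the dictionary; unknown forms pass through."""
    return [LEMMA_DICT.get(t, t) for t in tokens]
```

Unknown word forms falling through unchanged is exactly why the abstract stresses that the dictionaries must be compiled for the specific subject area.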
APA, Harvard, Vancouver, ISO, and other styles
49

Gibbon, Dafydd, Katarzyna Klessa, and Jolanta Bachan. "Duration and speed of speech events: A selection of methods." Lingua Posnaniensis 56, no. 1 (July 24, 2015): 59–83. http://dx.doi.org/10.2478/linpo-2014-0004.

Full text
Abstract:
The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of these methods. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods which are covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Trees, CART, analysis), others are discussed in a critical light (e.g. so-called ‘rhythm metrics’). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies, based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.
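One of the ‘rhythm metrics’ the paper discusses critically, the normalised Pairwise Variability Index (nPVI), is compact enough to state directly; the durations below are invented interval lengths in seconds.

```python
def npvi(durations: list[float]) -> float:
    """Normalised Pairwise Variability Index over successive interval
    durations; assumes at least two intervals are given."""
    pairs = zip(durations, durations[1:])
    return 100 / (len(durations) - 1) * sum(
        abs(a - b) / ((a + b) / 2) for a, b in pairs)
```

Perfectly isochronous intervals give an nPVI of zero; larger values indicate greater alternation between successive durations, which is what such metrics use to contrast putative rhythm classes.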
APA, Harvard, Vancouver, ISO, and other styles
50

Goh, Tuan-Jun, Lee-Ying Chong, Siew-Chin Chong, and Pey-Yun Goh. "A Campus-based Chatbot System using Natural Language Processing and Neural Network." Journal of Informatics and Web Engineering 3, no. 1 (February 14, 2024): 96–116. http://dx.doi.org/10.33093/jiwe.2024.3.1.7.

Full text
Abstract:
A chatbot is designed to simulate human conversation and provide instant responses to users. Chatbots have gained popularity among organisations for providing automated customer support and information retrieval. A chatbot also acts as a virtual assistant, communicating with users and delivering up-to-date answers based on their input. Most chatbots still use the traditional rule-based approach, which can only respond to pre-defined sentences, making users less likely to use the chatbot. This paper aims to design and build a campus chatbot for the Faculty of Information Science & Technology (FIST) of Multimedia University that facilitates the study life of FIST students. Before the FIST chatbot can be used, natural language processing techniques such as tokenisation, lemmatisation and the bag-of-words model are used to generate the input that trains the neural network model (a multilayer perceptron). This enables the FIST chatbot to comprehend user intent by analysing questions, allowing it to address a broader range of inquiries and cater to students' needs with accurate answers and information related to the faculty. We also developed a backend interface that allows the administrator to add and edit the dataset of the proposed chatbot, enabling it to keep responding to students with the latest information.
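The bag-of-words encoding that feeds the multilayer perceptron can be sketched as follows; the training patterns are invented examples, and a real pipeline would lemmatise the tokens first, as the abstract notes.

```python
def build_vocab(patterns: list[str]) -> list[str]:
    """Sorted vocabulary over all tokens in the training patterns."""
    return sorted({tok for p in patterns for tok in p.lower().split()})

def bag_of_words(sentence: str, vocab: list[str]) -> list[int]:
    """Binary vector: 1 if the vocabulary word occurs in the sentence."""
    tokens = set(sentence.lower().split())
    return [1 if w in tokens else 0 for w in vocab]
```

The resulting fixed-length vector is what a multilayer perceptron of this kind maps to an intent class, whose canned response is then returned to the student.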
APA, Harvard, Vancouver, ISO, and other styles