Academic literature on the topic 'Automatically happened outcomes'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Automatically happened outcomes.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Automatically happened outcomes"

1

Chakrabarty, Dhritikesh. "Extended Definition of Statistical Probability: Estimate of Probability Distribution of Rainy Days in Southern Part of India." Partners Universal International Innovation Journal (PUIIJ) 1, no. 6 (2023): 148–58. https://doi.org/10.5281/zenodo.10392846.

Full text
Abstract:
Recently, the statistical definition of probability introduced by von Mises, which was based on the outcomes of actually performed experiments, has been extended to situations where the outcomes of the trials happen automatically. This extended definition of probability has been applied in estimating the probability distribution of rainy days in each of the 12 months at four stations in the southern part of India, namely Bangalore, Chennai, Hyderabad and Trivandrum, with a view to obtaining a picture of the tendency of rainfall there. This article presents the estimates obtained in the study. It has been found that at each of the four stations there is no month that is certainly non-rainy, while there are months that are certainly rainy.
APA, Harvard, Vancouver, ISO, and other styles
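The extended empirical definition the abstract describes reduces, in practice, to a relative-frequency estimate. A minimal illustrative sketch (the function name and the counts below are hypothetical, not taken from the paper):

```python
# Empirical (relative-frequency) estimate of the probability that a
# given day in a month is rainy, from observed counts of rainy days.
# All numbers here are hypothetical, for illustration only.

def estimate_rainy_day_probability(rainy_days: int, total_days: int) -> float:
    """Estimate P(rainy day) as the observed relative frequency m/n."""
    if total_days <= 0:
        raise ValueError("total_days must be positive")
    return rainy_days / total_days

# e.g. 18 rainy days observed over a 30-day month, pooled over years
p = estimate_rainy_day_probability(18, 30)
print(round(p, 2))  # 0.6
```

As the number of observed days grows, this relative frequency stabilizes, which is the logic behind the von Mises-style definition the paper extends.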
2

Chakrabarty, Dhritikesh. "Numbers of Rainy Days at Chennai, Kolkata, Mumbai, and New Delhi: Most Likely to Occur." Partners Universal International Research Journal (PUIRJ) 2, no. 3 (2023): 210–17. https://doi.org/10.5281/zenodo.8372740.

Full text
Abstract:
The definition of probability based on data on automatically happened outcomes, formulated in a recent study by applying the logic behind the concept of empirical probability, has been applied in estimating the most likely number of rainy days to occur in each of the 12 months at each of four stations in India, namely Chennai, Kolkata, Mumbai, and New Delhi. It has been found that rainfall is almost certain to occur in each of the months from June to November at Chennai, from May to October at Kolkata, from June to September at Mumbai, and from June to August at New Delhi, while rainfall is certain to occur in each of the months from September to November at Chennai, from June to September at Kolkata, from July to September at Mumbai, and from July to August at New Delhi. On the other hand, the non-occurrence of rainfall is not certain in any month at any of the four stations.
3

Chakrabarty, Dhritikesh. "Inverse Application of Probability: Favorable Number of Rainy Days in Indian Context." Partners Universal International Research Journal (PUIRJ) 2, no. 4 (2023): 74–85. https://doi.org/10.5281/zenodo.10424163.

Full text
Abstract:
An attempt has been made to determine the favorable number of outcomes associated with a natural phenomenon by the inverse application of the classical definition of probability and the direct application of its empirical definition, extended to situations where the outcomes of the associated trials happen automatically. A numerical example estimates the number of days favorable to being rainy in each of the 12 months at 30 stations in India, with a view to obtaining a picture of the tendency of rainfall at those stations.
4

Harper, Jason, Atul K. Madan, Craig A. Ternovits, and David S. Tichansky. "What Happens to Patients who Do Not Follow-Up after Bariatric Surgery?" American Surgeon 73, no. 2 (2007): 181–84. http://dx.doi.org/10.1177/000313480707300219.

Full text
Abstract:
Loss of follow-up is a concern when tracking long-term clinical outcomes after bariatric surgery. The results of patients who are “lost to follow-up” are not known. After bariatric surgery, the lack of follow-up may result in less weight loss for patients. This study investigated the hypothesis that there are differences between patients who do not automatically return for their annual follow-up and those who do return. Patients who were more than 14 months postoperative after laparoscopic gastric bypass were contacted if they had not returned for their annual appointment. They were seen in clinic and/or a phone interview was performed for follow-up. These patients (Group A) were compared with patients who returned for their annual appointment (Group B) without having to be notified. There were 105 consecutive patients, of whom 48 did not automatically return for their annual appointment. Only six of these patients could not ultimately be contacted. There was no difference in preoperative body mass index between the two groups. Percentage excess body weight loss was greater in Group B (76 vs 65%; P < 0.003). More patients had successful weight loss (defined as within 50% of ideal body weight) in Group B (50 [88%] vs 28 [67%]; P < 0.02). We found that a significant number of patients will not comply with regular follow-up care after laparoscopic gastric bypass unless they are prompted to do so by their bariatric clinic. These patients have a worse clinical outcome (i.e., less weight loss). Caution should be taken when examining the results of any bariatric study in which there is a significant loss to follow-up.
5

Hemavathi. "Enhancing ECA (Event-Condition-Action) Rules: Fine-Tuning BERT for Security and Privacy Violation Detection." Journal of Information Systems Engineering and Management 10, no. 34s (2025): 751–83. https://doi.org/10.52783/jisem.v10i34s.5869.

Full text
Abstract:
In the world of the Internet of Things, Event-Condition-Action rules are the secret sauce for smart device interaction: an Event triggers the rule, and if a specific Condition is met, an Action happens automatically. This article addresses trigger–action platforms, which empower users to define custom behaviors for IoT devices and web services through conditional rules. While these platforms enhance user creativity in automation, they also pose significant risks, such as unintentional disclosure of private information or exposure to cyber threats. The proposed solution leverages Natural Language Processing techniques to identify automation rules within these platforms that may compromise user security or privacy. Natural Language Processing based models are applied to analyze the semantic and contextual information of trigger–action rules, utilizing classification techniques on various rule features. Evaluation on the If-This-Then-That platform, using a dataset of 76,741 rules labeled through an ensemble of three semi-supervised learning techniques, shows promising outcomes for the Bidirectional Encoder Representations from Transformers based model, with an average validation accuracy of 89% over 2 epochs and a test accuracy of around 90.65%. Predicted outputs showcase the model's ability to categorize applets into different risk classes, including instances of cyber security threats and physical harm.
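The task described above, classifying trigger–action rules by risk, can be illustrated with a deliberately simple keyword-based triage. The paper itself fine-tunes a BERT model; the keyword lists, labels, and function name below are purely hypothetical:

```python
# Toy risk triage for trigger-action (ECA) rules, illustrating the kind
# of classification task described in the abstract. Keyword lists and
# risk labels are hypothetical, not the paper's taxonomy.

SECURITY_TRIGGERS = {"password", "unlock", "disable alarm", "camera off"}
PRIVACY_ACTIONS = {"post publicly", "share location", "email log"}

def triage_rule(trigger: str, action: str) -> str:
    """Flag a rule as a potential security or privacy risk, else benign."""
    t, a = trigger.lower(), action.lower()
    if any(keyword in t for keyword in SECURITY_TRIGGERS):
        return "security risk"
    if any(keyword in a for keyword in PRIVACY_ACTIONS):
        return "privacy risk"
    return "benign"

print(triage_rule("Front door unlock detected", "Turn on hallway light"))
# security risk
print(triage_rule("New photo taken", "Post publicly to feed"))
# privacy risk
```

A learned model such as BERT replaces the hand-written keyword lists with classification over the rule text's semantic and contextual features.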
6

Liao, Yanhui, Ling Wang, Tao Luo, et al. "Brief mindfulness-based intervention of ‘STOP (Stop, Take a Breath, Observe, Proceed) touching your face’: a study protocol of a randomised controlled trial." BMJ Open 10, no. 11 (2020): e041364. http://dx.doi.org/10.1136/bmjopen-2020-041364.

Full text
Abstract:
Introduction: Face-touching behaviour often happens frequently and automatically, and poses a potential risk for spreading infectious disease. Mindfulness-based interventions (MBIs) have shown efficacy in the treatment of behaviour disorders. This study aims to evaluate an online mindfulness-based brief intervention skill named ‘STOP (Stop, Take a Breath, Observe, Proceed) touching your face’ in reducing face-touching behaviour. Methods and analysis: This will be an online-based, randomised controlled trial. We will recruit 1000 participants, and will randomise and allocate them 1:1 to the ‘STOP touching your face’ intervention group (both 750-word text and 5 min audio description, delivered online; n=500) and the wait-list control group (n=500). All participants will be asked to monitor and record their face-touching behaviour during a 60 min period before and after the intervention. The primary outcome will be the efficacy of the short-term mindfulness-based ‘STOP touching your face’ intervention in reducing the frequency of face-touching. The secondary outcomes will be the percentage of participants touching their faces; the correlation between the psychological traits of mindfulness and face-touching behaviour; and the differences in face-touching behaviour between left-handers and right-handers. Analysis of covariance, regression analysis, the χ2 test, the t-test, and Pearson’s correlations will be applied in data analysis. We will recruit 1000 participants from April to July 2020 or until the recruitment process is complete. The follow-up will be completed in July 2020. We expect all trial results to be available by the end of July 2020. Ethics and dissemination: The study protocol has been approved by the Ethics Committee of Sir Run Run Shaw Hospital, an affiliate of Zhejiang University Medical College (No. 20200401-32). Study results will be disseminated via social media and peer-reviewed publications. Trial registration number: NCT04330352.
7

Dubey, Sakshi. "Pulmonary Disease Prediction by Using Machine Learning Technique." INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT 09, no. 05 (2025): 1–9. https://doi.org/10.55041/ijsrem46795.

Full text
Abstract:
Pulmonary disease is one of the leading causes of cancer-related deaths worldwide, and early diagnosis and treatment are essential to cure the patient. Lung cancer is normally indicated by small growths in the lungs called nodules, which occur because cells in the lungs start increasing uncontrollably. Finding these lung nodules is important for detecting lung cancer; they are typically detected through CT scans, but manual interpretation can be time-consuming and prone to human error. Through a process of feature extraction and selection, our model was trained to identify patterns and subtle abnormalities indicative of lung cancer within the imaging data. Machine learning and deep learning models have shown promise in enhancing the accuracy and efficiency of lung cancer detection. Convolutional Neural Networks (CNNs), a type of deep learning technique, have recently shown promising results in image-based medical diagnosis. This work identifies lung cancer from a healthcare image collection using a CNN-based approach, with an emphasis on histological image data. The suggested CNN method uses the inherent hierarchical properties of medical images to automatically identify distinguishing elements that point to lung cancer. The limitations of small healthcare image datasets are addressed through transfer learning from large image datasets and refining of pre-trained models. A large dataset of patients with lung cancer is used to create and evaluate the CNN model, which makes use of the VGG-19 architecture. Cancer may be automatically diagnosed using the power of machine learning (ML) with medical images; ML can classify cancer cell images more accurately, in less time and at lower cost. This research adapts the Convolutional Neural Network model as a pre-trained Visual Geometry Group 19 (VGG19) for classifying lung cancer biopsy images with an improved augmentation technique.
By enhancing VGG19's generalizability to large-scale datasets and optimizing it for medical imaging, our study seeks to address these issues. According to experimental data, our method improves early diagnosis and detection performance, lowering false positives and facilitating more efficient lung cancer screening and treatment planning. The fine-tuned VGG19 model with the improved augmentation technique achieves up to 98.73% accuracy. By harnessing the power of the Convolutional Neural Network, we offer a promising solution for early detection, thereby facilitating timely interventions and ultimately enhancing patient outcomes in the fight against lung cancer.
Keywords: Lung cancer detection, feature extraction, model evaluation, nodule detection, Convolutional Neural Networks
8

Esteve Del Valle, Marc, Anatoliy Gruzd, Caroline Haythornthwaite, Priya Kumar, Sara Gilbert, and Drew Paulin. "Learning in the wild." Proceedings of the International Conference on Networked Learning 11 (May 14, 2018): 157–64. http://dx.doi.org/10.54337/nlc.v11.8750.

Full text
Abstract:
The theoretical lenses, empirical measures and analytical tools associated with social network analysis comprise a wealth of knowledge that can be used to analyse networked learning. This has popularized the use of the social network analysis approach to understand and visualize structures and dynamics in online learning networks, particularly where data can be automatically gathered and analysed. Research in the field of social network learning analysis has (a) used social network visualizations as a feedback mechanism and an intervention to enhance online social learning activities (Bakharia & Dawson, 2011; Schreurs, Teplovs, Ferguson, de Laat, & Buckingham Shum, 2013), (b) investigated what variables predicted the formation of learning ties in networked learning processes (Cho, Gay, Davidson, & Ingraffea, 2007), (c) predicted learning outcomes in online environments (Russo & Koesten, 2005), and (d) studied the nature of the learning ties (de Laat, 2006). This paper expands the understanding of the variables predicting the formation of learning ties in online informal environments. Reddit, an online news sharing site commonly referred to as ‘the front page of the Internet’, has been chosen as the environment for our investigation because conversations on it emerge from the contributions of members and it combines the perspectives of experts and non-experts (Moore & Chuang, 2017) across a plethora of subcultures (subreddits) occurring outside traditional settings. We study two subreddit communities, ‘AskStatistics’ and ‘AskSocialScience’, in which we believe informal learning is likely to happen, and which offer avenues for comparison both in terms of the communication dynamics and the learning processes occurring between members. We gathered all the interactions amongst the users of these two subreddit communities for a 1-year period, from January 1st, 2015 until December 31st, 2015.
Exponential Random Graph Models (ERGMs) were employed to determine the endogenous (network) and exogenous (node attribute) factors facilitating networked ties amongst the users of these communities. We found evidence that Redditors’ networked ties arise from network dynamics (reciprocity and transitivity) and from a Redditor’s role as a moderator in the subreddit communities. These results shed light on the variables predicting the formation of ties in informal networked learning environments, and more broadly contribute to the development of the field of social network learning analysis.
9

Tewes, Federico R. "Artificial Intelligence in the American Healthcare Industry: Looking Forward to 2030." Journal of Medical Research and Surgery 3, no. 5 (2022): 107–8. http://dx.doi.org/10.52916/jmrs224089.

Full text
Abstract:
Artificial intelligence (AI) has the potential to speed up the exponential growth of cutting-edge technology, much the way the Internet did. Due to intense competition from the private sector, governments, and businesspeople around the world, the Internet has already reached its peak as an exponential technology. In contrast, artificial intelligence is still in its infancy, and people all over the world are unsure of how it will impact their lives in the future. Artificial intelligence is a field of technology that enables robots and computer programmes to mimic human intellect by teaching a predetermined set of software rules to learn through repetition from experience, slowly moving toward maximum performance. Although this intelligence is still developing, it has already demonstrated five different levels of independence: first, solving problems; second, reasoning about solutions; third, answering inquiries; fourth, generating forecasts through data analytics; fifth, making tactical recommendations. Massive data sets and "iterative algorithms," which use lookup tables and other data structures like stacks and queues to solve issues, make all of this possible. Iteration is a strategy in which software rules are regularly adjusted to patterns in the data for a certain number of iterations. The artificial intelligence continuously makes small, incremental improvements that compound into exponential growth, which enables the computer to become highly proficient at whatever it is trained to do. For each round of data processing, the artificial intelligence tests and measures its performance to develop new expertise. In order to address complicated problems, artificial intelligence aims to create computer systems that can mimic human behavior and exhibit human-like thought processes [1]. In the field of healthcare, artificial intelligence technology is being developed to deliver individualized medication.
By 2030, six different artificial intelligence sectors will have considerably improved healthcare delivery through the utilization of larger, more accessible data sets. The first is machine learning. This area of artificial intelligence learns automatically and produces improved results by identifying patterns in the data, gaining new insights, and enhancing the outcomes of whatever activity the system is intended to accomplish, without being explicitly trained on a particular topic. Here are several instances of machine learning in the healthcare industry. The first is IBM Watson Genomics, which aids in rapid disease diagnosis and identification by fusing cognitive computing with genome-based tumour sequencing. Second, a project using Naïve Bayes allows for the prediction of diabetes years before an official diagnosis, before it results in harm to the kidneys, the heart, and the nerves. Third, two machine learning approaches termed classification and clustering are employed to analyse the Indian Liver Patient Data (ILPD) set in order to predict liver illness before this organ that regulates metabolism becomes susceptible to chronic hepatitis, liver cancer, and cirrhosis [2]. Second, deep learning. Deep learning employs artificial intelligence to learn from data processing, much like machine learning does. Deep learning, however, makes use of artificial neural networks that mimic human brain function to analyse data, identify relationships between the data, and provide outputs based on positive and negative reinforcement. For instance, in Magnetic Resonance Imaging (MRI) and Computed Tomography (CT), deep learning aids in image recognition and object detection. Deep learning algorithms for the early identification of Alzheimer's, diabetic retinopathy, and breast nodule ultrasound detection are three real-world applications of this cutting-edge technology.
Future developments in deep learning will bring considerable improvements in pathology and radiology imaging [3]. Third, neural networks. The artificial intelligence system can accept massive data sets, find patterns within the data, and respond to queries regarding the information processed because the computer learning process resembles a network of neurons in the human brain. Let's examine a few application examples now applicable to the healthcare sector. According to studies from Johns Hopkins University, surgical errors are a major contributor to medical malpractice claims, since they happen more than 4,000 times a year in the United States alone due to surgeons' human error. Neural networks can be used in robot-assisted surgery to model and plan procedures, evaluate the abilities of the surgeon, and streamline surgical activities. In one study of 379 orthopaedic patients, it was discovered that robotic surgery using neural networks results in five times fewer complications than surgery performed by a single surgeon. Another application of neural networks is in visual diagnostics, as proven by Harvard University researchers who inserted an image of a gorilla into x-rays; 83% of the radiologists who saw the images did not notice the gorilla. The Houston Medical Research Institute has created a breast cancer early detection programme that can analyse mammograms with 99 percent accuracy and offer diagnostic information 30 times faster than a human [4]. Fourth, cognitive computing, which aims to replicate the way people and machines interact, showing how a computer may operate like the human brain when handling challenging tasks like text, speech, or image analysis. Large volumes of patient data have been analysed, with the majority of the research to date focusing on cancer, diabetes, and cardiovascular disease. Companies like Google, IBM, Facebook, and Apple have shown interest in this work.
Cognitive computing made up the greatest component of the artificial intelligence market in 2020, with 39% of the total [5]. Hospitals made up 42% of the market of cognitive computing end users because of the rising demand for individualised medical data. Because it predicted the demand for cognitive computing in this sector, IBM invested more than $1 billion in 2014 in the development of the WATSON analytics platform ecosystem and in collaboration with startups committed to creating various cloud and application-based systems for the healthcare business. Fifth, Natural Language Processing (NLP). This area of artificial intelligence enables computers to comprehend and analyse spoken language. The first phase of this pre-processing divides the data into more manageable semantic units, which simply makes the information easier for the NLP system to understand. Clinical trial development is experiencing exponential expansion in the healthcare sector thanks to NLP. First, NLP uses speech-to-text dictation and structured data entry to extract clinical data at the point of care, reducing the need for manual assessment of complex clinical paperwork. Second, using NLP technology, healthcare professionals can automatically examine enormous amounts of unstructured clinical and patient data to select the most suitable patients for clinical trials, potentially leading to an improvement in the patients' health [6]. Sixth, computer vision. Computer vision, an essential part of artificial intelligence, uses visual data as input to process photos and videos continuously in order to get better results faster and with higher quality than would be possible if the same job were done manually. Simply put, doctors can now diagnose their patients with diseases like cancer, diabetes, and cardiovascular disorders more quickly and at an earlier stage. Here are a few examples of real-world applications where computer vision technology is making notable strides.
Mammogram images are analysed by visual systems intended to spot breast cancer at an early stage. Automated cell counting is another real-world example: manual counts raise concerns about the accuracy of the results, because they can differ greatly depending on the examiner's experience and degree of focus, and automation dramatically decreases this human error. A third real-world application of computer vision is the quick and painless early-stage tumour detection enabled by artificial intelligence. Without a doubt, computer vision has unfathomable potential to significantly enhance how healthcare is delivered. Beyond visual data analysis, clinicians can use this technology to enhance their training and skill development. Currently, Gramener is the top company offering computer vision solutions to medical facilities and research organisations [7]. The usage of imperative rather than functional programming languages is one of the key difficulties in creating artificial intelligence software. As artificial intelligence starts to increase exponentially, developers employing imperative programming languages must assume that the machine is stupid and supply detailed instructions that are subject to a high level of maintenance and human error. In software with hundreds of thousands of lines of code, human error detection is challenging. Therefore, the substantial amount of ensuing maintenance may become ridiculously expensive, keeping the costs of research and development high. As a result, software developers have contributed to the unreasonably high cost of medical care. Functional programming languages, on the other hand, demand that the developer use their problem-solving abilities as though the computer were a mathematician. As a result, mathematical functions are orders of magnitude shorter than the number of lines of code an imperative programme needs to perform the same operation.
The bulk of software developers who use functional programming languages are well trained in mathematical logic; thus, they reason differently from most American software developers, who are more accustomed to following step-by-step instructions. The market for artificial intelligence in healthcare is expected to increase from $3.4 billion in 2021 to at least $18.7 billion by 2027, a 30 percent annual growth rate before 2030, according to market research firm IMARC Group. The only outstanding question is whether these operational reductions will ultimately result in less expensive therapies.
10

Rohman, Hendra, Ais Prawirya, and Fadia Fadia. "Pengodean Kasus Cedera, Keracunan dan External Cause pada Sistem Informasi Puskesmas" [Coding of Injury, Poisoning and External Cause Cases in the Community Health Center Information System]. JURNAL ILMU KESEHATAN BHAKTI SETYA MEDIKA 9, no. 1 (2025). https://doi.org/10.56727/dz6nn732.

Full text
Abstract:
At the health center there were still errors in filling in disease codes; in particular, cases of injury, poisoning and external causes were not coded to the 5th character. This happened because doctors and nurses did not understand the coding procedures. This study aims to identify the coding process, calculate the percentage of code accuracy, and identify factors causing inaccuracy in coding injuries, poisoning and external causes at the Bambanglipuro Health Center. Disease diagnosis codes at the health center were assigned after the nurse finished filling in the assessment: the doctor entered the diagnosis in SIMPUS, and the ICD code appeared automatically. Of a total sample of 71 outpatient medical records for injury, poisoning and external cause cases at Bambanglipuro Health Center, Bantul, in 2023, the number of correct diagnosis codes was 20 (28%) and the number of incorrect diagnosis codes was 51 (72%). The cause was the human factor: staff who did not meet the competence requirements for medical records, no special training provided for coding officers, and external causes left uncoded. As for method factors, there is no SOP for the disease coding system. Implementation of disease diagnosis coding is measured by re-examination by medical record officers who are competent in disease coding. Fulfilment of human resource qualifications affects work outcomes in the medical record unit. The ICD database in SIMPUS needs to be reviewed and its data updated by vendors so that more specific codes become available. Classification of code determination according to ICD rules can describe a patient's medical history more specifically.
More sources

Conference papers on the topic "Automatically happened outcomes"

1

AitAli, R., P. Arevalo, D. Dashevskiy, et al. "Hybrid Data Driven Intelligent Algorithm for Stuck Pipe Prevention." In ADIPEC. SPE, 2023. http://dx.doi.org/10.2118/216325-ms.

Full text
Abstract:
In today's drilling industry, it is essential to utilize both downhole and surface real-time sensor systems along with physics-based models to detect drilling hazards at an early stage and take timely measures to mitigate drilling-related risks, reducing non-productive time (NPT) and invisible lost time (ILT). Stuck pipe events are a major cause of NPT, with an estimated cost to the oil and gas industry in the range of hundreds of millions of USD per year. Given its impact on drilling operations, stuck pipe prevention has high relevance and attention in the industry. In fact, in the last decade, the number of initiatives using drilling data coupled with advanced algorithms (a combination of artificial intelligence (AI) and machine learning (ML)) to better understand and prevent stuck pipe events has greatly increased. Different approaches utilizing surface data are available, ranging from data-driven models to physics-based models, or even hybrid approaches combining the two. In any case, the outcome is an algorithm tasked with the identification of stuck-pipe events before they happen. If such an algorithm is deployed at the rig site (i.e., running on an edge device), ingesting surface and downhole data in real time, the potential to improve the drilling process in terms of performance and safety greatly increases. Furthermore, safer drilling operations have an impact not only on the reduction of the overall capital expenditure (CAPEX) for well construction, but also on associated carbon emissions. The approach presented in this paper is based on drilling automation applications focused on the integrity of the drilling process. The system includes a set of advanced algorithms coupled with digital twins (e.g., physics-based models of the wellbore) running on an edge device deployed at the rig site, to create a comprehensive monitoring and alert solution for surface hookload.
The monitoring system consists of three main components: a reference environment given by digital twins, which provides safe operating envelopes (SOE) defined by overpull and buckling as boundaries; a set of algorithms to detect and sample common indicators of torque and drag automatically, such as pick-up (PU), slack-off (SO), rotation off bottom (ROB) torques and loads; and a higher layer to identify trends and deviations between the samples to create early warnings related to stuck-pipe symptoms. The monitoring system implemented can be deployed for all kinds of drilling operations (i.e., drilling, tripping, circulating). By providing early warnings of stuck pipe like symptoms, the system enables users (i.e., rig crew, drilling operations, drilling optimization engineers) to mitigate such symptoms in time, hence avoiding costly consequences.
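The safe-operating-envelope monitoring described in the abstract can be pictured as a simple bounds check on each hookload sample. The limits, margin, and function name below are hypothetical placeholders for what the paper's physics-based digital twins would supply:

```python
# Toy safe-operating-envelope (SOE) check for surface hookload.
# Bounds, margin, and sample values are hypothetical; a real system
# would derive the envelope from wellbore digital twins in real time.

def soe_alert(hookload: float, buckling_limit: float, overpull_limit: float,
              margin: float = 0.05) -> str:
    """Classify a hookload sample against the safe operating envelope."""
    if hookload <= buckling_limit or hookload >= overpull_limit:
        return "alarm"          # outside the envelope entirely
    span = overpull_limit - buckling_limit
    if hookload >= overpull_limit - margin * span:
        return "warning"        # drifting toward the overpull boundary
    return "ok"

print(soe_alert(180.0, 100.0, 200.0))  # ok
print(soe_alert(196.0, 100.0, 200.0))  # warning
print(soe_alert(205.0, 100.0, 200.0))  # alarm
```

The trend layer the paper describes would compare successive pick-up, slack-off and rotation-off-bottom samples over time rather than judging single readings in isolation.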

Reports on the topic "Automatically happened outcomes"

1

Hakizimana, Naphtal, and Fabrizio Santoro. Technology Evolution and Tax Compliance: Evidence from Rwanda. Institute of Development Studies, 2024. http://dx.doi.org/10.19088/ictd.2024.038.

Full text
Abstract:
Data on economic transactions is crucial for tax administrations to be able to enforce tax compliance, and technology can be key to obtaining information. In the last decade, African tax administrations have increasingly adopted technological advances such as integrated systems, electronic filing, and electronic billing machines (EBMs). EBMs allow taxpayers to digitise their transactions and transfer billing information automatically to the revenue authority. They have high potential, as they allow firms to lower their administrative and compliance costs, streamline transactions, improve record-keeping, strengthen their administrative capacity and, in the case of small businesses, improve their ability to attract clients and engage in trade thanks to improved accuracy and transparency. Rwanda is one of Africa’s fastest growing and most technology-oriented countries. The government is highly reliant on technology to improve tax revenues. In 2013, the Rwandan Revenue Authority (RRA) introduced EBMs through a machine called EBM1. This used a SIM card, through which VAT-registered taxpayers transmitted sale transaction data to the RRA in real time. Like any technology, there were practical challenges, such as the cost of acquiring and maintaining the machine, limitations in storing information and lack of remote support. As a result, an improved, free, software version called EBM2 was rolled out in 2017 and is still in use. This can digitise and store receipts, capture core business information like inventory and type of items sold, automatically validate buyers’ identity and provide support online. This paper evaluates the impact of the implementation of EBM2 on VAT and income tax compliance. Thanks to a collaboration with the RRA, we looked at around 60,000 EBM users’ monthly/quarterly VAT and annual income tax returns from 2013 to 2020. 
We focus specifically on two groups: those who had previously used EBM1 and shifted to EBM2 (shifters), and those who only adopted EBM2 (new users). Taking advantage of the fact that EBM2 adoption happened over time, we use a difference-in-differences strategy to estimate the impact of EBM2 on key outcomes for both VAT and income tax, including the discrepancy in reported turnover between the two tax heads. Please note: This is a revised version of RiB99: https://opendocs.ids.ac.uk/opendocs/handle/20.500.12413/18228
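The difference-in-differences strategy mentioned above can be sketched with the two-period identity: the effect is the change in the adopter group minus the change in the comparison group. The group means and function name below are hypothetical; the report estimates this on panel data for roughly 60,000 taxpayers:

```python
# Two-period difference-in-differences with hypothetical group means.
# Real estimation (as in the report) would use panel regressions on
# taxpayers' returns; this only illustrates the identity itself.

def did_estimate(treat_pre: float, treat_post: float,
                 control_pre: float, control_post: float) -> float:
    """DiD effect = (change in treated group) - (change in control group)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean reported turnover before/after EBM2 adoption
effect = did_estimate(treat_pre=10.0, treat_post=14.0,
                      control_pre=9.0, control_post=11.0)
print(effect)  # 2.0
```

Subtracting the control group's change nets out common time trends, which is what lets staggered EBM2 adoption identify the technology's effect.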