Academic literature on the topic 'AI completeness'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'AI completeness.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "AI completeness"

1

Malik, Prasanta, and Samiran Das. "AI-statistical limit points and AI-statistical cluster points." Filomat 36, no. 5 (2022): 1573–85. http://dx.doi.org/10.2298/fil2205573m.

Full text
Abstract:
In this paper, using a non-negative regular summability matrix A and a nontrivial admissible ideal I of subsets of N, we introduce the notion of an AI-statistical limit point as a generalization of the A-statistical limit point of sequences of real numbers. We also study some basic properties of the sets of all AI-statistical limit points and AI-statistical cluster points of real sequences, including their interrelationship. Further, introducing the additive property of AI-density-zero sets, we establish AI-statistical analogues of some completeness theorems of R.
APA, Harvard, Vancouver, ISO, and other styles
2

Benjumeda Wynhoven, Isabel María, and Claudio Córdova Lepe. "Analysis of IPV success treatment from an AI approach." PLOS One 20, no. 6 (2025): e0323945. https://doi.org/10.1371/journal.pone.0323945.

Full text
Abstract:
Intimate partner violence (IPV) is a serious social problem in Chile. Understanding the patterns of internalization and the motivations maintaining it is crucial to designing optimal treatments that ensure adherence and completeness. This, in addition, is essential to prevent revictimization and improve the quality of life of both victims and their children. The present study analyzes the success of a psychological treatment offered by a Chilean foundation helping IPV victims. An analysis of a database containing 1,279 cases was performed applying classical statistics and artificial intelligence methods. The aim of the research was to search for cluster groupings and to create a classification model able to predict IPV treatment completeness. The main results demonstrate the presence of two main clusters, one including victims who completed the treatment (cluster 1) and a second containing victims who did not complete the treatment (cluster 2). Cluster classification of treatment completeness using an XGBoost model had an accuracy of 81%. The results showed that living with the aggressor, age, and educational level had the greatest impact on the classification. Considering these factors as input variables allows for higher precision in predicting treatment completeness. To our knowledge, this is the first study performed in Chile that uses AI for cluster grouping and for analyzing the variables contributing to the success of an IPV victims' treatment.
3

Govender, Reginald Gerald. "My AI students: Evaluating the proficiency of three AI chatbots in completeness and accuracy." Contemporary Educational Technology 16, no. 2 (2024): ep509. http://dx.doi.org/10.30935/cedtech/14564.

Full text
Abstract:
A new era of artificial intelligence (AI) has begun, which can radically alter how humans interact with and profit from technology. The confluence of chat interfaces with large language models lets humans write a natural language inquiry and receive a natural language response from a machine. This experimental design study tests the capabilities of three popular AI chatbot services, referred to as my AI students: Microsoft Bing, Google Bard, and OpenAI ChatGPT, on completeness and accuracy. Likert scales were used to rate completeness and accuracy, respectively three-point and five-point. Descriptive statistics and non-parametric tests were used to compare marks and scale ratings. The results show that the AI chatbots were awarded a score of 80.0% overall. However, they struggled with answering questions from the higher Bloom's taxonomic levels. The median completeness was 3.00 with a mean of 2.75, and the median accuracy was 5.00 with a mean of 4.48 across all Bloom's taxonomy questions (n=128). Overall, the completeness of the solutions was rated mostly incomplete due to limited responses (76.2%), while accuracy was rated mostly correct (83.3%). In some cases, the generative text was found to be verbose and disembodied, lacking perspective and coherency. Microsoft Bing ranked first among the three AI text-generation tools in providing correct answers (92.0%). The Kruskal-Wallis test revealed a significant difference in completeness (asymp. sig.=0.037, p<0.05) and accuracy (asymp. sig.=0.006, p<0.05) among the three AI chatbots. A series of Mann-Whitney tests showed no significant differences between AI chatbots for completeness (all p-values>0.015 and 0<r<0.2), while a significant difference was found for accuracy between Google Bard and Microsoft Bing (asymp. sig.=0.002, p<0.05, r=0.3, medium effect). The findings suggest that while AI chatbots can generate comprehensive and correct responses, they may have limits when dealing with more complicated cognitive tasks.
4

Zavalin, Vyacheslav, and Oksana L. Zavalina. "Are we there yet? Evaluation of AI-generated metadata for online information resources." Information Research: An International Electronic Journal 30, iConf (2025): 732–40. https://doi.org/10.47989/ir30iconf47215.

Full text
Abstract:
Introduction. Generative AI tools are increasingly used to create descriptive metadata, the quality of which is key for information discovery and support of information user tasks. Machine-readable online information resources such as websites naturally lend themselves to automatic metadata creation. Yet assessments of AI-generated metadata for them are lacking; AI metadata quality research to date is limited to 2 metadata standards. Method. This experimental study assessed the quality of AI-generated descriptive metadata in the 4 most widely used standards: Dublin Core, MODS, MARC, and BIBFRAME. Three generative AI tools – Gemini, Gemini Advanced, and ChatGPT-4 – were used to create metadata for an educational website. Analysis. Zero-shot queries prompting the AI tools to generate metadata followed the same structure and included a link to each metadata scheme's openly accessible documentation. A comparative in-depth analysis of the accuracy and completeness of the entire resulting AI-generated metadata records was performed. Results. Overall, the AI-generated metadata does not meet the quality threshold. ChatGPT performs somewhat better than the 2 other tools on completeness, but accuracy is similarly low in all 3 tools. Conclusions. The current metadata-generating effectiveness of AI tools does not allow us to conclude that the involvement of human metadata experts in the creation of quality (and therefore functional) metadata can be significantly reduced without a strong negative impact on information discovery.
5

Lee, Sena, and Jung-Won Lee. "The Impact of AI Travel Planner Tourism Information Quality on Continuance Intention: Focusing on Generation Z College Students." Convergence Tourism Contents Society 10, no. 3 (2024): 137–52. https://doi.org/10.22556/jctc.2024.10.3.137.

Full text
Abstract:
Purpose: This study aimed to examine the structural relationship among AI travel planner tourism information quality, perceived usefulness, satisfaction, and continuance intention by applying the modified Post-Acceptance Model (PAM), centered on Generation Z. Methods: This study conducted a survey of Generation Z college students who have experience using AI travel planners in travel apps. The survey period was approximately two weeks from November 2024, and a convenience sampling method was used to secure a sample of 152 people for analysis. Results: The empirical analysis results of this study are as follows. First, the AI travel planner tourism information quality factors were derived as completeness and appropriateness. Second, the AI travel planner tourism information quality factors of completeness and appropriateness were found to affect perceived usefulness. Third, the perceived usefulness of the AI travel planner was found to affect satisfaction. Fourth, satisfaction was found to affect continuance intention. These results highlight the importance of providing high-quality, reliable, personalized information through AI travel planners to improve user engagement, satisfaction, and retention. Conclusion: The significance of this study lies in the fact that it systematically verified the travel plan derivation process of AI travel planner users by confirming the influence of AI travel planner tourism information quality on perceived usefulness, satisfaction, and continuance intention.
6

Abidin, Zain. "Evaluating the Precision and Dependability of Medical Answers Generated by ChatGPT." Journal of Science, Technology, Education, Art and Medicine 1, no. 1 (2024): 13. https://doi.org/10.63137/jsteam.744858.

Full text
Abstract:
Objective: This study aims to assess the accuracy and depth of ChatGPT's responses to medical questions posed by physicians, providing preliminary evidence of its reliability in offering precise and comprehensive information. Furthermore, the study sheds light on the limitations inherent in AI-generated medical advice. Methods: This research involved 10 physicians formulating questions for ChatGPT without patient-specific data. Approximately 29% of the 35 invited doctors participated, creating eight questions each. The questions covered easy, medium, and hard levels, with yes/no or descriptive responses. ChatGPT's responses were evaluated by physicians for accuracy and completeness using established Likert scales. An internal validation re-submitted questions with low accuracy scores, and statistical measures analyzed the outcomes, revealing insights into response consistency and variation over time. Results: The analysis of 80 ChatGPT-generated answers revealed a median accuracy score of 4 (mean 4.7, SD 2.6) and a median completeness score of 2 (mean 1.8, SD 1.5). Notably, 30% of responses achieved the highest accuracy score (6), and 38.7% were rated nearly all correct (5), while 8% were deemed completely incorrect (1). Inaccurate answers were more common for physician-rated hard questions. Completeness varied, with 45% considered comprehensive, 37.5% adequate, and 17.5% incomplete. A modest correlation (Spearman's r = 0.3) existed between accuracy and completeness across all questions. Conclusion: Integrating language models like ChatGPT in medical practice shows promise, but cautious considerations are crucial for safe use. While AI-generated responses display commendable accuracy and completeness, ongoing refinement is needed for reliability. This research lays a foundation for AI integration in healthcare, underscoring the importance of continuous evaluation and regulatory measures to ensure safe and effective implementation.
7

Bourré, Natalie. "Can readers spot the AI impostor in healthcare writing?" Medical Writing 32, no. 3 (2023): 38–40. http://dx.doi.org/10.56012/fwhk6920.

Full text
Abstract:
The use of artificial intelligence (AI) writing assistants in the healthcare industry is becoming increasingly prevalent. These tools can help medical writers to generate content more quickly and efficiently, but they also raise concerns about the accuracy and completeness of the information that is produced. This study investigated whether readers can distinguish between health-related texts written by humans and those generated by AI writing assistants. A survey of 164 respondents found that slightly more than half could correctly identify the source of the healthcare text. Differences between healthcare professionals and non-healthcare professionals were not statistically significant. Medical writers were better at recognising that a text had been written by an AI model than were non-medical writers (P<.05). These findings suggest that it is important for organisations to establish clear guidelines regarding the use of AI writing assistants in healthcare. The authors of health-related content should be required to identify whether their work has been completed by a human or an AI writer, and organisations should develop processes for evaluating the accuracy and completeness of AI-generated content. This study has several limitations, including the small sample size. However, the findings provide valuable insights into the need for organisations to develop clear guidelines for their use.
8

Ponzo, Valentina, Rosalba Rosato, Maria Carmine Scigliano, et al. "Comparison of the Accuracy, Completeness, Reproducibility, and Consistency of Different AI Chatbots in Providing Nutritional Advice: An Exploratory Study." Journal of Clinical Medicine 13, no. 24 (2024): 7810. https://doi.org/10.3390/jcm13247810.

Full text
Abstract:
Background: The use of artificial intelligence (AI) chatbots for obtaining healthcare advice has greatly increased in the general population. This study assessed the performance of general-purpose AI chatbots in giving nutritional advice for patients with obesity with or without multiple comorbidities. Methods: The case of a 35-year-old male with obesity without comorbidities (Case 1), and the case of a 65-year-old female with obesity, type 2 diabetes mellitus, sarcopenia, and chronic kidney disease (Case 2) were submitted to 10 different AI chatbots on three consecutive days. Accuracy (the ability to provide advice aligned with guidelines), completeness, and reproducibility (replicability of the information over the three days) of the chatbots' responses were evaluated by three registered dietitians. Nutritional consistency was evaluated by comparing the nutrient content provided by the chatbots with values calculated by dietitians. Results: Case 1: ChatGPT 3.5 demonstrated the highest accuracy rate (67.2%) and Copilot the lowest (21.1%). ChatGPT 3.5 and ChatGPT 4.0 achieved the highest completeness (both 87.3%), whereas Gemini and Copilot recorded the lowest scores (55.6% and 42.9%, respectively). Reproducibility was highest for Chatsonic (86.1%) and lowest for ChatGPT 4.0 (50%) and ChatGPT 3.5 (52.8%). Case 2: Overall accuracy was low, with no chatbot achieving 50% accuracy. Completeness was highest for ChatGPT 4.0 and Claude (both 77.8%), and lowest for Copilot (23.3%). ChatGPT 4.0 and Pi Ai showed the lowest reproducibility. Major inconsistencies concerned the amount of protein recommended by most chatbots, which simultaneously suggested both reducing and increasing protein intake. Conclusions: General-purpose AI chatbots exhibited limited accuracy, reproducibility, and consistency in giving dietary advice in complex clinical scenarios and cannot replace the work of an expert dietitian.
9

Sabitha, K. "AI-Powered Cybercrime Reporting System." International Journal for Research in Applied Science and Engineering Technology 13, no. 5 (2025): 3596–601. https://doi.org/10.22214/ijraset.2025.70702.

Full text
Abstract:
With the exponential increase in digital threats, the traditional cybercrime reporting process remains largely unstructured and inaccessible to common users. This article proposes an AI-powered cybercrime reporting system that takes advantage of natural language processing and machine learning to offer an intelligent, guided interface for victims to report incidents. The system employs a fine-tuned RoBERTa-base model to classify cybercrimes into 23 predefined categories based on user descriptions and dynamically adjusts the reporting flow to collect appropriate data. Additionally, it enables secure digital evidence handling and automated PDF report generation for law enforcement. The proposed system improves reporting accuracy, user confidence, and evidence completeness, representing a transformative change in digital law enforcement support tools.
10

Burnette, Hannah, Aliyah Pabani, Mitchell S. von Itzstein, et al. "Use of artificial intelligence chatbots in clinical management of immune-related adverse events." Journal for ImmunoTherapy of Cancer 12, no. 5 (2024): e008599. http://dx.doi.org/10.1136/jitc-2023-008599.

Full text
Abstract:
Background: Artificial intelligence (AI) chatbots have become a major source of general and medical information, though their accuracy and completeness are still being assessed. Their utility for answering questions surrounding immune-related adverse events (irAEs), common and potentially dangerous toxicities from cancer immunotherapy, is not well defined. Methods: We developed 50 distinct questions with answers in available guidelines surrounding 10 irAE categories and queried two AI chatbots (ChatGPT and Bard), along with an additional 20 patient-specific scenarios. Experts in irAE management scored answers for accuracy and completeness using a Likert scale ranging from 1 (least accurate/complete) to 4 (most accurate/complete). Answers across categories and across engines were compared. Results: Overall, both engines scored highly for accuracy (mean scores for ChatGPT and Bard were 3.87 vs 3.5, p<0.01) and completeness (3.83 vs 3.46, p<0.01). Scores of 1-2 (completely or mostly inaccurate or incomplete) were particularly rare for ChatGPT (6/800 answer-ratings, 0.75%). Of the 50 questions, all eight physician raters gave ChatGPT a rating of 4 (fully accurate or complete) for 22 questions (for accuracy) and 16 questions (for completeness). In the 20 patient scenarios, the average accuracy score was 3.725 (median 4) and the average completeness was 3.61 (median 4). Conclusions: AI chatbots provided largely accurate and complete information regarding irAEs, and wildly inaccurate information ("hallucinations") was uncommon. However, until accuracy and completeness increase further, appropriate guidelines remain the gold standard to follow.

Book chapters on the topic "AI completeness"

1

Vasiliu, Laurenţiu, Dumitru Roman, and Radu Prodan. "Extreme and Sustainable Graph Processing for Green Finance Investment and Trading." In AI, Data, and Digitalization. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-53770-7_8.

Full text
Abstract:
The Graph-Massivizer project, funded by the Horizon Europe research and innovation program, aims to create a high-performance and sustainable platform for extreme data processing. This paper focuses on one use case that addresses the limitations of financial market data for green and sustainable investments. The project allows for the fast, semi-automated creation of realistic and affordable synthetic (extreme) financial datasets of any size for testing and improving AI-enhanced financial algorithms for green investment and trading. Synthetic data usage removes biases, ensures data affordability and completeness, consolidates financial algorithms, and provides a statistically relevant sample size for advanced back-testing.
2

Yampolskiy, Roman V. "Turing Test as a Defining Feature of AI-Completeness." In Studies in Computational Intelligence. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-29694-9_1.

Full text
3

Šekrst, Kristina. "AI-Completeness: Using Deep Learning to Eliminate the Human Factor." In Guide to Deep Learning Basics. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-37591-1_11.

Full text
4

Rondina, Marco, Antonio Vetrò, and Juan Carlos De Martin. "Completeness of Datasets Documentation on ML/AI Repositories: An Empirical Investigation." In Progress in Artificial Intelligence. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-49008-8_7.

Full text
5

Klievtsova, Nataliia, Janik-Vasily Benzin, Juergen Mangler, Timotheus Kampik, and Stefanie Rinderle-Ma. "Process Modeler vs. Chatbot: Is Generative AI Taking over Process Modeling?" In Lecture Notes in Business Information Processing. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82225-4_47.

Full text
Abstract:
Large language models (LLMs) have become a promising tool for automating complex tasks such as process model generation from text. In order to evaluate the capabilities of LLMs in generating process models, it is crucial to provide means to assess the output quality. A few studies have already provided key performance indicators for assessing aspects such as completeness of the models in a quantitative way. In this paper, we focus on the qualitative assessment of process models generated by LLMs based on a user survey. By analyzing user preferences, we aim to determine whether LLM-generated process models meet the needs and expectations of experts. Our analysis reveals that 60% of users, regardless of their modeling experience, prefer LLM-generated models over human-created ground truth models.
6

Rittelmeyer, Jack Daniel, and Kurt Sandkuhl. "A Survey to Evaluate the Completeness and Correctness of a Morphological Box for AI Solutions." In Lecture Notes in Business Information Processing. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-61003-5_11.

Full text
7

"AI-Completeness: The Problem Domain of Superintelligent Machines." In Artificial Superintelligence. Chapman and Hall/CRC, 2015. http://dx.doi.org/10.1201/b18612-5.

Full text
8

Saxena, Rahul, Goutami Katage, Chandan Kumar, Nawaj Mehtab Pathan, and Mohammadazar Nisar Bargir. "AI Redefining Healthcare Documentation for Tomorrow." In Advances in Healthcare Information Systems and Administration. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-3989-3.ch003.

Full text
Abstract:
Healthcare providers heavily rely on accurate and comprehensive documentation to ensure the best possible care for patients, plan treatments, and conduct research. However, traditional documentation methods are often time-consuming, prone to errors, and generate massive amounts of paperwork. Fortunately, AI has the potential to revolutionize healthcare documentation and streamline the process for healthcare providers. AI can automate various aspects of healthcare documentation, such as transcription, coding, and billing, by leveraging machine learning algorithms and natural language processing. AI-driven solutions can also enhance the accuracy and completeness of patient records by detecting patterns that may not be easily noticeable to healthcare providers. This can lead to more informed decision-making and personalized treatment plans for patients. AI can standardize and structure data, facilitating seamless information exchange between different healthcare systems and enhancing interoperability. Overall, the integration of AI in healthcare documentation holds significant potential for transforming healthcare delivery and offering more efficient, accurate, and integrated care.
9

Lin, Zu-Chun, Malcolm Koo, Wan-Jung Chang, et al. "Comparing Artificial Intelligence-Based Versus Conventional Endotracheal Tube Monitoring Systems in Clinical Practice." In Studies in Health Technology and Informatics. IOS Press, 2024. http://dx.doi.org/10.3233/shti240230.

Full text
Abstract:
Endotracheal tube dislodgement is a common patient safety incident in clinical settings. Current clinical practices, primarily relying on bedside visual inspections and equipment checks, often fail to detect endotracheal tube displacement or dislodgement promptly. This study involved the development of a deep learning, artificial intelligence (AI)-based system for monitoring tube displacement. We also propose a randomized crossover experiment to evaluate the effectiveness of this AI-based monitoring system compared to conventional methods. The assessment will focus on immediacy in detecting and handling of tube anomalies, the completeness and accuracy of shift transitions, and the degree of innovation diffusion. The findings from this research are expected to offer valuable insights into the development and integration of AI in enhancing care provision and facilitating innovation diffusion in medical and nursing research.
10

Settibathini, Venkata Surendra Kumar, Ankit Virmani, Manoj Kuppam, Nithya S., S. Manikandan, and Elayaraja C. "Shedding Light on Dataset Influence for More Transparent Machine Learning." In Explainable AI Applications for Human Behavior Analysis. IGI Global, 2024. http://dx.doi.org/10.4018/979-8-3693-1355-8.ch003.

Full text
Abstract:
From healthcare to banking, machine learning models are essential. However, their decision-making processes can be opaque, challenging those who rely on their insights. The quality and kind of training and evaluation datasets determine these models' transparency and performance. This study examines how dataset factors affect machine learning model performance and interpretability, looking at how data quality, biases, and volume affect model functionality across a variety of datasets. The authors find that dataset selection and treatment are crucial to transparent and accurate machine learning results. The accuracy, completeness, and relevance of data affect the model's learning and prediction abilities. Due to sampling practices or historical prejudices in data gathering, dataset biases can affect model predictions, resulting in unfair or unethical outcomes. Dataset size is also important, according to our findings. Larger datasets offer greater learning opportunities but might cause processing issues and overfitting. Smaller datasets may not capture real-world diversity, resulting in underfitting and poor generalisation. These insights and recommendations are useful for practitioners; they include strategies for pre-processing data to reduce bias, assuring data quality, and determining acceptable dataset sizes. Addressing these dataset-induced issues can improve machine learning model transparency and effectiveness, making them solid, ethical tools for many applications.

Conference papers on the topic "AI completeness"

1

Skaf, Ali, Samir Dawaliby, and Arezki Aberkane. "The NP-Completeness of Quay Crane Scheduling Problem." In 8th International Conference on Artificial Intelligence and Applications (AI 2022). Academy and Industry Research Collaboration Center (AIRCC), 2022. http://dx.doi.org/10.5121/csit.2022.121802.

Full text
Abstract:
This paper discusses the computational complexity of the quay crane scheduling problem (QCSP) in a maritime port. For an NP-complete problem, no polynomial-time algorithm for the exact solution is known, and heuristic approaches are used to obtain near-optimal solutions in reasonable time. To address this, we first formulate the QCSP as a mixed integer linear program to solve it to optimality, and we then theoretically prove that the examined problem is NP-complete.
2

Torre, Damiano, Sallam Abualhaija, Mehrdad Sabetzadeh, et al. "An AI-assisted Approach for Checking the Completeness of Privacy Policies Against GDPR." In 2020 IEEE 28th International Requirements Engineering Conference (RE). IEEE, 2020. http://dx.doi.org/10.1109/re48521.2020.00025.

Full text
3

"On Representing Natural Languages and Bio-molecular Structures using Matrix Insertion-deletion Systems and its Computational Completeness." In AI Methods for Interdisciplinary Research in Language and Biology. SciTePress - Science and and Technology Publications, 2011. http://dx.doi.org/10.5220/0003308900470056.

Full text
4

Talavera-Martinez, Lidia, Antonio Nadal-Martinez, Marc Munar, and Manuel Gonzalez-Hidalgo. "Exploring CNNs and information aggregation models to improve pulmonary X-ray segmentation." In LatinX in AI at Computer Vision and Pattern Recognition Conference 2024. Journal of LatinX in AI Research, 2024. http://dx.doi.org/10.52591/lxai202406175.

Full text
Abstract:
This study explores aggregation and consensus methods to combine lung segmentations from various neural network models in X-ray images, aiming to enhance accuracy and completeness. Through extensive experimentation, the research identifies the most effective aggregation method, with WOWA aggregation and a maximum-based consensus approach outperforming individual models. This underscores the importance of aggregation techniques in optimizing anatomical structure segmentation in medical imaging.
5

Chen, Tongjian, Weiping Wang, and Xiaofang Wang. "Application of AI in Optimization of Machining Conditions." In ASME 1991 International Computers in Engineering Conference and Exposition. American Society of Mechanical Engineers, 1991. http://dx.doi.org/10.1115/cie1991-0006.

Full text
Abstract:
Researchers have long attempted to build a versatile optimization method adaptable to all purposes, but no ideal one has appeared. This paper proposes a new approach to optimizing machining conditions for various machining processes on the workshop floor. The optimizing strategy uses an expert system to select the most effective algorithm from the sophisticated optimizing algorithms collected in the knowledge base as subroutines, then runs the algorithm and obtains the optimized results by means of the expert system's interactive function. The method not only has the versatility to be used easily in various sorts of machining but also preserves, without compromise, the completeness of each sophisticated optimizing method developed for a special machining process.
6

Lourenco, Vitor, and Aline Paes. "A Modality-level Explainable Framework for Misinformation Checking in Social Networks." In LatinX in AI at Neural Information Processing Systems Conference 2022. Journal of LatinX in AI Research, 2022. http://dx.doi.org/10.52591/lxai202211283.

Full text
Abstract:
The widespread dissemination of false information is a rising concern worldwide, with critical social impact, inspiring the emergence of fact-checking organizations to mitigate misinformation. However, human-driven verification is a time-consuming task and a bottleneck to checking trustworthy information at the same pace it emerges. Since misinformation relates not only to the content itself but also to other social features, this paper addresses automatic misinformation checking in social networks from a multimodal perspective. Moreover, as simply labeling a piece of news as incorrect may not convince the citizen and, even worse, may strengthen confirmation bias, the proposal is a modality-level, explainability-oriented misinformation classifier framework. Our framework comprises a misinformation classifier assisted by explainable methods to generate modality-oriented explainable inferences. Preliminary findings show that the misinformation classifier does benefit from multimodal information encoding and that the modality-oriented explainable mechanism increases both the interpretability and completeness of inferences.
APA, Harvard, Vancouver, ISO, and other styles
7

Ibeling, Duligur, and Thomas Icard. "On the Conditional Logic of Simulation Models." In Twenty-Seventh International Joint Conference on Artificial Intelligence {IJCAI-18}. International Joint Conferences on Artificial Intelligence Organization, 2018. http://dx.doi.org/10.24963/ijcai.2018/258.

Full text
Abstract:
We propose analyzing conditional reasoning by appeal to a notion of intervention on a simulation program, formalizing and subsuming a number of approaches to conditional thinking in the recent AI literature. Our main results include a series of axiomatizations, allowing comparison between this framework and existing frameworks (normality-ordering models, causal structural equation models), and a complexity result establishing NP-completeness of the satisfiability problem. Perhaps surprisingly, some of the basic logical principles common to all existing approaches are invalidated in our causal simulation approach. We suggest that this additional flexibility is important in modeling some intuitive examples.
APA, Harvard, Vancouver, ISO, and other styles
8

Mello, Rômulo Chrispim de, Jorão Gomes Jr., Jairo Francisco de Souza, and Victor Ströele. "Constructing a KBQA Framework: Design and Implementation." In Proceedings of the Brazilian Symposium on Multimedia and the Web. Sociedade Brasileira de Computação - SBC, 2024. http://dx.doi.org/10.5753/webmedia.2024.243150.

Full text
Abstract:
The exponential growth of data on the internet has made information retrieval increasingly challenging. Knowledge-based Question-Answering (KBQA) frameworks offer an efficient solution, quickly providing accurate and relevant information. However, these frameworks face significant challenges, especially when dealing with complex queries involving multiple entities and properties. This paper studies KBQA frameworks, focusing on improving entity recognition, property extraction, and query generation using advanced Natural Language Processing (NLP) and Artificial Intelligence (AI) techniques. We implemented and evaluated combinations of tools for extracting entities and properties, with the combination of models achieving the best performance. Our evaluation metrics included entity and property retrieval, SPARQL query completeness, and accuracy. The results demonstrated the effectiveness of our approach, with high accuracy rates in identifying entities and properties.
APA, Harvard, Vancouver, ISO, and other styles
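The pipeline this abstract outlines (entity recognition, property extraction, SPARQL query generation) can be sketched in miniature. This is not the authors' implementation: the toy dictionaries and DBpedia-style URIs below are assumptions for illustration, and real systems replace the lookups with NLP models.

```python
# Illustrative KBQA sketch: recognize an entity and a property in a question,
# then generate a SPARQL query. Entity/property tables are hypothetical.

ENTITIES = {"Brazil": "dbr:Brazil"}
PROPERTIES = {"capital": "dbo:capital"}

def build_sparql(question):
    """Return a SPARQL query for the question, or None if extraction fails."""
    entity = next((uri for name, uri in ENTITIES.items() if name in question), None)
    prop = next((uri for name, uri in PROPERTIES.items() if name in question), None)
    if entity is None or prop is None:
        return None  # incomplete query: entity or property not recognized
    return f"SELECT ?answer WHERE {{ {entity} {prop} ?answer }}"
```

Returning `None` on a failed extraction mirrors the "SPARQL query completeness" metric the abstract mentions: a query counts as complete only when both the entity and the property were recovered.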
9

Jenkins, Michael, Elizabeth Thiry, Richard Stone, Caroline Kingsley, and Calvin Leather. "Exploring Generative AI as a Proxy User for Early Stage User Research - Preliminary Findings." In AHFE 2023 Hawaii Edition. AHFE International, 2023. http://dx.doi.org/10.54941/ahfe1004305.

Full text
Abstract:
The potential of generative AI has exploded of late, largely due to the improved accessibility that tools like ChatGPT afford non-data-scientist/developer users. One potential area of application is for generative AI models to serve as proxy users in early stage user research. User research is a crucial component of product development, helping to understand user needs, preferences, and behaviors. However, conducting user research can be time-consuming, resource-intensive, and may require access to a user population that is challenging to reach (e.g., military users). Generative AI models have shown remarkable progress in generating human-like text and simulating user interactions based on a significant corpus of training materials that serves as the knowledge base for the AI’s reasoning. This paper provides preliminary findings from explorations of the feasibility of leveraging generative AI as a proxy user to inform early stage user research. Using the GPT-4.0 architecture and the OpenAI ChatGPT user interface (chat.openai.com), we conducted preliminary research for six different candidate end-user populations. This was accomplished by generating generic product descriptions and notional user personas for each respective product, contextualizing ChatGPT to act as the user persona, and then asking a series of generic user experience research (UXR) questions of the GPT model. Responses from ChatGPT were then scored by three UXR/human factors subject-matter experts to evaluate their perceived utility in supporting early stage product design as a proxy human user. By evaluating the effectiveness of generative AI as a proxy user, this research aims to shed light on its potential benefits and limitations in supporting early stage user research efforts.
While additional research is still needed (e.g., comparing the results of ChatGPT to responses generated by actual end users, having SMEs evaluate the accuracy and completeness of ChatGPT’s responses), preliminary findings are promising regarding the potential of generative AI models to serve as early stage proxy users and inform research and product design efforts in domains where significant corpora of data already exist for model training, and where access to human end users may be restricted or otherwise prohibited.
APA, Harvard, Vancouver, ISO, and other styles
10

Gurina, Ekaterina, Ksenia Antipova, Nikita Klyuchnikov, Dmitry Koroteev, and Amel Gammoudi. "Drilling with AI-Based Systems: To Trust, or Not to Trust Model Forecast with Low Quality Real-Time Data?" In GOTECH. SPE, 2025. https://doi.org/10.2118/224548-ms.

Full text
Abstract:
Data-driven systems typically require input data of the same quality as that used to train their underlying algorithms in order to function effectively. However, drilling support data often falls short of these standards. The performance of AI-based systems designed to make drilling more efficient is directly affected by the consistency, frequency, completeness, and accuracy (e.g., presence of anomalies) of the incoming real-time drilling data stream. The paper discusses data quality control approaches for real-time drilling data, data pre-processing steps, and the influence of these approaches on the final AI-based model decisions. The impact of low-quality real-time data on the decision-making process is considered using the example of an information system designed for predicting drilling accidents on real wells using data from the WITSML protocol stream. The methodology covers different aspects of consistency analysis of drilling parameters among each other and data quality calculations. Additionally, the impact of missing data fragments and missing channels on the prediction accuracy is analyzed. The quality of the algorithm's operation as part of the information system is evaluated as the percentage of correct predictions and the number of false alarms per well per day. Lack of connectivity, data loss, and anomalies are almost ubiquitous in the drilling support process. Single data losses of less than 10 minutes do not significantly change the accident prediction curve if they do not contain precursors of emergency situations. At the same time, small losses in individual channels can be recovered by assuming normal drilling conditions. Data quality is a critical factor for any AI algorithm. Automatically detecting and handling corrupted or inconsistent data enhances the decision-making process of AI-based systems.
APA, Harvard, Vancouver, ISO, and other styles
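One of the quality checks this abstract describes, finding gaps in the real-time stream and flagging those longer than the 10-minute threshold the study reports as significant, can be sketched as follows. The data layout (a sorted list of sample timestamps) is an assumption for illustration, not the paper's WITSML processing code.

```python
# Minimal sketch of a gap-detection quality check on a timestamped drilling
# data stream. The 10-minute threshold follows the abstract's finding that
# shorter single losses do not significantly change the prediction curve.

from datetime import datetime, timedelta

def find_long_gaps(timestamps, max_gap=timedelta(minutes=10)):
    """Return (start, end) pairs where consecutive samples are too far apart."""
    gaps = []
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > max_gap:
            gaps.append((prev, curr))
    return gaps
```

Downstream, a system could suppress or down-weight accident predictions that fall inside a flagged interval, since the model's inputs there are unreliable.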
