Academic literature on the topic '4602 Artificial intelligence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic '4602 Artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "4602 Artificial intelligence"

1. Philip, Ninan Sajeeth, and K. Babu Joseph. "Boosting the differences: A fast Bayesian classifier neural network." Intelligent Data Analysis 4, no. 6 (December 22, 2000): 463–73. http://dx.doi.org/10.3233/ida-2000-4602.

2. Liberatore, Paolo. "Revision by History." Journal of Artificial Intelligence Research 52 (February 25, 2015): 287–329. http://dx.doi.org/10.1613/jair.4608.

Abstract:
This article proposes a solution to the problem of obtaining the plausibility information necessary to perform belief revision: given a sequence of revisions together with their results, derive a possible initial order that has generated them. This differs from the usual assumption of starting from an all-equal initial order and modifying it by a sequence of revisions. Four semantics for iterated revision are considered: natural, restrained, lexicographic and reinforcement. For each, a necessary and sufficient condition for the existence of an order generating a given history of revisions and results is proved. Complexity is proved coNP-complete in all cases but one (reinforcement revision with unbounded sequence length).
3. Ippolito, M. G., G. Morana, E. Riva Sanseverino, and F. Vuinovich. "Ant Colony Search Algorithm for Optimal Strategical Planning of Electrical Distribution Systems Expansion." Applied Intelligence 23, no. 3 (December 2005): 139–52. http://dx.doi.org/10.1007/s10489-005-4604-1.

4. Piechowiak, Sylvain, and Joaquin Rodriguez. "The Localization and Correction of Errors in Models: A Constraint-Based Approach." Applied Intelligence 23, no. 3 (December 2005): 153–64. http://dx.doi.org/10.1007/s10489-005-4605-0.

5. Kott, Alexander, Ray Budd, Larry Ground, Lakshmi Rebbapragada, and John Langston. "Building a Tool for Battle Planning: Challenges, Tradeoffs, and Experimental Findings." Applied Intelligence 23, no. 3 (December 2005): 165–89. http://dx.doi.org/10.1007/s10489-005-4606-z.

6. Tsai, Hung-Hsu, and Ji-Shiung Cheng. "Adaptive Signal-Dependent Audio Watermarking Based on Human Auditory System and Neural Networks." Applied Intelligence 23, no. 3 (December 2005): 191–206. http://dx.doi.org/10.1007/s10489-005-4607-y.

7. Li, Ju Hui, Meng Hiot Lim, and Qi Cao. "A QoS-Tunable Scheme for ATM Cell Scheduling Using Evolutionary Fuzzy System." Applied Intelligence 23, no. 3 (December 2005): 207–18. http://dx.doi.org/10.1007/s10489-005-4608-x.

8. Debnath, Rameswar, Masakazu Muramatsu, and Haruhisa Takahashi. "An Efficient Support Vector Machine Learning Method with Second-Order Cone Programming for Large-Scale Problems." Applied Intelligence 23, no. 3 (December 2005): 219–39. http://dx.doi.org/10.1007/s10489-005-4609-9.

9. Kitahara, Tetsuro, Masataka Goto, and Hiroshi G. Okuno. "Pitch-Dependent Identification of Musical Instrument Sounds." Applied Intelligence 23, no. 3 (December 2005): 267–75. http://dx.doi.org/10.1007/s10489-005-4612-1.

10. Mostafa, Mohamed K., Ahmed S. Mahmoud, Mohamed S. Mahmoud, and Mahmoud Nasr. "Computational-Based Approaches for Predicting Biochemical Oxygen Demand (BOD) Removal in Adsorption Process." Adsorption Science & Technology 2022 (May 10, 2022): 1–15. http://dx.doi.org/10.1155/2022/9739915.

Abstract:
Predicting the adsorption performance to remove organic pollutants from wastewater is an essential environment-related topic, requiring knowledge of various statistical tools and artificial intelligence techniques. Hence, this study is the first to develop a quadratic regression model and artificial neural network (ANN) for predicting biochemical oxygen demand (BOD) removal under different adsorption conditions. Nano zero-valent iron encapsulated into cellulose acetate (CA/nZVI) was synthesized, characterized by XRD, SEM, and EDS, and used as an efficient adsorbent for BOD reduction. Results indicated that the medium pH and adsorption time should be adjusted to around 7 and 30 min, respectively, to maintain the highest BOD removal efficiency of 96.4% at initial BOD concentration C₀ = 100 mg/L, mixing rate = 200 rpm, and adsorbent dosage of 3 g/L. An optimized ANN structure of 5–10–1, with the "trainlm" back-propagation learning algorithm, achieved the highest predictive performance for BOD removal (R²: 0.972, Adj-R²: 0.971, RMSE: 1.449, and SSE: 56.680). Based on the ANN sensitivity analysis, the relative importance of the adsorption factors could be arranged as pH > adsorbent dosage > time ≈ stirring speed > C₀. A quadratic regression model was developed to visualize the impacts of adsorption factors on the BOD removal efficiency, optimizing pH at 7.3 and time at 46.2 min. The accuracy of the quadratic regression and ANN models in predicting BOD removal was approximately comparable. Hence, these computational methods could further maximize the performance of the CA/nZVI material for removing BOD from wastewater under different adsorption conditions. The applicability of these modeling techniques would guide stakeholders and the industrial sector in overcoming the nonlinearity and complexity issues related to the adsorption process.
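The 5–10–1 network structure reported in this abstract (five adsorption factors in, ten hidden units, one BOD-removal output) can be illustrated with a minimal forward pass. This is only a sketch: it uses NumPy with random, untrained weights and an assumed input ordering, not the paper's MATLAB "trainlm"-trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Five adsorption inputs per sample: pH, time, C0, stirring speed,
# adsorbent dosage (ordering and scaling are assumptions for illustration).
X = rng.uniform(0.0, 1.0, size=(8, 5))

# A 5-10-1 feed-forward network: one hidden layer of 10 tanh units,
# one linear output (the predicted BOD removal efficiency).
W1 = rng.normal(size=(5, 10)); b1 = np.zeros(10)
W2 = rng.normal(size=(10, 1)); b2 = np.zeros(1)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)   # hidden-layer activations
    return hidden @ W2 + b2         # linear output layer

y_hat = forward(X)
print(y_hat.shape)  # (8, 1): one predicted removal value per sample
```

In a trained model, W1, b1, W2, b2 would be fitted to the measured removal data (the paper uses Levenberg-Marquardt back-propagation for this step).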

Dissertations / Theses on the topic "4602 Artificial intelligence"

1. Cerda-Villafana, Gustavo. "Artificial intelligence techniques in flood forecasting." Thesis, University of Bristol, 2005. http://hdl.handle.net/1983/09d0faea-8622-4609-a33c-e4baefa304f5.

Abstract:
The need for reliable hydrological forecasting systems that are easy to set up and operate is an appealing challenge to researchers working in the area of flood risk management. Advances in computing technology have given water engineering powerful tools for modelling hydrological processes, among them Artificial Neural Networks (ANN) and genetic algorithms (GA), which have been applied in many case studies with different levels of success. Despite the large amount of work published in this field so far, it is still a challenge to use ANN models reliably in a real-time operational situation. This thesis explores new ways of improving the accuracy and reliability of ANNs in hydrological modelling. The study covers four areas: signal preprocessing, integrated GA, schematic application of weather radar data, and multiple inputs in flow routing. In signal preprocessing, digital filters were adopted to process the raw rainfall data before feeding it into the ANN models; this novel technique demonstrated that significant improvement in modelling could be achieved. A GA, besides finding the best parameters of the ANN architecture, defined the moving-average values for previous rainfall and flow data used as one of the inputs to the model. A distributed scheme was implemented to construct a model exploiting radar rainfall data; the results from weather radar rainfall were not as good as those from the raingauge estimations used for comparison. Multiple-input modelling was carried out for a river junction, with excellent results, and for an extraction pump, with less promising results. Two conceptual flow-routing models and a transfer function rainfall-runoff model were used to benchmark the ANN model, whose performance was close to the estimations generated by the conceptual models and better than the transfer function model. The flood forecasting system implemented in East Anglia by the Environment Agency and the NERC HYREX project were the main data sources used to test the model.
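The digital-filter preprocessing of raw rainfall described in this abstract can be sketched as a simple moving-average filter. The window length and the rainfall values below are assumptions for illustration, not the thesis's actual filters (whose moving-average parameters were chosen by the GA).

```python
import numpy as np

def moving_average(x, k):
    """Simple moving-average digital filter over k consecutive samples,
    one plausible form of rainfall-signal preprocessing."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="valid")

# Hypothetical raw rainfall series (mm per time step).
rainfall = np.array([0.0, 1.2, 3.4, 0.8, 0.0, 2.5, 4.1, 0.3])
smoothed = moving_average(rainfall, k=3)
print(smoothed)  # each value averages three consecutive raw readings
```

The smoothed series, rather than the noisy raw gauge readings, would then be fed to the ANN as input.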
2. Muncy, David. "Automated Conjecturing Approach for Benzenoids." VCU Scholars Compass, 2016. http://scholarscompass.vcu.edu/etd/4608.

Abstract:
Benzenoids are graphs representing the carbon structure of molecules, defined by a closed path in the hexagonal lattice. These compounds are of interest to chemists studying existing and potential carbon structures. The goal of this study is to conjecture and prove relations between graph theoretic properties among benzenoids. First, we generate conjectures on upper bounds for the domination number in benzenoids using invariant-defined functions. This work is an extension of the ideas to be presented in a forthcoming paper. Next, we generate conjectures using property-defined functions. As the title indicates, the conjectures we prove are not thought of on our own, but rather generated by a process of automated conjecture-making. This program, named Conjecturing, was developed by Craig Larson and Nico Van Cleemput.
3. Kartsaklis, Dimitrios. "Compositional distributional semantics with compact closed categories and Frobenius algebras." Thesis, University of Oxford, 2014. http://ora.ox.ac.uk/objects/uuid:1f6647ef-4606-4b85-8f3b-c501818780f2.

Abstract:
The provision of compositionality in distributional models of meaning, where a word is represented as a vector of co-occurrence counts with every other word in the vocabulary, offers a solution to the fact that no text corpus, regardless of its size, is capable of providing reliable co-occurrence statistics for anything but very short text constituents. The purpose of a compositional distributional model is to provide a function that composes the vectors for the words within a sentence, in order to create a vectorial representation that reflects its meaning. Using the abstract mathematical framework of category theory, Coecke, Sadrzadeh and Clark showed that this function can directly depend on the grammatical structure of the sentence, providing an elegant mathematical counterpart of the formal semantics view. The framework is general and compositional but stays abstract to a large extent. This thesis contributes to ongoing research related to the above categorical model in three ways: Firstly, I propose a concrete instantiation of the abstract framework based on Frobenius algebras (joint work with Sadrzadeh). The theory improves shortcomings of previous proposals, extends the coverage of the language, and is supported by experimental work that improves existing results. The proposed framework describes a new class of compositional models that find intuitive interpretations for a number of linguistic phenomena. Secondly, I propose and evaluate in practice a new compositional methodology which explicitly deals with the different levels of lexical ambiguity (joint work with Pulman). A concrete algorithm is presented, based on the separation of vector disambiguation from composition in an explicit prior step. Extensive experimental work shows that the proposed methodology indeed results in more accurate composite representations for the framework of Coecke et al. in particular and every other class of compositional models in general.
As a last contribution, I formalize the explicit treatment of lexical ambiguity in the context of the categorical framework by resorting to categorical quantum mechanics (joint work with Coecke). In the proposed extension, the concept of a distributional vector is replaced with that of a density matrix, which compactly represents a probability distribution over the potential different meanings of the specific word. Composition takes the form of quantum measurements, leading to interesting analogies between quantum physics and linguistics.
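A standard concrete instance of the categorical composition discussed in this abstract treats a relational word (e.g. an adjective) as a tensor acting on noun vectors, with composition realised as tensor contraction. The toy dimensions and values below are assumptions, meant only to show the mechanics.

```python
import numpy as np

# Toy co-occurrence vector for a noun (dimension and values are assumptions).
noun = np.array([0.2, 0.7, 0.1])

# In the categorical framework, a relational word lives in a tensor space;
# here, a matrix that acts on noun vectors.
adjective = np.array([[1.0, 0.0, 0.5],
                      [0.0, 2.0, 0.0],
                      [0.3, 0.0, 1.0]])

# Composition is tensor contraction: applying the adjective matrix to the
# noun vector yields a vector for the adjective-noun phrase.
phrase = adjective @ noun
print(phrase)  # the composed vector for the phrase
```

The thesis's Frobenius-algebra instantiation refines how such word tensors are built from corpus data; this sketch only shows the contraction step common to that class of models.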
4. Kaywan, Payam. "Human Depression Analysis: An Experimental Study of the Use of AI Botics for Early Detection." Thesis, 2022. https://vuir.vu.edu.au/43946/.

Abstract:
The world is facing a shortage of professional medical staff, a situation which has been exacerbated by the COVID-19 pandemic which has significantly increased challenges globally and has had an adverse impact on the health care system. This has also led to additional barriers to patient care access, specifically for individuals who are in need of constant special care. To address the issue of limited access to medical professionals, medical assistance can be provided to patients in the form of a chatbot which acts as a proxy between psychiatrists and patients and is available and accessible 24/7. Although there has been a degree of success in developing medical chatbots, many medical professionals believe that the use of chatbots in early depression detection needs to be more practical which will require further research. In this research, we address the well-known and common shortcomings which have been discussed in the recent literature. Three of these shortcomings are as follows: firstly, there is a lack of open-ended questions to enable participants to interact openly and without any restrictions about their moods and emotions as most bots in the literature constrain the participants’ responses by limiting them to multiple choice questions which means the participants are not able to open up and describe their real feelings freely. Secondly, there is a lack of semantic analysis to draw exact meaning from a text. Thirdly, there is a requirement for participants to make a long-term commitment in terms of their involvement in the research. This research introduces a depression analysis chatbot, DEPRA, which aims to resolve some of these shortcomings and challenges by asking open-ended questions, providing semantic analyses and automatic depression scoring. DEPRA is developed using contemporary bot platforms, Dialogflow on Google cloud-based infrastructure, and is integrated with social network platforms such as Facebook. 
Most chatbots today are designed for therapeutic purposes. The DEPRA chatbot, however, is designed with a focus on the detection of depression in its early stages. DEPRA is based on structured early-detection interview instruments, the Structured Interview Guide for the Hamilton Depression Scale (SIGH-D) and the Inventory of Depressive Symptomatology (IDS-C), which are used by professional psychiatrists in triage sessions with patients. DEPRA has been trained with personalized utterances from a focus group. This research utilizes Natural Language Processing (NLP) to identify the depression level of participants based on their recorded conversation, and DEPRA uses a scoring system to determine the participant's depression level and severity. This research also details a non-clinical trial with 50 participants who interacted with the DEPRA chatbot. Due to the ethical constraints of this research, which restricted participation to residents of Australia aged 18 to 80, the dataset was limited to 50 participants. This dataset size was sufficient to conduct the research, although future studies will target a more comprehensive dataset. This study was the first stage of using a chatbot for early detection of depression: the goal at this stage was to develop the system, not to run a clinical trial, so a sample was required mainly to assess the accuracy of the developed system. Future work, forming the next phase of the project, includes evaluation by human experts, extending the sample, and enhancing the system further, with the resulting assistance to be offered to Western Health. At this stage, therefore, a sample of 50 participants sufficed to capture varied responses from people with different levels of depression. To evaluate the autoscoring feature of DEPRA, the accuracy of the Machine Learning (ML) algorithms is calculated.
Manual scoring is compared with the calculated depression scores. Across the 27 questions, the average accuracy of the linear SVC in the 26-participant experiment is 88%, of the SGD algorithm in the 40-participant experiment is 80%, and of the linear SVC in the 50-participant experiment is 87%. Furthermore, the overall satisfaction rate of using DEPRA was 79%, indicating that the participants had a high rate of user satisfaction and engagement.
5. Sarki, Rubina. "Automatic Detection of Diabetic Eye Disease Through Deep Learning Using Fundus Images." Thesis, 2021. https://vuir.vu.edu.au/42641/.

Abstract:
Diabetes is a life-threatening disease that affects various human body organs, including the retina. Advanced Diabetic Eye Disease (DED) leads to permanent vision loss; thus, early detection of DED symptoms is essential to prevent disease escalation and allow timely treatment. Studies have shown that 90% of DED cases can be avoided with early diagnosis and treatment. Ophthalmologists use fundus images for DED screening to identify the relevant DED lesions. Due to the growing number of diabetic patients, manually examining the volume of fundus images is becoming unaffordable. Moreover, changes in eye anatomy during the early stage of disease are frequently untraceable by the human eye due to the subtle nature of the features, and the large volume of fundus images puts significant strain on limited specialist resources, rendering manual analysis practically infeasible. Therefore, considering the popularity of deep learning in real-world applications, this research scrutinized deep learning-based methods to facilitate early DED detection and address the issues currently faced. Despite promising results on the binary classification of healthy versus severe DED, highly accurate detection of early anatomical changes in the eye using deep learning remains a challenge in wide-scale practical application, as does multi-class classification of fundus retinal images. While past studies have reported high classification performance achieved through hyperparameter settings, applying a binary classification model in an actual clinical environment, where visiting patients suffer from different DED diseases, is technically tricky. Nevertheless, studies aimed at mild and multi-class DED classification have been minimal. Furthermore, previous research has not addressed the development of automated detection of early DED jointly in one system.
Detection of DED in one system is considered essential for lesion-specific treatment: identifying abnormalities in a specific retinal region allows treatment to target the region of the eye that is most affected. In this thesis, we explore novel deep learning methods for automated detection of early (healthy and one mild class) and multi-class (three or more classes) DED from retinal fundus images. For this purpose, we explore transfer learning-based models and build a new convolutional neural network method for automatic feature extraction and classification, based on deep neural networks. To develop an enhanced system, the original deep learning approaches have been combined with various other techniques: (i) image pre-processing, (ii) data augmentation, (iii) DED feature extraction and segmentation, (iv) model fine-tuning, and (v) model optimization selection. The results of the analysis of several retinal image features demonstrate that deep learning can attain state-of-the-art accuracy for early DED diagnosis.
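The transfer-learning idea described in this abstract (reuse a pretrained feature extractor, train only a new classification head) can be sketched in NumPy. Everything here is an assumption for illustration: the random "frozen backbone" stands in for a pretrained CNN, and the random data for real fundus images.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a frozen pretrained feature extractor: a fixed random
# projection from raw pixels to a feature vector (in practice this would
# be a pretrained CNN backbone with frozen weights).
extractor = rng.normal(size=(64, 8))

def features(images):
    return np.maximum(images @ extractor, 0.0)  # frozen ReLU features

# Tiny labelled set: 16 "fundus images" flattened to 64 values each.
images = rng.normal(size=(16, 64))
labels = rng.integers(0, 2, size=16)            # e.g. healthy vs. mild DED

# The transfer-learning step: fit only the new linear head on frozen features.
F = features(images)
head, *_ = np.linalg.lstsq(F, labels.astype(float), rcond=None)
preds = (F @ head > 0.5).astype(int)
print(preds.shape)  # one predicted class per image
```

Fine-tuning, as mentioned in the abstract, would additionally unfreeze and retrain some backbone layers rather than only the head.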
6. Singh, Ravinder. "Extracting Human Behaviour and Personality Traits from Social Media." Thesis, 2021. https://vuir.vu.edu.au/42639/.

Abstract:
Online social media has evolved into an integral part of human society. It facilitates collaboration and information flow, and has emerged as a crucial instrument for business and government organizations alike. Online platforms are being used extensively for work, entertainment, collaboration and communication. These positive aspects are, however, overshadowed by their shortcomings. With the constant evolution and expansion of social media platforms, a significant shift has occurred in the way some humans interact with others. Online social media platforms have inadvertently emerged as networking hubs for individuals exhibiting antisocial behaviour (ASB), putting vulnerable groups of people at risk. Online ASB is one of the most common personality disorders seen on these platforms, and is challenging to address due to its complexities. Human rights are the keystones of sturdy communities. Respect for these rights, based on the values of equality, dignity and appreciation, is vital and an integral part of strong societies. Every individual has a fundamental right to participate freely in all legal activities, including socializing in both the physical and online worlds. ASB, ranging from threats and aggression to disregard for safety and failure to conform to lawful behaviour, deters such participation and must be dealt with accordingly. Online ASB is the manifestation of everyday sadism and violates the elementary rights, to which all individuals are entitled, of its victims. Not only does it interfere with social participation, it also forces individuals into anxiety, depression and suicidal ideation. The consequences of online ASB for the mental health of victims and their families are often far-reaching, severe and long-lasting, and can even create a social welfare burden. Not only can the behaviour inhibit constructive user participation with social media; it also defies the sole purpose of these platforms: to facilitate communication and collaboration at scale.
ASB needs to be detected and curtailed, encouraging fair user participation and preventing vulnerable groups of people from falling victim to such behaviour. Considering the large variety, high contribution speed and high volume of social media data, a manual approach to detecting and classifying online ASB is not a feasible option. Furthermore, a traditional approach based on a pre-defined lexicon and rule-based feature engineering may still fall short of capturing the subtle and latent features of the diverse and enormous volume of social media data. State-of-the-art deep learning, a sub-field of machine learning, has produced astonishing results in numerous text classification undertakings, and has outperformed the aforementioned techniques. However, given the complexity associated with implementing deep learning algorithms and their relatively recent development, models based on the technology have been significantly under-utilized in online behaviour studies. Specifically, no prior study has undertaken fine-grained classification of user-generated social media content related to online ASB utilizing deep learning technology. This thesis introduces a novel three-part framework, based on deep learning, with the objectives of: (i) detecting behaviour and personality traits from online platforms; (ii) binary detection of online antisocial behaviour; and (iii) multiclass antisocial behaviour detection from social media corpora. A high-accuracy classification model is presented, following extensive experimentation with different machine learning and deep learning algorithms, fine-tuning of hyperparameters, and different feature extraction techniques. Disparate behaviour and personality traits, including ASB and its four variants, are detected with significantly high accuracy from online social media platforms. Along the way, three medium-sized gold-standard benchmark datasets have been constructed.
The proposed approach is seminal and offers a step towards efficient and effective methods of online ASB prevention. The approach and the findings within this thesis are significant and crucial as these lay the groundwork for detecting and eliminating all types of undesirable and unacceptable social behaviour traits from online platforms.
7. Zaroug, Abdelrahman. "Machine Learning Model for the Prediction of Human Movement Biomechanics." Thesis, 2021. https://vuir.vu.edu.au/42489/.

Abstract:
An increasingly useful application of machine learning (ML) is in predicting features of human actions. If it can be shown that algorithm inputs related to actual movement mechanics can predict a limb or limb segment's future trajectory, a range of apparently intractable problems in movement science could be solved. Forecasting lower limb trajectories can anticipate movement characteristics that may predict the risk of tripping, slipping or balance loss. Particularly in the design of human augmentation technology such as the exoskeleton, human movement prediction will improve the synchronisation between the user and the device, greatly enhancing its efficacy. Long Short-Term Memory (LSTM) neural networks are a subset of ML algorithms that have proven widely successful in modelling human movement data. The aim of this thesis was to examine four LSTM neural network architectures (Vanilla, Stacked, Bidirectional and Autoencoder) in predicting the future trajectories of lower limb kinematics, i.e. Angular Velocity (AV) and Linear Acceleration (LA). This work also investigates whether linear statistical methods such as Linear Regression (LR) are sufficient to predict the trajectories of lower limb kinematics. Kinematics data (LA and AV) of the foot, shank and thigh were collected from 13 male and 3 female participants (28 ± 4 years old, 1.72 ± 0.07 m in height, 66 ± 10 kg in mass) who walked for 10 minutes at 4 different walking speeds on a 0% gradient treadmill. Walking speeds included preferred walking speed (PWS, 4.34 ± 0.43 km.h⁻¹), imposed speed (5 km.h⁻¹, 15.4% ± 7.6% faster), slower speed (−20% PWS, 3.59 ± 0.47 km.h⁻¹) and faster speed (+20% PWS, 5.26 ± 0.53 km.h⁻¹). The sliding window technique was adopted for training and testing the LSTM models, with kinematics time-series data totalling 17,638 strides across all trials. The aims and findings of this work were carried out in 3 studies.
Study 1 confirmed the possibility of predicting the future trajectories of human lower limb kinematics using LSTM autoencoders (ED-LSTM) and LR during an imposed walking speed (5 km.h⁻¹). Both models achieved satisfactory predicted trajectories up to 0.06 s. A prediction horizon of 0.06 s can be used to compensate for delays in an exoskeleton's feed-forward controller, to better estimate human motions and synchronise with intended movement trajectories. Study 2 (Chapter 4) indicated that the LR model is not suitable for the prediction of future lower limb kinematics at PWS. The LSTM performance results suggested that the ED-LSTM and the Stacked LSTM are more accurate for predicting future lower limb kinematics up to 0.1 s at PWS and the imposed walking speed (5 km.h⁻¹). The average duration of a gait cycle ranges between 0.98–1.07 s, so a prediction horizon of 0.1 s accounts for about 10% of the gait cycle. Such a forecast may assist users in anticipating low foot clearance and developing early countermeasures such as slowing down or stopping. Study 3 (Chapter 5) showed that at +20% PWS the LSTM models obtained better predictions than at all other tested walking speed conditions (i.e. PWS, −20% PWS and 5 km.h⁻¹), while at −20% PWS all of the LSTM architectures obtained weaker predictions than at all other tested walking speeds (i.e. PWS, +20% PWS and 5 km.h⁻¹). In addition to the applications of known future trajectories at PWS mentioned in Studies 1 and 2, prediction at fast and slow walking speeds familiarises the developed ML models with changes in human walking speed, which are known to have large effects on lower limb kinematics.
When intelligent ML methods are familiarised with the degree of kinematic change due to speed variations, they could be used to improve the human-machine interface in bionics design across various walking speeds. The key finding of the three studies is that the ED-LSTM was the most accurate model for predicting and adapting to human motion kinematics at PWS, ±20% PWS and 5 km.h⁻¹, up to 0.1 s. The ability to predict future lower limb motions may have a wide range of applications, including the design and control of bionics, allowing a better human-machine interface and mitigating the risk of tripping and balance loss.
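The sliding-window preparation described in this abstract (a fixed window of past kinematic samples paired with a value some horizon ahead) can be sketched as follows. The window length, horizon, sampling rate and synthetic signal are all assumptions for illustration.

```python
import numpy as np

def make_windows(series, window, horizon):
    """Pair each fixed-length window of past samples with the value
    `horizon` steps after the window, as in sliding-window training
    of trajectory-prediction models."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])          # past kinematic samples
        y.append(series[start + window + horizon - 1])  # future target sample
    return np.array(X), np.array(y)

# Synthetic angular-velocity-like signal; assuming a 100 Hz sampling rate,
# a 0.06 s prediction horizon corresponds to 6 samples ahead.
t = np.linspace(0, 2 * np.pi, 200)
signal = np.sin(t)
X, y = make_windows(signal, window=20, horizon=6)
print(X.shape, y.shape)  # (175, 20) (175,)
```

An LSTM (or, as the thesis tests, a linear regression baseline) would then be trained to map each row of X to its target in y.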
8. Khan, Asim. "Automated Detection and Monitoring of Vegetation Through Deep Learning." Thesis, 2022. https://vuir.vu.edu.au/43941/.

Abstract:
Healthy vegetation is essential not just for environmental sustainability but also for the development of sustainable and liveable cities. It is undeniable that human activities are altering the vegetation landscape, with harmful implications for the climate. As a result, autonomous detection, health evaluation, and continual monitoring of plants are required to ensure environmental sustainability. This thesis presents research on autonomous vegetation management using recent advances in deep learning. Currently, most towns do not have a system in place for detection and continual vegetation monitoring. On the one hand, a lack of public knowledge and political will could be a factor; on the other hand, no efficient and cost-effective technique for monitoring vegetation health has been established. Health-condition data on individual plants is essential since urban trees often develop as stand-alone objects, yet manual annotation of these individual trees is a time-consuming, expensive, and inefficient operation that is normally done in person. As a result, skilled manual annotation cannot cover broad areas, and the data it creates is soon out of date. Autonomous vegetation management poses a number of challenges due to its multidisciplinary nature: it includes automated detection, health assessment, and monitoring of vegetation and trees by integrating techniques from computer vision, machine learning, and remote sensing. Other challenges include a lack of analysis-ready data and imaging diversity, as well as dependence on weather variability. With a core focus on automating vegetation management using deep learning and transfer learning, this thesis contributes novel techniques for multi-view vegetation detection, robust calculation of vegetation indexes, and real-time vegetation health assessment using deep convolutional neural networks (CNNs) and deep learning frameworks.
The thesis focuses on four general aspects: (a) training CNNs with possibly inaccurate labels and noisy image datasets; (b) deriving semantic vegetation segmentation from the ordinal information contained in the image; (c) retrieving semantic vegetation indexes from street-level imagery; and (d) developing a vegetation health assessment and monitoring system. Firstly, it is essential to detect and segment the vegetation, and then calculate the pixel value of the semantic vegetation index. However, because the images in multi-sensory data are not identical, all image datasets must be registered before being fed into model training. The dataset used for vegetation detection and segmentation was acquired from multiple sensors; since it was multi-temporal, it was registered using deep affine features through a convolutional neural network. Secondly, after preparing the dataset, vegetation was segmented using a deep CNN, a fully convolutional network, and U-net. Although the vegetation index interprets the health of a particular area's vegetation when assessing small and large vegetation (trees, shrubs, grass, etc.), the health of large plants, such as trees, is determined from the stem, whereas small plants' leaves are evaluated to decide whether they are healthy or unhealthy. Therefore, small plant health was initially assessed through the leaves by training a deep neural network and integrating the trained model into an Internet of Things (IoT) device such as AWS DeepLens. Another deep CNN was trained to assess the health of large plants and trees such as Eucalyptus; this model could also tell which trees were healthy and which were unhealthy, as well as their geo-location. Thus, we may ultimately analyse vegetation health in terms of the vegetation index over time, on the basis of a semantic-based vegetation index computed in a time-series fashion.
This thesis shows that computer vision, deep learning, and remote sensing approaches can be used to process street-level imagery across different places and cities, helping to manage urban forests in new ways, such as biomass surveillance and remote vegetation monitoring.
APA, Harvard, Vancouver, ISO, and other styles
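The thesis repeatedly refers to computing a per-pixel vegetation index from street-level RGB imagery, but the listing does not name a specific index. As an illustration only, the widely used Excess Green (ExG) index can be computed and thresholded as below; the `threshold` value is an assumption for the sketch, not a figure from the thesis.

```python
import numpy as np

def excess_green(rgb):
    """Excess Green (ExG) index per pixel: ExG = 2g - r - b, where
    r, g, b are channel values normalised so that r + g + b = 1."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0          # avoid division by zero on black pixels
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2.0 * g - r - b

def vegetation_mask(rgb, threshold=0.05):
    """Binary vegetation mask: pixels whose ExG exceeds the threshold."""
    return excess_green(rgb) > threshold
```

A greenish pixel scores well above zero, while a grey pixel scores exactly zero, so a small positive threshold separates vegetation from pavement.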

Books on the topic "4602 Artificial intelligence"

1

Gao, Xiao-Zhi, Rajesh Kumar, Sumit Srivastava, and Bhanu Pratap Soni, eds. Applications of Artificial Intelligence in Engineering. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4604-8.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Gil-González, Ana-Belén, ed. Artificial Intelligence in the Energy Industry. MDPI, 2022. http://dx.doi.org/10.3390/books978-3-0365-4606-3.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

AAAI Workshop on Artificial Intelligence with Biased or Scarce Data (AIBSD). MDPI, 2022. http://dx.doi.org/10.3390/books978-3-0365-4682-7.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "4602 Artificial intelligence"

1

Kuo, Tsai-Bao, Steven A. Wong, and Richard A. Startzman. "Artificial Intelligence in Formation Evaluation." In Automated Pattern Analysis in Petroleum Exploration, 33–60. New York, NY: Springer New York, 1992. http://dx.doi.org/10.1007/978-1-4612-4388-5_2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Oberst, Byron B., and John M. Long. "Expert Decision Support: Artificial Intelligence." In Computers in Private Practice Management, 173–81. New York, NY: Springer New York, 1987. http://dx.doi.org/10.1007/978-1-4612-4746-3_16.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Sobel, Michael E. "Causal inference in artificial intelligence." In Selecting Models from Data, 183–96. New York, NY: Springer New York, 1994. http://dx.doi.org/10.1007/978-1-4612-2660-4_19.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Roosta, Seyed H. "Artificial Intelligence and Parallel Processing." In Parallel Processing and Parallel Algorithms, 501–34. New York, NY: Springer New York, 2000. http://dx.doi.org/10.1007/978-1-4612-1220-1_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Peleska, Jan, Anne E. Haxthausen, and Thierry Lecomte. "Standardisation Considerations for Autonomous Train Control." In Lecture Notes in Computer Science, 286–307. Cham: Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19762-8_22.

Full text
Abstract:
In this paper, we review software-based technologies already known to be, or expected to become, essential for autonomous train control systems with grade of automation GoA 4 (unattended train operation) in existing open railway environments. We discuss which types of technology can already be developed and certified today on the basis of existing railway standards. Other essential technologies, however, require modifications or extensions of existing standards in order to provide a certification basis for introducing them into non-experimental "real-world" rail operation. Regarding these, we check the novel pre-standard ANSI/UL 4600 for suitability as a certification basis for safety-critical autonomous train control functions based on methods from artificial intelligence. As a thought experiment, we propose a novel autonomous train controller design and perform an evaluation according to ANSI/UL 4600. This results in the insight that autonomous freight trains and metro trains using this design could be evaluated and certified on the basis of ANSI/UL 4600.
APA, Harvard, Vancouver, ISO, and other styles
6

Bobrow, Daniel G. "Concluding Remarks from the Artificial Intelligence Perspective." In Topics in Information Systems, 569–73. New York, NY: Springer New York, 1986. http://dx.doi.org/10.1007/978-1-4612-4980-1_45.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Nielsen, Norman R. "Application of Artificial Intelligence Techniques to Simulation." In Advances in Simulation, 1–19. New York, NY: Springer New York, 1991. http://dx.doi.org/10.1007/978-1-4612-3040-3_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Higgins, Michael C. "The Role for Artificial Intelligence in Critical Care." In Computers and Medicine, 354–95. New York, NY: Springer New York, 1994. http://dx.doi.org/10.1007/978-1-4612-2698-7_24.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sharma, Sunil, and Yashwant Singh Rawal. "The Possibilities of Artificial Intelligence in the Hotel Industry." In Algorithms for Intelligent Systems, 695–702. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4604-8_53.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Bahekar, Kirti Bala. "Comprehensive Analysis of Classification Techniques Based on Artificial Immune System and Artificial Neural Network Algorithms." In Algorithms for Intelligent Systems, 845–53. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4604-8_68.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "4602 Artificial intelligence"

1

Kondratiev, O. K. "Systems of seismic records processing and analysis using artificial intelligence." In Geophysics of the 21st Century - The Leap into the Future. European Association of Geoscientists & Engineers, 2003. http://dx.doi.org/10.3997/2214-4609-pdb.38.f154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Dragulinescu, Andrei. "Doping concentration variation in the barrier layers of a 462 nm In0.02Ga0.98N QW laser for structure performance improvement." In 2013 International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2013. http://dx.doi.org/10.1109/ecai.2013.6636166.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Akin, S., S. Yilmaz, and C. Demircioglu. "Optimum Rock Bit Program Selection by Integrated Geostatistics and Artificial Intelligence." In 63rd EAGE Conference & Exhibition. European Association of Geoscientists & Engineers, 2001. http://dx.doi.org/10.3997/2214-4609-pdb.15.ior-11.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Dragulinescu, Andrei. "Study of the effect of in composition variation in the active region and barrier layers on the structure performance of 462 nm InGaN QW lasers." In 2013 International Conference on Electronics, Computers and Artificial Intelligence (ECAI). IEEE, 2013. http://dx.doi.org/10.1109/ecai.2013.6636165.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Yang, Yiyang, Zhiguo Gong, Qing Li, Leong Hou U, Ruichu Cai, and Zhifeng Hao. "A Robust Noise Resistant Algorithm for POI Identification from Flickr Data." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/460.

Full text
Abstract:
Point of interest (POI) identification using social media data (e.g. Flickr, microblogs) is one of the most popular research topics of recent years. However, such crowd-contributed collections contain large amounts of noise (POI-irrelevant data). The traditional solution to this problem is to set a global density threshold and remove a data point as noise if its density is lower than the threshold. However, density values vary significantly among POIs; as a result, some POIs with relatively lower density cannot be identified. To solve this problem, we propose a technique based on local drastic changes in the data density. First, we define the local maxima of the density function as the urban POIs, and the gradient ascent algorithm is exploited to assign data points to different clusters. To remove noise, we take the Laplacian zero-crossing points along the gradient ascent process as the boundaries of the POI; points located outside the POI region are regarded as noise. The technique is then extended into the joint geographical and textual space so that it can make use of the heterogeneous features of social media. The experimental results show the significance of the proposed approach in removing noise.
APA, Harvard, Vancouver, ISO, and other styles
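The gradient-ascent assignment step described in the abstract resembles mean-shift mode seeking: every point climbs a kernel density estimate, and points that converge to the same local maximum share a cluster. A minimal numpy sketch of that idea follows; this is not the authors' algorithm (it omits the Laplacian zero-crossing boundary and the textual features), and the bandwidth and grouping tolerance are illustrative assumptions.

```python
import numpy as np

def mean_shift_modes(points, bandwidth=1.0, steps=50):
    """Move every point uphill on a Gaussian kernel density estimate
    (non-blurring mean shift: weights are taken against the original
    data). Points converging to the same local maximum form one cluster."""
    shifted = points.astype(np.float64)
    for _ in range(steps):
        for i, p in enumerate(shifted):
            d2 = ((points - p) ** 2).sum(axis=1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
    modes, labels = [], []          # group converged points into modes
    for p in shifted:
        for k, m in enumerate(modes):
            if np.linalg.norm(p - m) < 0.1 * bandwidth:
                labels.append(k)
                break
        else:
            modes.append(p)
            labels.append(len(modes) - 1)
    return np.array(modes), np.array(labels)
```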
6

Yao, Quanming, James T. Kwok, Fei Gao, Wei Chen, and Tie-Yan Liu. "Efficient Inexact Proximal Gradient Algorithm for Nonconvex Problems." In Twenty-Sixth International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization, 2017. http://dx.doi.org/10.24963/ijcai.2017/462.

Full text
Abstract:
While the proximal gradient algorithm was originally designed for convex optimization, several variants have recently been proposed for nonconvex problems. Among them, nmAPG [Li and Lin, 2015] is the state of the art. However, it is inefficient when the proximal step has no closed-form solution, or when such a solution exists but is expensive, as it requires more than one proximal step to be solved exactly in each iteration. In this paper, we propose an efficient accelerated proximal gradient (niAPG) algorithm for nonconvex problems. In each iteration, it requires only one inexact (less expensive) proximal step. Convergence to a critical point is still guaranteed, and a O(1/k) convergence rate is derived. Experiments on image inpainting and matrix completion problems demonstrate that the proposed algorithm has performance comparable to the state of the art, but is much faster.
APA, Harvard, Vancouver, ISO, and other styles
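For intuition about the proximal step the paper is concerned with, below is the basic (non-accelerated, exact) proximal gradient method for the convex lasso, where the proximal operator happens to have a closed-form soft-thresholding solution. The paper's setting differs: niAPG targets nonconvex regularizers whose proximal step is expensive, and replaces the exact step below with a cheaper inexact one inside an accelerated scheme.

```python
import numpy as np

def soft_threshold(x, t):
    """Closed-form proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, step, iters=100):
    """Basic proximal gradient (ISTA) for the lasso
    min_x 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step on the
    smooth part followed by one exact proximal step per iteration."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

With `A` the identity the method reduces to soft-thresholding `b`, which makes the proximal step easy to verify by hand.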
7

Shinoda, Kazuhiko, Hirotaka Kaji, and Masashi Sugiyama. "Binary Classification from Positive Data with Skewed Confidence." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/460.

Full text
Abstract:
Positive-confidence (Pconf) classification [Ishida et al., 2018] is a promising weakly-supervised learning method that trains a binary classifier only from positive data equipped with confidence. In practice, however, the confidence may be skewed by bias arising in the annotation process. The Pconf classifier cannot be properly learned with skewed confidence, and consequently the classification performance may deteriorate. In this paper, we introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter that cancels out the negative impact of the skewed confidence, under the assumption that the misclassification rate of positive samples is available as prior knowledge. We demonstrate the effectiveness of the proposed method through a synthetic experiment with simple linear models and benchmark problems with neural network models. We also apply our method to drivers' drowsiness prediction to show that it works well on a real-world problem where confidence is obtained by manual annotation.
APA, Harvard, Vancouver, ISO, and other styles
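A minimal sketch of the underlying Pconf risk may help: following Ishida et al. (2018), each positive sample with confidence r = p(y=+1|x) contributes l(f(x)) + ((1-r)/r) * l(-f(x)) to the empirical risk (up to a positive class-prior factor). The reconstruction below, including the linear model and optimiser settings, is ours, not the authors' code.

```python
import numpy as np

def logistic_loss(z):
    return np.log1p(np.exp(-z))

def pconf_risk(w, X, r):
    """Empirical Pconf risk for a linear model f(x) = <w, x>:
    l(f(x)) + ((1-r)/r) * l(-f(x)) averaged over positive samples."""
    z = X @ w
    return np.mean(logistic_loss(z) + (1 - r) / r * logistic_loss(-z))

def fit_pconf(X, r, lr=0.1, iters=2000):
    """Minimise the Pconf risk by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = X @ w
        s = 1.0 / (1.0 + np.exp(-z))        # sigmoid(z)
        # d/dz logistic_loss(z) = -(1 - s);  d/dz logistic_loss(-z) = s
        grad_z = -(1 - s) + (1 - r) / r * s
        w -= lr * (X.T @ grad_z) / len(r)
    return w
```

Note that the pointwise minimiser satisfies sigmoid(f(x)) = r, so samples with confidence below 0.5 are pushed to the negative side even though only positive data is seen.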
8

Hajimoradlou, Ainaz, Gioachino Roberti, and David Poole. "Predicting Landslides Using Locally Aligned Convolutional Neural Networks." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. California: International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/462.

Full text
Abstract:
Landslides, movement of soil and rock under the influence of gravity, are common phenomena that cause significant human and economic losses every year. Experts use heterogeneous features such as slope, elevation, land cover, lithology, rock age, and rock family to predict landslides. To work with such features, we adapted convolutional neural networks to consider relative spatial information for the prediction task. Traditional filters in these networks either have a fixed orientation or are rotationally invariant. Intuitively, the filters should orient uphill, but there is not enough data to learn the concept of uphill; instead, it can be provided as prior knowledge. We propose a model called Locally Aligned Convolutional Neural Network, LACNN, that follows the ground surface at multiple scales to predict possible landslide occurrence for a single point. To validate our method, we created a standardized dataset of georeferenced images consisting of the heterogeneous features as inputs, and compared our method to several baselines, including linear regression, a neural network, and a convolutional network, using log-likelihood error and Receiver Operating Characteristic curves on the test set. Our model achieves 2-7% improvement in terms of accuracy and 2-15% boost in terms of log likelihood compared to the other proposed baselines.
APA, Harvard, Vancouver, ISO, and other styles
9

Zhang, Hongjing, and Ian Davidson. "Deep Descriptive Clustering." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/460.

Full text
Abstract:
Recent work on explainable clustering allows describing clusters when the features are interpretable. However, much of modern machine learning focuses on complex data such as images, text, and graphs, where deep learning is used but the raw features are not interpretable. This paper explores a novel setting for performing clustering on complex data while simultaneously generating explanations using interpretable tags. We propose deep descriptive clustering, which performs sub-symbolic representation learning on complex data while generating explanations based on symbolic data. We form good clusters by maximizing the mutual information between the empirical distribution on the inputs and the induced clustering labels. We generate explanations by solving an integer linear program that produces concise and orthogonal descriptions for each cluster. Finally, we allow the explanation to inform better clustering by proposing a novel pairwise loss with self-generated constraints that maximizes the consistency of the clustering and explanation modules. Experimental results on public data demonstrate that our model outperforms competitive baselines in clustering performance while offering high-quality cluster-level explanations.
APA, Harvard, Vancouver, ISO, and other styles
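The description step above is an integer linear program; as a rough, hedged stand-in, a greedy scorer can illustrate the goal of concise, cluster-specific descriptions by picking tags that are frequent inside a cluster and rare outside it. This is our simplification, not the paper's ILP formulation.

```python
from collections import Counter

def describe_clusters(item_tags, labels, k=2):
    """For each cluster, pick the k tags that best separate it:
    rank tags by (frequency inside cluster) - (frequency outside)."""
    descriptions = {}
    for c in set(labels):
        inside = [t for tags, l in zip(item_tags, labels) if l == c for t in tags]
        outside = [t for tags, l in zip(item_tags, labels) if l != c for t in tags]
        n_in = max(sum(1 for l in labels if l == c), 1)
        n_out = max(sum(1 for l in labels if l != c), 1)
        f_in, f_out = Counter(inside), Counter(outside)
        score = {t: f_in[t] / n_in - f_out.get(t, 0) / n_out for t in f_in}
        descriptions[c] = [t for t, _ in
                           sorted(score.items(), key=lambda kv: -kv[1])[:k]]
    return descriptions
```

Tags shared by all clusters score near zero and are never selected, which gives a crude form of the orthogonality the ILP enforces exactly.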
10

Zhang, Qi, Jingjie Li, Qinglin Jia, Chuyuan Wang, Jieming Zhu, Zhaowei Wang, and Xiuqiang He. "UNBERT: User-News Matching BERT for News Recommendation." In Thirtieth International Joint Conference on Artificial Intelligence {IJCAI-21}. California: International Joint Conferences on Artificial Intelligence Organization, 2021. http://dx.doi.org/10.24963/ijcai.2021/462.

Full text
Abstract:
Nowadays, news recommendation has become a popular channel for users to access news of interest. Representing the rich textual content of news and precisely matching users' interests with candidate news lies at the core of news recommendation. However, existing recommendation methods merely learn textual representations from in-domain news data, which limits their generalization to new articles, which are common in cold-start scenarios. Meanwhile, many of these methods represent each user by aggregating the historically browsed news into a single vector and then compute the matching score against the candidate news vector, which may lose low-level matching signals. In this paper, we explore the use of the successful BERT pre-training technique from NLP for news recommendation and propose a BERT-based user-news matching model, called UNBERT. In contrast to existing research, our UNBERT model not only leverages the pre-trained model's rich language knowledge to enhance textual representation, but also captures multi-grained user-news matching signals at both the word level and the news level. Extensive experiments on the Microsoft News Dataset (MIND) demonstrate that our approach consistently outperforms state-of-the-art methods.
APA, Harvard, Vancouver, ISO, and other styles