Academic literature on the topic 'Brain model: artificial intelligence'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Brain model: artificial intelligence.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Brain model: artificial intelligence"

1

Gaftea, Viorel. "BRAIN Journal - Computational Intelligence in a Human Brain Model." BRAIN - Broad Research in Artificial Intelligence and Neuroscience 7, no. 2 (2016): 17–24. https://doi.org/10.5281/zenodo.1044298.

Full text
Abstract:
This paper focuses on the current trends in the domain of brain research and on the current stage of development of the research for software and hardware solutions, communication capabilities between human beings and machines, new technologies, nanoscience and Internet of Things (IoT) devices. The proposed model for the Human Brain assumes the main similarities between human intelligence and the chess game thinking process. Tactical and strategic reasoning and the need to follow the rules of the chess game are all very similar to the activities of the human brain. The main objectives for a living being and for a chess player are the same: securing a position, surviving, and eliminating adversaries. The brain resolves these goals and, moreover, the being's movement, actions and speech are sustained by the vital five senses and equilibrium. The chess game strategy helps us understand the human brain better and to replicate it more easily in the proposed 'Software and Hardware' (SAH) Model.
APA, Harvard, Vancouver, ISO, and other styles
2

Zotos, Kostas. "Computer Algebra Systems & Artificial Intelligence." BRAIN. Broad Research in Artificial Intelligence and Neuroscience 15, no. 2 (2024): 427–36. https://doi.org/10.18662/brain/15.2/584.

Full text
Abstract:
From four-function calculators to calculators (or computers) with Computer Algebra System (CAS) software, Mathematics computing technology has advanced. With just a few button pushes, CASs can solve a wide range of mathematical problems, which is a true quantum leap in technology. The implications of having software in the classroom that can, for example, expand and factorize algebraic expressions, solve equations, differentiate functions, and find anti-derivatives are causing the mathematical community to engage in a heated debate about whether this is one of the most exciting or frightening developments in the history of education. It was only a matter of time before Artificial Intelligence entered the field of Science. This is now also the case with Mathematics, one of the dominant, perhaps the most basic, but also the most "difficult" of the sciences. The human mind, for better or for worse, has its limits. As we see in every manifestation of our lives, in this case, technology is being enlisted to help humanity take the next step, whether it has to do with automation and practical matters, or with knowledge and exploration. Creating a model that is understandable to humans is the primary objective of Artificial Intelligence. Additionally, concepts and methods from numerous mathematical fields can be used to prepare these models. In this paper, we will examine the use of AI in CASs and explore some ways to optimize them. The documentation sheets are the data source that we used to examine their characteristics. The research results reveal that there are many tips that we can follow to accelerate performance.
APA, Harvard, Vancouver, ISO, and other styles
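For readers unfamiliar with what a CAS does, the classroom operations listed in the abstract above (expanding and factorizing expressions, solving equations, differentiating, and finding anti-derivatives) look roughly like this in the open-source SymPy library; this is a generic illustration only, not code from the paper.

import sympy as sp

x = sp.symbols("x")
print(sp.expand((x + 2) * (x - 3)))         # x**2 - x - 6
print(sp.factor(x**2 - x - 6))              # (x - 3)*(x + 2)
print(sp.solve(sp.Eq(x**2 - x - 6, 0), x))  # [-2, 3]
print(sp.diff(sp.sin(x) * x**2, x))         # x**2*cos(x) + 2*x*sin(x)
print(sp.integrate(sp.cos(x), x))           # sin(x)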
3

Weigand, Edda. "Dialogue and Artificial Intelligence." Language and Dialogue 9, no. 2 (2019): 294–315. http://dx.doi.org/10.1075/ld.00042.wei.

Full text
Abstract:
The article focuses on a few central issues of dialogic competence-in-performance which are still beyond the reach of models of Artificial Intelligence (AI). Learning machines have made an amazing step forward but still face barriers which cannot be crossed yet. Linguistics is still described at the level of Chomsky's view of language competence. Modelling competence-in-performance requires a holistic model, such as the Mixed Game Model (Weigand 2010), which is capable of addressing the challenge of the 'architecture of complexity' (Simon 1962). The complex cannot be 'the ontology of the world' (Russell and Norvig 2016). There is no autonomous ontology, no hierarchy of concepts; it is always human beings who perceive the world. 'Anything', in the end, depends on the human brain.
APA, Harvard, Vancouver, ISO, and other styles
4

Liu, Weijie. "Enhancing Brain-Computer Interface Performance and Security through Advanced Artificial Intelligence Techniques." Applied and Computational Engineering 154, no. 1 (2025): 1–6. https://doi.org/10.54254/2755-2721/2025.tj23002.

Full text
Abstract:
The brain-computer interface has become a rapidly developing field, but its development has also brought many problems. The main issues are the scarcity of brain-computer interface data, inaccurate decoding and classification of the data, and the data security of the brain-computer interface. Artificial intelligence now provides solutions to many of these problems, and this study applies artificial intelligence algorithms to address them. The paper reviews the integration of artificial intelligence techniques, specifically transfer learning, generative adversarial networks (GANs), Transformer models, and federated learning, to address critical challenges in brain-computer interfaces (BCIs), including data scarcity, classification accuracy, and data security. Hybrid models perform strongly on brain-computer interface problems; in particular, the paper discusses joint extraction of spatiotemporal features with a CNN-Transformer to compensate for the shortcomings of a single model and improve overall performance, while a GAN-TL hybrid model can effectively reduce the influence of individual differences on the model. The paper illustrates the advantages of hybrid models, which are also the main direction of future research, and highlights how hybrid AI models significantly enhance BCI performance while outlining current limitations and future research directions for robust, efficient, and secure BCI applications.
APA, Harvard, Vancouver, ISO, and other styles
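As a loose illustration of the CNN-Transformer combination this review highlights (a convolutional front end extracting local temporal features, feeding a Transformer encoder for longer-range structure), the following PyTorch sketch shows one possible arrangement; the layer sizes, channel count, and EEG tensor shape are assumptions for illustration, not the architecture of any paper covered by the review.

import torch
import torch.nn as nn

class TinyCNNTransformer(nn.Module):
    """Conv1d extracts local temporal features per EEG window; a Transformer encoder models long-range structure."""
    def __init__(self, n_channels=22, n_classes=4, d_model=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(d_model), nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                  # x: (batch, channels, time)
        h = self.conv(x).transpose(1, 2)   # -> (batch, time', d_model)
        h = self.encoder(h).mean(dim=1)    # average over time steps
        return self.head(h)

model = TinyCNNTransformer()
eeg = torch.randn(8, 22, 500)              # 8 trials, 22 channels, 500 time samples (synthetic)
print(model(eeg).shape)                    # torch.Size([8, 4])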
5

Volobuev, Andrei N., Vasiliy F. Pyatin, Natalya P. Romanchuk, Petr I. Romanchuk, and Svetlana V. Bulgakova. "Modeling of stochastic brain function in artificial intelligence." Science and Innovations in Medicine 4, no. 3 (2019): 8–14. http://dx.doi.org/10.35693/2500-1388-2019-4-3-8-14.

Full text
Abstract:
Objectives - research of stochastic brain function with respect to the creation of artificial intelligence. Material and methods. Mathematical modeling principles were used to simulate brain functioning in a stochastic mode. Results. Two types of brain activity were considered: a deterministic type, usually modeled using the perceptron, and a stochastic type. It is shown that stochastic brain function modeling is a necessary condition for AI to become capable of creativity and the generation of new knowledge. Mathematical modeling of a neural network of the cerebral cortex, consisting of a set of cyclic neuronal circuits (memory units), was performed for the stochastic mode of brain functioning. Models of a "two-dimensional" and a "one-dimensional" brain were analyzed. The pattern of excitation in memory units was calculated in the "one-dimensional" brain model. Conclusion. Relying on knowledge of the stochastic mode of brain function, a way to create AI can be proposed. The alpha rhythm of a patient is a recommended focus of the therapist's attention in the diagnostics and treatment of brain disorders. It was noted that the alpha wave amplitude and frequency could indicate the cognitive, creative and intuitive abilities of a person.
APA, Harvard, Vancouver, ISO, and other styles
6

Yashchenko, V. O. "Artificial brain. Biological and artificial neural networks, advantages, disadvantages, and prospects for development." Mathematical Machines and Systems 2 (2023): 3–17. http://dx.doi.org/10.34121/1028-9763-2023-2-3-17.

Full text
Abstract:
The article analyzes the problem of developing artificial neural networks within the framework of creating an artificial brain. The structure and functions of the biological brain are considered. The brain performs many functions such as controlling the organism, coordinating movements, processing information, memory, thinking, attention, and regulating emotional states, and consists of billions of neurons interconnected by a multitude of connections in a biological neural network. The structure and functions of biological neural networks are discussed, and their advantages and disadvantages are described in detail compared to artificial neural networks. Biological neural networks solve various complex tasks in real-time, which are still inaccessible to artificial networks, such as simultaneous perception of information from different sources, including vision, hearing, smell, taste, and touch, recognition and analysis of signals from the environment with simultaneous decision-making in known and uncertain situations. Overall, despite all the advantages of biological neural networks, artificial intelligence continues to rapidly progress and gradually win positions over the biological brain. It is assumed that in the future, artificial neural networks will be able to approach the capabilities of the human brain and even surpass it. The comparison of human brain neural networks with artificial neural networks is carried out. Deep neural networks, their training and use in various applications are described, and their advantages and disadvantages are discussed in detail. Possible ways for further development of this direction are analyzed. The Human Brain project aimed at creating a computer model that imitates the functions of the human brain and the advanced artificial intelligence project – ChatGPT – are briefly considered. To develop an artificial brain, a new type of neural network is proposed – neural-like growing networks, the structure and functions of which are similar to natural biological networks. A simplified scheme of the structure of an artificial brain based on a neural-like growing network is presented in the paper.
APA, Harvard, Vancouver, ISO, and other styles
7

Vinny, Madhulika, and Pawan Singh. "Review on the Artificial Brain Technology: BlueBrain." Journal of Informatics Electrical and Electronics Engineering (JIEEE) 1, no. 1 (2020): 1–11. http://dx.doi.org/10.54060/jieee/001.01.003.

Full text
Abstract:
Blue brain is a supercomputer programmed such that it can function as an artificial brain, which can also be called a virtual brain. IBM is developing this virtual brain, which would be the world's first such machine. Its main aim is to create a machine into which the information of an actual brain can be uploaded. This would ensure that a person's knowledge, personality, memories, and intelligence are preserved and safe. The Blue Brain project utilizes the technologies of reverse engineering and artificial intelligence at its core and is implemented through the use of supercomputers and nanobots. Special software like the BBP-SDK has also been specifically developed for the Blue Brain project. The Blue Brain project is centered on finding viable solutions to brain disorders, building a working model close to the actual brain that would help in a greater understanding of the human brain, the human mind, and the state of consciousness, taking a step towards building an independently thinking machine, and finally collecting information of hundreds of years from human brains and storing it in the form of a database. The Blue Brain project mimics the human brain by acquiring data from its surroundings through special software, interpreting it through neural electrophysiology and morphology, and simulating them on computers. Thus, the Blue Brain project is a powerful tool for the study and analysis of the human brain and for the advancement of the human brain and society.
APA, Harvard, Vancouver, ISO, and other styles
8

Bayaral, Sedat, Evrim Gül, and Derya Avcı. "Classification of Brain Tumors Using Artificial Intelligence." International Journal of Innovative Engineering Applications 9, no. 1 (2025): 8–22. https://doi.org/10.46460/ijiea.1563426.

Full text
Abstract:
Brain MRI is a medical image obtained by MRI, which stands for "Magnetic Resonance Imaging". Brain MRI uses magnetic fields and radio waves to create detailed images of the brain and surrounding tissues. Today, deep learning algorithms are used to detect brain tumors or classify different brain regions. In this study, feature extraction has been performed with current deep learning models using a dataset consisting of 7023 open-access images obtained from patients from various parts of the world, and the results were evaluated by training Support Vector Machine (SVM) and XGBoost models with the extracted features. Four deep learning models, VGG16, VGG19, ResNet50 and MobileNetV2, have been used for feature extraction. In order to achieve higher performance, the transfer learning method is used in this study, which allows the weights of models pre-trained on large datasets to be used in other models. The weights of the models trained with ImageNet were included in the study to improve performance and save time. Although the original layer structures of the models are fixed, a GlobalAveragePooling2D layer has been added to the CNN models to improve performance and generalize the features extracted from the deep learning models. The brain MRI images are divided into 4 classes: glioma tumor, meningioma tumor, pituitary tumor, and no tumor. Auxiliary functions have been used to obtain optimum values for the parameters used for training the models. Accuracy, F1-score, precision and sensitivity metrics are used to evaluate the training results. When the results are evaluated, the best performance, with an F1-score of 97.87%, is obtained by classifying the features extracted from the ResNet50 CNN model with a Support Vector Machine (SVM).
APA, Harvard, Vancouver, ISO, and other styles
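A minimal sketch of the pipeline the abstract above describes (an ImageNet-pretrained ResNet50 with global average pooling used as a frozen feature extractor, followed by an SVM classifier). It assumes TensorFlow/Keras and scikit-learn are installed, and it uses random placeholder arrays in place of the study's MRI dataset, so the variable names and shapes are illustrative only.

import numpy as np
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# X: (n_samples, 224, 224, 3) MRI slices resized to ResNet50 input size; y: class labels.
# Random placeholder data is used here purely to make the sketch runnable.
X = np.random.rand(40, 224, 224, 3).astype("float32")
y = np.random.randint(0, 4, size=40)        # glioma, meningioma, pituitary, no tumor

# Frozen ImageNet-pretrained ResNet50 with global average pooling as the feature extractor
extractor = ResNet50(weights="imagenet", include_top=False, pooling="avg")
features = extractor.predict(preprocess_input(X * 255.0), verbose=0)

X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("F1 (macro):", f1_score(y_te, clf.predict(X_te), average="macro"))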
9

K P, Vishnupriya, Jwala Jose, Prince Joy, Sritha S, and Gibi K. S. "Brain-Inspired Artificial Intelligence: Revolutionizing Computing and Cognitive Systems." International Journal of Scientific Research in Engineering and Management 08, no. 12 (2024): 1–8. https://doi.org/10.55041/ijsrem39825.

Full text
Abstract:
Brain-inspired artificial intelligence (AI) is a rapidly evolving field that seeks to model computational systems after the structure, processes, and functioning of the human brain. By drawing from neuroscience and cognitive science, brain-inspired AI aims to improve the efficiency, scalability, and adaptability of machine learning algorithms. This paper explores the key technologies and advancements in the realm of brain-inspired AI, including neural networks, neuromorphic hardware, brain-computer interfaces, and algorithms inspired by biological learning mechanisms. Additionally, we will analyze the challenges and future opportunities in achieving more brain-like cognitive systems. The integration of these technologies promises a paradigm shift in AI research, bringing us closer to artificial general intelligence (AGI) while creating more energy-efficient and resilient systems. Keywords Brain-inspired AI, Neural Networks, Neuromorphic Computing, Spiking Neural Networks, Artificial General Intelligence, Brain-Computer Interfaces, Cognitive Architectures.
APA, Harvard, Vancouver, ISO, and other styles
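One building block behind the spiking neural networks and neuromorphic hardware surveyed in the abstract above is the leaky integrate-and-fire (LIF) neuron. The minimal Python sketch below uses generic textbook parameter values, not anything taken from the paper.

import numpy as np

def lif_simulate(current, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Simulate a leaky integrate-and-fire neuron; returns the membrane trace and spike times (ms)."""
    v, trace, spikes = v_rest, [], []
    for step, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + r * i_t)   # leaky integration of the input current
        if v >= v_thresh:                           # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

trace, spikes = lif_simulate(np.full(200, 2.0))     # constant input current for 200 ms
print("spike times (ms):", spikes)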
More sources

Dissertations / Theses on the topic "Brain model: artificial intelligence"

1

Mendeleck, Andre. "Um modelo conexionista para a geração de movimentos voluntarios em ambiente desestruturado." [s.n.], 1995. http://repositorio.unicamp.br/jspui/handle/REPOSIP/263849.

Full text
Abstract:
Advisor: Douglas Eduardo Zampieri. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica, 1995. Summary (translated from the Portuguese): In this work we present a self-learning artificial neuronal structure to assist the generation of trajectories in an unstructured environment. The objective is to form a sequence of reference values that can help define a path or a trajectory. The proposed structure was inspired by the biological neural system (mainly the hippocampal and cerebellar regions), by neural networks (mainly perceptron-type networks trained with backpropagation), and by learning theories (mainly the one proposed by R. M. Gagné). Abstract: In this work we present a model using a connectionist artificial neural network, operating in real time, to generate voluntary movement, with self-learning, in an unknown environment. The model can help a robot define a trajectory and go around obstacles. The neuronal structure is based on hippocampal theory and R. M. Gagné's learning theory. Doctorate in Mechanical Engineering, Solid Mechanics and Mechanical Design.
APA, Harvard, Vancouver, ISO, and other styles
2

Kogeyama, Renato. "Who is the cowboy in Washington?: beating Google at their own game with neuroscience and cryptography." Repositório Institucional do FGV, 2014. http://hdl.handle.net/10438/13524.

Full text
Abstract:
Who was the cowboy in Washington? What is the land of sushi? Most people would have answers to these questions readily available, yet modern search engines, arguably the epitome of technology in finding answers to most questions, are completely unable to do so. It seems that people capture few information items to rapidly converge to a seemingly 'obvious' solution. We will study approaches for this problem, with two additional hard demands that constrain the space of possible theories: the sought model must be both psychologically and neuroscientifically plausible. Building on top of the mathematical model of memory called Sparse Distributed Memory, we will see how some well-known methods in cryptography can point toward a promising, comprehensive solution that preserves four crucial properties of human psychology.
APA, Harvard, Vancouver, ISO, and other styles
3

Gomez, Chloé. "DeepStim Project. Modeling states of consciousness and their modulation by electrical Deep Brain Stimulation: from experimental data to computational models." Electronic thesis or dissertation, Université Paris-Saclay, 2024. http://www.theses.fr/2024UPASL027.

Full text
Abstract:
Diagnosis of patients with coma is often difficult. Brain examinations inform physicians about the extent of brain damage but do not accurately determine the patient's level of consciousness. Moreover, no therapeutic approach allows a systematic restoration of consciousness. Pioneering studies in patients and Non-Human Primates (NHP) have shown that Deep Brain Stimulation (DBS) of the intralaminar nuclei of the thalamus could restore or improve consciousness when it is impaired. However, the cortical consequences associated with DBS remain largely unknown and unpredictable. Functional imaging techniques, such as Resting-State functional Magnetic Resonance Imaging (RS-fMRI), can help identify signatures of consciousness. Brain activity at rest, organized into networks, can be modeled using functional connectivity. This thesis aims to dissect, using the NHP model, the effects on functional connectivity of a modulation of consciousness induced by anesthetic agents or DBS on a whole-brain scale. This requires the development of interpretable and predictive models of the effects of such modulation on global brain function. To identify dominant recurrent patterns (i.e., different brain states) from functional connectivity, an unsupervised machine learning technique (K-Means) has been previously proposed. As part of this thesis, we develop new analysis tools by taking advantage of the advances in self-supervised deep learning techniques. We hypothesized that identifying latent variables in RS-fMRI signals can inform us about the modulation of states of consciousness. First, we aim to identify a time-averaged spatial signature of consciousness in both the awake state and under anesthesia. This is achieved through a latent variables method that decomposes resting-state fMRI signals based on functional networks associated with conscious access. In a translational effort to investigate consciousness restoration, we extend this analysis to awake or awakened NHPs by DBS of the central thalamus. Our model autonomously suggests that both the anterior and posterior cortex contribute to consciousness, a debated topic in the scientific community. Additionally, it underscores the significance of key regions within the global neuronal workspace, a prominent theory regarding conscious access. Following this time-averaged analysis, recognizing the critical importance of temporal integration in consciousness analysis, we propose to challenge conventional dynamic functional connectivity methods. We employ a contrastive deep learning model to predict brain patterns characteristic of various consciousness states. Experiments demonstrate that the model predictions based on dynamic functional connectivity facilitate the examination of different transient brain states. Lastly, to gain a deeper understanding of the dynamics of consciousness states, we diverge from the conventional subgroup classification framework and introduce a dimension-reduction method. This approach aims to condense these states into a limited number of interpretable and explicable variables. Our findings indicate that the traditional categorical approach inadequately captures the continuum of consciousness state dynamics.
APA, Harvard, Vancouver, ISO, and other styles
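Purely as an illustration of the unsupervised K-Means baseline this thesis builds on (clustering windowed functional-connectivity patterns into recurring 'brain states'), here is a minimal Python sketch; the synthetic time series, window length, and number of clusters are assumptions for illustration, not the thesis's data or settings.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ts = rng.normal(size=(600, 40))             # 600 time points x 40 brain regions (synthetic)

win, step, fc_vectors = 60, 10, []
for start in range(0, ts.shape[0] - win + 1, step):
    window = ts[start:start + win]
    fc = np.corrcoef(window.T)              # region-by-region functional connectivity
    iu = np.triu_indices_from(fc, k=1)
    fc_vectors.append(fc[iu])               # keep the upper triangle as a feature vector

states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(np.array(fc_vectors))
print("brain-state label per window:", states[:20])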
4

Voils, Danny. "Scale Invariant Object Recognition Using Cortical Computational Models and a Robotic Platform." PDXScholar, 2012. https://pdxscholar.library.pdx.edu/open_access_etds/632.

Full text
Abstract:
This paper proposes an end-to-end, scale invariant, visual object recognition system, composed of computational components that mimic the cortex in the brain. The system uses a two stage process. The first stage is a filter that extracts scale invariant features from the visual field. The second stage uses inference based spatio-temporal analysis of these features to identify objects in the visual field. The proposed model combines Numenta's Hierarchical Temporal Memory (HTM), with HMAX developed by MIT's Brain and Cognitive Science Department. While these two biologically inspired paradigms are based on what is known about the visual cortex, HTM and HMAX tackle the overall object recognition problem from different directions. Image pyramid based methods like HMAX make explicit use of scale, but have no sense of time. HTM, on the other hand, only indirectly tackles scale, but makes explicit use of time. By combining HTM and HMAX, both scale and time are addressed. In this paper, I show that HTM and HMAX can be combined to make a complete cortex inspired object recognition model that explicitly uses both scale and time to recognize objects in temporal sequences of images. Additionally, through experimentation, I examine several variations of HMAX and its
APA, Harvard, Vancouver, ISO, and other styles
5

Aitkenhead, Matthew. "Using artificial intelligence to model complex systems." Thesis, University of Aberdeen, 2003. http://digitool.abdn.ac.uk/R?func=search-advanced-go&find_code1=WSN&request1=AAIU602065.

Full text
Abstract:
Two observations underpin this thesis: 1. There is a need for automated pattern-recognition techniques that allow processes requiring skills normally associated with the human brain to be carried out rapidly, reliably and cheaply; and 2. The current methods applied to solving artificial intelligence (AI) problems are insufficient to the task of creating generalised systems capable of pattern-recognition and environmental interaction. Neural networks (NNs) are a good method of solving AI problems that are difficult or impossible to solve using knowledge-based or symbolic techniques. NNs provide the flexibility to analyse poorly-defined systems or systems that are general in nature, and also provide the ability to learn from noisy, complex data sets. The main problem with the use of NNs to date has been that one NN's structure and dynamics may work for a specific problem, but if this problem is changed slightly then it is difficult to determine the optimal settings for the network to enable it to adapt to the new situation. The use of evolutionary methods is emphasised throughout this thesis as a way of optimising NN system performance. Several methods have been developed through the course of this thesis that improve the performance of NN models. One of the most important is the use of a biologically plausible node and connection modification algorithm. In this method, local effects such as the activation levels of nodes at either end of a connection or a node's past activation history are the only input parameters which network components use for their adjustment. Included in the biological plausibility argument are NN structuring methods that mimic specific areas of the brain. One example is the visual system, in which a pyramidal structure is applied that permits a hierarchical pattern recognition process to develop. This process builds the image recognition up from small 'substructures' in successive layers, allowing the system to recognise objects that are not specifically defined by the user. Arguments are made that an AI system's utility is limited if it does not have the capability of interacting with its environment. A system that merely observes without attempting to alter or exist within an environment is only half of the story. From a biological standpoint, intelligence is the result of successive generations of organisms interacting with and altering their environment. Limiting an AI system's ability to interact with the environment can only place restrictions on the capabilities of that system, not improve them. Following development of a suite of applicable pattern-recognition techniques, work is carried out in order to implement these methods within a simple environment. For the moment, a virtual 'block world' is used that is relatively easy and cheap to manipulate. The importance of both modularity and sensory feedback to the ability to develop complex behaviours is investigated, with these two concepts included in the overall evolutionary strategy of system development. The results obtained show that the techniques developed provide a pattern-recognition and learning system that is capable of being applied to general problems and that learns without human intervention. In comparison to classical NN techniques the systems developed show superior learning abilities and can be applied in less specific situations. The use of modularity and sensory feedback in the animat simulations has allowed the development of behavioural patterns that are difficult to achieve using homogeneous, input-output systems. Evolutionary methods have allowed system optimisation in a way that is impossible to achieve through trial and error, and which also permits the system to be easily fine-tuned towards specific problems and situations. With current advances in computer speed and memory capacity, it is now possible to implement NNs comparable in size to the nervous systems of small animals. The methods used here provide the potential to provide these NNs with the sophistication displayed by their organic counterparts.
APA, Harvard, Vancouver, ISO, and other styles
6

Machado, Beatriz. "Artificial intelligence to model bedrock depth uncertainty." Thesis, KTH, Jord- och bergmekanik, 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-252317.

Full text
Abstract:
The estimation of bedrock level for soil and rock engineering is a challenge associated with many uncertainties. Nowadays, this estimation is performed by geotechnical or geophysical investigations. These are expensive techniques that normally are not fully used because of limited budget. Hence, the bedrock levels in between investigations are roughly estimated and the uncertainty is almost unknown. Machine learning (ML) is an artificial intelligence technique that uses algorithms and statistical models to perform specific tasks. These mathematical models are built by dividing the data between training, testing and validation samples so the algorithm improves automatically based on past experiences. This thesis explores the possibility of applying ML to estimate bedrock levels and tries to find a suitable algorithm for the prediction and estimation of the uncertainties. Many different algorithms were tested during the process and the accuracy level was analysed in comparison with the input data and also with interpolation methods, like Kriging. The results show that the Kriging method is capable of predicting the bedrock surface with considerably good accuracy. However, when it is necessary to estimate the prediction interval (PI), Kriging presents a high standard deviation. The machine learning model produces a bedrock surface almost as smooth as Kriging, with better results for the PI. The Bagging regressor with decision trees was the algorithm most capable of predicting an accurate bedrock surface and a narrow PI. (BIG and BeFo project "Rock and ground water including artificial intelligence".)
APA, Harvard, Vancouver, ISO, and other styles
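A minimal sketch of the ensemble approach the abstract compares with Kriging: a bagging regressor over decision trees, with the spread across ensemble members used as a rough prediction interval. It assumes scikit-learn; the synthetic borehole coordinates, depths, and variable names are placeholders, not the thesis's data.

import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

# Synthetic boreholes: (x, y) coordinates and measured bedrock depth
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(200, 2))
depth = 5.0 + 0.01 * coords[:, 0] + 2.0 * np.sin(coords[:, 1] / 100.0) + rng.normal(0, 0.5, 200)

model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100, random_state=0)
model.fit(coords, depth)

# Predict at an unsampled location; use the spread over ensemble members as a crude prediction interval
query = np.array([[350.0, 420.0]])
per_tree = np.array([t.predict(query)[0] for t in model.estimators_])
mean, std = per_tree.mean(), per_tree.std()
print(f"predicted depth: {mean:.2f} m, ~95% interval: [{mean - 2*std:.2f}, {mean + 2*std:.2f}] m")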
7

Coletti, Mark. "An analysis of a model-based evolutionary algorithm: Learnable Evolution Model." Thesis, George Mason University, 2014. http://pqdtopen.proquest.com/#viewpdf?dispub=3625081.

Full text
Abstract:
An evolutionary algorithm (EA) is a biologically inspired metaheuristic that uses mutation, crossover, reproduction, and selection operators to evolve solutions for a given problem. Learnable Evolution Model (LEM) is an EA that has an evolutionary algorithm component that works in tandem with a machine learner to collaboratively create populations of individuals. The machine learner infers rules from best and least fit individuals, and then this knowledge is exploited to improve the quality of offspring. Unfortunately, most of the extant work on LEM has been ad hoc, and so there does not exist a deep understanding of how LEM works. And this lack of understanding, in turn, means that there is no set of best practices for implementing LEM. For example, most LEM implementations use rules that describe value ranges corresponding to areas of higher fitness in which offspring should be created. However, we do not know the efficacy of different approaches for sampling those intervals. Also, we do not have sufficient guidance for assembling training sets of positive and negative examples from populations from which the ML component can learn. This research addresses those open issues by exploring three different rule interval sampling approaches as well as three different training set configurations on a number of test problems that are representative of the types of problems that practitioners may encounter. Using the machine learner to create offspring induces a unique emergent selection pressure separate from the selection pressure that manifests from parent and survivor selection; an outcome of this research is a partially ordered set of the impact that these rule interval sampling approaches and training set configurations have on this selection pressure that practitioners can use for implementation guidance. That is, a practitioner can modulate selection pressure by traversing a set of design configurations within a Hasse graph defined by partially ordered selection pressure.
APA, Harvard, Vancouver, ISO, and other styles
8

Chang, Spencer J. "Brain Tumor Classification Using Hit-or-Miss Capsule Layers." DigitalCommons@CalPoly, 2019. https://digitalcommons.calpoly.edu/theses/2006.

Full text
Abstract:
The job of classifying or annotating brain tumors from MRI images can be time-consuming and difficult, even for radiologists. To increase the survival chances of a patient, medical practitioners desire a means for quick and accurate diagnosis. While datasets like CIFAR, ImageNet, and SVHN have tens of thousands, hundreds of thousands, or millions of samples, an MRI dataset may not have the same luxury of receiving accurate labels for each image containing a tumor. This work covers three models that classify brain tumors using a combination of convolutional neural networks and the concept of capsule layers. Each network utilizes a hit-or-miss capsule layer to relate classes to capsule vectors in a one-to-one relationship. Additionally, this work proposes the use of deep active learning for picking the samples that give the best model, PSP-HitNet, the most information when adding mini-batches of unlabeled data into the master, labeled training dataset. By using an uncertainty-based querying strategy, PSP-HitNet approaches the best validation accuracy possible within the first 12-24% of added data from the unlabeled dataset, whereas random choosing takes until 30-50% of the unlabeled data has been added to reach the same performance.
APA, Harvard, Vancouver, ISO, and other styles
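The uncertainty-based querying idea described above can be sketched generically: rank unlabeled samples by predictive entropy and send the most uncertain mini-batch for labeling. The classifier and arrays below are hypothetical stand-ins (a logistic regression on random data), not the thesis's PSP-HitNet model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 16))
y_labeled = rng.integers(0, 4, 100)
X_pool = rng.normal(size=(1000, 16))        # unlabeled pool

clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)

# Predictive entropy as an uncertainty estimate for each unlabeled sample
proba = clf.predict_proba(X_pool)
entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)

# Query the 32 most uncertain samples for labeling next
query_idx = np.argsort(entropy)[-32:]
print("indices to label next:", query_idx[:10], "...")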
9

Golesorkhi, Mehrshad. "The Brain's Intrinsic Spatiotemporal Structure and Its Potential Application in Artificial Intelligence." Thesis, Université d'Ottawa / University of Ottawa, 2021. http://hdl.handle.net/10393/42211.

Full text
Abstract:
Neuroscience focuses largely on how the brain mediates perception and cognition. However, this leaves open the basic organization and hierarchies of the brain's neural activity by itself, prior to and independent of its role in cognition. A recent model characterizes the brain's intrinsic features in temporo-spatial dynamical (rather than cognitive) terms – the brain's spatiotemporal hierarchies shape what is called the 'brain's intrinsicality'. The brain's intrinsicality may provide potential applications in designing artificial intelligence (AI). In this dissertation, I explore 'intrinsic neural timescales' and their spatial topography as one main building block of the brain's intrinsicality. First, I present an empirical investigation of temporal hierarchy and information flux as two basic facets of the brain's intrinsicality using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) data. That is complemented by introducing the notion of intrinsicality through intrinsic neural timescales and how they shape input processing in the brain. Then, I propose a model for input processing through intrinsic neural timescales and provide some notes on how that model can be implemented in an artificial agent. I conclude that the spatiotemporal dynamics of the brain's intrinsicality provide potential key insights for Artificial General Intelligence (AGI).
APA, Harvard, Vancouver, ISO, and other styles
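One common way to estimate an 'intrinsic neural timescale' from a regional time series is the area under the autocorrelation function up to its first zero crossing; the sketch below applies that generic estimator to a synthetic AR(1) signal and is not necessarily the estimator used in this dissertation.

import numpy as np

def intrinsic_timescale(signal, dt=1.0):
    """Estimate a timescale as the area under the positive-lag autocorrelation up to its first zero crossing."""
    x = signal - signal.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf / acf[0]
    first_neg = np.argmax(acf < 0) if np.any(acf < 0) else len(acf)
    return acf[:first_neg].sum() * dt

rng = np.random.default_rng(0)
# AR(1) surrogate signal: a higher phi gives slower dynamics and hence a longer timescale
phi, x = 0.9, [0.0]
for _ in range(2000):
    x.append(phi * x[-1] + rng.normal())
print("estimated timescale (samples):", round(intrinsic_timescale(np.array(x)), 2))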
10

Tung, Pang-fei (董鵬飛). "IntelliMap: a new GIS model with intelligence." Thesis, The University of Hong Kong (Pokfulam, Hong Kong), 1999. http://hub.hku.hk/bib/B31221804.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Books on the topic "Brain model: artificial intelligence"

1

Haken, H. Brain dynamics. 2nd ed. Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Kitamura, Tadashi, ed. What should be computed to understand and model brain function?: From robotics, soft computing, biology and neuroscience to cognitive philosophy. 2001.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Beim Graben, P., ed. Lectures in supercomputational neuroscience: Dynamics in complex brain networks. Springer, 2008.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Joshi, Rajiv, Eduard Alarcon, Arvind Kumar, and Matt Ziegler. From Artificial Intelligence to Brain Intelligence. River Publishers, 2022. http://dx.doi.org/10.1201/9781003338215.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Edelkamp, Stefan, and Alessio Lomuscio, eds. Model Checking and Artificial Intelligence. Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74128-2.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Peled, Doron A., and Michael J. Wooldridge, eds. Model Checking and Artificial Intelligence. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-00431-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

van der Meyden, Ron, and Jan-Georg Smaus, eds. Model Checking and Artificial Intelligence. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-20674-0.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Yueming, ed. Human Brain and Artificial Intelligence. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1288-6.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

De Gregorio, Massimo, Vito Di Maio, Maria Frucci, and Carlo Musio, eds. Brain, Vision, and Artificial Intelligence. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11565123.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Zeng, An, Dan Pan, Tianyong Hao, Daoqiang Zhang, Yiyu Shi, and Xiaowei Song, eds. Human Brain and Artificial Intelligence. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1398-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
More sources

Book chapters on the topic "Brain model: artificial intelligence"

1

Shi, Zhongzhi, and Zeqin Huang. "Cognitive Model of Brain-Machine Integration." In Artificial General Intelligence. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-27005-6_17.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lőrincz, András. "Learning the States: A Brain Inspired Neural Model." In Artificial General Intelligence. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-22887-2_36.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wang, Zefan, Kuiyu Wang, and Xiaolin Hu. "Accelerating Allen Brain Institute’s Large-Scale Computational Model of Mice Primary Visual Cortex." In Artificial Intelligence. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-20503-3_57.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Rahgooy, Taher, and K. Brent Venable. "Learning Preferences in a Cognitive Decision Model." In Human Brain and Artificial Intelligence. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1398-5_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Krauss, Patrick. "AI as a Model for the Brain." In Artificial Intelligence and Brain Research. Springer Berlin Heidelberg, 2024. http://dx.doi.org/10.1007/978-3-662-68980-6_21.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Domenella, Rosaria Grazia, and Alessio Plebe. "A Neural Model of Human Object Recognition Development." In Brain, Vision, and Artificial Intelligence. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11565123_12.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Lin, Baihan. "Neural Networks as Model Selection with Incremental MDL Normalization." In Human Brain and Artificial Intelligence. Springer Singapore, 2019. http://dx.doi.org/10.1007/978-981-15-1398-5_14.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Aznar, F., M. Sempere, M. Pujol, and R. Rizo. "A Cognitive Model for Autonomous Agents Based on Bayesian Programming." In Brain, Vision, and Artificial Intelligence. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11565123_27.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Frydrych, M., L. Lensu, S. Parkkinen, J. Parkkinen, and T. Jaaskelainen. "Photoelectric Response of Bacteriorhodopsin in Thin PVA Films and Its Model." In Brain, Vision, and Artificial Intelligence. Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11565123_13.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Gao, Daiheng, Zhenzhi Wu, Yujie Wu, Guoqi Li, and Jing Pei. "ARLIF: A Flexible and Efficient Recurrent Neuronal Model for Sequential Tasks." In Human Brain and Artificial Intelligence. Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-1288-6_1.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Brain model: artificial intelligence"

1

Yadav, Vishakha, Sushil Kumar Saroj, and Rohit Kumar Tiwari. "GhostNet Model Based Brain Tumor Classification." In 2025 3rd International Conference on Communication, Security, and Artificial Intelligence (ICCSAI). IEEE, 2025. https://doi.org/10.1109/iccsai64074.2025.11064547.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lafta, Alaa M., Waleed Hameed, Angham Khalid Hussain, et al. "Brain Inspired Cognitive Architecture of Hierarchical Distributed Model Based on Artificial Intelligence." In 2024 International Conference on Smart Systems for Electrical, Electronics, Communication and Computer Engineering (ICSSEECC). IEEE, 2024. http://dx.doi.org/10.1109/icsseecc61126.2024.10649526.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Daher, Muhammet Kusey, and Abdullah Elewi. "Deep CNN Model for Classifying Neurological Diseases using MRI Brain Images." In 2024 8th International Artificial Intelligence and Data Processing Symposium (IDAP). IEEE, 2024. http://dx.doi.org/10.1109/idap64064.2024.10710896.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hong, Chenyi, Hualiang Wang, Zhuoxuan Wu, Zuozhu Liu, and Junhui Lv. "FoTNet: An Effective Deep Learning Model for Preoperative Differentiation of Common Malignant Brain Tumors." In 2024 IEEE International Conference on Medical Artificial Intelligence (MedAI). IEEE, 2024. https://doi.org/10.1109/medai62885.2024.00073.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Abiodun, Moses Kazeem, Abidemi Emmanuel Adeniyi, Joseph Bamidele Awotunde, et al. "Brain Tumor Detection and Segmentation Using Deep Learning Models." In 2024 6th World Symposium on Artificial Intelligence (WSAI). IEEE, 2024. https://doi.org/10.1109/wsai62426.2024.10829366.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Yi, Wanlin, and Xia Shi. "An adaptive deep brain stimulation for tremor: based on a computational model." In 2024 5th International Conference on Computers and Artificial Intelligence Technology (CAIT). IEEE, 2024. https://doi.org/10.1109/cait64506.2024.10963311.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Miao, Zhengqing, Anja Meunier, Michal Robert Žák, and Moritz Grosse-Wentrup. "Exploring Artificial Neural Network Models for c-VEP Decoding in a Brain-Artificial Intelligence Interface." In 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). IEEE, 2024. https://doi.org/10.1109/bibm62325.2024.10821771.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Sharma, Amit, and Sunny Arora. "Hybrid Machine Learning Model with Textural Feature Analysis for Brain Tumour Detection." In 2024 International Conference on Artificial Intelligence and Emerging Technology (Global AI Summit). IEEE, 2024. https://doi.org/10.1109/globalaisummit62156.2024.10947920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

G, Saranya, Kumaran K, Vidhyalakshmi M, and Siva Priya M. S. "U-Net Model Based Classification on Brain Tumor in Magnetic Resonance Imaging (MRI) Multimodal." In 2024 4th International Conference on Artificial Intelligence and Signal Processing (AISP). IEEE, 2024. https://doi.org/10.1109/aisp61711.2024.10870818.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Xinya, Song, Duan Xingguang, Wang Xujia, Fang Fengxinyun, Tian Jiexi, and Li Changsheng. "CA-UNet: A Brain MRI Segmentation Model Based on U-Net with Attention Mechanism." In 2024 7th International Conference on Pattern Recognition and Artificial Intelligence (PRAI). IEEE, 2024. https://doi.org/10.1109/prai62207.2024.10827095.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Brain model: artificial intelligence"

1

Pasupuleti, Murali Krishna. Quantum Cognition: Modeling Decision-Making with Quantum Theory. National Education Services, 2025. https://doi.org/10.62311/nesx/rrvi225.

Full text
Abstract:
Abstract Quantum cognition applies quantum probability theory and mathematical principles from quantum mechanics to model human decision-making, reasoning, and cognitive processes beyond the constraints of classical probability models. Traditional decision theories, such as expected utility theory and Bayesian inference, struggle to explain context-dependent reasoning, preference reversals, order effects, and cognitive biases observed in human behavior. By incorporating superposition, interference, and entanglement, quantum cognitive models offer a probabilistic framework that better accounts for uncertainty, ambiguity, and adaptive decision-making in complex environments. This research explores the foundations of quantum cognition, its empirical validation in behavioral experiments and neuroscience, and its applications in artificial intelligence (AI), behavioral economics, and decision sciences. Additionally, it examines how quantum-inspired AI models enhance predictive analytics, machine learning algorithms, and human-computer interaction. The study also addresses challenges related to mathematical complexity, cognitive interpretation, and the potential link between quantum mechanics and brain function, providing a comprehensive framework for the integration of quantum cognition into decision science and AI-driven cognitive computing. Keywords Quantum cognition, quantum probability, decision-making models, cognitive science, superposition in cognition, interference effects, entanglement in decision-making, probabilistic reasoning, preference reversals, cognitive biases, order effects, quantum-inspired AI, behavioral economics, neural quantum theory, artificial intelligence, cognitive neuroscience, human-computer interaction, quantum probability in psychology, quantum decision theory, uncertainty modeling, predictive analytics, quantum computing in cognition.
APA, Harvard, Vancouver, ISO, and other styles
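As a generic, hedged illustration of the interference effect that quantum-cognition models use to explain the order effects and violations of classical total probability mentioned in the abstract above (this formulation is standard in the quantum cognition literature, not quoted from the report): in classical probability the law of total probability gives P(A) = P(B)P(A|B) + P(\bar{B})P(A|\bar{B}), whereas a quantum model adds an interference term,

P(A) = P(B)\,P(A \mid B) + P(\bar{B})\,P(A \mid \bar{B}) + 2\sqrt{P(B)\,P(A \mid B)\,P(\bar{B})\,P(A \mid \bar{B})}\,\cos\theta,

where \theta is the phase between the two reasoning paths; a nonzero \cos\theta reproduces context-dependent and order-dependent judgments that classical models cannot.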
2

Strauss, K. D. The artificial intelligence model output analyzer. Office of Scientific and Technical Information (OSTI), 1994. http://dx.doi.org/10.2172/10129459.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Hannas, William, Huey-Meei Chang, Daniel Chou, and Brian Fleeger. China's Advanced AI Research: Monitoring China's Paths to "General" Artificial Intelligence. Center for Security and Emerging Technology, 2022. http://dx.doi.org/10.51593/20210064.

Full text
Abstract:
China is following a national strategy to lead the world in artificial intelligence by 2030, including by pursuing “general AI” that can act autonomously in novel circumstances. Open-source research identifies 30 Chinese institutions engaged in one or more of this project‘s aspects, including machine learning, brain-inspired AI, and brain-computer interfaces. This report previews a CSET pilot program that will track China’s progress and provide timely alerts.
APA, Harvard, Vancouver, ISO, and other styles
4

Gillespie, Nicole, Caitlin Curtis, Rossana Bianchi, Ali Akbari, and Rita Fentener van Vlissingen. Achieving Trustworthy AI: A Model for Trustworthy Artificial Intelligence. The University of Queensland and KPMG, 2020. http://dx.doi.org/10.14264/ca0819d.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Agrawal, Ajay, John McHale, and Alexander Oettl. Artificial Intelligence and Scientific Discovery: A Model of Prioritized Search. National Bureau of Economic Research, 2023. http://dx.doi.org/10.3386/w31558.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Melnyk, Yuriy, and Iryna Pypenko. Artificial intelligence as a factor revolutionizing higher education. KRPOCH, 2024. https://doi.org/10.26697/krpoch.melnyk.pypenko.report.pppmsf.2024.

Full text
Abstract:
The role of artificial intelligence-based chatbots in higher education practice was considered. The use of chatbots among higher education stakeholders (students and faculty) was studied. A model of stakeholder behaviour was developed. This model describes two ways of solving problems: with and without the use of artificial intelligence. Trends in the use of chatbots in higher education were identified: students were 26.9% more likely than faculty to use artificial intelligence-based chatbots to prepare for classes or complete assignments at their college/university; almost all students (68.0% of 68.3% who use chatbots) edited the results returned by generative chatbots at their request; students were 30.1% more likely than faculty to edit these results.
APA, Harvard, Vancouver, ISO, and other styles
7

Mazari, Mehran, Yahaira Nava-Gonzalez, Ly Jacky Nhiayi, and Mohamad Saleh. Smart Highway Construction Site Monitoring Using Artificial Intelligence. Mineta Transportation Institute, 2025. https://doi.org/10.31979/mti.2025.2336.

Full text
Abstract:
Construction is a large sector of the economy and plays a significant role in creating economic growth and national development, and construction of transportation infrastructure is critical. This project developed a method to detect, classify, monitor, and track objects during the construction, maintenance, and rehabilitation of transportation infrastructure by using artificial intelligence and a deep learning approach. This study evaluated AI and deep learning algorithms to compare their performance in detecting and classifying the equipment in various construction scenes. Our goal was to find the optimized balance between the model capabilities in object detection and memory processing requirements. Due to the lack of a comprehensive image database specifically developed for transportation infrastructure construction projects, the first portion of this study focused on preparing a comprehensive database of annotated images for various classes of equipment and machinery that are commonly used in roadway construction and rehabilitation projects. The second part of the project focused on training the deep learning models and improving the accuracy of the classification and detection algorithms. The outcomes of the trained and improved deep learning classification model were promising in terms of the precision and accuracy of the model in detecting specific objects at a highway construction site. It should be noted that the scope of this project was limited to the image and video data recorded from the ground level and cannot be extended to Uncrewed Aerial System (UAS) data. This study provides valuable insights on the potential of AI and deep learning to improve the monitoring and thus safety and efficiency of transportation infrastructure construction.
APA, Harvard, Vancouver, ISO, and other styles
8

André, Christophe, Manuel Bétin, Peter Gal, and Paul Peltier. Developments in Artificial Intelligence markets: New indicators based on model characteristics, prices and providers. Organisation for Economic Co-Operation and Development (OECD), 2025. https://doi.org/10.1787/9302bf46-en.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Pasupuleti, Murali Krishna. Neural Computation and Learning Theory: Expressivity, Dynamics, and Biologically Inspired AI. National Education Services, 2025. https://doi.org/10.62311/nesx/rriv425.

Full text
Abstract:
Abstract: Neural computation and learning theory provide the foundational principles for understanding how artificial and biological neural networks encode, process, and learn from data. This research explores expressivity, computational dynamics, and biologically inspired AI, focusing on theoretical expressivity limits, infinite-width neural networks, recurrent and spiking neural networks, attractor models, and synaptic plasticity. The study investigates mathematical models of function approximation, kernel methods, dynamical systems, and stability properties to assess the generalization capabilities of deep learning architectures. Additionally, it explores biologically plausible learning mechanisms such as Hebbian learning, spike-timing-dependent plasticity (STDP), and neuromodulation, drawing insights from neuroscience and cognitive computing. The role of spiking neural networks (SNNs) and neuromorphic computing in low-power AI and real-time decision-making is also analyzed, with applications in robotics, brain-computer interfaces, edge AI, and cognitive computing. Case studies highlight the industrial adoption of biologically inspired AI, focusing on adaptive neural controllers, neuromorphic vision, and memory-based architectures. This research underscores the importance of integrating theoretical learning principles with biologically motivated AI models to develop more interpretable, generalizable, and scalable intelligent systems. Keywords Neural computation, learning theory, expressivity, deep learning, recurrent neural networks, spiking neural networks, biologically inspired AI, infinite-width networks, kernel methods, attractor networks, synaptic plasticity, STDP, neuromodulation, cognitive computing, dynamical systems, function approximation, generalization, AI stability, neuromorphic computing, robotics, brain-computer interfaces, edge AI, biologically plausible learning.
APA, Harvard, Vancouver, ISO, and other styles
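The biologically plausible learning rules mentioned in the abstract above, such as spike-timing-dependent plasticity (STDP), can be illustrated with a generic pair-based weight update; the exponential window and parameter values below are common textbook choices, not taken from the report.

import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0, w_max=1.0):
    """Pair-based STDP: dt = t_post - t_pre in ms. Pre-before-post potentiates, post-before-pre depresses."""
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)
    else:
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, w_max))

w = 0.5
for dt in [5.0, 15.0, -5.0, -20.0]:         # spike-time differences in milliseconds
    w = stdp_update(w, dt)
    print(f"dt={dt:+.0f} ms -> w={w:.4f}")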
10

Sylvester, Prem, Wendy Hui Kyong Chun, Anikó Hannák, et al. Global Approaches to Auditing Artificial Intelligence: A Literature Review. Edited by Danaë Metaxa, Jack Bandy, Luis Adrián Castro-Quiroa, et al. International Panel on the Information Environment (IPIE), 2024. https://doi.org/10.61452/bwym7397.

Full text
Abstract:
This Synthesis Report is a literature review outlining the regulatory, industry, and academic approaches to AI audits. We review 78 articles published in peer-reviewed journals and as preprints, 21 documents from industry associations and standard-setting organizations, and national policy documents and regulations from 20 countries. Based on this review, we identify three key takeaways about the landscape of AI auditing: 1. To accurately assess the potential risks, and impacts of AI systems, we need a trustworthy audit ecosystem with complementary approaches from internal, external, and community auditors. 2. Auditors need better access to data and audit artifacts from the developers and deployers of AI systems. Comprehensive auditability requires documentation and disclosure of an AI system’s model and data components, associated risks and impacts, and easily understandable explanations of its outcomes. 3. Given that the development and use of AI systems impacts communities across the world, audit regimes must account for their global effects. Most existing audits have been conducted in North America, Europe, and other regions of the ‘global north’, with their results typically published in English and focused on effects within these regions. The impacts of AI systems, however, include shifts in social and environmental conditions beyond the immediate development or application contexts of a system.
APA, Harvard, Vancouver, ISO, and other styles