A selection of scientific literature on the topic "Incremental neural network"

Format your source in APA, MLA, Chicago, Harvard, and other citation styles


Browse the lists of current articles, books, dissertations, conference abstracts, and other scholarly sources on the topic "Incremental neural network".

Next to every work in the bibliography you will find an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and so on.

You can also download the full text of a scholarly publication in .pdf format and read its abstract online, when these are available in the metadata.

Journal articles on the topic "Incremental neural network"

1

Yang, Shuyuan, Min Wang, and Licheng Jiao. "Incremental constructive ridgelet neural network." Neurocomputing 72, no. 1-3 (2008): 367–77. http://dx.doi.org/10.1016/j.neucom.2008.01.001.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Siddiqui, Zahid Ali, and Unsang Park. "Progressive Convolutional Neural Network for Incremental Learning." Electronics 10, no. 16 (2021): 1879. http://dx.doi.org/10.3390/electronics10161879.

Full text of the source
Abstract:
In this paper, we present a novel incremental learning technique to solve the catastrophic forgetting problem observed in CNN architectures. We used a progressive deep neural network to incrementally learn new classes while keeping the performance of the network unchanged on old classes. The incremental training requires us to train the network only for new classes and fine-tune the final fully connected layer, without needing to train the entire network again, which significantly reduces the training time. We evaluate the proposed architecture extensively on the image classification task using […]
Styles: APA, Harvard, Vancouver, ISO, etc.
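
The mechanism sketched in this abstract, training only the parameters tied to the new classes and fine-tuning the final fully connected layer, can be illustrated roughly as follows. This is a minimal PyTorch sketch with assumed layer sizes and class counts, not the authors' implementation:

```python
import torch
import torch.nn as nn

def extend_classifier(old_fc: nn.Linear, num_new: int) -> nn.Linear:
    """Build a wider final layer for extra classes, copying the old
    class weights so previously learned classes are preserved."""
    old_out, in_feats = old_fc.out_features, old_fc.in_features
    new_fc = nn.Linear(in_feats, old_out + num_new)
    with torch.no_grad():
        new_fc.weight[:old_out] = old_fc.weight
        new_fc.bias[:old_out] = old_fc.bias
    return new_fc

# Freeze the shared feature extractor; only the extended head is
# trained on the new classes, which keeps each incremental step cheap.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(16, 10)                      # previously trained, 10 classes
for p in backbone.parameters():
    p.requires_grad = False
head = extend_classifier(head, num_new=5)     # now 15 classes
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)
logits = head(backbone(torch.randn(2, 3, 32, 32)))   # sanity check: (2, 15)
```
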
3

Ho, Jiacang, and Dae-Ki Kang. "Brick Assembly Networks: An Effective Network for Incremental Learning Problems." Electronics 9, no. 11 (2020): 1929. http://dx.doi.org/10.3390/electronics9111929.

Full text of the source
Abstract:
Deep neural networks have achieved high performance in image classification, image generation, voice recognition, natural language processing, etc.; however, they still confront several open challenges that need to be solved, such as the incremental learning problem, overfitting, hyperparameter optimization, and a lack of flexibility and multitasking. In this paper, we focus on the incremental learning problem, which concerns machine learning methodologies that continuously train an existing model with additional knowledge. To the best of our knowledge, a simple and […]
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Abramova, E. S., A. A. Orlov, and K. V. Makarov. "Possibilities of Using Neural Network Incremental Learning." Bulletin of the South Ural State University. Ser. Computer Technologies, Automatic Control & Radioelectronics 21, no. 4 (2021): 19–27. http://dx.doi.org/10.14529/ctcr210402.

Full text of the source
Abstract:
The present time is characterized by unprecedented growth in the volume of information flows. Information processing underlies the solution of many practical problems. The range of applications for intelligent information systems is extremely extensive: from managing continuous technological processes in real time to solving commercial and administrative problems. The main property an intelligent information system should have is the ability to quickly process dynamically incoming data in real time. Intelligent information systems should also extract knowledge from previously solved problems […]
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Mellado, Diego, Carolina Saavedra, Steren Chabert, Romina Torres, and Rodrigo Salas. "Self-Improving Generative Artificial Neural Network for Pseudorehearsal Incremental Class Learning." Algorithms 12, no. 10 (2019): 206. http://dx.doi.org/10.3390/a12100206.

Full text of the source
Abstract:
Deep learning models are part of the family of artificial neural networks and, as such, suffer catastrophic interference when learning sequentially. In addition, most of these models have a rigid architecture that prevents the incremental learning of new classes. To overcome these drawbacks, we propose the Self-Improving Generative Artificial Neural Network (SIGANN), an end-to-end deep neural network system that can ease the catastrophic forgetting problem when learning new classes. In this method, we introduce a novel detection model that automatically detects samples of […]
Styles: APA, Harvard, Vancouver, ISO, etc.
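
The pseudorehearsal idea summarized above, replaying generator-produced samples of old classes alongside real new-class data, might look roughly like this. The generator interface (a `latent_dim` attribute) and the frozen `old_labeler` model are assumptions made for illustration, not SIGANN's actual components:

```python
import torch
import torch.nn.functional as F

def pseudorehearsal_step(classifier, generator, old_labeler,
                         new_x, new_y, optimizer, n_replay=32):
    """One incremental update: mix generator-made pseudo-samples of old
    classes with real new-class data, then train on the combined batch."""
    with torch.no_grad():
        z = torch.randn(n_replay, generator.latent_dim)   # assumed attribute
        fake_x = generator(z)                   # pseudo-samples of old classes
        fake_y = old_labeler(fake_x).argmax(1)  # labeled by the frozen old model
    x = torch.cat([new_x, fake_x])
    y = torch.cat([new_y, fake_y])
    loss = F.cross_entropy(classifier(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
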
6

Tomimori, Haruka, Kui-Ting Chen, and Takaaki Baba. "A Convolutional Neural Network with Incremental Learning." Journal of Signal Processing 21, no. 4 (2017): 155–58. http://dx.doi.org/10.2299/jsp.21.155.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Shiotani, Shigetoshi, Toshio Fukuda, and Takanori Shibata. "A neural network architecture for incremental learning." Neurocomputing 9, no. 2 (1995): 111–30. http://dx.doi.org/10.1016/0925-2312(94)00061-v.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Kim, Jonghong, WonHee Lee, Sungdae Baek, Jeong-Ho Hong, and Minho Lee. "Incremental Learning for Online Data Using QR Factorization on Convolutional Neural Networks." Sensors 23, no. 19 (2023): 8117. http://dx.doi.org/10.3390/s23198117.

Full text of the source
Abstract:
Catastrophic forgetting, a rapid forgetting of learned representations while learning new data/samples, is one of the main problems of deep neural networks. In this paper, we propose a novel incremental learning framework that addresses the forgetting problem by learning new incoming data in an online manner. We develop a new incremental learning framework that can learn extra data or new classes with less catastrophic forgetting. We adopt the hippocampal memory process for deep neural networks by defining the effective maximum of neural activation and its boundary to represent […]
Styles: APA, Harvard, Vancouver, ISO, etc.
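
The abstract does not spell out how QR factorization enters the update, so the following is only a generic sketch of the standard QR route to fitting a linear output head on frozen CNN features in closed form; the paper's actual procedure may differ substantially:

```python
import numpy as np

def fit_head_qr(features: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Least-squares output head W = argmin ||features @ W - targets||^2,
    solved through the QR factorization features = Q @ R."""
    Q, R = np.linalg.qr(features)          # R is upper triangular
    return np.linalg.solve(R, Q.T @ targets)

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))         # stand-in penultimate-layer features
onehot = np.eye(4)[rng.integers(0, 4, size=100)]
W = fit_head_qr(feats, onehot)
pred = (feats @ W).argmax(axis=1)          # class decisions for the same batch
```
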
9

Roy, Kaushik, Christian Simon, Peyman Moghadam, and Mehrtash Harandi. "CL3: Generalization of Contrastive Loss for Lifelong Learning." Journal of Imaging 9, no. 12 (2023): 259. http://dx.doi.org/10.3390/jimaging9120259.

Full text of the source
Abstract:
Lifelong learning portrays learning gradually in nonstationary environments and emulates the process of human learning, which is efficient, robust, and able to learn new concepts incrementally from sequential experience. To equip neural networks with such a capability, one needs to overcome the problem of catastrophic forgetting, the phenomenon of forgetting past knowledge while learning new concepts. In this work, we propose a novel knowledge distillation algorithm that makes use of contrastive learning to help a neural network preserve its past knowledge while learning from a series of tasks […]
Styles: APA, Harvard, Vancouver, ISO, etc.
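
A composite objective of the kind described, task loss plus distillation from the previous model plus a contrastive term, could be assembled as in the sketch below. The weighting scheme and the exact contrastive form are illustrative assumptions, not the CL3 loss itself:

```python
import torch
import torch.nn.functional as F

def lifelong_loss(student_logits, teacher_logits, labels,
                  embeddings, pos_pairs, tau=2.0, alpha=0.5, beta=0.1):
    """Task cross-entropy + distillation from the previous model
    + a cosine-based contrastive term over positive pairs."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                  F.softmax(teacher_logits / tau, dim=1),
                  reduction="batchmean") * tau * tau
    z = F.normalize(embeddings, dim=1)
    i, j = pos_pairs                        # index tensors of positive pairs
    contrastive = (1 - (z[i] * z[j]).sum(dim=1)).mean()
    return ce + alpha * kd + beta * contrastive
```
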
10

Zhang, Junhui, Hongying Zan, Shuning Wu, Kunli Zhang, and Jianwei Huo. "Adaptive Graph Neural Network with Incremental Learning Mechanism for Knowledge Graph Reasoning." Electronics 13, no. 14 (2024): 2778. http://dx.doi.org/10.3390/electronics13142778.

Full text of the source
Abstract:
Knowledge graphs are extensively utilized in diverse fields such as search engines, recommendation systems, and dialogue systems, and knowledge graph reasoning plays an important role in these domains. Graph neural networks can effectively capture and process the graph structure inherent in knowledge graphs, leveraging the relationships between nodes and edges to enable efficient reasoning. Current research on graph neural networks relies on predefined propagation paths; models based on predefined propagation paths overlook the correlation between entities […]
Styles: APA, Harvard, Vancouver, ISO, etc.
More sources

Dissertations and theses on the topic "Incremental neural network"

1

Lundberg, Emil. "Adding temporal plasticity to a self-organizing incremental neural network using temporal activity diffusion." Thesis, KTH, Skolan för datavetenskap och kommunikation (CSC), 2015. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-180346.

Full text of the source
Abstract:
Vector Quantization (VQ) is a classic optimization problem and a simple approach to pattern recognition. Applications include lossy data compression, clustering, and speech and speaker recognition. Although VQ has largely been replaced by time-aware techniques like Hidden Markov Models (HMMs) and Dynamic Time Warping (DTW) in some applications, such as speech and speaker recognition, VQ still retains some significance due to its much lower computational cost, especially for embedded systems. A recent study also demonstrates a multi-section VQ system which achieves performance rivaling that of […]
Styles: APA, Harvard, Vancouver, ISO, etc.
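
As background for the VQ setting this thesis builds on, a minimal codebook trainer and encoder (online, k-means-style competitive learning) looks like this; the thesis itself concerns a temporal extension of such self-organizing networks:

```python
import numpy as np

def nearest_codeword(codebook: np.ndarray, x: np.ndarray) -> int:
    """Classic VQ encoding: index of the closest codebook vector."""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=1)))

def train_codebook(data: np.ndarray, k: int, epochs: int = 10, lr: float = 0.1):
    """Tiny online competitive-learning trainer (k-means style)."""
    rng = np.random.default_rng(0)
    codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(epochs):
        for x in data:
            w = nearest_codeword(codebook, x)
            codebook[w] += lr * (x - codebook[w])  # move winner toward input
    return codebook

data = np.random.default_rng(1).normal(size=(200, 2))
codebook = train_codebook(data, k=8)
print(nearest_codeword(codebook, data[0]))
```
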
2

Flores, João Henrique Ferreira. "ARMA-CIGMN: An Incremental Gaussian Mixture Network for time series analysis and forecasting." Biblioteca Digital de Teses e Dissertações da UFRGS, 2015. http://hdl.handle.net/10183/116126.

Full text of the source
Abstract:
This work presents a new neural network model for time series analysis and forecasting, the ARMA-CIGMN (Autoregressive Moving Average Classical Incremental Gaussian Mixture Network) model, along with the results obtained with it. The model is based on modifications made to a reformulated version of the IGMN. The classical IGMN (CIGMN) is similar to the original version of the IGMN, but is based on a classical statistical approach, which is also presented in this work. The modifications to the IGMN algorithm were made to better adapt it to time series. The ARMA-CIGMN […]
Styles: APA, Harvard, Vancouver, ISO, etc.
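
A single-pass incremental Gaussian mixture in the IGMN spirit can be sketched as follows. The novelty threshold and the diagonal-covariance simplification are assumptions of this sketch, not the CIGMN's actual equations:

```python
import numpy as np

class IncrementalGaussianMixture:
    """Single-pass mixture in the spirit of the IGMN family: a component is
    created when no existing one explains the input; otherwise the nearest
    component is updated with incremental mean/variance formulas."""

    def __init__(self, dim, novelty=3.0, init_var=1.0):
        self.dim, self.novelty, self.init_var = dim, novelty, init_var
        self.means, self.vars, self.counts = [], [], []

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if self.means:
            # Diagonal Mahalanobis distance to each component.
            d = [np.sqrt((((x - m) ** 2) / v).sum())
                 for m, v in zip(self.means, self.vars)]
            j = int(np.argmin(d))
            if d[j] < self.novelty:            # update the winning component
                self.counts[j] += 1
                w = 1.0 / self.counts[j]
                delta = x - self.means[j]
                self.means[j] += w * delta
                self.vars[j] += w * (delta * (x - self.means[j]) - self.vars[j])
                return j
        self.means.append(x.copy())            # spawn a new component
        self.vars.append(np.full(self.dim, self.init_var))
        self.counts.append(1)
        return len(self.means) - 1

igm = IncrementalGaussianMixture(dim=2)
for point in np.random.default_rng(1).normal(size=(50, 2)):
    igm.update(point)
```
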
3

Rouzier, Sophie. "Réseaux neuronaux et modularité." Grenoble INPG, 1998. http://www.theses.fr/1998INPG0032.

Full text of the source
Abstract:
Faced with a complex problem, two types of strategy can be considered: the first consists of using different methods to solve the global problem; the second consists of making several methods, each specialized on a different sub-problem, cooperate. A neural architecture adopting either of these strategies exhibits modularity insofar as each module represents a method and the overall processing results from the cooperation of all the modules. One of the goals of these architectures is to improve performance […]
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Hocquet, Guillaume. "Class Incremental Continual Learning in Deep Neural Networks." Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPAST070.

Full text of the source
Abstract:
We consider the problem of continual learning in artificial neural networks when data are accessible for only one class at a time. To remedy the catastrophic forgetting that limits learning performance under these conditions, we propose an approach based on representing the data of each class by a normal distribution. The transformations associated with these representations are carried out by invertible networks, which can then be trained on the data of a single class. Each class is […]
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Buttar, Sarpreet Singh. "Applying Artificial Neural Networks to Reduce the Adaptation Space in Self-Adaptive Systems : an exploratory work." Thesis, Linnéuniversitetet, Institutionen för datavetenskap och medieteknik (DM), 2019. http://urn.kb.se/resolve?urn=urn:nbn:se:lnu:diva-87117.

Full text of the source
Abstract:
Self-adaptive systems have limited time to adjust their configurations whenever their adaptation goals, i.e., quality requirements, are violated due to some runtime uncertainties. Within the available time, they need to analyze their adaptation space, i.e., a set of configurations, to find the best adaptation option, i.e., configuration, that can achieve their adaptation goals. Existing formal analysis approaches find the best adaptation option by analyzing the entire adaptation space. However, exhaustive analysis requires time and resources and is therefore only efficient when the adaptation […]
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Ronco, Eric. "Incremental polynomial controller networks: two self-organising non-linear controllers." Thesis, 1997. http://hdl.handle.net/1905/181.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Monica, Riccardo. "Deep Incremental Learning for Object Recognition." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2016. http://amslaurea.unibo.it/12331/.

Full text of the source
Abstract:
In recent years, deep learning techniques have received great attention in the field of information technology. These techniques have proved particularly useful and effective in domains like natural language processing, speech recognition, and computer vision. In several real-world applications, deep learning approaches have improved the state of the art. In the field of machine learning, deep learning was a real revolution, and a number of effective techniques have been proposed for both supervised and unsupervised learning and for representation learning. This thesis focuses on deep learning for object […]
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Pinto, Rafael Coimbra. "Continuous reinforcement learning with incremental Gaussian mixture models." Biblioteca Digital de Teses e Dissertações da UFRGS, 2017. http://hdl.handle.net/10183/157591.

Full text of the source
Abstract:
The original contribution of this thesis is a new algorithm that integrates a highly sample-efficient function approximator with reinforcement learning in continuous state spaces. The complete research includes the development of an online, incremental algorithm capable of learning from a single pass over the data. This algorithm, called the Fast Incremental Gaussian Mixture Network (FIGMN), was employed as an efficient function approximator for the state space of continuous reinforcement learning tasks, which, combined with linear Q-learning, results in performance […]
Styles: APA, Harvard, Vancouver, ISO, etc.
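
The linear Q-learning half of the combination described above is standard and compact; a sketch follows, in which a plain feature vector stands in for the FIGMN-derived state representation (the approximator itself is the thesis's contribution and is not reproduced here):

```python
import numpy as np

def q_learning_step(w, phi_s, a, reward, phi_next, alpha=0.1, gamma=0.99):
    """Linear Q-learning update. `w` holds one weight vector per action;
    `phi_s` stands in for the learned state features (FIGMN in the thesis)."""
    q_next = max(w[b] @ phi_next for b in range(len(w)))
    td_error = reward + gamma * q_next - w[a] @ phi_s
    w[a] += alpha * td_error * phi_s           # move Q(s, a) toward the target
    return td_error
```
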
9

Pinto, Rafael Coimbra. "Online incremental one-shot learning of temporal sequences." Biblioteca Digital de Teses e Dissertações da UFRGS, 2011. http://hdl.handle.net/10183/49063.

Full text of the source
Abstract:
This work introduces new neural network algorithms for the online processing of spatio-temporal patterns, extending the Incremental Gaussian Mixture Network (IGMN) algorithm. IGMN is an online, incremental neural network that learns from a single pass through the data by means of an incremental version of the Expectation-Maximization (EM) algorithm combined with Locally Weighted Regression (LWR). Four different approaches are used to give the IGMN algorithm temporal processing capabilities: delay lines (Time-Delay IGMN), […]
Styles: APA, Harvard, Vancouver, ISO, etc.
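
The simplest of the four temporal extensions mentioned, the tapped delay line behind Time-Delay IGMN, just turns the input stream into fixed-length windows of past values before they reach the network. A minimal version:

```python
import numpy as np

def delay_embed(series: np.ndarray, taps: int) -> np.ndarray:
    """Tapped delay line: each row holds `taps` consecutive past values."""
    return np.stack([series[i:i + taps] for i in range(len(series) - taps)])

x = np.sin(np.linspace(0, 10, 200))
windows = delay_embed(x, taps=5)   # rows can be fed to an IGMN-style model
```
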
10

Chalup, Stephan Konrad. "Incremental learning with neural networks, evolutionary computation and reinforcement learning algorithms." Thesis, Queensland University of Technology, 2001.

Find the full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
More sources

Books on the topic "Incremental neural network"

1

Mundy, Peter. A Neural Networks, Information-Processing Model of Joint Attention and Social-Cognitive Development. Edited by Philip David Zelazo. Oxford University Press, 2013. http://dx.doi.org/10.1093/oxfordhb/9780199958474.013.0010.

Full text of the source
Abstract:
A neural networks approach to the development of joint attention can inform the study of the nature of human social cognition, learning, and symbolic thought processes. Joint attention development involves increments in the capacity to engage in simultaneous or parallel processing of information about one's own attention and the attention of other people. Infant practice with joint attention is both a consequence and an organizer of a distributed and integrated brain network involving frontal and parietal cortical systems. In this chapter I discuss two hypotheses that stem from this model. One is […]
Styles: APA, Harvard, Vancouver, ISO, etc.

Book chapters on the topic "Incremental neural network"

1

Shen, Shaofeng, Qiang Gan, Furao Shen, Chaomin Luo, and Jinxi Zhao. "An Incremental Network with Local Experts Ensemble." In Neural Information Processing. Springer International Publishing, 2015. http://dx.doi.org/10.1007/978-3-319-26555-1_58.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Alpaydın, Ethem. "Grow-and-Learn: An Incremental Method for Category Learning." In International Neural Network Conference. Springer Netherlands, 1990. http://dx.doi.org/10.1007/978-94-009-0643-3_69.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Kakemoto, Yoshitsugu, and Shinichi Nakasuka. "Dynamics of Incremental Learning by VSF-Network." In Artificial Neural Networks – ICANN 2009. Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04274-4_71.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Rizzi, A., M. Biancavilla, and F. M. Frattale Mascioli. "Incremental Min-Max Network. Part 1: Continuous Spaces." In Perspectives in Neural Computing. Springer London, 1999. http://dx.doi.org/10.1007/978-1-4471-0811-5_43.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Shen, Furao, and Osamu Hasegawa. "An Incremental Neural Network for Non-stationary Unsupervised Learning." In Neural Information Processing. Springer Berlin Heidelberg, 2004. http://dx.doi.org/10.1007/978-3-540-30499-9_98.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Zhang, Tianyue, Baile Xu, and Furao Shen. "Fuzzy Self-Organizing Incremental Neural Network for Fuzzy Clustering." In Neural Information Processing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-70087-8_3.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Driff, Lydia Nahla, and Habiba Drias. "Artificial Neural Network for Incremental Data Mining." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-56535-4_14.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Shen, Furao, and Osamu Hasegawa. "Self-Organizing Incremental Neural Network and Its Application." In Artificial Neural Networks – ICANN 2010. Springer Berlin Heidelberg, 2010. http://dx.doi.org/10.1007/978-3-642-15825-4_74.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Wang, Xiaoyu, Lucian Gheorghe, and Jun-ichi Imura. "A Gaussian Process-Based Incremental Neural Network for Online Regression." In Neural Information Processing. Springer International Publishing, 2020. http://dx.doi.org/10.1007/978-3-030-63836-8_13.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Alfarozi, Syukron Abu Ishaq, Noor Akhmad Setiawan, Teguh Bharata Adji, Kuntpong Woraratpanya, Kitsuchart Pasupa, and Masanori Sugimoto. "Analytical Incremental Learning: Fast Constructive Learning Method for Neural Network." In Neural Information Processing. Springer International Publishing, 2016. http://dx.doi.org/10.1007/978-3-319-46672-9_30.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Conference papers on the topic "Incremental neural network"

1

Khansama, Rasmi Ranjan, Rojalina Priyadarshini, and Surendra Kumar Nanda. "Predicting FOREX trend using incremental spiking neural network." In 2025 International Conference on Emerging Systems and Intelligent Computing (ESIC). IEEE, 2025. https://doi.org/10.1109/esic64052.2025.10962680.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Zhu, Kehan, Fuyi Hu, Yuanbin Ding, Yunyun Dong, and Ruxin Wang. "Incremental Soft Pruning to Get the Sparse Neural Network During Training." In 2024 International Joint Conference on Neural Networks (IJCNN). IEEE, 2024. http://dx.doi.org/10.1109/ijcnn60899.2024.10650747.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Wang, Shuting, Jinsha Li, and Junmin Li. "Learning Consensus for Multi-Agent Systems through Incremental Adaptive Neural Network Mechanism." In 2025 IEEE 14th Data Driven Control and Learning Systems (DDCLS). IEEE, 2025. https://doi.org/10.1109/ddcls66240.2025.11064984.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
4

Kalakan, Kongkan, and Chidchanok Lursinsap. "Stream Image Classification Using Class-wise Incremental Learning and Pre-trained Convolution Neural Network." In 2024 28th International Computer Science and Engineering Conference (ICSEC). IEEE, 2024. https://doi.org/10.1109/icsec62781.2024.10770709.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
5

Kou, Jialiang, Shengwu Xiong, Shuzhen Wan, and Hongbing Liu. "The Incremental Probabilistic Neural Network." In 2010 Sixth International Conference on Natural Computation (ICNC). IEEE, 2010. http://dx.doi.org/10.1109/icnc.2010.5583589.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
6

Mi, Fei, and Boi Faltings. "Memory Augmented Neural Model for Incremental Session-based Recommendation." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence (IJCAI-PRICAI-20). International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/300.

Full text of the source
Abstract:
Increasing concerns with privacy have stimulated interest in session-based recommendation (SR), which uses no personal data other than what is observed in the current browser session. Existing methods are evaluated in static settings, which rarely occur in real-world applications. To better address the dynamic nature of SR tasks, we study an incremental SR scenario, where new items and preferences appear continuously. We show that existing neural recommenders can be used in incremental SR scenarios with small incremental updates to alleviate computation overhead and catastrophic forgetting. More importantly, […]
Styles: APA, Harvard, Vancouver, ISO, etc.
7

Wang, Jenq-Haur, and Hsin-Yang Wang. "Incremental Neural Network Construction for Text Classification." In 2014 International Symposium on Computer, Consumer and Control (IS3C). IEEE, 2014. http://dx.doi.org/10.1109/is3c.2014.254.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
8

Okada, Shogo, and Toyoaki Nishida. "Incremental clustering of gesture patterns based on a self organizing incremental neural network." In 2009 International Joint Conference on Neural Networks (IJCNN 2009 - Atlanta). IEEE, 2009. http://dx.doi.org/10.1109/ijcnn.2009.5178845.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
9

Huang, Shin-Ying, Fang Yu, Rua-Huan Tsaih, and Yennun Huang. "Network-traffic anomaly detection with incremental majority learning." In 2015 International Joint Conference on Neural Networks (IJCNN). IEEE, 2015. http://dx.doi.org/10.1109/ijcnn.2015.7280573.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.
10

Lu, Jie, Furao Shen, and Jinxi Zhao. "Using self-organizing incremental neural network (SOINN) for radial basis function networks." In 2014 International Joint Conference on Neural Networks (IJCNN). IEEE, 2014. http://dx.doi.org/10.1109/ijcnn.2014.6889649.

Full text of the source
Styles: APA, Harvard, Vancouver, ISO, etc.

Reports of organizations on the topic "Incremental neural network"

1

Engel, Bernard, Yael Edan, James Simon, Hanoch Pasternak, and Shimon Edelman. Neural Networks for Quality Sorting of Agricultural Produce. United States Department of Agriculture, 1996. http://dx.doi.org/10.32747/1996.7613033.bard.

Full text of the source
Abstract:
The objectives of this project were to develop procedures and models, based on neural networks, for quality sorting of agricultural produce. Two research teams, one at Purdue University and the other in Israel, coordinated their research efforts on different aspects of each objective, utilizing both melons and tomatoes as case studies. At Purdue, an expert system was developed to measure variances in human grading. Data were acquired from eight sensors: vision, two firmness sensors (destructive and nondestructive), chlorophyll from fluorescence, a color sensor, an electronic sniffer for odor detection […]
Styles: APA, Harvard, Vancouver, ISO, etc.
2

Ramakrishnan, Aravind, Fangyu Liu, Angeli Jayme, and Imad Al-Qadi. Prediction of Pavement Damage under Truck Platoons Utilizing a Combined Finite Element and Artificial Intelligence Model. Illinois Center for Transportation, 2024. https://doi.org/10.36501/0197-9191/24-030.

Full text of the source
Abstract:
For robust pavement design, accurate damage computation is essential, especially for loading scenarios such as truck platoons. Studies have developed a framework to compute pavement distresses as a function of the lateral position, spacing, and market-penetration level of truck platoons. The established framework uses a robust 3D pavement model, along with the AASHTOWare Mechanistic–Empirical Pavement Design Guidelines (MEPDG) transfer functions, to compute pavement distresses. However, transfer functions involve high variability and lack physical significance. Therefore, as an improvement to […]
Styles: APA, Harvard, Vancouver, ISO, etc.
3

Bailey Bond, Robert, Pu Ren, James Fong, Hao Sun, and Jerome F. Hajjar. Physics-informed Machine Learning Framework for Seismic Fragility Analysis of Steel Structures. Northeastern University, 2024. http://dx.doi.org/10.17760/d20680141.

Full text of the source
Abstract:
The seismic assessment of structures is a critical step toward increasing community resilience under earthquake hazards. This research aims to develop a Physics-reinforced Machine Learning (PrML) paradigm for metamodeling of nonlinear structures under seismic hazards using artificial intelligence. Structural metamodeling, a reduced-fidelity surrogate for a more complex structural model, enables more efficient performance-based design and analysis, optimizing structural designs and easing the computational effort of reliability and fragility analysis, leading to globally efficient designs while maintaining […]
Styles: APA, Harvard, Vancouver, ISO, etc.