Academic literature on the topic 'Regularization by architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Regularization by architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Regularization by architecture"

1

Salehin, Imrus, and Dae-Ki Kang. "A Review on Dropout Regularization Approaches for Deep Neural Networks within the Scholarly Domain." Electronics 12, no. 14 (2023): 3106. http://dx.doi.org/10.3390/electronics12143106.

Full text
Abstract:
Dropout is one of the most popular regularization methods in the scholarly domain for preventing a neural network model from overfitting in the training phase. Developing an effective dropout regularization technique that complies with the model architecture is crucial in deep learning-related tasks because various neural network architectures have been proposed, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs), and they have exhibited reasonable performance in their specialized areas. In this paper, we provide a comprehensive and novel review of the state-of
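To make the dropout mechanism this review surveys concrete, here is a minimal sketch of inverted dropout in plain NumPy (the layer shape and drop rate are arbitrary choices of mine, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, rate=0.5, training=True):
    """Inverted dropout: zero each activation with probability `rate`
    during training and rescale the survivors by 1/(1 - rate), so the
    expected activation matches the inference-time (identity) behavior."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

activations = np.ones((4, 8))
dropped = dropout(activations, rate=0.25)         # entries are 0 or 1/0.75
inference = dropout(activations, training=False)  # identity at test time
```

At test time the layer is a no-op, which is what makes inverted dropout convenient: no extra rescaling is needed at inference.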
2

Marin, Ivana, Ana Kuzmanic Skelin, and Tamara Grujic. "Empirical Evaluation of the Effect of Optimization and Regularization Techniques on the Generalization Performance of Deep Convolutional Neural Network." Applied Sciences 10, no. 21 (2020): 7817. http://dx.doi.org/10.3390/app10217817.

Full text
Abstract:
The main goal of any classification or regression task is to obtain a model that will generalize well on new, previously unseen data. Due to the recent rise of deep learning and many state-of-the-art results obtained with deep models, deep learning architectures have become one of the most used model architectures nowadays. To generalize well, a deep model needs to learn the training data well without overfitting. The latter implies a correlation of deep model optimization and regularization with generalization performance. In this work, we explore the effect of the used optimization algorithm
3

Kobayashi, Haruo, Takashi Matsumoto, Tetsuya Yagi, and Takuji Shimmi. "Image processing regularization filters on layered architecture." Neural Networks 6, no. 3 (1993): 327–50. http://dx.doi.org/10.1016/0893-6080(93)90002-e.

Full text
4

Karageorgos, Konstantinos, Anastasios Dimou, Federico Alvarez, and Petros Daras. "Implicit and Explicit Regularization for Optical Flow Estimation." Sensors 20, no. 14 (2020): 3855. http://dx.doi.org/10.3390/s20143855.

Full text
Abstract:
In this paper, two novel and practical regularizing methods are proposed to improve existing neural network architectures for monocular optical flow estimation. The proposed methods aim to alleviate deficiencies of current methods, such as flow leakage across objects and motion consistency within rigid objects, by exploiting contextual information. More specifically, the first regularization method utilizes semantic information during the training process to explicitly regularize the produced optical flow field. The novelty of this method lies in the use of semantic segmentation masks to teach
5

Yulita, Winda, Uri Arta Ramadhani, Zunanik Mufidah, et al. "Improved human image density detection with comparison of YOLOv8 depth level architecture and drop-out implementation." Journal of Soft Computing Exploration 6, no. 1 (2025): 33–39. https://doi.org/10.52465/joscex.v6i1.556.

Full text
Abstract:
Energy inefficiency due to Air Conditioners (AC) running in empty rooms contributes to unnecessary energy consumption and increased CO₂ emissions. This study explores how different depth levels of the YOLOv8 architecture and dropout regularization can enhance human density detection for smarter AC control systems. By evaluating model accuracy through Mean Average Precision (mAP50-95), we provide quantitative insights into how these modifications improve detection performance. Our dataset consists of 1363 images taken in an office environment at ITERA under varying lighting conditions and differ
6

Glänzer, Lukas, Husam E. Masalkhi, Anjali A. Roeth, Thomas Schmitz-Rode, and Ioana Slabu. "Vessel Delineation Using U-Net: A Sparse Labeled Deep Learning Approach for Semantic Segmentation of Histological Images." Cancers 15, no. 15 (2023): 3773. http://dx.doi.org/10.3390/cancers15153773.

Full text
Abstract:
Semantic segmentation is an important imaging analysis method enabling the identification of tissue structures. Histological image segmentation is particularly challenging, having large structural information while providing only limited training data. Additionally, labeling these structures to generate training data is time consuming. Here, we demonstrate the feasibility of a semantic segmentation using U-Net with a novel sparse labeling technique. The basic U-Net architecture was extended by attention gates, residual and recurrent links, and dropout regularization. To overcome the high class
7

Zhang, Jiang, Liejun Wang, Yinfeng Yu, and Miaomiao Xu. "Nonlinear Regularization Decoding Method for Speech Recognition." Sensors 24, no. 12 (2024): 3846. http://dx.doi.org/10.3390/s24123846.

Full text
Abstract:
Existing end-to-end speech recognition methods typically employ hybrid decoders based on CTC and Transformer. However, the issue of error accumulation in these hybrid decoders hinders further improvements in accuracy. Additionally, most existing models are built upon Transformer architecture, which tends to be complex and unfriendly to small datasets. Hence, we propose a Nonlinear Regularization Decoding Method for Speech Recognition. Firstly, we introduce the nonlinear Transformer decoder, breaking away from traditional left-to-right or right-to-left decoding orders and enabling associations
8

Bhatt, Dulari, Chirag Patel, Hardik Talsania, et al. "CNN Variants for Computer Vision: History, Architecture, Application, Challenges and Future Scope." Electronics 10, no. 20 (2021): 2470. http://dx.doi.org/10.3390/electronics10202470.

Full text
Abstract:
Computer vision is becoming an increasingly trendy word in the area of image processing. With the emergence of computer vision applications, there is a significant demand to recognize objects automatically. Deep CNN (convolution neural network) has benefited the computer vision community by producing excellent results in video processing, object recognition, picture classification and segmentation, natural language processing, speech recognition, and many other fields. Furthermore, the introduction of large amounts of data and readily available hardware has opened new avenues for CNN study. Se
9

Bevza, M. "Tying of embeddings for improving regularization in neural networks for named entity recognition task." Bulletin of Taras Shevchenko National University of Kyiv. Series: Physics and Mathematics, no. 3 (2018): 59–64. http://dx.doi.org/10.17721/1812-5409.2018/3.8.

Full text
Abstract:
We analyze neural network architectures that yield state of the art results on named entity recognition task and propose a new architecture for improving results even further. We have analyzed a number of ideas and approaches that researchers have used to achieve state of the art results in a variety of NLP tasks. In this work, we present a few of them which we consider to be most likely to improve existing state of the art solutions for named entity recognition task. The architecture is inspired by recent developments in language modeling task. The suggested solution is based on a multi-task
10

Albahar, Marwan Ali, and Muhammad Binsawad. "Deep Autoencoders and Feedforward Networks Based on a New Regularization for Anomaly Detection." Security and Communication Networks 2020 (July 10, 2020): 1–9. http://dx.doi.org/10.1155/2020/7086367.

Full text
Abstract:
Anomaly detection is a problem with roots dating back over 30 years. The NSL-KDD dataset has become the convention for testing and comparing new or improved models in this domain. In the field of network intrusion detection, the UNSW-NB15 dataset has recently gained significant attention over the NSL-KDD because it contains more modern attacks. In the present paper, we outline two cutting-edge architectures that push the boundaries of model accuracy for these datasets, both framed in the context of anomaly detection and intrusion classification. We summarize training methodologies, hyperparame

Dissertations / Theses on the topic "Regularization by architecture"

1

Blot, Michaël. "Étude de l'apprentissage et de la généralisation des réseaux profonds en classification d'images." Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS412.

Full text
Abstract:
Artificial intelligence has experienced a resurgence in recent years, driven by the growing capacity to gather and store considerable amounts of digitized data. These immense databases allow machine learning algorithms to address certain tasks through supervised learning. Among digitized data, images remain predominant in the modern environment. Immense datasets have been assembled. Moreover, image classification has enabled the rise of a previously neglected family of models: deep neural networks, or deep learning.
2

Mantilla, Gaviria Iván Antonio. "New Strategies to Improve Multilateration Systems in the Air Traffic Control." Doctoral thesis, Editorial Universitat Politècnica de València, 2013. http://hdl.handle.net/10251/29688.

Full text
Abstract:
This thesis develops new strategies to design and operate multilateration systems, used for air traffic control operations, in a more efficient way. The design strategies are based on metaheuristic optimization techniques and are intended to find the optimal spatial distribution of the system's ground stations, taking into account the most relevant system operation parameters. The strategies to operate the systems are based on new positioning methods that solve the problems of position uncertainty and poor accuracy that current systems can present.
3

Krueger, David. "Designing Regularizers and Architectures for Recurrent Neural Networks." Thèse, 2016. http://hdl.handle.net/1866/14019.

Full text

Books on the topic "Regularization by architecture"

1

Ellis, Steven J. R. The Third Retail Revolution. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198769934.003.0006.

Full text
Abstract:
This chapter considers the causes behind the reshaping of Roman retail landscapes during the third and last of the retail revolutions; while the process began in the Flavian era, its full expression was more often realized during the urban building boom of the early second century CE. A homogenization of form now characterized the Roman taberna, as evidenced through the more simplified structural layouts of what are now rows and rows of one- and two-room shops. We also see it in the regularization of shop-fronts as identified through their standardized, mass-produced threshold stones. This ch
2

Sangeetha, V., and S. Kevin Andrews. Introduction to Artificial Intelligence and Neural Networks. Magestic Technology Solutions (P) Ltd, Chennai, Tamil Nadu, India, 2023. http://dx.doi.org/10.47716/mts/978-93-92090-24-0.

Full text
Abstract:
Artificial Intelligence (AI) has emerged as a defining force in the current era, shaping the contours of technology and deeply permeating our everyday lives. From autonomous vehicles to predictive analytics and personalized recommendations, AI continues to revolutionize various facets of human existence, progressively becoming the invisible hand guiding our decisions. Simultaneously, its growing influence necessitates the need for a nuanced understanding of AI, thereby providing the impetus for this book, “Introduction to Artificial Intelligence and Neural Networks.” This book aims to equip it

Book chapters on the topic "Regularization by architecture"

1

Nikulenkov, Mikhail, Kamil Khamitov, and Nina Popova. "Regularization Approach for Accelerating Neural Architecture Search." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-22941-1_35.

Full text
2

Hssayni, El Houssaine, Nour-Eddine Joudar, and Mohamed Ettaouil. "Convolutional Neural Networks: Architecture Optimization and Regularization." In Digital Technologies and Applications. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-01942-5_18.

Full text
3

Zhu, Wenjie, Yabin Zhang, Pengfei Wang, Xin Jin, Wenjun Zeng, and Lei Zhang. "Architecture-Agnostic Unsupervised Gradient Regularization for Parameter-Efficient Transfer Learning." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-92089-9_20.

Full text
4

Liu, Yilin, Yunkui Pang, Jiang Li, Yong Chen, and Pew-Thian Yap. "Architecture-Agnostic Untrained Network Priors for Image Reconstruction with Frequency Regularization." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. https://doi.org/10.1007/978-3-031-72630-9_20.

Full text
5

Sahu, Neha, and Anwar Sakreja. "Sentiment Analysis of Social Media Data Using Bayesian Regularization ANN (BRANN) Architecture." In International Conference on Advanced Computing Networking and Informatics. Springer Singapore, 2018. http://dx.doi.org/10.1007/978-981-13-2673-8_51.

Full text
6

Paaß, Gerhard, and Sven Giesselbach. "Pre-trained Language Models." In Artificial Intelligence: Foundations, Theory, and Algorithms. Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-23190-2_2.

Full text
Abstract:
AbstractThis chapter presents the main architecture types of attention-based language models, which describe the distribution of tokens in texts: Autoencoders similar to BERT receive an input text and produce a contextual embedding for each token. Autoregressive language models similar to GPT receive a subsequence of tokens as input. They produce a contextual embedding for each token and predict the next token. In this way, all tokens of a text can successively be generated. Transformer Encoder-Decoders have the task to translate an input sequence to another sequence, e.g. for language transla
7

Alonso, César L., José Luis Montaña, Cruz Enrique Borges, Marina de la Cruz Echeandía, and Alfonso Ortega de la Puente. "Model Regularization in Coevolutionary Architectures Evolving Straight Line Code." In Studies in Computational Intelligence. Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-27534-0_4.

Full text
8

Cajicá, Félix Armando Mejía, John A. García Henao, Carlos Jaime Barrios Hernández, and Michel Riveill. "Analysis of Regularization in Deep Learning Models on Testbed Architectures." In Communications in Computer and Information Science. Springer International Publishing, 2021. http://dx.doi.org/10.1007/978-3-030-68035-0_13.

Full text
9

Ihler, Sontje, Felix Kuhnke, and Svenja Spindeldreier. "A Comprehensive Study of Modern Architectures and Regularization Approaches on CheXpert5000." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-16431-6_62.

Full text
10

Khriss, Abdelaadim, Aissa Kerkour Elmiad, and Mohammed Badaoui. "Enhancing UNet Architectures for Remote Sensing Image Segmentation with Sinkhorn Regularization in Self-attention Mechanism." In Lecture Notes in Networks and Systems. Springer Nature Singapore, 2024. https://doi.org/10.1007/978-981-97-7710-5_43.

Full text

Conference papers on the topic "Regularization by architecture"

1

Rout, Jayanti, and Minati Mishra. "Enhanced CNN Architecture with Residual Blocks and Regularization for AI-Generated Image Detection." In 2025 IEEE International Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI). IEEE, 2025. https://doi.org/10.1109/iatmsi64286.2025.10985062.

Full text
2

Unser, Michael. "Learning of Nonlinearities for Stable Iterative Image Reconstruction." In Computational Optical Sensing and Imaging. Optica Publishing Group, 2024. https://doi.org/10.1364/cosi.2024.cth5a.1.

Full text
Abstract:
We present a variational framework for learning nonlinearities within a recurrent neuronal architecture for image reconstruction. Our use of second-order total variation regularization induces solutions that are adaptive linear splines. Full-text article not available; see video presentation
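To make the second-order total-variation idea in this abstract concrete, here is a toy sketch of mine (not the paper's variational framework): the TV(2) penalty is the l1 norm of discrete second differences, so it vanishes on linear ramps and charges only the "knots" of a piecewise-linear signal.

```python
import numpy as np

def tv2(x, lam=1.0):
    """Second-order total variation of a 1-D signal:
    lam times the sum of absolute discrete second differences."""
    return lam * np.abs(np.diff(x, n=2)).sum()

ramp = np.linspace(0.0, 1.0, 10)    # linear: second differences vanish
kinked = np.array([0.0, 1.0, 0.0])  # one knot: second difference is -2
```

Minimizing a data term plus this penalty therefore favors piecewise-linear, spline-like solutions, which is the behavior the abstract refers to.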
3

Naik, Pritish, Ilkka Pölönen, and Pauliina Salmi. "A New Version of an Endmember-Guided Autoencoder (EGAE-V2) with Improved Architecture and Regularization by Correlation Between Ground Truths and Latent Activations." In 2024 14th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, 2024. https://doi.org/10.1109/whispers65427.2024.10876451.

Full text
4

Dodge, Jesse, Roy Schwartz, Hao Peng, and Noah A. Smith. "RNN Architecture Learning with Sparse Regularization." In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, 2019. http://dx.doi.org/10.18653/v1/d19-1110.

Full text
5

Kobayashi, H., T. Matsumoto, T. Yagi, and T. Shimmi. "A layered architecture for regularization vision chips." In 1991 IEEE International Joint Conference on Neural Networks. IEEE, 1991. http://dx.doi.org/10.1109/ijcnn.1991.170530.

Full text
6

Ye, Peng, Baopu Li, Yikang Li, Tao Chen, Jiayuan Fan, and Wanli Ouyang. "β-DARTS: Beta-Decay Regularization for Differentiable Architecture Search." In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2022. http://dx.doi.org/10.1109/cvpr52688.2022.01060.

Full text
7

Niehaus, Lukas, Ulf Krumnack, and Gunther Heidemann. "Weight Rescaling: Applying Initialization Strategies During Training." In 14th Scandinavian Conference on Artificial Intelligence SCAI 2024, June 10-11, 2024, Jönköping, Sweden. Linköping University Electronic Press, 2024. http://dx.doi.org/10.3384/ecp208010.

Full text
Abstract:
The training success of deep learning is known to depend on the initial statistics of neural network parameters. Various strategies have been developed to determine suitable mean and standard deviation for weight distributions based on network architecture. However, during training, weights often diverge from their initial scale. This paper introduces the novel concept of weight rescaling, which enforces weights to remain within their initial regime throughout the training process. It is demonstrated that weight rescaling serves as an effective regularization method, reducing overfitting and s
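A minimal sketch of the weight-rescaling step described above, under my own simplified reading of the abstract (after updates, bring each weight matrix's standard deviation back to its initialization value; the He-style init and layer sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

fan_in, fan_out = 64, 32
init_std = np.sqrt(2.0 / fan_in)  # He-style initialization scale
W = rng.normal(0.0, init_std, size=(fan_in, fan_out))

def rescale(W, target_std):
    """Multiply W by a scalar so its empirical std matches target_std,
    leaving the direction of every weight unchanged."""
    return W * (target_std / W.std())

W = W * 3.0                # pretend gradient updates inflated the weights
W = rescale(W, init_std)   # pull them back into the initial regime
```

Applied periodically during training, such a step keeps the weight distribution in the regime the initialization strategy was designed for, which is the regularizing effect the abstract describes.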
8

Yu, Kaicheng, Rene Ranftl, and Mathieu Salzmann. "Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search." In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2021. http://dx.doi.org/10.1109/cvpr46437.2021.01351.

Full text
9

Takeishi, Naoya, and Yoshinobu Kawahara. "Knowledge-Based Regularization in Generative Modeling." In Twenty-Ninth International Joint Conference on Artificial Intelligence and Seventeenth Pacific Rim International Conference on Artificial Intelligence {IJCAI-PRICAI-20}. International Joint Conferences on Artificial Intelligence Organization, 2020. http://dx.doi.org/10.24963/ijcai.2020/331.

Full text
Abstract:
Prior domain knowledge can greatly help to learn generative models. However, it is often too costly to hard-code prior knowledge as a specific model architecture, so we often have to use general-purpose models. In this paper, we propose a method to incorporate prior knowledge of feature relations into the learning of general-purpose generative models. To this end, we formulate a regularizer that makes the marginals of a generative model to follow prescribed relative dependence of features. It can be incorporated into off-the-shelf learning methods of many generative models, including variation
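A toy sketch of the idea (my own simplification; the paper formulates its regularizer over a generative model's marginals, here reduced to an empirical-correlation penalty on a batch of samples):

```python
import numpy as np

def dependence_penalty(samples, i, j, target_corr):
    """Penalize deviation of the empirical correlation between
    features i and j of a sample batch from a prescribed value."""
    corr = np.corrcoef(samples[:, i], samples[:, j])[0, 1]
    return (corr - target_corr) ** 2

rng = np.random.default_rng(1)
x = rng.normal(size=(1000, 3))
x[:, 1] = x[:, 0]  # encode prior knowledge: features 0 and 1 agree
```

Added to a generator's training loss, such a term nudges generated samples toward the prescribed feature dependence without hard-coding that knowledge into the architecture.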
10

Bi, Pengfei, Yingwei Zhang, and Yunpeng Fan. "Fault monitoring method of expert prior knowledge learning based on regularization architecture." In 2016 Chinese Control and Decision Conference (CCDC). IEEE, 2016. http://dx.doi.org/10.1109/ccdc.2016.7531680.

Full text