Academic literature on the topic 'Transformer Architecture'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Transformer Architecture.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "Transformer Architecture"

1

Alharthi, Musleh, and Ausif Mahmood. "Enhanced Linear and Vision Transformer-Based Architectures for Time Series Forecasting." Big Data and Cognitive Computing 8, no. 5 (2024): 48. http://dx.doi.org/10.3390/bdcc8050048.

Full text
Abstract:
Time series forecasting has been a challenging area in the field of Artificial Intelligence. Various approaches such as linear neural networks, recurrent linear neural networks, Convolutional Neural Networks, and recently transformers have been attempted for the time series forecasting domain. Although transformer-based architectures have been outstanding in the Natural Language Processing domain, especially in autoregressive language modeling, the initial attempts to use transformers in the time series arena have met mixed success. A recent important work indicating simple linear networks out
APA, Harvard, Vancouver, ISO, and other styles
2

Wijaya, Bryan Christofer, and Hendrik Santoso Sugiarto. "Transformer+transformer architecture for image captioning in Indonesian language." IAES International Journal of Artificial Intelligence (IJ-AI) 14, no. 3 (2025): 2338. https://doi.org/10.11591/ijai.v14.i3.pp2338-2346.

Full text
Abstract:
Image captioning in Indonesian language poses a significant challenge due to the complex interplay between visual and linguistic comprehension, as well as the scarcity of publicly available datasets. Despite considerable advancements in this field, research specifically targeting the Indonesian language remains scarce. In this paper, we propose a novel image captioning model employing a transformer-based architecture for both the encoder and decoder components. Our model is trained and evaluated on the pre-translated Flickr30k dataset in the Indonesian language. We conduct a comparative analys
APA, Harvard, Vancouver, ISO, and other styles
3

Selitskiy, Stanislav. "Batch Transformer Architecture: Case of Synthetic Image Generation for Emotion Expression Facial Recognition." Athens Journal of Sciences 12, no. 2 (2025): 129–50. https://doi.org/10.30958/ajs.12-2-4.

Full text
Abstract:
A novel Transformer variation architecture is proposed in the implicit sparse style. Unlike “traditional” Transformers, instead of attention to sequential or batch entities in their entirety of whole dimensionality, in the proposed Batch Transformers, attention to the “important” dimensions (primary components) is implemented. In such a way, the “important” dimensions or feature selection allows for a significant reduction of the bottleneck size in the encoder-decoder ANN architectures. The proposed architecture is tested on the synthetic image generation for the face recognition task in the c
APA, Harvard, Vancouver, ISO, and other styles
4

Jaiswal, Sushma, Harikumar Pallthadka, Rajesh P. Chinchewadi, and Tarun Jaiswal. "Optimized Image Captioning: Hybrid Transformers Vision Transformers and Convolutional Neural Networks: Enhanced with Beam Search." International Journal of Intelligent Systems and Applications 16, no. 2 (2024): 53–61. http://dx.doi.org/10.5815/ijisa.2024.02.05.

Full text
Abstract:
Deep learning has improved image captioning. Transformer, a neural network architecture built for natural language processing, excels at image captioning and other computer vision applications. This paper reviews Transformer-based image captioning methods in detail. Convolutional neural networks (CNNs) extracted image features and RNNs or LSTM networks generated captions in traditional image captioning. This method often has information bottlenecks and trouble capturing long-range dependencies. Transformer architecture revolutionized natural language processing with its attention strategy and
APA, Harvard, Vancouver, ISO, and other styles
5

Havrylovych, Mariia, and Valeriy Danylov. "Research on hybrid transformer-based autoencoders for user biometric verification." System research and information technologies, no. 3 (September 29, 2023): 42–53. http://dx.doi.org/10.20535/srit.2308-8893.2023.3.03.

Full text
Abstract:
Our current study extends previous work on motion-based biometric verification using sensory data by exploring new architectures and more complex input from various sensors. Biometric verification offers advantages like uniqueness and protection against fraud. The state-of-the-art transformer architecture in AI is known for its attention block and applications in various fields, including NLP and CV. We investigated its potential value for applications involving sensory data. The research proposes a hybrid architecture, integrating transformer attention blocks with different autoencoders, to e
APA, Harvard, Vancouver, ISO, and other styles
6

Borgohain, Indraneel. "Cross-Modal AI Transformer Architecture: Bridging Multiple Data Modalities Through Advanced Neural Networks." Journal of Computer Science and Technology Studies 7, no. 4 (2025): 541–45. https://doi.org/10.32996/jcsts.2025.7.4.64.

Full text
Abstract:
This article explores the Cross-Modal AI Transformer architecture, a sophisticated framework designed to process and integrate information across multiple data modalities. The article examines the architectural framework, technical implementation, advanced features, and practical applications of these transformers. Through comprehensive analysis of various research findings, the article demonstrates how these architectures effectively bridge different modalities, including text, images, audio, and video. The article highlights the significance of multi-modal encoders, cross-modal attention mec
APA, Harvard, Vancouver, ISO, and other styles
7

S., S., Thulasi Bikku, P. Muthukumar, K. Sandeep, Jampani Chandra Sekhar, and V. Krishna Pratap. "Enhanced Intrusion Detection Using Stacked FT-Transformer Architecture." Journal of Cybersecurity and Information Management 8, no. 2 (2024): 19–29. http://dx.doi.org/10.54216/jcim.130202.

Full text
Abstract:
The function of network intrusion detection systems (NIDS) in protecting networks from cyberattacks is crucial. Many of the more conventional techniques rely on signature-based approaches, which have a hard time distinguishing between various types of assaults. Using stacked FT-Transformer architecture, this research suggests a new way to identify intrusions in networks. When it comes to dealing with complicated tabular data, FT-Transformers—a variant of the Transformer model—have shown outstanding performance. Because of the inherent tabular nature of network traffic data, FT-Transformers are
APA, Harvard, Vancouver, ISO, and other styles
8

Lei, Zhenxin, Man Yao, Jiakui Hu, et al. "Spike2Former: Efficient Spiking Transformer for High-performance Image Segmentation." Proceedings of the AAAI Conference on Artificial Intelligence 39, no. 2 (2025): 1364–72. https://doi.org/10.1609/aaai.v39i2.32126.

Full text
Abstract:
Spiking Neural Networks (SNNs) have a low-power advantage but perform poorly in image segmentation tasks. The reason is that directly converting neural networks with complex architectural designs for segmentation tasks into spiking versions leads to performance degradation and non-convergence. To address this challenge, we first identify the modules in the architecture design that lead to the severe reduction in spike firing, make targeted improvements, and propose Spike2Former architecture. Second, we propose normalized integer spiking neurons to solve the training stability problem of SNNs w
APA, Harvard, Vancouver, ISO, and other styles
9

Nabi, Muneeb, Rohit Pachauri, Shouaib Ahmad, Kanishk Varshney, Prachi Goel, and Apurva Jain. "Visual Image Captioning through Transformer." International Journal for Research in Applied Science and Engineering Technology 11, no. 12 (2023): 2047–50. http://dx.doi.org/10.22214/ijraset.2023.57766.

Full text
Abstract:
The convergence of computer vision and natural language processing in Artificial Intelligence has sparked significant interest in recent years, largely propelled by the advancements in deep learning. One notable application born from this synergy is the automatic description of images in English. Image captioning involves the computer's ability to interpret visual information from an image and translate it into one or more descriptive phrases. Generating meaningful descriptions requires understanding the state, properties, and relationships between the depicted objects, demanding a g
APA, Harvard, Vancouver, ISO, and other styles
10

Vu, Minh Tri, Motoaki Hiraga, Nanako Miura, and Arata Masuda. "Failure Mode Classification for Rolling Element Bearings Using Time-Domain Transformer-Based Encoder." Sensors 24, no. 12 (2024): 3953. http://dx.doi.org/10.3390/s24123953.

Full text
Abstract:
In this paper, we propose a Transformer-based encoder architecture integrated with an unsupervised denoising method to learn meaningful and sparse representations of vibration signals without the need for data transformation or pre-trained data. Existing Transformer models often require transformed data or extensive computational resources, limiting their practical adoption. We propose a simple yet competitive modification of the Transformer model, integrating a trainable noise reduction method specifically tailored for failure mode classification using vibration data directly in the time doma
APA, Harvard, Vancouver, ISO, and other styles

Dissertations / Theses on the topic "Transformer Architecture"

1

Ogunnika, Olumuyiwa Temitope. "A simple transformer-based resonator architecture for low phase noise LC oscillators." Thesis, Massachusetts Institute of Technology, 2003. http://hdl.handle.net/1721.1/28338.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2004. Includes bibliographical references (leaves 86-87). This thesis investigates the use of a simple transformer-coupled resonator to increase the loaded Q of an LC resonant tank. The windings of the integrated transformer replace the simple inductors as the inductive elements of the resonator. The resonator topology considered in this project is a simpler alternative to another proposed by Straayer et al. [5] because it requires just a single varactor. A prime objective o
APA, Harvard, Vancouver, ISO, and other styles
2

Maneikis, Andrius. "Distribution On Load Tap Changer Control Using IEC61850 Client/Server Architecture." Thesis, KTH, Skolan för elektro- och systemteknik (EES), 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-193673.

Full text
Abstract:
Distributed generation is transforming the power grid into a decentralized system in which separate units, such as wind power generators or solar panels, coexist and operate in tandem to supplement each other and form one extensive system as a whole, the so-called smart grid. It is of utmost importance to be able to control such units not only at the field level but also at the system level. To communicate with numerous devices and maintain interoperability, a universal standard is a must. Therefore, one of the core standards relevant to smart grids is IEC 61850 – Power Utility
APA, Harvard, Vancouver, ISO, and other styles
3

Barrère, Killian. "Architectures de Transformer légères pour la reconnaissance de textes manuscrits anciens." Electronic Thesis or Diss., Rennes, INSA, 2023. http://www.theses.fr/2023ISAR0017.

Full text
Abstract:
In handwritten text recognition, Transformer architectures achieve low error rates but are difficult to train with the small amount of annotated data available. In this manuscript, we propose lightweight Transformer architectures suited to limited data. We introduce a fast architecture based on a Transformer encoder, processing up to 60 pages per second. We also propose architectures using a Transformer decoder to include language learning in character recognition. To efficiently train our architectures
APA, Harvard, Vancouver, ISO, and other styles
4

Deschamps-Berger, Théo. "Social Emotion Recognition with multimodal deep learning architecture in emergency call centers." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG036.

Full text
Abstract:
This thesis focuses on automatic speech emotion recognition systems in a medical emergency context. It addresses some of the challenges encountered when studying emotions in social interactions and is grounded in modern theories of emotion, in particular Lisa Feldman Barrett's work on the construction of emotions. Indeed, the expression of spontaneous emotions in human interactions is complex, often characterized by nuances and blends, and closely tied to context. This study is based on the CEMO corpus, composed
APA, Harvard, Vancouver, ISO, and other styles
5

Masera, Maurizio. "Transform algorithms and architectures for video coding." Doctoral thesis, Politecnico di Torino, 2018. http://hdl.handle.net/11583/2709432.

Full text
Abstract:
During the last years, the increasing popularity of very high resolution formats and the growing of video applications have posed critical issues on both the coding efficiency and the complexity of video compression systems. This thesis focuses on the transform coding stage of the most recent video coding technologies, by addressing both the complexity evaluation and the design of custom hardware architectures. First, the thesis thoroughly analyzes the HEVC transform complexity, by relying on the proposed CI metric. A tool-by-tool investigation is performed to quantify the complexity of the tr
APA, Harvard, Vancouver, ISO, and other styles
6

Policarpi, Andrea. "Transformers architectures for time series forecasting." Master's thesis, Alma Mater Studiorum - Università di Bologna, 2022. http://amslaurea.unibo.it/25005/.

Full text
Abstract:
Time series forecasting is an important task related to countless applications, ranging from anomaly detection to healthcare problems. The ability to predict future values of a given time series is a non-trivial operation, whose complexity heavily depends on the number and the quality of data available. Historically, the problem has been addressed by statistical models and simple deep learning architectures such as CNNs and RNNs; recently many Transformer-based models have also been used, with excellent results. This thesis work aims to evaluate the performances of two transformer-based model
APA, Harvard, Vancouver, ISO, and other styles
7

Grzeszczak, Aleksander. "VLSI architecture for Discrete Wavelet Transform." Thesis, University of Ottawa (Canada), 1995. http://hdl.handle.net/10393/9908.

Full text
Abstract:
In this thesis, we present a new simple and efficient VLSI architecture (DWT-SA) for computing the Discrete Wavelet Transform. The proposed architecture is systolic in nature, modular and extendible to 1-D or 2-D DWT transform of any size. The DWT-SA has been designed, simulated and implemented in silicon. The following are the features of the DWT-SA architecture: (1) It has an efficient (close to 100%) hardware utilization. (2) It works with data streams of arbitrary size. (3) The design is cascadable, for computation of one, two or three dimensional DWT. (4) It requires a minimum interface c
APA, Harvard, Vancouver, ISO, and other styles
8

Arens, Maxime. "Apprentissage actif multi-labels pour des architectures transformers." Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES052.

Full text
Abstract:
L'annotation des données est cruciale pour l'apprentissage automatique, notamment dans les domaines techniques, où la qualité et la quantité des données annotées affectent significativement l'efficacité des modèles entraînés. L'utilisation de personnel humain est coûteuse, surtout lors de l'annotation pour la classification multi-labels, les instances pouvant être associées à plusieurs labels. L'apprentissage actif (AA) vise à réduire les coûts d'annotation en sélectionnant intelligemment des instances pour l'annotation, plutôt que de les annoter de manière aléatoire. L'attention récente porté
APA, Harvard, Vancouver, ISO, and other styles
9

Ferreira, Costa Levy [Verfasser]. "Modular Power Converters for Smart Transformer Architectures / Levy Ferreira Costa." Kiel : Universitätsbibliothek Kiel, 2019. http://d-nb.info/1197055312/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Skerry, Nathaniel S. "Transformed materials: a material research center in Milan, Italy." Thesis, Massachusetts Institute of Technology, 2002. http://hdl.handle.net/1721.1/70358.

Full text
Abstract:
Thesis (M. Arch.)--Massachusetts Institute of Technology, Dept. of Architecture, 2002. Includes bibliographical references (p. 74-75). [Transformed Materials] is an exploration into today's design methodologies of architecture production. The emergence of architectural form is questioned in relation to the temporal state of design intent and the physical material construct. At a time when there is an increased awareness of the current state of technology, material innovation and methods of fabrication, there are new speculations of what materiality is and can be. This thesis will propose
APA, Harvard, Vancouver, ISO, and other styles

Books on the topic "Transformer Architecture"

1

Xu, Tiantian, ed. Architecture as transformer: DnA-Design and Architecture, Beijing: projects 2004-2018. Aedes, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
2

Lavigne, Lucie. Construire, rénover, transformer: Les meilleures idées d'architectes pour réussir votre projet. La Presse, 2011.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
3

Kaspari, Dieter, and Marlene Krapols. Umbau statt Abriss: Zur Erhaltung des industriellen Erbes in der EUREGIO Maas Rhein = Transformer au lieu de demolir! = Ombouwen in plaats van afbreken! Verlag Dr. Rudolf Georgi, 1995.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
4

Klanten, Robert. Build-on: Converted architecture and transformed buildings. Gestalten, 2009.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
5

Davis, Douglas. The museum transformed: Design and culture in the post-Pompidou age. Abbeville Press, 1990.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
6

Centre méridional de l'architecture et de la ville (Toulouse, France). Rêves de villes: Les habitants transforment leurs quartiers. Poïesis-AERA, 2007.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
7

Schröer, Ludger. Wiederentdeckt: Historische Transformatorenstationen im Münsterland. Lippe Verlag, 2018.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
8

Cooper, Guy. Paradise transformed: The private garden for the twenty-first century. Monacelli Press, 1996.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
9

Sanz, J. L. C. Radon and projection transform-based computer vision: Algorithms, a pipeline architecture, and industrial applications. Springer-Verlag, 1988.

Find full text
APA, Harvard, Vancouver, ISO, and other styles
10

Herschman, Joel, ed. Architecture transformed: A history of the photography of buildings from 1839 to the present. Architectural League of New York, 1987.

Find full text
APA, Harvard, Vancouver, ISO, and other styles

Book chapters on the topic "Transformer Architecture"

1

Thakur, Kutub, Helen G. Barker, and Al-Sakib Khan Pathan. "Deep Learning Transformer Architecture." In Artificial Intelligence and Large Language Models. Chapman and Hall/CRC, 2024. http://dx.doi.org/10.1201/9781003474173-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Courant, Robin, Maika Edberg, Nicolas Dufour, and Vicky Kalogeiton. "Transformers and Visual Transformers." In Machine Learning for Brain Disorders. Springer US, 2023. http://dx.doi.org/10.1007/978-1-0716-3195-9_6.

Full text
Abstract:
Transformers were initially introduced for natural language processing (NLP) tasks, but they were quickly adopted by most deep learning fields, including computer vision. They measure the relationships between pairs of input tokens (words in the case of text strings, parts of images for visual transformers), termed attention. The cost is quadratic in the number of tokens. For image classification, the most common transformer architecture uses only the transformer encoder in order to transform the various input tokens. However, there are also numerous other applications in which the de
APA, Harvard, Vancouver, ISO, and other styles
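The abstract above describes attention as scoring relationships between all pairs of input tokens (words or image patches) and notes the encoder-only configuration commonly used for image classification. As a rough illustration of that mechanism (not code from the cited chapter; the shapes, weights, and single-head NumPy formulation are assumptions made purely for this example), a minimal scaled dot-product attention sketch could look like this:

# Minimal single-head scaled dot-product attention (illustrative sketch only).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Score every pair of tokens, softmax the scores, and mix the value vectors."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token-token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V                              # one contextualized vector per token

# Toy example: 4 tokens (e.g., words or image patches), 8-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(tokens @ W_q, tokens @ W_k, tokens @ W_v)
print(out.shape)  # (4, 8)

Stacking such attention layers with feed-forward blocks gives the encoder-only pattern that the abstract mentions for image classification.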
3

Su, Xiu, Shan You, Jiyang Xie, et al. "ViTAS: Vision Transformer Architecture Search." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19803-8_9.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Hennig, Marc C. "Towards Accurate Predictions in ITSM: A Study on Transformer-Based Predictive Process Monitoring." In Lecture Notes in Business Information Processing. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-82225-4_16.

Full text
Abstract:
Abstract The accurate prediction of service process performance, particularly in IT service management (ITSM), is critical for adhering to service-level agreements and avoiding associated penalties. However, existing predictive process monitoring solutions, predominantly based on recurrent neural networks, have been found to be inadequate in handling ITSM processes. Notably, the heterogeneity in process artifacts and environments impairs process predictions. This research proposes a novel transformer-based architecture to effectively handle IT service process event logs. By integrating advance
APA, Harvard, Vancouver, ISO, and other styles
5

Huang, Zhaoyang, Xiaoyu Shi, Chao Zhang, et al. "FlowFormer: A Transformer Architecture for Optical Flow." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2022. http://dx.doi.org/10.1007/978-3-031-19790-1_40.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Rahul Chiranjeevi, V., G. Murali Krishna, and S. Rubesh. "Multiscript Handwriting Recognition Using RNN Transformer Architecture." In Data Management, Analytics and Innovation. Springer Nature Singapore, 2024. http://dx.doi.org/10.1007/978-981-97-3242-5_5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Ni, Bolin, Gaofeng Meng, Shiming Xiang, and Chunhong Pan. "NASformer: Neural Architecture Search for Vision Transformer." In Lecture Notes in Computer Science. Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-02375-0_4.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

Wang, Xinyu, Ziwei Tang, Yaohua Yi, and Chaohua Gan. "Transformer-Based Coattention: Neural Architecture for Reading Comprehension." In Lecture Notes in Electrical Engineering. Springer Singapore, 2020. http://dx.doi.org/10.1007/978-981-15-1864-5_75.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Żebrowski, Michał, and Jacek Komorowski. "Generating Image Captions in Polish Using Transformer Architecture." In Artificial Intelligence and Soft Computing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-42505-9_28.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Xia, Ran, Wei Song, Xiangchun Liu, and Xiaobing Zhao. "Tripartite Architecture License Plate Recognition Based on Transformer." In Pattern Recognition and Computer Vision. Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-99-8432-9_33.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Conference papers on the topic "Transformer Architecture"

1

Dhapola, Shubham, Siddhant Goel, Daksh Rawat, Satvik Vats, and Vikrant Sharma. "Abstractive Text Summarization using Transformer Architecture." In 2024 IEEE 3rd World Conference on Applied Intelligence and Computing (AIC). IEEE, 2024. http://dx.doi.org/10.1109/aic61668.2024.10730840.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, Ruiyang. "Enhancing Image Generation with Diffusion Transformer Architecture." In International Conference on Engineering Management, Information Technology and Intelligence. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012937800004508.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Pan, Qi, Yanan Yu, Yaozhi Wang, Xu Li, and Yi Zhang. "Pedestrian Object Detection Based on Transformer Architecture." In 2024 6th Asia Symposium on Image Processing (ASIP). IEEE, 2024. http://dx.doi.org/10.1109/asip63198.2024.00014.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Bermeitinger, Bernhard, Tomas Hrycej, Massimo Pavone, Julianus Kath, and Siegfried Handschuh. "Reducing the Transformer Architecture to a Minimum." In 16th International Conference on Knowledge Discovery and Information Retrieval. SCITEPRESS - Science and Technology Publications, 2024. http://dx.doi.org/10.5220/0012891000003838.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Metilda Sagaya Mary, N. J., and S. Umesh. "Lite ASR Transformer: A Light Weight Transformer Architecture For Automatic Speech Recognition." In 2024 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2024. https://doi.org/10.1109/slt61566.2024.10832331.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Nabiilah, Ghinaa Zain, Simeon Yuda Prasetyo, Kelvin Asclepius Minor, and Jennifer Patricia. "Mental Disorder Indication Detection with Transformer Model Architecture." In 2024 International Conference on Information Technology Systems and Innovation (ICITSI). IEEE, 2024. https://doi.org/10.1109/icitsi65188.2024.10929319.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Langde, Rushikesh, Sagar Deep Deb, Arjun Abhishek, and Rajib Kumar Jha. "Vision Transformer Architecture for Efficient Leaf Disease Detection." In 2024 IEEE 21st India Council International Conference (INDICON). IEEE, 2024. https://doi.org/10.1109/indicon63790.2024.10958342.

Full text
APA, Harvard, Vancouver, ISO, and other styles
8

King, Ethan, Jaime Rodriguez, Diego Llanes, Timothy Doster, Tegan Emerson, and James Koch. "Stars: Sensor-Agnostic Transformer Architecture for Remote Sensing." In 2024 14th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, 2024. https://doi.org/10.1109/whispers65427.2024.10876423.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Xin, Xiaojie, Jiaou Zheng, and Xiaoyang Hu. "Small object detector based on hybrid transformer architecture." In International Conference on Mechatronic Engineering and Artificial Intelligence (MEAI 2024), edited by Liang Hu. SPIE, 2025. https://doi.org/10.1117/12.3064266.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Ahmed Chowdhury, Tausif Uddin, and Abdus Salam. "Generating Bengali Captions for Images Using Transformer Architecture." In 2024 27th International Conference on Computer and Information Technology (ICCIT). IEEE, 2024. https://doi.org/10.1109/iccit64611.2024.11022611.

Full text
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "Transformer Architecture"

1

Mosalam, Khalid, Issac Pang, and Selim Gunay. Towards Deep Learning-Based Structural Response Prediction and Ground Motion Reconstruction. Pacific Earthquake Engineering Research Center, 2025. https://doi.org/10.55461/ipos1888.

Full text
Abstract:
This research presents a novel methodology that uses Temporal Convolutional Networks (TCNs), a state-of-the-art deep learning architecture, for predicting the time history of structural responses to seismic events. By leveraging accelerometer data from instrumented buildings, the proposed approach complements traditional structural analysis models, offering a computationally efficient alternative to nonlinear time history analysis. The methodology is validated across a broad spectrum of structural scenarios, including buildings with pronounced higher-mode effects and those exhibiting both line
APA, Harvard, Vancouver, ISO, and other styles
2

Wu, An-Yeu, and K. J. Liu. Algorithm-Based Low-Power Transform Coding Architectures. Part 2. Logarithmic Complexity, Unified Architecture, and Finite-Precision Analysis. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada445617.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Wu, An-Yeu, and K. J. Liu. Algorithm-Based Low-Power Transform Coding Architectures. Part 1. The Multirate Approach. Defense Technical Information Center, 1995. http://dx.doi.org/10.21236/ada445610.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Unzeta, Bruno Bueno, Jan de Boer, Ruben Delvaeye, et al. Review of lighting and daylighting control systems. IEA SHC Task 61, 2021. http://dx.doi.org/10.18777/ieashc-task61-2021-0003.

Full text
Abstract:
There is a large number of control systems proposed either by lighting manufacturers or by motor manufacturers for shading systems. In addition, there are many other solutions proposed by specific manufacturers of Building Management Systems (BMS) or manufacturers of components to be installed in luminaires and switches, as well as in the electric lighting architecture (transformers, gateways to the internet, sensors, etc.). For many consumers, i.e., the installer, the facility manager, or the final user (building occupant), this forms a complex and dynamic market environment with high frequent ch
APA, Harvard, Vancouver, ISO, and other styles
5

Hovakimyan, Naira, Hunmin Kim, Wenbin Wan, and Chuyuan Tao. Safe Operation of Connected Vehicles in Complex and Unforeseen Environments. Illinois Center for Transportation, 2022. http://dx.doi.org/10.36501/0197-9191/22-016.

Full text
Abstract:
Autonomous vehicles (AVs) have a great potential to transform the way we live and work, significantly reducing traffic accidents and harmful emissions on the one hand and enhancing travel efficiency and fuel economy on the other. Nevertheless, the safe and efficient control of AVs is still challenging because AVs operate in dynamic environments with unforeseen challenges. This project aimed to advance the state-of-the-art by designing a proactive/reactive adaptation and learning architecture for connected vehicles, unifying techniques in spatiotemporal data fusion, machine learning, and robust
APA, Harvard, Vancouver, ISO, and other styles
6

Huang, Lei, Meng Song, Hui Shen, et al. Deep learning methods for omics data imputation. Engineer Research and Development Center (U.S.), 2024. http://dx.doi.org/10.21079/11681/48221.

Full text
Abstract:
One common problem in omics data analysis is missing values, which can arise due to various reasons, such as poor tissue quality and insufficient sample volumes. Instead of discarding missing values and related data, imputation approaches offer an alternative means of handling missing data. However, the imputation of missing omics data is a non-trivial task. Difficulties mainly come from high dimensionality, non-linear or nonmonotonic relationships within features, technical variations introduced by sampling methods, sample heterogeneity, and the non-random missingness mechanism. Several advan
APA, Harvard, Vancouver, ISO, and other styles
7

Dubcovsky, Jorge, Tzion Fahima, and Ann Blechl. Molecular characterization and deployment of the high-temperature adult plant stripe rust resistance gene Yr36 from wheat. United States Department of Agriculture, 2013. http://dx.doi.org/10.32747/2013.7699860.bard.

Full text
Abstract:
Stripe rust, caused by Puccinia striiformis f. sp. tritici is one of the most destructive fungal diseases of wheat. Virulent races that appeared within the last decade caused drastic cuts in yields. The incorporation of genetic resistance against this pathogen is the most cost-effective and environmentally friendly solution to this problem. However, race specific seedling resistance genes provide only a temporary solution because fungal populations rapidly evolve to overcome this type of resistance. In contrast, high temperature adult plant (HTAP) resistance genes provide a broad spectrum resi
APA, Harvard, Vancouver, ISO, and other styles
8

Kapulnik, Yoram, Maria J. Harrison, Hinanit Koltai, and Joseph Hershenhorn. Targeting of Strigolacatones Associated Pathways for Conferring Orobanche Resistant Traits in Tomato and Medicago. United States Department of Agriculture, 2011. http://dx.doi.org/10.32747/2011.7593399.bard.

Full text
Abstract:
This proposal is focused on the examination of two plant interactions: the parasitic interaction with Orobanche and the symbiosis with arbuscular mycorrhizal fungi (AMF), and on the involvement of newly defined plant hormones, strigolactones (SLs), in these interactions. In addition to the role of strigolactones in regulating above-ground plant architecture, they are also known to be secreted from roots and to serve as a signal for seed germination of the parasitic plant Orobanche. Moreover, secreted strigolactones were recognized as inducers of AMF hyphal branching. The present work was aimed at the generation of RNAi mutant
APA, Harvard, Vancouver, ISO, and other styles