Journal articles on the topic 'Tensor Compilers'

Consult the top 50 journal articles for your research on the topic 'Tensor Compilers.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Deeds, Kyle, Willow Ahrens, Magdalena Balazinska, and Dan Suciu. "Galley: Modern Query Optimization for Sparse Tensor Programs." Proceedings of the ACM on Management of Data 3, no. 3 (2025): 1–24. https://doi.org/10.1145/3725301.

Full text
Abstract:
The tensor programming abstraction is a foundational paradigm which allows users to write high performance programs via a high-level imperative interface. Recent work on sparse tensor compilers has extended this paradigm to sparse tensors (i.e., tensors where most entries are not explicitly represented). With these systems, users define the semantics of the program and the algorithmic decisions in a concise language that can be compiled to efficient low-level code. However, these systems still require users to make complex decisions about program structure and memory layouts to write efficient
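A minimal illustration (plain Python, not Galley's interface) of why such structural decisions matter: the same contraction evaluated in two different orders has very different asymptotic cost, which is exactly the kind of choice a query optimizer for tensor programs automates.

```python
# Illustrative only -- not Galley's API. The same contraction A*B*v evaluated
# in two orders with very different cost; choosing the order is the kind of
# decision such systems automate.
import numpy as np

n = 500
A = np.random.rand(n, n)
B = np.random.rand(n, n)
v = np.random.rand(n)

slow = (A @ B) @ v          # O(n^3): materializes the n x n product A*B first
fast = A @ (B @ v)          # O(n^2): contracts with the vector first

assert np.allclose(slow, fast)
```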
APA, Harvard, Vancouver, ISO, and other styles
2

Liu, Peiming, Alexander J. Root, Anlun Xu, Yinying Li, Fredrik Kjolstad, and Aart J. C. Bik. "Compiler Support for Sparse Tensor Convolutions." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 275–303. http://dx.doi.org/10.1145/3689721.

Full text
Abstract:
This paper extends prior work on sparse tensor algebra compilers to generate asymptotically efficient code for tensor expressions with affine subscript expressions. Our technique enables compiler support for a wide range of sparse computations, including sparse convolutions and pooling that are widely used in ML and graphics applications. We propose an approach that gradually rewrites compound subscript expressions to simple subscript expressions with loops that exploit the sparsity pattern of the input sparse tensors. As a result, the time complexity of the generated kernels is bounded by the
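A small sketch of the idea, assuming a 1-D convolution written with the affine subscript A[i + j]; this is illustrative Python, not the paper's generated code. The dense loop nest visits every position, while the sparsity-aware version iterates only over the stored nonzeros of the input.

```python
# Illustrative sketch: a 1-D convolution with the affine subscript A[i + j],
# first as dense loops, then exploiting the sparsity of A by visiting only
# its stored nonzeros.
import numpy as np

A = np.zeros(1000)
A[[3, 17, 512, 900]] = [1.0, 2.0, -1.0, 0.5]    # sparse input
K = np.array([0.25, 0.5, 0.25])                 # small dense kernel

# Dense version: O(len(A) * len(K)) regardless of sparsity.
out_dense = np.zeros(len(A) - len(K) + 1)
for i in range(len(out_dense)):
    for j in range(len(K)):
        out_dense[i] += A[i + j] * K[j]

# Sparsity-aware version: only positions where A is nonzero contribute.
out_sparse = np.zeros_like(out_dense)
for p in np.flatnonzero(A):                     # stored nonzeros of A
    for j in range(len(K)):
        i = p - j                               # invert the affine subscript i + j = p
        if 0 <= i < len(out_sparse):
            out_sparse[i] += A[p] * K[j]

assert np.allclose(out_dense, out_sparse)
```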
APA, Harvard, Vancouver, ISO, and other styles
3

Liu, Jiawei, Yuxiang Wei, Sen Yang, Yinlin Deng, and Lingming Zhang. "Coverage-guided tensor compiler fuzzing with joint IR-pass mutation." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (2022): 1–26. http://dx.doi.org/10.1145/3527317.

Full text
Abstract:
In the past decade, Deep Learning (DL) systems have been widely deployed in various application domains to facilitate our daily life, e.g., natural language processing, healthcare, activity recognition, and autonomous driving. Meanwhile, it is extremely challenging to ensure the correctness of DL systems (e.g., due to their intrinsic nondeterminism), and bugs in DL systems can cause serious consequences and may even threaten human lives. In the literature, researchers have explored various techniques to test, analyze, and verify DL models, since their quality directly affects the corresponding
APA, Harvard, Vancouver, ISO, and other styles
4

Dias, Adhitha, Logan Anderson, Kirshanthan Sundararajah, Artem Pelenitsyn, and Milind Kulkarni. "SparseAuto: An Auto-scheduler for Sparse Tensor Computations using Recursive Loop Nest Restructuring." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 527–56. http://dx.doi.org/10.1145/3689730.

Full text
Abstract:
Automated code generation and performance enhancements for sparse tensor algebra have become essential in many real-world applications, such as quantum computing, physical simulations, computational chemistry, and machine learning. General sparse tensor algebra compilers are not always versatile enough to generate asymptotically optimal code for sparse tensor contractions. This paper shows how to generate asymptotically better schedules for complex sparse tensor expressions using kernel fission and fusion. We present generalized loop restructuring transformations to reduce asymptotic time comp
APA, Harvard, Vancouver, ISO, and other styles
5

Zhou, Chijin, Bingzhou Qian, Gwihwan Go, Quan Zhang, Shanshan Li, and Yu Jiang. "PolyJuice: Detecting Mis-compilation Bugs in Tensor Compilers with Equality Saturation Based Rewriting." Proceedings of the ACM on Programming Languages 8, OOPSLA2 (2024): 1309–35. http://dx.doi.org/10.1145/3689757.

Full text
Abstract:
Tensor compilers are essential for deploying deep learning applications across various hardware platforms. While powerful, they are inherently complex and present significant challenges in ensuring correctness. This paper introduces PolyJuice, an automatic detection tool for identifying mis-compilation bugs in tensor compilers. Its basic idea is to construct semantically-equivalent computation graphs to validate the correctness of tensor compilers. The main challenge is to construct equivalent graphs capable of efficiently exploring the diverse optimization logic during compilation. We approac
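A toy sketch of the underlying differential-testing idea, not PolyJuice itself: construct two algebraically equivalent computation graphs and flag a potential mis-compilation if a backend makes their results diverge.

```python
# Illustrative sketch of the differential idea: two semantically-equivalent
# expressions of the same computation should agree after compilation.
import numpy as np

x = np.random.rand(64, 64).astype(np.float32)
w = np.random.rand(64, 64).astype(np.float32)

def graph_a(x, w):
    return (x @ w).T               # transpose after the matmul

def graph_b(x, w):
    return w.T @ x.T               # algebraic rewrite: (XW)^T = W^T X^T

ref, rewritten = graph_a(x, w), graph_b(x, w)
if np.allclose(ref, rewritten, rtol=1e-4):
    print("equivalent graphs agree")
else:
    print("potential mis-compilation: equivalent graphs diverged")
```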
APA, Harvard, Vancouver, ISO, and other styles
6

Krastev, Aleksandar, Nikola Samardzic, Simon Langowski, Srinivas Devadas, and Daniel Sanchez. "A Tensor Compiler with Automatic Data Packing for Simple and Efficient Fully Homomorphic Encryption." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 126–50. http://dx.doi.org/10.1145/3656382.

Full text
Abstract:
Fully Homomorphic Encryption (FHE) enables computing on encrypted data, letting clients securely offload computation to untrusted servers. While enticing, FHE has two key challenges that limit its applicability: it has high performance overheads (10,000× over unencrypted computation) and it is extremely hard to program. Recent hardware accelerators and algorithmic improvements have reduced FHE’s overheads and enabled large applications to run under FHE. These large applications exacerbate FHE’s programmability challenges. Writing FHE programs directly is hard because FHE schemes expose a restr
APA, Harvard, Vancouver, ISO, and other styles
7

Noor, Abdul Rafae, Dhruv Baronia, Akash Kothari, Muchen Xu, Charith Mendis, and Vikram S. Adve. "MISAAL: Synthesis-Based Automatic Generation of Efficient and Retargetable Semantics-Driven Optimizations." Proceedings of the ACM on Programming Languages 9, PLDI (2025): 1269–92. https://doi.org/10.1145/3729301.

Full text
Abstract:
Using program synthesis to select instructions for and optimize input programs is receiving increasing attention. However, existing synthesis-based compilers face two major challenges that prohibit the deployment of program synthesis in production compilers: exorbitantly long synthesis times spanning several minutes and hours; and scalability issues that prevent synthesis of complex modern compute and data swizzle instructions, which have been found to maximize performance of modern tensor and stencil workloads. This paper proposes MISAAL, a synthesis-based compiler that employs a nove
APA, Harvard, Vancouver, ISO, and other styles
8

Chou, Stephen, Fredrik Kjolstad, and Saman Amarasinghe. "Format abstraction for sparse tensor algebra compilers." Proceedings of the ACM on Programming Languages 2, OOPSLA (2018): 1–30. http://dx.doi.org/10.1145/3276493.

Full text
APA, Harvard, Vancouver, ISO, and other styles
9

Clément, Basile, and Albert Cohen. "End-to-end translation validation for the halide language." Proceedings of the ACM on Programming Languages 6, OOPSLA1 (2022): 1–30. http://dx.doi.org/10.1145/3527328.

Full text
Abstract:
This paper considers the correctness of domain-specific compilers for tensor programming languages through the study of Halide, a popular representative. It describes a translation validation algorithm for affine Halide specifications, independently of the scheduling language. The algorithm relies on “prophetic” annotations added by the compiler to the generated array assignments. The annotations provide a refinement mapping from assignments in the generated code to the tensor definitions from the specification. Our implementation leverages an affine solver and a general SMT solver, and scales
APA, Harvard, Vancouver, ISO, and other styles
10

Arora, Jai, Sirui Lu, Devansh Jain, et al. "TensorRight: Automated Verification of Tensor Graph Rewrites." Proceedings of the ACM on Programming Languages 9, POPL (2025): 832–63. https://doi.org/10.1145/3704865.

Full text
Abstract:
Tensor compilers, essential for generating efficient code for deep learning models across various applications, employ tensor graph rewrites as one of the key optimizations. These rewrites optimize tensor computational graphs with the expectation of preserving semantics for tensors of arbitrary rank and size. Despite this expectation, to the best of our knowledge, there does not exist a fully automated verification system to prove the soundness of these rewrites for tensors of arbitrary rank and size. Previous works, while successful in verifying rewrites with tensors of concrete rank, do not
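For contrast with automated verification, here is a brute-force check of a candidate rewrite on a few concrete ranks and sizes (illustrative Python only); TensorRight's goal is to prove such rewrites once, for arbitrary rank and size.

```python
# Illustrative only: testing the rewrite transpose(transpose(X)) == X on a
# handful of concrete ranks and sizes, which is what verification tools aim
# to replace with a single proof for arbitrary rank and size.
import itertools
import numpy as np

def lhs(x):
    return np.transpose(np.transpose(x))

def rhs(x):
    return x

for rank in range(1, 4):
    for dims in itertools.product([1, 2, 3], repeat=rank):
        x = np.random.rand(*dims)
        assert np.allclose(lhs(x), rhs(x))
print("rewrite holds on all checked ranks and sizes")
```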
APA, Harvard, Vancouver, ISO, and other styles
11

Bansal, Manya, Olivia Hsu, Kunle Olukotun, and Fredrik Kjolstad. "Mosaic: An Interoperable Compiler for Tensor Algebra." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 394–419. http://dx.doi.org/10.1145/3591236.

Full text
Abstract:
We introduce Mosaic, a sparse tensor algebra compiler that can bind tensor expressions to external functions of other tensor algebra libraries and compilers. Users can extend Mosaic by adding new functions and bind a sub-expression to a function using a scheduling API. Mosaic substitutes the bound sub-expressions with calls to the external functions and automatically generates the remaining code using a default code generator. As the generated code is fused by default, users can productively leverage both fusion and calls to specialized functions within the same compiler. We demonstrate the be
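A hypothetical sketch of the binding idea, not Mosaic's scheduling API: one sub-expression is handed to an external sparse library (SciPy here, as an assumed stand-in), while the rest of the expression is handled by ordinary code.

```python
# Illustrative sketch: evaluate y = alpha * (A @ x) + z by "binding" the
# A @ x sub-expression to an external sparse kernel (SciPy CSR SpMV) and
# handling the remaining dense arithmetic ourselves.
import numpy as np
import scipy.sparse as sp

A = sp.random(1000, 1000, density=0.01, format="csr")
x = np.random.rand(1000)
z = np.random.rand(1000)
alpha = 2.0

t = A @ x                  # bound sub-expression: external CSR SpMV kernel
y = alpha * t + z          # remaining code for the rest of the expression
```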
APA, Harvard, Vancouver, ISO, and other styles
12

Zheng, Zhen, Zaifeng Pan, Dalin Wang, et al. "BladeDISC: Optimizing Dynamic Shape Machine Learning Workloads via Compiler Approach." Proceedings of the ACM on Management of Data 1, no. 3 (2023): 1–29. http://dx.doi.org/10.1145/3617327.

Full text
Abstract:
Compiler optimization plays an increasingly important role to boost the performance of machine learning models for data processing and management. With increasingly complex data, the dynamic tensor shape phenomenon emerges for ML models. However, existing ML compilers either can only handle static shape models or expose a series of performance problems for both operator fusion optimization and code generation in dynamic shape scenes. This paper tackles the main challenges of dynamic shape optimization: the fusion optimization without shape value, and code generation supporting arbitrary shapes
APA, Harvard, Vancouver, ISO, and other styles
13

Liu, Amanda, Gilbert Bernstein, Adam Chlipala, and Jonathan Ragan-Kelley. "A Verified Compiler for a Functional Tensor Language." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 320–42. http://dx.doi.org/10.1145/3656390.

Full text
Abstract:
Producing efficient array code is crucial in high-performance domains like image processing and machine learning. It requires the ability to control factors like compute intensity and locality by reordering computations into different stages and granularities with respect to where they are stored. However, traditional pure, functional tensor languages struggle to do so. In a previous publication, we introduced ATL as a pure, functional tensor language capable of systematically decoupling compute and storage order via a set of high-level combinators known as reshape operators. Reshape operators
APA, Harvard, Vancouver, ISO, and other styles
14

Zhang, Yongliang, Yuanyuan Zhu, Hao Zhang, et al. "TGraph: A Tensor-centric Graph Processing Framework." Proceedings of the ACM on Management of Data 3, no. 1 (2025): 1–27. https://doi.org/10.1145/3709731.

Full text
Abstract:
Graphs are ubiquitous in various real-world applications, and many graph processing systems have been developed. Recently, hardware accelerators have been exploited to speed up graph systems. However, such hardware-specific systems are hard to migrate across different hardware backends. In this paper, we propose the first tensor-based graph processing framework, Tgraph, which can be smoothly deployed and run on any powerful hardware accelerators (uniformly called XPU) that support Tensor Computation Runtimes (TCRs). TCRs, which are deep learning frameworks along with their runtimes and compilers
APA, Harvard, Vancouver, ISO, and other styles
15

Turner, Jack, Elliot J. Crowley, and Michael F. P. O'Boyle. "Neural Architecture Search as Program Transformation Exploration." Communications of the ACM 67, no. 10 (2024): 92–100. http://dx.doi.org/10.1145/3624775.

Full text
Abstract:
Improving the performance of deep neural networks (DNNs) is important to both the compiler and neural architecture search (NAS) communities. Compilers apply program transformations in order to exploit hardware parallelism and memory hierarchy. However, legality concerns mean they fail to exploit the natural robustness of neural networks. In contrast, NAS techniques mutate networks by operations such as the grouping or bottlenecking of convolutions, exploiting the resilience of DNNs. In this work, we express such neural architecture operations as program transformations whose legality depends o
APA, Harvard, Vancouver, ISO, and other styles
16

Liu, Jie, Zhongyuan Zhao, Zijian Ding, Benjamin Brock, Hongbo Rong, and Zhiru Zhang. "UniSparse: An Intermediate Language for General Sparse Format Customization." Proceedings of the ACM on Programming Languages 8, OOPSLA1 (2024): 137–65. http://dx.doi.org/10.1145/3649816.

Full text
Abstract:
The ongoing trend of hardware specialization has led to a growing use of custom data formats when processing sparse workloads, which are typically memory-bound. These formats facilitate optimized software/hardware implementations by utilizing sparsity pattern- or target-aware data structures and layouts to enhance memory access latency and bandwidth utilization. However, existing sparse tensor programming models and compilers offer little or no support for productively customizing the sparse formats. Additionally, because these frameworks represent formats using a limited set of per-dimension
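An illustrative view of the per-dimension format attributes such systems build on: CSR treats the row dimension as dense (a pointer per row) and the column dimension as compressed (packed nonzero indices). The sketch below is plain Python, not UniSparse syntax.

```python
# Illustrative sketch of a per-dimension view of CSR:
#   level 1 (rows): dense      -- every row represented, via a pointer array
#   level 2 (cols): compressed -- only nonzero columns stored per row
import numpy as np

dense = np.array([[0., 2., 0.],
                  [1., 0., 3.],
                  [0., 0., 0.]])

indptr, indices, data = [0], [], []
for row in dense:
    cols = np.flatnonzero(row)
    indices.extend(cols.tolist())
    data.extend(row[cols].tolist())
    indptr.append(len(indices))

print(indptr)    # [0, 1, 3, 3]
print(indices)   # [1, 0, 2]
print(data)      # [2.0, 1.0, 3.0]
```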
APA, Harvard, Vancouver, ISO, and other styles
17

Gibson, Perry, José Cano, Elliot Crowley, Amos Storkey, and Michael O'Boyle. "DLAS: A Conceptual Model for Across-Stack Deep Learning Acceleration." ACM Transactions on Architecture and Code Optimization 22, no. 1 (2025): 1. https://doi.org/10.1145/3688609.

Full text
Abstract:
Deep Neural Networks (DNNs) are very computationally demanding, which presents a significant barrier to their deployment, especially on resource-constrained devices. Significant work from both the machine learning and computing systems communities has attempted to accelerate DNNs. However, the number of techniques available and the required domain knowledge for their exploration continue to grow, making design space exploration (DSE) increasingly difficult. To unify the perspectives from these two communities, this article introduces the Deep Learning Acceleration Stack (DLAS), a conceptual mo
APA, Harvard, Vancouver, ISO, and other styles
18

Zhang, Genghan, Olivia Hsu, and Fredrik Kjolstad. "Compilation of Modular and General Sparse Workspaces." Proceedings of the ACM on Programming Languages 8, PLDI (2024): 1213–38. http://dx.doi.org/10.1145/3656426.

Full text
Abstract:
Recent years have seen considerable work on compiling sparse tensor algebra expressions. This paper addresses a shortcoming in that work, namely how to generate efficient code (in time and space) that scatters values into a sparse result tensor. We address this shortcoming through a compiler design that generates code that uses sparse intermediate tensors (sparse workspaces) as efficient adapters between compute code that scatters and result tensors that do not support random insertion. Our compiler automatically detects sparse scattering behavior in tensor expressions and inserts necessary in
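A minimal sketch of the workspace idea in plain Python (not the paper's generated code): sparse-matrix multiplication that scatters each output row into a dense temporary supporting random insertion, then compresses it into the sparse result.

```python
# Illustrative sketch: accumulate scattered values into a dense row-sized
# "workspace", then compress the workspace into the sparse result.
import numpy as np
import scipy.sparse as sp

A = sp.random(200, 300, density=0.02, format="csr")
B = sp.random(300, 250, density=0.02, format="csr")

rows, cols, vals = [], [], []
for i in range(A.shape[0]):
    workspace = np.zeros(B.shape[1])             # dense temporary for row i
    for jj in range(A.indptr[i], A.indptr[i + 1]):
        k, a_ik = A.indices[jj], A.data[jj]
        for kk in range(B.indptr[k], B.indptr[k + 1]):
            workspace[B.indices[kk]] += a_ik * B.data[kk]   # random scatter
    for j in np.flatnonzero(workspace):          # compress the workspace
        rows.append(i); cols.append(int(j)); vals.append(workspace[j])

C = sp.csr_matrix((vals, (rows, cols)), shape=(A.shape[0], B.shape[1]))
assert np.allclose(C.toarray(), (A @ B).toarray())
```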
APA, Harvard, Vancouver, ISO, and other styles
19

Xu, Jingyu, Linying Pan, Qiang Zeng, Wenjian Sun, and Weixiang Wan. "Based on TPUGRAPHS Predicting Model Runtimes Using Graph Neural Networks." Frontiers in Computing and Intelligent Systems 6, no. 1 (2023): 66–69. http://dx.doi.org/10.54097/fcis.v6i1.13.

Full text
Abstract:
Deep learning frameworks are broadly divided into PyTorch, dominant in academia, and TensorFlow, dominant in industry; PyTorch builds dynamic graphs while TensorFlow builds static graphs, both of which are essentially directed, acyclic computational graphs. In TensorFlow, data fed into the model is executed over a well-formed computational graph structure, and static graphs admit more optimization methods and higher performance. The nodes of the graph are ops and the edges are tensors. The static graph is fixed once compilation completes, so it is easier to deploy on a server. How to compile a static grap
APA, Harvard, Vancouver, ISO, and other styles
20

Chou, Stephen, and Saman Amarasinghe. "Compilation of dynamic sparse tensor algebra." Proceedings of the ACM on Programming Languages 6, OOPSLA2 (2022): 1408–37. http://dx.doi.org/10.1145/3563338.

Full text
Abstract:
Many applications, from social network graph analytics to control flow analysis, compute on sparse data that evolves over the course of program execution. Such data can be represented as dynamic sparse tensors and efficiently stored in formats (data layouts) that utilize pointer-based data structures like block linked lists, binary search trees, B-trees, and C-trees among others. These specialized formats support fast in-place modification and are thus better suited than traditional, array-based data structures like CSR for storing dynamic sparse tensors. However, different dynamic sparse tens
APA, Harvard, Vancouver, ISO, and other styles
21

Kjolstad, Fredrik, Shoaib Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. "The tensor algebra compiler." Proceedings of the ACM on Programming Languages 1, OOPSLA (2017): 1–29. http://dx.doi.org/10.1145/3133901.

Full text
APA, Harvard, Vancouver, ISO, and other styles
22

Kvyetnyy, Roman, Yuriy Bunyak, Olga Sofina, et al. "Tensor and Vector Approaches to Objects Recognition by Inverse Feature Filters." Informatyka, Automatyka, Pomiary w Gospodarce i Ochronie Środowiska 14, no. 1 (2024): 41–45. http://dx.doi.org/10.35784/iapgos.5494.

Full text
Abstract:
The extraction of image object features by filters based on tensor and vector data representations is investigated. The tensor data is obtained as a sum of rank-one tensors, given by the tensor product of the vector of the lexicographic representation of image fragment pixels with itself. The accumulated tensor is approximated by a rank-one tensor obtained using singular value decomposition. It has been shown that the main vector of the decomposition can be considered as the object feature vector. The vector data is obtained by accumulating analogous vectors of image fragments pix
APA, Harvard, Vancouver, ISO, and other styles
23

Voronov, P. L. "Determination of the Transformation Tensor of an Asymmetric Three-Phase Compound Network at Its Physical Separation into Parts." Vesti vysshikh uchebnykh zavedenii Chernozemya 20, no. 2 (2024): 11–29. https://doi.org/10.53015/18159958_2024_20_2_11.

Full text
Abstract:
The relevance of the study is due to the fact that currently the strategic direction for the further development of power supply systems (SES) is the concept of introducing various intelligent distribution complexes based on controlled power electrical equipment and new generation power lines. At the same time, digital substations are being created, as well as specialized communication and automated information and measurement complexes, relay protection devices and automation of production processes based on power electronics, microelectronics and microprocessor technology. The article discus
APA, Harvard, Vancouver, ISO, and other styles
24

Kovach, Scott, Praneeth Kolichala, Tiancheng Gu, and Fredrik Kjolstad. "Indexed Streams: A Formal Intermediate Representation for Fused Contraction Programs." Proceedings of the ACM on Programming Languages 7, PLDI (2023): 1169–93. http://dx.doi.org/10.1145/3591268.

Full text
Abstract:
We introduce indexed streams, a formal operational model and intermediate representation that describes the fused execution of a contraction language that encompasses both sparse tensor algebra and relational algebra. We prove that the indexed stream model is correct with respect to a functional semantics. We also develop a compiler for contraction expressions that uses indexed streams as an intermediate representation. The compiler is only 540 lines of code, but we show that its performance can match both the TACO compiler for sparse tensor algebra and the SQLite and DuckDB query processing l
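A tiny illustration of the co-iteration that indexed streams formalize: a sparse dot product computed by merging two sorted (index, value) streams, touching only coordinates present in both operands. Plain Python, not the paper's intermediate representation.

```python
# Illustrative sketch: sparse dot product by merging two sorted streams.
a = [(1, 2.0), (4, 1.0), (7, 3.0)]   # sorted sparse vector a
b = [(2, 5.0), (4, 4.0), (7, 2.0)]   # sorted sparse vector b

i = j = 0
dot = 0.0
while i < len(a) and j < len(b):
    ia, ib = a[i][0], b[j][0]
    if ia == ib:                     # coordinate present in both operands
        dot += a[i][1] * b[j][1]
        i += 1; j += 1
    elif ia < ib:
        i += 1
    else:
        j += 1

print(dot)   # 1.0*4.0 + 3.0*2.0 = 10.0
```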
APA, Harvard, Vancouver, ISO, and other styles
25

Yadav, Rohan, Michael Garland, Alex Aiken, and Michael Bauer. "Task-Based Tensor Computations on Modern GPUs." Proceedings of the ACM on Programming Languages 9, PLDI (2025): 396–420. https://doi.org/10.1145/3729262.

Full text
Abstract:
Domain-specific, fixed-function units are becoming increasingly common in modern processors. As the computational demands of applications evolve, the capabilities and programming interfaces of these fixed-function units continue to change. NVIDIA’s Hopper GPU architecture contains multiple fixed-function units per compute unit, including an asynchronous data movement unit (TMA) and an asynchronous matrix multiplication unit (Tensor Core). Efficiently utilizing these units requires a fundamentally different programming style than previous architectures; programmers must now develop warp-special
APA, Harvard, Vancouver, ISO, and other styles
26

Asada, Yuki, Victor Fu, Apurva Gandhi, et al. "Share the tensor tea." Proceedings of the VLDB Endowment 15, no. 12 (2022): 3598–601. http://dx.doi.org/10.14778/3554821.3554853.

Full text
Abstract:
We demonstrate Tensor Query Processor (TQP): a query processor that automatically compiles relational operators into tensor programs. By leveraging tensor runtimes such as PyTorch, TQP is able to: (1) integrate with ML tools (e.g., Pandas for data ingestion, Tensorboard for visualization); (2) target different hardware (e.g., CPU, GPU) and software (e.g., browser) backends; and (3) end-to-end accelerate queries containing both relational and ML operators. TQP is generic enough to support the TPC-H benchmark, and it provides performance that is comparable to, and often better than, that of sp
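An illustrative sketch of the idea (not TQP's compiler): a relational selection plus aggregation expressed purely as tensor operations, so it could run on any backend with fast tensor kernels.

```python
# Illustrative sketch: SELECT SUM(price * quantity) WHERE quantity >= 2,
# expressed as tensor operations over column tensors.
import numpy as np

price    = np.array([10.0, 25.0,  7.5, 40.0])   # toy "orders" table
quantity = np.array([ 2,    1,    4,    3  ])

mask   = (quantity >= 2).astype(price.dtype)    # predicate as a 0/1 tensor
result = np.sum(price * quantity * mask)
print(result)   # 10*2 + 7.5*4 + 40*3 = 170.0
```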
APA, Harvard, Vancouver, ISO, and other styles
27

Henry, Rawn, Olivia Hsu, Rohan Yadav, et al. "Compilation of sparse array programming models." Proceedings of the ACM on Programming Languages 5, OOPSLA (2021): 1–29. http://dx.doi.org/10.1145/3485505.

Full text
Abstract:
This paper shows how to compile sparse array programming languages. A sparse array programming language is an array programming language that supports element-wise application, reduction, and broadcasting of arbitrary functions over dense and sparse arrays with any fill value. Such a language has great expressive power and can express sparse and dense linear and tensor algebra, functions over images, exclusion and inclusion filters, and even graph algorithms. Our compiler strategy generalizes prior work in the literature on sparse tensor algebra compilation to support any function applied to s
APA, Harvard, Vancouver, ISO, and other styles
28

Li, Yixuan, José Wesley de Souza Magalhães, Alexander Brauckmann, Michael F. P. O'Boyle, and Elizabeth Polgreen. "Guided Tensor Lifting." Proceedings of the ACM on Programming Languages 9, PLDI (2025): 1984–2006. https://doi.org/10.1145/3729330.

Full text
Abstract:
Domain-specific languages (DSLs) for machine learning are revolutionizing the speed and efficiency of machine learning workloads as they enable users easy access to high-performance compiler optimizations and accelerators. However, to take advantage of these capabilities, a user must first translate their legacy code from the language it is currently written in, into the new DSL. The process of automatically lifting code into these DSLs has been identified by several recent works, which propose program synthesis as a solution. However, synthesis is expensive and struggles to scale without care
APA, Harvard, Vancouver, ISO, and other styles
29

Thorkildsen, Gunnar, and Helge B. Larsen. "The atomic anisotropic displacement tensor – completing the picture." Acta Crystallographica Section A Foundations and Advances 71, no. 4 (2015): 467–70. http://dx.doi.org/10.1107/s2053273315008372.

Full text
Abstract:
A simplified approach for calculating the equivalent isotropic displacement parameter is presented and the transformation property of the tensor representation U to point-group operations is analysed. Complete tables have been compiled for the restrictions imposed upon the tensor owing to the site symmetry associated with all special positions as listed in Hahn [(2011), International Tables for Crystallography, Vol. A, Space-group Symmetry, 5th revised ed. Chichester: John Wiley and Sons, Ltd].
APA, Harvard, Vancouver, ISO, and other styles
30

Liu, Gonghan, Yue Li, and Xiaoling Wang. "Expedited Tensor Program Compilation Based on LightGBM." Journal of Physics: Conference Series 2078, no. 1 (2021): 012019. http://dx.doi.org/10.1088/1742-6596/2078/1/012019.

Full text
Abstract:
If the traditional deep learning framework needs to support a new operator, it usually needs to be highly optimized by experts or hardware vendors to be usable in practice, which is inefficient. The deep learning compiler has proved to be an effective solution to this problem, but it still suffers from unbearably long overall optimization time. In this paper, aiming at the XGBoost cost model in Ansor, we train a cost model based on the LightGBM algorithm, which accelerates the optimization time without compromising the accuracy. Experimentation with real hardware shows that our algorithm
APA, Harvard, Vancouver, ISO, and other styles
31

Sukonkin, M. A., and P. Yu Pushkarev. "Using synthetic magnetotelluric data to evaluate the efficiency of methods based on local-regional decomposition of the impedance tensor." Moscow University Bulletin Series 4 Geology, no. 6, 2024 (2024): 185–96. https://doi.org/10.55959/msu0579-9406-4-2024-63-6-185-196.

Full text
Abstract:
A characteristic simplified resistivity model of the earth’s crust has been compiled, containing a three-dimensional conductive sedimentary depression in a resistive basement. Two variants of the model are considered: with a uniform near-surface part and with multiple local near-surface inhomogeneities. Using three-dimensional modeling, synthetic magnetotelluric sounding (MTS) data were calculated using a system of profiles. In the data for the second variant of the model, a widespread effect of near-surface distortions is observed, leading to a static shift in the amplitude MTS curves, but no
APA, Harvard, Vancouver, ISO, and other styles
32

Popov, Yu I. "Fields of geometric objects associated with compiled hyperplane H ( ,L)  -distribution in affine space." Differential Geometry of Manifolds of Figures, no. 52 (2021): 97–116. http://dx.doi.org/10.5922/0321-4796-2020-52-10.

Full text
Abstract:
In the first-order frame a tangentially r-framed hyperband is given in the projective space. For simplicity of presentation, we adapt the frame by the field of the 1st kind normals. The tensor of nonholonomicity of the clothing L-planes field is introduced. The vanishing of the nonholonomic tensor leads to three different interpretations of the hyperband. With the help of TL-virtual normals of the 1st and 2nd kind of framed L-planes, we come to the following conclusion: in a third order differential neighborhood the bundle of the hyperband second kind normals generates a one-parameter bundle of TL-v
APA, Harvard, Vancouver, ISO, and other styles
33

Sundram, Shiv, Muhammad Usman Tariq, and Fredrik Kjolstad. "Compiling Recurrences over Dense and Sparse Arrays." Proceedings of the ACM on Programming Languages 8, OOPSLA1 (2024): 250–75. http://dx.doi.org/10.1145/3649820.

Full text
Abstract:
We present a framework for compiling recurrence equations into native code. In our framework, users specify a system of recurrences, the types of data structures that store inputs and outputs, and scheduling commands for optimization. Our compiler then lowers these specifications into native code that respects the dependencies in the recurrence equations. Our compiler can generate code over both sparse and dense data structures, and determines if the recurrence system is solvable with the provided scheduling primitives. We evaluate the performance and correctness of the generated code on sever
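A minimal example of what a recurrence specification lowers to, in plain Python (not the paper's language): a two-equation system whose loop order is forced by the dependence on the previous index.

```python
# Illustrative sketch: the recurrence system
#   f[i] = f[i-1] + g[i-1],  g[i] = 2 * g[i-1]
# lowered to a loop that respects the dependence on index i-1.
n = 10
f = [0.0] * n
g = [1.0] * n
for i in range(1, n):          # dependencies force increasing i
    f[i] = f[i - 1] + g[i - 1]
    g[i] = 2.0 * g[i - 1]
print(f[-1], g[-1])            # 511.0 512.0
```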
APA, Harvard, Vancouver, ISO, and other styles
34

Alemayehu, Sisay, and Jima Asefa. "A Review of Earthquake Source Parameters in the Main Ethiopian Rift." International Journal of Geophysics 2023 (May 11, 2023): 1–14. http://dx.doi.org/10.1155/2023/8368175.

Full text
Abstract:
We assessed earthquake source parameters compiled from previous studies and international databases. In addition, moment tensor inversion is made from the broadband seismic data of two earthquakes that occurred in the region in 2017 and 2018 with magnitudes Mw 5.0 and 5.1, respectively. As a result, the two events’ reliable source parameters are retrieved. We found that earthquakes are distributed in the rift floor, at margins and adjacent plateaus. Because the majority of earthquakes occur on the rift floor, deformation is most likely caused by strain accumulation transferred from border faul
APA, Harvard, Vancouver, ISO, and other styles
35

Klochkov, Yu V., S. D. Fomin, O. V. Vakhnina, T. A. Sobolevskaya, M. Yu Klochkov, and A. S. Andreev. "Finite element modeling of the processes of elastic-plastic deformation of reclamation objects of the agro-industrial complex." IOP Conference Series: Earth and Environmental Science 965, no. 1 (2022): 012049. http://dx.doi.org/10.1088/1755-1315/965/1/012049.

Full text
Abstract:
To study the processes of nonlinear deformation of reclamation objects and engineering systems of the agro-industrial complex, taking into account the plastic stage of the used structural material, a finite element model was created based on a volumetric prismatic discretization element with quadrangular bases. The plastic stage of deformation of the applied structural material of the object is taken into account on the basis of the provisions of the deformation theory of plasticity. The plasticity matrix at the (j + 1)-th stage of sequential loading was compiled as a resu
APA, Harvard, Vancouver, ISO, and other styles
36

Fang, Jingzhi, Yanyan Shen, Yue Wang, and Lei Chen. "ETO." Proceedings of the VLDB Endowment 15, no. 2 (2021): 183–95. http://dx.doi.org/10.14778/3489496.3489500.

Full text
Abstract:
Recently, deep neural networks (DNNs) have achieved great success in various applications, where low inference latency is important. Existing solutions either manually tune the kernel library or utilize search-based compilation to reduce the operator latency. However, manual tuning requires significant engineering effort, and the huge search space makes the search cost of the search-based compilation unaffordable in some situations. In this work, we propose ETO, a framework for speeding up DNN operator optimization based on reusing the information of performant tensor programs. Specifically, E
APA, Harvard, Vancouver, ISO, and other styles
37

Manabe, Hidetaka, and Yuichi Sano. "The State Preparation of Multivariate Normal Distributions using Tree Tensor Network." Quantum 9 (May 28, 2025): 1755. https://doi.org/10.22331/q-2025-05-28-1755.

Full text
Abstract:
The quantum state preparation of probability distributions is an important subroutine for many quantum algorithms. When embedding D-dimensional multivariate probability distributions by discretizing each dimension into 2^n points, we need a state preparation circuit comprising a total of nD qubits, which is often difficult to compile. In this study, we propose a scalable method to generate state preparation circuits for D-dimensional multivariate normal distributions, utilizing tree tensor networks (TTN). We establish theoretical guarantees that multivariate normal distributions with 1D correla
APA, Harvard, Vancouver, ISO, and other styles
38

Bračevac, Oliver, Guannan Wei, Songlin Jia, et al. "Graph IRs for Impure Higher-Order Languages: Making Aggressive Optimizations Affordable with Precise Effect Dependencies." Proceedings of the ACM on Programming Languages 7, OOPSLA2 (2023): 400–430. http://dx.doi.org/10.1145/3622813.

Full text
Abstract:
Graph-based intermediate representations (IRs) are widely used for powerful compiler optimizations, either interprocedurally in pure functional languages, or intraprocedurally in imperative languages. Yet so far, no suitable graph IR exists for aggressive global optimizations in languages with both effects and higher-order functions: aliasing and indirect control transfers make it difficult to maintain sufficiently granular dependency information for optimizations to be effective. To close this long-standing gap, we propose a novel typed graph IR combining a notion of reachability types with a
APA, Harvard, Vancouver, ISO, and other styles
39

Muradova, A.-M. Y. "Rehabilitation of Athletes after Surgical Intervention on the Achilles Tendon." Scientific News of Academy of Physical Education and Sport 2, no. 3 (2021): 76–80. http://dx.doi.org/10.28942/ssj.v2i3.254.

Full text
Abstract:
Rehabilitation of athletes after surgical intervention on the Achilles tendon. The frequency of injuries to the Achilles tendon increases every year in the population, since most people lead a sedentary lifestyle, but periodically show interest in physical activity. Most Achilles tendon injuries occur during sports games where rapid acceleration / deceleration and jumping are required, so professional athletes are most at risk of injury to the Achilles tendon. Currently, there are no approved requirements for the rehabilitation treatment of patients after surgical treatment of the Achilles ten
APA, Harvard, Vancouver, ISO, and other styles
40

Popov, Yu I. "Fields of geometric objects associated with compiled hyperplane-distribution in affine space." Differential Geometry of Manifolds of Figures, no. 51 (2020): 103–15. http://dx.doi.org/10.5922/0321-4796-2020-51-12.

Full text
Abstract:
A compiled hyperplane distribution is considered in an n-dimensional projective space. We will briefly call it a -distribution. Note that the plane L(A) is the distribution characteristic obtained by displacement in the center belonging to the L-subbundle. The following results were obtained: a) The existence theorem is proved: -distribution exists with arbitrary (3n – 5) functions of n arguments. b) A focal manifold is constructed in the normal plane of the 1st kind of L-subbundle. It was obtained by shifting the center A along the curves belonging to the L-distribution. A focal manifold is
APA, Harvard, Vancouver, ISO, and other styles
41

Guo, Hai Qing, Bo Wen, and Xiao Feng Bai. "Study of Seepage Properties of Fractured Rock Mass Based on Improved K-Means Clustering Algorithm." Applied Mechanics and Materials 405-408 (September 2013): 310–15. http://dx.doi.org/10.4028/www.scientific.net/amm.405-408.310.

Full text
Abstract:
Seepage properties of fractured rock mass are of prime importance for hydraulic engineering and accurate description of rock fracture geometry parameters is an important and basic task in rock hydraulics. In this paper, an improved K-means clustering algorithm for structural plane of fractured rock mass was first brought forward and the corresponding Matlab program for discontinuity orientations partitioning was compiled and then used in the fitting analysis of dominant orientations of certain dam foundation rock mass. On this basis, combining calculation formulas of multi-group fractures, the
APA, Harvard, Vancouver, ISO, and other styles
42

Utkin, Nikita D., and Andrei K. Dambis. "Calibrating the BHB star distance scale and the halo kinematic distance to the Galactic Centre." Monthly Notices of the Royal Astronomical Society 499, no. 1 (2020): 1058–71. http://dx.doi.org/10.1093/mnras/staa2819.

Full text
Abstract:
ABSTRACT We report the first determination of the distance to the Galactic Centre based on the kinematics of halo objects. We apply the statistical-parallax technique to the sample of ∼2500 blue horizontal branch (BHB) stars compiled by Xue et al. to simultaneously constrain the correction factor to the photometric distances of BHB stars as reported by those authors and the distance to the Galactic Centre to find R = 8.2 ± 0.6 kpc. We also find that the average velocity of our BHB star sample in the direction of Galactic rotation, V0 = −240 ± 4 km s−1, is greater by about 20 km s−1 in absolute
APA, Harvard, Vancouver, ISO, and other styles
43

Yao, Wenjuan, Jianwei Ma, Xuemei Luo, and Bote Luo. "Numerical Analysis of Tympanosclerosis and Treatment Effect." Journal of Mechanics in Medicine and Biology 14, no. 04 (2014): 1450051. http://dx.doi.org/10.1142/s0219519414500511.

Full text
Abstract:
Tympanosclerosis is a typical middle ear disease, which is one of the main causes of conduction deafness. We investigate the effects of tympanosclerosis and lesion excision on sound transmission of the human ear by using finite element technique. Based on CT scan images from Zhongshan Hospital of Fudan University on the normal human middle ear, numerical values of the CT scans were obtained by further processing of the images using a self-compiled program. The CT data of the right ear from a healthy volunteer were digitalized and imported into PATRAN software to reconstruct the finite element
APA, Harvard, Vancouver, ISO, and other styles
44

Mutlu, Ahu Kömeç. "Seismicity, focal mechanism, and stress tensor analysis of the Simav region, western Turkey." Open Geosciences 12, no. 1 (2020): 479–90. http://dx.doi.org/10.1515/geo-2020-0010.

Full text
Abstract:
This study focuses on the seismicity and stress inversion analysis of the Simav region in western Turkey. The latest moderate-size earthquake was recorded on May 19, 2011 (Mw 5.9), with a dense aftershock sequence of more than 5,000 earthquakes in 6 months. Between 2004 and 2018, data from earthquake events with magnitudes greater than 0.7 were compiled from 86 seismic stations. The source mechanism of 54 earthquakes with moment magnitudes greater than 3.5 was derived by using a moment tensor inversion. Normal faults with oblique-slip motions are dominant being compatible with the NE-S
APA, Harvard, Vancouver, ISO, and other styles
45

Lullove, Eric. "Acellular Fetal Bovine Dermal Matrix in the Treatment of Nonhealing Wounds in Patients with Complex Comorbidities." Journal of the American Podiatric Medical Association 102, no. 3 (2012): 233–39. http://dx.doi.org/10.7547/1020233.

Full text
Abstract:
Background: In contrast to the narrow indications for living skin equivalents, extracellular matrix biomaterials are clinically used in a wide range of wound-healing applications. Given the breadth of possible uses, the goal of this study was to retrospectively compile and analyze the clinical application and effectiveness of an extracellular matrix biomaterial derived from fetal bovine dermis (PriMatrix; TEI Biosciences, Boston, Massachusetts) in patients treated by a single physician and monitored postsurgically in an outpatient wound care center. Methods: A retrospective medical record revi
APA, Harvard, Vancouver, ISO, and other styles
46

Iyer, Rajan, Christopher O’Neill, and Manuel Malaver. "Helmholtz Hamiltonian Mechanics Electromagnetic Physics Gaging Charge Fields Having Novel Quantum Circuitry Model." Oriental Journal of Physical Sciences 5, no. 1-2 (2020): 30–48. http://dx.doi.org/10.13005/ojps05.01-02.06.

Full text
Abstract:
This article shows novel model Pauli-Dirac-Planck-quantum-circuit-assembly-gage, consisting of the monopole quasiparticles and electron-positron particle fields, demonstrating power of Iyer Markoulakis Helmholtz Hamiltonian mechanics of point vortex and gradient fields general formalism. Transforming this general metrics to Coulombic gaging metrics and performing gage charge fields calculations, derivation of assembly eigenvector matrix bundle constructs of magnetic monopoles, and electron positron particle gage metrics were successfully compiled, like SUSY (?( 1 &?@?*&1 )) Hermitian q
APA, Harvard, Vancouver, ISO, and other styles
48

Chai, Xin, Tan Sun, Zhaoxin Li, et al. "Cross-Shaped Heat Tensor Network for Morphometric Analysis Using Zebrafish Larvae Feature Keypoints." Sensors 25, no. 1 (2024): 132. https://doi.org/10.3390/s25010132.

Full text
Abstract:
Deep learning-based morphometric analysis of zebrafish is widely utilized for non-destructively identifying abnormalities and diagnosing diseases. However, obtaining discriminative and continuous organ category decision boundaries poses a significant challenge by directly observing zebrafish larvae from the outside. To address this issue, this study simplifies the organ areas to polygons and focuses solely on the endpoint positioning. Specifically, we introduce a deep learning-based feature endpoint detection method for quantitatively determining zebrafish larvae’s essential phenotype and orga
APA, Harvard, Vancouver, ISO, and other styles
49

Venkat, Anand, Tharindu Rusira, Raj Barik, Mary Hall, and Leonard Truong. "SWIRL: High-performance many-core CPU code generation for deep neural networks." International Journal of High Performance Computing Applications 33, no. 6 (2019): 1275–89. http://dx.doi.org/10.1177/1094342019866247.

Full text
Abstract:
Deep neural networks (DNNs) have demonstrated effectiveness in many domains including object recognition, speech recognition, natural language processing, and health care. Typically, the computations involved in DNN training and inferencing are time consuming and require efficient implementations. Existing frameworks such as TensorFlow, Theano, Torch, Cognitive Tool Kit (CNTK), and Caffe enable Graphics Processing Units (GPUs) as the status quo devices for DNN execution, leaving Central Processing Units (CPUs) behind. Moreover, existing frameworks forgo or limit cross layer optimization opportun
APA, Harvard, Vancouver, ISO, and other styles
50

Kim, Hee E., Mate E. Maros, Thomas Miethke, Maximilian Kittel, Fabian Siegel, and Thomas Ganslandt. "Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study." Biomedicines 11, no. 5 (2023): 1333. http://dx.doi.org/10.3390/biomedicines11051333.

Full text
Abstract:
We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of visual transformers (VT) using various configurations including model size (small vs. large), training epochs (1 vs. 100), and quantization schemes (tensor- or channel-wise) using float32 or int8 on publicly available (DIBaS, n = 660) and locally compiled (n = 8500) datasets. Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin and ViT) were evaluated and compared to two convolutional neural networks (CNN), ResNet and ConvNeXT. Th
APA, Harvard, Vancouver, ISO, and other styles