
Journal articles on the topic 'Linear separability'


Listed below are the top 50 journal articles on the topic 'Linear separability.'


1

Smith, J. David, Morgan J. Murray, and John Paul Minda. "Straight talk about linear separability." Journal of Experimental Psychology: Learning, Memory, and Cognition 23, no. 3 (1997): 659–80. http://dx.doi.org/10.1037/0278-7393.23.3.659.

2

Torres, Claudio, Pablo Pérez-Lantero, and Gilberto Gutiérrez. "Linear separability in spatial databases." Knowledge and Information Systems 54, no. 2 (May 27, 2017): 287–314. http://dx.doi.org/10.1007/s10115-017-1063-z.

3

Elizondo, David A., Ralph Birkenhead, Matias Gamez, Noelia Garcia, and Esteban Alfaro. "Linear separability and classification complexity." Expert Systems with Applications 39, no. 9 (July 2012): 7796–807. http://dx.doi.org/10.1016/j.eswa.2012.01.090.

4

Bauer, Ben, Pierre Jolicoeur, and William B. Cowan. "Distractor Heterogeneity versus Linear Separability in Colour Visual Search." Perception 25, no. 11 (November 1996): 1281–93. http://dx.doi.org/10.1068/p251281.

Abstract:
D'Zmura, and later Bauer, Jolicoeur, and Cowan, demonstrated that a target whose chromaticity was linearly separable from the distractor chromaticities was relatively easy to detect in a search display, whereas a target that was not linearly separable from the distractor chromaticities produced steep search slopes. This linear separability effect suggests that efficient colour visual search is mediated by a chromatically linear mechanism. When this mechanism fails, search performance is strongly influenced by the number of search items (set size). In their studies, linear separability was confounded with distractor heterogeneity, so the results attributed to linear separability were also consistent with the model of visual search proposed by Duncan and Humphreys, in which search performance is determined in part by distractor heterogeneity. We contrasted the predictions based on linear separability with those of the Duncan and Humphreys model by varying the ratio of the quantities of the two distractors, and demonstrated the potent effects of linear separability in a design that deconfounded linear separability and distractor heterogeneity.
5

Tajine, M., and D. Elizondo. "New methods for testing linear separability." Neurocomputing 47, no. 1-4 (August 2002): 161–88. http://dx.doi.org/10.1016/s0925-2312(01)00587-2.

6

Bruckstein, Alfred M., and Thomas M. Cover. "Monotonicity of Linear Separability Under Translation." IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-7, no. 3 (May 1985): 355–58. http://dx.doi.org/10.1109/tpami.1985.4767666.

7

Gherardi, Marco. "Solvable Model for the Linear Separability of Structured Data." Entropy 23, no. 3 (March 4, 2021): 305. http://dx.doi.org/10.3390/e23030305.

Abstract:
Linear separability, a core concept in supervised machine learning, refers to whether the labels of a data set can be captured by the simplest possible machine: a linear classifier. In order to quantify linear separability beyond this single bit of information, one needs models of data structure parameterized by interpretable quantities, and tractable analytically. Here, I address one class of models with these properties, and show how a combinatorial method allows for the computation, in a mean field approximation, of two useful descriptors of linear separability, one of which is closely related to the popular concept of storage capacity. I motivate the need for multiple metrics by quantifying linear separability in a simple synthetic data set with controlled correlations between the points and their labels, as well as in the benchmark data set MNIST, where the capacity alone paints an incomplete picture. The analytical results indicate a high degree of “universality”, or robustness with respect to the microscopic parameters controlling data structure.
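To make the "single bit" notion concrete, here is a minimal sketch (our own illustration, not code from the cited paper) that tests linear separability of a small labeled 2-D point set with the classic perceptron rule; the epoch cap is a practical stand-in for a true certificate of non-separability.

```python
# Minimal sketch: test linear separability of 2-D points labeled +1/-1
# with the perceptron rule. The perceptron is guaranteed to converge on
# linearly separable data; on non-separable data it cycles forever, so
# we cap the number of epochs as a practical (not rigorous) cutoff.
def find_separating_hyperplane(points, labels, max_epochs=1000):
    """Return (w1, w2, b) with sign(w1*x + w2*y + b) matching labels, or None."""
    w1 = w2 = b = 0.0
    for _ in range(max_epochs):
        mistakes = 0
        for (x, y), t in zip(points, labels):  # t is +1 or -1
            if (w1 * x + w2 * y + b) * t <= 0:  # misclassified (or on boundary)
                w1 += t * x
                w2 += t * y
                b += t
                mistakes += 1
        if mistakes == 0:
            return (w1, w2, b)  # all points correctly classified
    return None  # no separator found within the budget

corners = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(find_separating_hyperplane(corners, [-1, -1, -1, 1]))  # AND: separable
print(find_separating_hyperplane(corners, [-1, 1, 1, -1]))   # XOR: None
```

On the four corners of the unit square, the AND labeling is linearly separable while the XOR labeling is not: exactly the single bit of information the abstract proposes to refine.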
8

Herrnberger, Bärbel, and Günter Ehret. "Linearity or separability?" Behavioral and Brain Sciences 21, no. 2 (April 1998): 269–70. http://dx.doi.org/10.1017/s0140525x98331179.

Abstract:
Sussman et al. state that auditory systems exploit linear correlations in the sound signal in order to identify perceptual categories. Can the auditory system recognize linearity? In bats and owls, separability of emergent features is an additional constraint that goes beyond linearity and for which linearity is not a necessary prerequisite.
9

Ruts, Wim, Gert Storms, and James Hampton. "Linear separability in superordinate natural language concepts." Memory & Cognition 32, no. 1 (January 2004): 83–95. http://dx.doi.org/10.3758/bf03195822.

10

Hou, Jinchuan, and Xiaofei Qi. "Linear maps preserving separability of pure states." Linear Algebra and its Applications 439, no. 5 (September 2013): 1245–57. http://dx.doi.org/10.1016/j.laa.2013.04.007.

11

Elizondo, D. "The Linear Separability Problem: Some Testing Methods." IEEE Transactions on Neural Networks 17, no. 2 (March 2006): 330–44. http://dx.doi.org/10.1109/tnn.2005.860871.

12

Unger, Giora, and Benny Chor. "Linear Separability of Gene Expression Data Sets." IEEE/ACM Transactions on Computational Biology and Bioinformatics 7, no. 2 (April 2010): 375–81. http://dx.doi.org/10.1109/tcbb.2008.90.

13

Bouldin, Richard. "Distance to invertible linear operators without separability." Proceedings of the American Mathematical Society 116, no. 2 (February 1, 1992): 489. http://dx.doi.org/10.1090/s0002-9939-1992-1097336-6.

14

Horodecki, Michał, Paweł Horodecki, and Ryszard Horodecki. "Separability of Mixed Quantum States: Linear Contractions and Permutation Criteria." Open Systems & Information Dynamics 13, no. 01 (March 2006): 103–11. http://dx.doi.org/10.1007/s11080-006-7271-8.

Abstract:
Recently, a powerful separability criterion was introduced by O. Rudolph in [5] and by K. Chen et al. in [6], based on realignment of the elements of the density matrix. Combining the main idea behind the above criterion with the necessary and sufficient condition in terms of positive maps, we provide a characterization of separable states by means of linear contractions. The latter need not be positive maps. We extend the idea to multipartite systems, and find that, somewhat surprisingly, partial realignment (unlike partial transposition) can detect genuinely tripartite entanglement. We generalize it by introducing a family of so-called permutation separability criteria for multipartite states. Namely, any permutation of the indices of a density matrix written in a product basis leads to a separability criterion. The partial transpose and realignment criteria are special cases of the permutation criteria.
15

Dadajonov, R. N., and N. R. Karimova. "Effective separability of positive and negative linear orders." Uzbek Mathematical Journal 2020, no. 2 (June 25, 2020): 37–43. http://dx.doi.org/10.29229/uzmj.2020-2-4.

16

Ben-Israel, Adi, and Yuri Levin. "The geometry of linear separability in data sets." Linear Algebra and its Applications 416, no. 1 (July 2006): 75–87. http://dx.doi.org/10.1016/j.laa.2005.08.014.

17

Kong, Garry, David Alais, and Erik Van der Burg. "Investigating Linear Separability in Visual Search for Orientation." Journal of Vision 16, no. 12 (September 1, 2016): 1280. http://dx.doi.org/10.1167/16.12.1280.

18

van Zanten, A. J. "Minimal-change order and separability in linear codes." IEEE Transactions on Information Theory 39, no. 6 (1993): 1988–89. http://dx.doi.org/10.1109/18.265509.

19

Ohfuti, Yasushi, and Kikuo Cho. "General separability of linear and nonlinear optical susceptibilities." Physical Review B 52, no. 7 (August 15, 1995): 4828–32. http://dx.doi.org/10.1103/physrevb.52.4828.

20

Eriksson, J., and V. Koivunen. "Identifiability, Separability, and Uniqueness of Linear ICA Models." IEEE Signal Processing Letters 11, no. 7 (July 2004): 601–4. http://dx.doi.org/10.1109/lsp.2004.830118.

21

Asano, Akira, Yasuhiro Kasai, and Shunsuke Yokozeki. "Linear Separability of Positive Self-Dual Logical Filters." Optical Review 2, no. 5 (September 1995): 327–30. http://dx.doi.org/10.1007/s10043-995-0327-0.

22

Hadian Jazi, Marjan, Alireza Bab-Hadiashar, and Reza Hoseinnezhad. "Analytical Analysis of Motion Separability." Scientific World Journal 2013 (2013): 1–15. http://dx.doi.org/10.1155/2013/878417.

Abstract:
Motion segmentation is an important task in computer vision, and several practical approaches have already been developed. A common approach to motion segmentation is to use the optical flow and formulate the segmentation problem using a linear approximation of the brightness constancy constraints. Although there are numerous solutions to this problem, and their accuracy and reliability have been studied, the exact definition of the segmentation problem, its theoretical feasibility, and the conditions for successful motion segmentation are yet to be derived. This paper presents a simplified theoretical framework for predicting the feasibility of segmentation of a two-dimensional linear equation system. A statistical definition of a separable motion (structure) is presented, and a relatively straightforward criterion for predicting the separability of two different motions in this framework is derived. The applicability of the proposed criterion for predicting the existence of multiple motions in practice is examined using both synthetic and real image sequences. The prescribed separability criterion is useful in designing computer vision applications, as it is based solely on the amount of relative motion and the scale of measurement noise.
23

Theis, Fabian J. "A New Concept for Separability Problems in Blind Source Separation." Neural Computation 16, no. 9 (September 1, 2004): 1827–50. http://dx.doi.org/10.1162/0899766041336404.

Abstract:
The goal of blind source separation (BSS) lies in recovering the original independent sources of a mixed random vector without knowing the mixing structure. A key ingredient for performing BSS successfully is to know the indeterminacies of the problem—that is, to know how the separating model relates to the original mixing model (separability). For linear BSS, Comon (1994) showed using the Darmois-Skitovitch theorem that the linear mixing matrix can be found except for permutation and scaling. In this work, a much simpler, direct proof for linear separability is given. The idea is based on the fact that a random vector is independent if and only if the Hessian of its logarithmic density (resp. characteristic function) is diagonal everywhere. This property is then exploited to propose a new algorithm for performing BSS. Furthermore, first ideas of how to generalize separability results based on Hessian diagonalization to more complicated nonlinear models are studied in the setting of postnonlinear BSS.
24

Gabidullina, Z. R. "A Linear Separability Criterion for Sets of Euclidean Space." Journal of Optimization Theory and Applications 158, no. 1 (September 5, 2012): 145–71. http://dx.doi.org/10.1007/s10957-012-0155-x.

25

Chen, Degang, Qiang He, and Xizhao Wang. "On linear separability of data sets in feature space." Neurocomputing 70, no. 13-15 (August 2007): 2441–48. http://dx.doi.org/10.1016/j.neucom.2006.12.002.

26

Budinich, M. "On linear separability of random subsets of hypercube vertices." Journal of Physics A: Mathematical and General 24, no. 4 (February 21, 1991): L211–L213. http://dx.doi.org/10.1088/0305-4470/24/4/010.

27

Johnston, Nathaniel. "Characterizing operations preserving separability measures via linear preserver problems." Linear and Multilinear Algebra 59, no. 10 (October 2011): 1171–87. http://dx.doi.org/10.1080/03081087.2011.596540.

28

Arguin, Martin, and Daniel Saumier. "Conjunction and linear non-separability effects in visual shape encoding." Vision Research 40, no. 22 (October 2000): 3099–115. http://dx.doi.org/10.1016/s0042-6989(00)00155-3.

29

Bauer, Ben, and Sharon McFadden. "Linear separability and redundant colour coding in visual search displays." Displays 18, no. 1 (April 1997): 21–28. http://dx.doi.org/10.1016/s0141-9382(96)01039-6.

30

Delforce, Julie C. "Separability in farm‐household economics: an experiment with linear programming." Agricultural Economics 10, no. 2 (April 1994): 165–77. http://dx.doi.org/10.1111/j.1574-0862.1994.tb00299.x.

31

Woodsend, Kristian, and Jacek Gondzio. "Exploiting separability in large-scale linear support vector machine training." Computational Optimization and Applications 49, no. 2 (October 14, 2009): 241–69. http://dx.doi.org/10.1007/s10589-009-9296-8.

32

Delforce, J. "Separability in farm-household economics: An experiment with linear programming." Agricultural Economics 10, no. 2 (April 1994): 165–77. http://dx.doi.org/10.1016/0169-5150(94)90005-1.

33

ARKIN, ESTHER M., DELIA GARIJO, ALBERTO MÁRQUEZ, JOSEPH S. B. MITCHELL, and CARLOS SEARA. "SEPARABILITY OF POINT SETS BY k-LEVEL LINEAR CLASSIFICATION TREES." International Journal of Computational Geometry & Applications 22, no. 02 (April 2012): 143–65. http://dx.doi.org/10.1142/s0218195912500021.

Abstract:
Let R and B be sets of red and blue points in the plane in general position. We study the problem of computing a k-level binary space partition (BSP) tree to classify/separate R and B, such that the tree defines a linear decision at each internal node and each leaf of the tree corresponds to a (convex) cell of the partition that contains only red or only blue points. Specifically, we show that a 2-level tree can be computed, if one exists, in time O(n^2). We show that a minimum-level (3 ≤ k ≤ log n) tree can be computed in time n^O(log n). In the special case of axis-parallel partitions, we show that 2-level and 3-level trees can be computed in time O(n), while a minimum-level tree can be computed in time O(n^5).
34

Yu, Zhi Bin, and Chun Xia Chen. "The Radar Signal Feature-Separability Model Analysis." Advanced Materials Research 268-270 (July 2011): 1484–87. http://dx.doi.org/10.4028/www.scientific.net/amr.268-270.1484.

Abstract:
A one-dimensional feature-separability model addressing the feature-separability of radar emitter signals is proposed, based on probability and statistical theory, to evaluate the deinterleaving and recognition capability of extracted features. The proposed method is applied to analyze conventional features of radar emitter signals. Theoretical analysis and experimental results show that the proposed model offers a new way to analyze the validity of extracted features, and is valid in both the original feature space and a linearly transformed feature space.
35

Avdeev, Dmitry, and Sergei Knizhnik. "3D integral equation modeling with a linear dependence on dimensions." GEOPHYSICS 74, no. 5 (September 2009): F89–F94. http://dx.doi.org/10.1190/1.3190132.

Abstract:
We have improved the integral equation method for modeling 3D electromagnetic fields by using the separability of its inherent [Formula: see text] dyadic Green’s tensors. Conventional integral equation approaches exhibit a quadratic dependence on model size, at least for the vertical dimension. In contrast, our approach has a linear dependence on all three dimensions. We tested our method on an example of induction logging in deviated boreholes.
36

Lim, Lek-Heng. "Tensors in computations." Acta Numerica 30 (May 2021): 555–764. http://dx.doi.org/10.1017/s0962492921000076.

Abstract:
The notion of a tensor captures three great ideas: equivariance, multilinearity, separability. But trying to be three things at once makes the notion difficult to understand. We will explain tensors in an accessible and elementary way through the lens of linear algebra and numerical linear algebra, elucidated with examples from computational and applied mathematics.
37

Blair, Mark, and Don Homa. "Expanding the search for a linear separability constraint on category learning." Memory & Cognition 29, no. 8 (December 2001): 1153–64. http://dx.doi.org/10.3758/bf03206385.

38

Caticha, N., J. E. Palo Tejada, D. Lancet, and E. Domany. "Computational Capacity of an Odorant Discriminator: The Linear Separability of Curves." Neural Computation 14, no. 9 (September 1, 2002): 2201–20. http://dx.doi.org/10.1162/089976602320264051.

Abstract:
We introduce and study an artificial neural network inspired by the probabilistic receptor affinity distribution model of olfaction. Our system consists of N sensory neurons whose outputs converge on a single processing linear threshold element. The system's aim is to model discrimination of a single target odorant from a large number p of background odorants within a range of odorant concentrations. We show that this is possible provided p does not exceed a critical value p_c and calculate the critical capacity α_c = p_c/N. The critical capacity depends on the range of concentrations in which the discrimination is to be accomplished. If the olfactory bulb may be thought of as a collection of such processing elements, each responsible for the discrimination of a single odorant, our study provides a quantitative analysis of the potential computational properties of the olfactory bulb. The mathematical formulation of the problem we consider is one of determining the capacity for linear separability of continuous curves, embedded in a large-dimensional space. This is accomplished here by a numerical study, using a method that signals whether the discrimination task is realizable, together with a finite-size scaling analysis.
39

Bauer, Ben, Pierre Jolicœur, and William B. Cowan. "Convex hull test of the linear separability hypothesis in visual search." Vision Research 39, no. 16 (August 1999): 2681–95. http://dx.doi.org/10.1016/s0042-6989(98)00302-2.

40

Hammer, P. L., B. Simeone, Th M. Liebling, and D. de Werra. "From Linear Separability to Unimodality: A Hierarchy of Pseudo-Boolean Functions." SIAM Journal on Discrete Mathematics 1, no. 2 (May 1988): 174–84. http://dx.doi.org/10.1137/0401019.

41

Wattenmaker, William D., Gerald I. Dewey, Timothy D. Murphy, and Douglas L. Medin. "Linear separability and concept learning: Context, relational properties, and concept naturalness." Cognitive Psychology 18, no. 2 (April 1986): 158–94. http://dx.doi.org/10.1016/0010-0285(86)90011-3.

42

Chun, Youngsub. "Agreement, separability, and other axioms for quasi-linear social choice problems." Social Choice and Welfare 17, no. 3 (May 4, 2000): 507–21. http://dx.doi.org/10.1007/s003550050175.

43

Stallard, Brian R. "Near-IR versus Mid-IR: Separability of Three Classes of Organic Compounds." Applied Spectroscopy 51, no. 5 (May 1997): 625–30. http://dx.doi.org/10.1366/0003702971941025.

Abstract:
Recently there has been a surge of interest in spectroscopic sensors operating in the near-IR, although it is recognized that the mid-IR contains more spectral information. The general question addressed in this paper is, How much specificity is lost in choosing the near-IR over the mid-IR for sensor applications? The example considered is the separability among three classes of organic compounds: alkanes, alcohols, and ketones/aldehydes. We use spectra from two sources: the Hummel polymer library (mid-IR) and the library of Buback and Vögele (near-IR). This is the first paper on class separability to make use of this new near-IR library, available in digital form only since July 1995. Five spectral regions are considered: region 5, 10,500 to 6300 cm⁻¹; region 4, 7200 to 5200 cm⁻¹; region 3, 5500 to 3800 cm⁻¹; region 2, 3900 to 2500 cm⁻¹; and region 1, 2500 to 500 cm⁻¹. Class separability is explored both qualitatively and quantitatively with the use of principal component scatter plots, linear discriminant analysis, Bhattacharyya distances, and other methods. We find that the separability is greatest in region 1 and least in region 2, with the three near-IR regions being intermediate. Furthermore, we find that, in the near-IR, there is sufficient class separability to ensure that organic compounds of one class can be determined in the midst of interference from the other classes.
44

Størmer, Erling. "Separable states and positive maps II." MATHEMATICA SCANDINAVICA 105, no. 2 (December 1, 2009): 188. http://dx.doi.org/10.7146/math.scand.a-15114.

Abstract:
Using the natural duality between linear functionals on tensor products of $C^*$-algebras with the trace class operators on a Hilbert space $H$ and linear maps of the $C^*$-algebra into $B(H)$, we give two characterizations of separability, one relating it to abelianness of the definite set of the map, and one on tensor products of nuclear and UHF $C^*$-algebras.
45

WANG, DI, and NARENDRA S. CHAUDHARI. "BINARY NEURAL NETWORK TRAINING ALGORITHMS BASED ON LINEAR SEQUENTIAL LEARNING." International Journal of Neural Systems 13, no. 05 (October 2003): 333–51. http://dx.doi.org/10.1142/s0129065703001613.

Abstract:
A key problem in Binary Neural Network learning is identifying larger linearly separable subsets. In this paper we prove some lemmas about linear separability. Based on these lemmas, we propose Multi-Core Learning (MCL) and Multi-Core Expand-and-Truncate Learning (MCETL) algorithms to construct Binary Neural Networks. We conclude that MCL and MCETL simplify the equations used to compute weights and thresholds, and that they result in the construction of a simpler hidden layer. Examples are given to demonstrate these conclusions.
46

Sun, Yeong-Jeu. "New Criteria for the Linear Binary Separability in the Euclidean Normed Space." Open Cybernetics & Systemics Journal 2, no. 1 (May 19, 2008): 101–5. http://dx.doi.org/10.2174/1874110x00802010101.

47

Hamilton, Emily. "Finite quotients of rings and applications to subgroup separability of linear groups." Transactions of the American Mathematical Society 357, no. 5 (October 7, 2004): 1995–2006. http://dx.doi.org/10.1090/s0002-9947-04-03580-9.

48

Buetti, Simona, Yujie Shao, Zoe Jing Xu, and Alejandro Lleras. "Re-examining the linear separability effect in visual search for oriented targets." Journal of Vision 20, no. 11 (October 20, 2020): 1244. http://dx.doi.org/10.1167/jov.20.11.1244.

49

Wattenmaker, W. D. "Knowledge Structures and Linear Separability: Integrating Information in Object and Social Categorization." Cognitive Psychology 28, no. 3 (June 1995): 274–328. http://dx.doi.org/10.1006/cogp.1995.1007.

50

Singh, Satvik, and Ion Nechita. "Diagonal unitary and orthogonal symmetries in quantum theory." Quantum 5 (August 9, 2021): 519. http://dx.doi.org/10.22331/q-2021-08-09-519.

Abstract:
We analyze bipartite matrices and linear maps between matrix algebras, which are respectively, invariant and covariant, under the diagonal unitary and orthogonal groups' actions. By presenting an expansive list of examples from the literature, which includes notable entries like the Diagonal Symmetric states and the Choi-type maps, we show that this class of matrices (and maps) encompasses a wide variety of scenarios, thereby unifying their study. We examine their linear algebraic structure and investigate different notions of positivity through their convex conic manifestations. In particular, we generalize the well-known cone of completely positive matrices to that of triplewise completely positive matrices and connect it to the separability of the relevant invariant states (or the entanglement breaking property of the corresponding quantum channels). For linear maps, we provide explicit characterizations of the stated covariance in terms of their Kraus, Stinespring, and Choi representations, and systematically analyze the usual properties of positivity, decomposability, complete positivity, and the like. We also describe the invariant subspaces of these maps and use their structure to provide necessary and sufficient conditions for separability of the associated invariant bipartite states.