Academic literature on the topic 'Linear separability'

Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Linear separability.' For each source, a bibliographic reference can be generated in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc. You can also download the full text of a publication as a PDF and read its abstract online whenever it is available in the metadata.

Journal articles on the topic "Linear separability"

1

Smith, J. David, Morgan J. Murray, and John Paul Minda. "Straight talk about linear separability." Journal of Experimental Psychology: Learning, Memory, and Cognition 23, no. 3 (1997): 659–80. http://dx.doi.org/10.1037/0278-7393.23.3.659.

2

Torres, Claudio, Pablo Pérez-Lantero, and Gilberto Gutiérrez. "Linear separability in spatial databases." Knowledge and Information Systems 54, no. 2 (May 27, 2017): 287–314. http://dx.doi.org/10.1007/s10115-017-1063-z.

3

Elizondo, David A., Ralph Birkenhead, Matias Gamez, Noelia Garcia, and Esteban Alfaro. "Linear separability and classification complexity." Expert Systems with Applications 39, no. 9 (July 2012): 7796–807. http://dx.doi.org/10.1016/j.eswa.2012.01.090.

4

Bauer, Ben, Pierre Jolicoeur, and William B. Cowan. "Distractor Heterogeneity versus Linear Separability in Colour Visual Search." Perception 25, no. 11 (November 1996): 1281–93. http://dx.doi.org/10.1068/p251281.

Abstract:
D'Zmura and Bauer, Jolicoeur, and Cowan demonstrated that a target whose chromaticity was linearly separable from the distractor chromaticities was relatively easy to detect in a search display, whereas a target that was not linearly separable from the distractor chromaticities resulted in steep search slopes. This linear separability effect suggests that efficient colour visual search is mediated by a chromatically linear mechanism. When this mechanism fails, search performance is strongly influenced by the number of search items (set size). In their studies, linear separability was confounded with distractor heterogeneity, so the results attributed to linear separability were also consistent with the model of visual search proposed by Duncan and Humphreys, in which search performance is determined in part by distractor heterogeneity. We contrasted the predictions based on linear separability with those of the Duncan and Humphreys model by varying the ratios of the quantities of the two distractors, and demonstrated the potent effects of linear separability in a design that deconfounded linear separability and distractor heterogeneity.
5

Tajine, M., and D. Elizondo. "New methods for testing linear separability." Neurocomputing 47, no. 1-4 (August 2002): 161–88. http://dx.doi.org/10.1016/s0925-2312(01)00587-2.

6

Bruckstein, Alfred M., and Thomas M. Cover. "Monotonicity of Linear Separability Under Translation." IEEE Transactions on Pattern Analysis and Machine Intelligence PAMI-7, no. 3 (May 1985): 355–58. http://dx.doi.org/10.1109/tpami.1985.4767666.

7

Gherardi, Marco. "Solvable Model for the Linear Separability of Structured Data." Entropy 23, no. 3 (March 4, 2021): 305. http://dx.doi.org/10.3390/e23030305.

Abstract:
Linear separability, a core concept in supervised machine learning, refers to whether the labels of a data set can be captured by the simplest possible machine: a linear classifier. In order to quantify linear separability beyond this single bit of information, one needs models of data structure parameterized by interpretable quantities, and tractable analytically. Here, I address one class of models with these properties, and show how a combinatorial method allows for the computation, in a mean field approximation, of two useful descriptors of linear separability, one of which is closely related to the popular concept of storage capacity. I motivate the need for multiple metrics by quantifying linear separability in a simple synthetic data set with controlled correlations between the points and their labels, as well as in the benchmark data set MNIST, where the capacity alone paints an incomplete picture. The analytical results indicate a high degree of “universality”, or robustness with respect to the microscopic parameters controlling data structure.
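The abstract above treats linear separability as a single bit of information about a labeled data set. As an illustrative sketch (not taken from the paper), that bit can be checked with the classical perceptron rule, which by the perceptron convergence theorem stops making mistakes after finitely many updates exactly when a separating hyperplane exists:

```python
import numpy as np

def is_linearly_separable(X, y, max_epochs=1000):
    """Perceptron test: converges in finitely many updates iff the
    labeled points (y in {-1, +1}) admit a separating hyperplane.
    Hitting max_epochs is only evidence, not proof, of non-separability."""
    X = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:   # misclassified (or on the plane)
                w += yi * xi         # perceptron update
                mistakes += 1
        if mistakes == 0:
            return True, w           # every point strictly classified
    return False, w

# AND-style labels are separable; XOR labels famously are not.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
sep, _ = is_linearly_separable(X, np.array([-1, -1, -1, 1]))  # AND
xor, _ = is_linearly_separable(X, np.array([-1, 1, 1, -1]))   # XOR
print(sep, xor)  # True False
```

The `max_epochs` cap is a practical stand-in for the convergence theorem's mistake bound.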
8

Herrnberger, Bärbel, and Günter Ehret. "Linearity or separability?" Behavioral and Brain Sciences 21, no. 2 (April 1998): 269–70. http://dx.doi.org/10.1017/s0140525x98331179.

Abstract:
Sussman et al. state that auditory systems exploit linear correlations in the sound signal in order to identify perceptual categories. Can the auditory system recognize linearity? In bats and owls, separability of emergent features is an additional constraint that goes beyond linearity and for which linearity is not a necessary prerequisite.
9

Ruts, Wim, Gert Storms, and James Hampton. "Linear separability in superordinate natural language concepts." Memory & Cognition 32, no. 1 (January 2004): 83–95. http://dx.doi.org/10.3758/bf03195822.

10

Hou, Jinchuan, and Xiaofei Qi. "Linear maps preserving separability of pure states." Linear Algebra and its Applications 439, no. 5 (September 2013): 1245–57. http://dx.doi.org/10.1016/j.laa.2013.04.007.


Dissertations / Theses on the topic "Linear separability"

1

Tuma, Carlos Cesar Mansur. "Aprendizado de máquina baseado em separabilidade linear em sistema de classificação híbrido-nebuloso aplicado a problemas multiclasse." Universidade Federal de São Carlos, 2009. https://repositorio.ufscar.br/handle/ufscar/407.

Abstract:
Financiadora de Estudos e Projetos
This master's thesis describes an intelligent classifier system, called Slicer, applied to multiclass non-linearly separable problems. The system adopts a low-computational-cost supervised learning strategy (evaluated as ) based on linear separability. During the learning period the system determines a set of hyperplanes associated with one-class regions (sub-spaces). In classification tasks the classifier uses the hyperplanes as a set of if-then-else rules to infer the class of the input attribute vector (unclassified object). Among other characteristics, the classifier system is able to: deal with examples that have missing attribute values; reject noisy examples during learning; adjust hyperplane parameters to improve the definition of the one-class regions; and eliminate redundant rules. Fuzzy theory is used to design a hybrid version with features such as approximate reasoning and parallel inference computation. Different classification methods and benchmarks are considered for evaluation. The Slicer classifier system reaches acceptable results in terms of accuracy, justifying further investigation.
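The hyperplane-as-rules idea in the abstract can be sketched as a chain of half-space tests; the weight vectors, biases, and class labels below are hypothetical illustrations, not Slicer's learned parameters:

```python
import numpy as np

# Each rule: (weight vector w, bias b, class assigned when w.x + b > 0).
# These hyperplanes are illustrative stand-ins, not Slicer's output.
rules = [
    (np.array([1.0, 0.0]), -2.0, "A"),   # if x0 > 2        -> class A
    (np.array([0.0, 1.0]), -1.0, "B"),   # else if x1 > 1   -> class B
]
DEFAULT = "C"                            # else (fall-through) -> class C

def classify(x):
    """Walk the rule list: the first satisfied half-space test wins,
    mirroring an if-then-else chain over one-class regions."""
    for w, b, label in rules:
        if w @ x + b > 0:
            return label
    return DEFAULT

print(classify(np.array([3.0, 0.0])))   # A
print(classify(np.array([0.0, 2.0])))   # B
print(classify(np.array([0.0, 0.0])))   # C
```

Ordering the rules matters: earlier hyperplanes carve off their one-class regions before later tests are reached.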
2

Liu, Mau-Sheng [劉茂生]. "Necessary and sufficient condition for the linear binary separability in the Euclidean normed space." Thesis, 2006. http://ndltd.ncl.edu.tw/handle/11456312275869539597.

Abstract:
Master's thesis. I-Shou University, master's program, Department of Electrical Engineering. ROC year 94 (2005).
The classical binary classification problem is considered in this thesis. A necessary and sufficient condition is proposed to guarantee the linear binary separability of the training data in the Euclidean normed space. A suitable hyperplane that correctly classifies the training data is also constructed, provided that the necessary and sufficient condition is satisfied. Based on the main result, we present an easy-to-check criterion for the linear binary separability of the training set. Finally, two numerical examples are given to illustrate the use of the main result.
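A standard way to obtain an easy-to-check criterion of this kind (a sketch under the usual formulation, not the thesis's own construction) is a linear-programming feasibility test: a strictly separating hyperplane (w, b) exists iff the constraints y_i (w·x_i + b) ≥ 1 are satisfiable:

```python
import numpy as np
from scipy.optimize import linprog

def separable_lp(X, y):
    """Feasibility LP: does some (w, b) satisfy y_i (w.x_i + b) >= 1 for all i?
    Such a (w, b) exists iff the two classes are strictly linearly separable."""
    n, d = X.shape
    # Constraint rows: -y_i * [x_i, 1] . [w, b] <= -1
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.success  # feasible -> separable; infeasible -> not

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
print(separable_lp(X, np.array([-1, -1, -1, 1])))  # True  (AND labels)
print(separable_lp(X, np.array([-1, 1, 1, -1])))   # False (XOR labels)
```

The zero objective makes this a pure feasibility check; any feasible point the solver returns is already a valid separating hyperplane.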
3

Johnston, Nathaniel. "Norms and Cones in the Theory of Quantum Entanglement." Thesis, 2012. http://hdl.handle.net/10214/3773.

Abstract:
There are various notions of positivity for matrices and linear matrix-valued maps that play important roles in quantum information theory. The cones of positive semidefinite matrices and completely positive linear maps, which represent quantum states and quantum channels respectively, are the most ubiquitous positive cones. There are also many natural cones that can be regarded as "more" or "less" positive than these standard examples. In particular, entanglement theory deals with the cones of separable operators and entanglement witnesses, which satisfy very strong and weak positivity properties respectively. Rather complementary to the various cones that arise in entanglement theory are norms. The trace norm (or operator norm, depending on context) for operators and the diamond norm (or completely bounded norm) for superoperators are the typical norms that are seen throughout quantum information theory. In this work our main goal is to develop a family of norms that play a role analogous to the cone of entanglement witnesses. We investigate the basic mathematical properties of these norms, including their relationships with other well-known norms, their isometry groups, and their dual norms. We also make the place of these norms in entanglement theory rigorous by showing that entanglement witnesses arise from minimal operator systems, and analogously our norms arise from minimal operator spaces. Finally, we connect the various cones and norms considered here to several seemingly unrelated problems from other areas. We characterize the problem of whether or not non-positive partial transpose bound entangled states exist in terms of one of our norms, and provide evidence in favour of their existence. We also characterize the minimum gate fidelity of a quantum channel, the maximum output purity and its completely bounded counterpart, and the geometric measure of entanglement in terms of these norms.
Natural Sciences and Engineering Research Council (Canada Graduate Scholarship), Brock Scholarship
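Of the norms mentioned in the abstract, the trace norm is the easiest to illustrate numerically; a minimal sketch with illustrative matrices (not drawn from the thesis):

```python
import numpy as np

# The trace norm (nuclear norm) of an operator is the sum of its singular
# values. The matrices here are illustrative only.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
trace_norm = np.linalg.norm(A, ord="nuc")  # sum of singular values
assert np.isclose(trace_norm, np.linalg.svd(A, compute_uv=False).sum())

# A valid density matrix (positive semidefinite, trace 1) has trace norm 1.
rho = np.array([[0.5, 0.0],
                [0.0, 0.5]])
print(np.linalg.norm(rho, ord="nuc"))  # ≈ 1.0
```

For density matrices the trace norm is always 1, which is why it serves as the natural normalization for quantum states.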
4

Beaulieu, Julien. "La contribution de la stéréoscopie à la constance de forme." Thesis, 2013. http://hdl.handle.net/1866/10730.

Abstract:
This study was conducted to evaluate the contribution of stereopsis to the shape-constancy phenomenon. Four groups of eight participants each were asked to perform a visual exploration task. The first group was exposed to stereoscopic stimulation, the second to reversed stereoscopic stimulation, the third to monocular stimulation with textures and shadow, and the fourth to monocular stimulation with shadow only. Response times and error rates were used to measure participants' performance. Results show an interaction between rotation effects (familiar vs. non-familiar viewpoints) and available depth cues (stereopsis, reversed stereopsis, textures and shadow, shadow only). The rotation cost was smaller in the group exposed to reversed stereoscopic stimulation. These results are congruent with three-dimensional representations underlying visual processing.

Book chapters on the topic "Linear separability"

1

Cover, Thomas M. "Linear Separability." In Open Problems in Communication and Computation, 156–57. New York, NY: Springer New York, 1987. http://dx.doi.org/10.1007/978-1-4612-4808-8_47.

2

Webb, Geoffrey I., Claude Sammut, Claudia Perlich, Tamás Horváth, Stefan Wrobel, Kevin B. Korb, William Stafford Noble, et al. "Linear Separability." In Encyclopedia of Machine Learning, 606. Boston, MA: Springer US, 2011. http://dx.doi.org/10.1007/978-0-387-30164-8_478.

3

Bobrowski, Leon. "Prognostic Models Based on Linear Separability." In Advances in Data Mining. Applications and Theoretical Aspects, 11–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23184-1_2.

4

Sperduti, Alessandro. "On Linear Separability of Sequences and Structures." In Artificial Neural Networks — ICANN 2002, 601–6. Berlin, Heidelberg: Springer Berlin Heidelberg, 2002. http://dx.doi.org/10.1007/3-540-46084-5_98.

5

Contassot-Vivier, Sylvain, and David Elizondo. "A Near Linear Algorithm for Testing Linear Separability in Two Dimensions." In Engineering Applications of Neural Networks, 114–24. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012. http://dx.doi.org/10.1007/978-3-642-32909-8_12.

6

Elizondo, David, Juan Miguel Ortiz-de-Lazcano-Lobato, and Ralph Birkenhead. "A Novel and Efficient Method for Testing Non Linear Separability." In Lecture Notes in Computer Science, 737–46. Berlin, Heidelberg: Springer Berlin Heidelberg, 2007. http://dx.doi.org/10.1007/978-3-540-74690-4_75.

7

Bobrowski, Leon. "Induction of Linear Separability through the Ranked Layers of Binary Classifiers." In Engineering Applications of Neural Networks, 69–77. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23957-1_8.

8

Bobrowski, Leon. "CPL Criterion Functions and Learning Algorithms Linked to the Linear Separability Concept." In Engineering Applications of Neural Networks, 456–65. Berlin, Heidelberg: Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-41013-0_47.

9

Bertini, João Roberto, and Maria do Carmo Nicoletti. "A Feedforward Constructive Neural Network Algorithm for Multiclass Tasks Based on Linear Separability." In Constructive Neural Networks, 145–69. Berlin, Heidelberg: Springer Berlin Heidelberg, 2009. http://dx.doi.org/10.1007/978-3-642-04512-7_8.

10

Bertini, João Roberto, and Maria do Carmo Nicoletti. "MBabCoNN – A Multiclass Version of a Constructive Neural Network Algorithm Based on Linear Separability and Convex Hull." In Artificial Neural Networks - ICANN 2008, 723–33. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. http://dx.doi.org/10.1007/978-3-540-87559-8_75.


Conference papers on the topic "Linear separability"

1

Sheppard, John W., and Stephyn G. W. Butcher. "On the Linear Separability of Diagnostic Models." In 2006 IEEE AUTOTESTCON. IEEE Systems Readiness Technology Conference. IEEE, 2006. http://dx.doi.org/10.1109/autest.2006.283738.

2

Ozay, Mete, and Fatos T. Yarman Vural. "Linear separability analysis for stacked generalization architecture." In 2009 IEEE 17th Signal Processing and Communications Applications Conference (SIU). IEEE, 2009. http://dx.doi.org/10.1109/siu.2009.5136569.

3

Toth, Zsolt, and Laszlo Kovacs. "Testing linear separability in classification of inflection rules." In 2014 IEEE 12th International Symposium on Intelligent Systems and Informatics (SISY 2014). IEEE, 2014. http://dx.doi.org/10.1109/sisy.2014.6923610.

4

Pathak, Anjali, Bhawna Vohra, and Kapil Gupta. "Supervised Learning Approach towards Class Separability- Linear Discriminant Analysis." In 2019 International Conference on Intelligent Computing and Control Systems (ICCS). IEEE, 2019. http://dx.doi.org/10.1109/iccs45141.2019.9065622.

5

Saha, Suvarup, and Randall A. Berry. "Parallel linear deterministic interference channels with feedback: Combinatorial structure and separability." In 2013 IEEE International Symposium on Information Theory (ISIT). IEEE, 2013. http://dx.doi.org/10.1109/isit.2013.6620336.

6

Zhang, D., M. Kamel, and M. I. Elmasry. "A training approach based on linear separability analysis for layered perceptrons." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374217.

7

Bobrowski, L. "Piecewise-linear classifiers, formal neurons and separability of the learning sets." In Proceedings of 13th International Conference on Pattern Recognition. IEEE, 1996. http://dx.doi.org/10.1109/icpr.1996.547420.

8

Xu, Yong, and Guangming Lu. "Analysis On Fisher Discriminant Criterion And Linear Separability Of Feature Space." In 2006 International Conference on Computational Intelligence and Security. IEEE, 2006. http://dx.doi.org/10.1109/iccias.2006.295345.

9

Yogananda, A. P., M. Narasimha Murthy, and Lakshmi Gopal. "A fast linear separability test by projection of positive points on subspaces." In Proceedings of the 24th International Conference on Machine Learning (ICML '07). New York, NY, USA: ACM Press, 2007. http://dx.doi.org/10.1145/1273496.1273586.

10

Karras, D. A., S. J. Perantonis, and S. J. Varoufakis. "An efficient constrained learning algorithm for optimal linear separability of the internal representations." In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94). IEEE, 1994. http://dx.doi.org/10.1109/icnn.1994.374176.
