Dissertations / Theses on the topic 'Algebraic functions – Computer programs'

Consult the top 20 dissertations / theses for your research on the topic 'Algebraic functions – Computer programs.'

1

Schanzer, Emmanuel Tanenbaum. "Algebraic Functions, Computer Programming, and the Challenge of Transfer." Thesis, Harvard University, 2015. http://nrs.harvard.edu/urn-3:HUL.InstRepos:16461037.

Abstract:
Students' struggles with algebra are well documented. Prior to the introduction of functions, mathematics is typically focused on applying a set of arithmetic operations to compute an answer. The introduction of functions, however, marks the point at which mathematics begins to focus on building up abstractions as a way to solve complex problems. A common refrain about word problems is that "the equations are easy to solve; the hard part is setting them up!" A student of algebra is asked to identify functional relationships in the world around them, to set up the equations that describe a system, and to reason about these relationships. Functions, in essence, mark the shift from computing answers to solving problems. Researchers have called for this shift to accompany a change in pedagogy, and have looked to computer programming and game design as a means to combine mathematical rigor with creative inquiry. Many studies have explored the impact of teaching students to program, with the goal of having them transfer what they have learned back into traditional mathematics. While some of these studies have shown positive outcomes for concepts like geometry and fractions, transfer between programming and algebra has remained elusive. The literature identifies a number of conditions that must be met to facilitate transfer, including careful attention to content, software, and pedagogy. This dissertation is a feasibility study of Bootstrap, a curricular intervention based on best practices from the transfer and math-education literature. Bootstrap teaches students to build a video game by applying algebraic concepts and a problem-solving technique in the programming domain, with the goal of transferring what they learn back into traditional algebra tasks. The study employed a mixed-methods analysis of six Bootstrap classes taught by math and computer science teachers, pairing pre- and post-tests with classroom observations and teacher interviews.
Despite the use of a CS-derived problem-solving technique, a programming language, and a series of programming challenges, students were able to transfer what they learned into traditional algebra tasks, and math teachers were found to be more successful at facilitating this transfer than their CS counterparts.
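The algebra-programming correspondence the Bootstrap curriculum relies on can be sketched briefly. Bootstrap itself uses a functional language in the Racket family; the Python below, with hypothetical names and values, is only a language-neutral illustration of the idea that defining and evaluating a program's function is the same act as writing and substituting into an algebraic one.

```python
def position(t, speed=4, start=10):
    """x-position of a game character after t ticks: f(t) = speed*t + start,
    the same linear function a student writes in algebra class."""
    return speed * t + start

# Evaluating the function is the same act as substituting into f(t)
print(position(0))  # 10
print(position(5))  # 30
```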
Education Policy, Leadership, and Instructional Practice
2

Klingler, Carol Diane. "Syntax-directed semantics-supported editing of algebraic specifications." Master's thesis, This resource online, 1990. http://scholar.lib.vt.edu/theses/available/etd-01202010-020048/.

3

Müller-Olm, Markus. "Modular compiler verification : a refinement algebraic approach advocating stepwise abstraction /." Berlin [u.a.] : Springer, 1997. http://www.loc.gov/catdir/enhancements/fy0815/97013428-d.html.

4

Forbes, Michael Andrew. "Polynomial identity testing of read-once oblivious algebraic branching programs." Thesis, Massachusetts Institute of Technology, 2014. http://hdl.handle.net/1721.1/89843.

Abstract:
Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014.
This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
Cataloged from student-submitted PDF version of thesis.
Includes bibliographical references (pages 209-220).
We study the problem of obtaining efficient, deterministic, black-box polynomial identity testing algorithms (PIT) for algebraic branching programs (ABPs) that are read-once and oblivious. This class has an efficient, deterministic, white-box polynomial identity testing algorithm (due to Raz and Shpilka [RS05]), but prior to this work there was no known such black-box algorithm. The main result of this work gives the first quasi-polynomial sized hitting set for size S circuits from this class, when the order of the variables is known. As our hitting set is of size exp(lg^2 S), this is analogous (in the terminology of boolean pseudorandomness) to a seed length of lg^2 S, which is the seed length of the pseudorandom generators of Nisan [Nis92] and Impagliazzo-Nisan-Wigderson [INW94] for read-once oblivious boolean branching programs. Thus our work can be seen as an algebraic analogue of these foundational results in boolean pseudorandomness. We also show that several other circuit classes can be black-box reduced to read-once oblivious ABPs, including non-commutative ABPs and diagonal depth-4 circuits, and consequently obtain similar hitting sets for these classes as well. To establish the above hitting sets, we use a form of dimension reduction we call a rank condenser, which maps a large-dimensional space to a medium-dimensional space, while preserving the rank of low-dimensional subspaces. We give an explicit construction of a rank condenser that is randomness efficient and show how it can be used as a form of oblivious Gaussian elimination. As an application, we strengthen a result of Mulmuley [Mul12a], and show that derandomizing a particular case of the Noether Normalization Lemma is reducible to black-box PIT of read-once oblivious ABPs. Using our hitting set results, this gives a derandomization of Noether Normalization in that case.
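For contrast with the deterministic hitting sets the thesis constructs, the standard randomized black-box PIT baseline (Schwartz-Zippel) can be sketched as follows; the polynomials and parameters here are illustrative, not drawn from the thesis.

```python
import random

def schwartz_zippel_pit(poly, n_vars, degree, trials=20):
    """Randomized black-box PIT: evaluate the polynomial at random points
    drawn from a set S with |S| much larger than the degree.  A nonzero
    polynomial of that degree vanishes at a random point with probability
    at most degree/|S| (Schwartz-Zippel), so repeated zero evaluations
    mean the polynomial is very probably identically zero."""
    S = range(10 * degree + 1)
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n_vars)]
        if poly(*point) != 0:
            return "nonzero"
    return "probably zero"

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; xy - 1 is not
print(schwartz_zippel_pit(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, 2, 2))
print(schwartz_zippel_pit(lambda x, y: x*y - 1, 2, 2))
```

A deterministic hitting set replaces the random sample with a fixed, explicit list of evaluation points that is guaranteed to catch every nonzero polynomial in the class.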
by Michael Andrew Forbes.
Ph. D.
5

McCay, Matthew Eric. "Computing the Algebraic Immunity of Boolean Functions on the SRC-6 Reconfigurable Computer." Thesis, Monterey, California. Naval Postgraduate School, 2012. http://hdl.handle.net/10945/6831.

Abstract:
Boolean functions with high algebraic immunity (AI) are vital in reducing the possibility of utilizing algebraic attacks to break an encryption system. Simple algorithms exist to compute the AI of a given n-variable Boolean function, but the time required to test a large number of functions is much greater on conventional computing systems. AI was computed for all functions through n = 5 using the SRC-6. AI was also computed for n = 5 using a C algorithm. The SRC-6 performed 4.86 times faster than a conventional processor for this computation. It is believed that this is the first enumeration of all 5-variable functions with respect to AI. Monte Carlo trials were performed for n = 6, both on the SRC-6 and utilizing a C algorithm on a conventional processor. These trials provided the first known distribution of AI for 6-variable functions. Some algorithms for computing AI require a conversion between the truth table form of the function and its algebraic normal form. The first known Verilog implementation of a reduced transeunt triangle was developed for this conversion. This reduced form requires many fewer gates and has Θ(n) delay versus Θ(2^n) delay for a full transeunt triangle.
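The truth-table-to-ANF conversion mentioned above is the binary Möbius transform. The thesis implements it in hardware as a reduced transeunt triangle in Verilog; the Python sketch below only mirrors the conversion functionally.

```python
def truth_table_to_anf(tt):
    """Binary Moebius transform: truth table -> algebraic normal form
    coefficients.  anf[i] = 1 means the monomial whose variables are the
    set bits of index i appears in the XOR-polynomial."""
    anf = list(tt)  # length must be a power of two
    step = 1
    while step < len(anf):
        for i in range(0, len(anf), 2 * step):
            for j in range(i, i + step):
                anf[j + step] ^= anf[j]
        step *= 2
    return anf

# index order (x1, x0) = 00, 01, 10, 11
print(truth_table_to_anf([0, 0, 0, 1]))  # [0, 0, 0, 1]: AND is the monomial x0*x1
print(truth_table_to_anf([0, 1, 1, 0]))  # [0, 1, 1, 0]: XOR is x0 + x1
```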
6

Song, Ning. "Minimization of Exclusive Sum of Products Expressions for Multiple-Valued Input Incompletely Specified Functions." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4684.

Abstract:
In recent years, there has been increased interest in the design of logic circuits which use EXOR gates. Particular interest is in the minimization of arbitrary Exclusive Sums of Products (ESOPs). Functions realized by such circuits can have fewer gates, fewer connections, and take up less area in VLSI and especially FPGA realizations. They are also easily testable. So far, ESOPs are not as popular as their Sum of Products (SOP) counterparts. One of the main reasons is that the minimization of ESOP circuits was traditionally an extremely difficult problem. Since exact solutions can be practically found only for functions with not more than 5 variables, the interest is in approximate solutions. Two approaches to generating sub-optimal solutions can be found in the literature. One approach is to minimize sub-families of ESOPs. Another approach is to minimize ESOPs using heuristic algorithms. The method we introduce in this thesis belongs to the second approach, which normally generates better results than the first. In the second approach, two general methods are used. One method is to minimize the coefficients of Reed-Muller forms. Another method is to perform a set of cube operations iteratively on a given ESOP. So far, this method has achieved better results than other methods. In this method (we call it the cube operation approach), the quality of the results depends on the quality of the cube operations. Different cube operations have been invented in the past few years. All of these cube operations can be applied only when certain conditions are satisfied. This is due to the limitations of the operations. These limitations reduce the opportunity to get a high-quality solution and reduce the efficiency of the algorithm as well. The effort to remove these limitations led to the invention of our new cube operation, exorlink, which is introduced in this thesis. Exorlink can be applied to any two cubes in the array without conditions.
All the existing cube operations in this approach are included in it, so it is the most general operation in this approach. Another key issue in the cube operation approach is the efficiency of the algorithm. Existing algorithms perform all possible cube operations and give little guidance in selecting the operations. Our new algorithm selectively performs some of the possible operations. Experimental results show that this algorithm is more efficient than existing ones. New algorithms to minimize multiple-output functions, and especially incompletely specified ESOPs, are also presented. The algorithms are included in the program EXORCISM-MV-2, which is a new version of EXORCISM-MV. EXORCISM-MV-2 was tested on many benchmark functions and compared to results from the literature. The program in most cases gives the same or better solutions on binary and 4-valued completely specified functions. More importantly, it is able to efficiently minimize arbitrary-valued and incompletely specified functions, while the programs from the literature are either for completely specified functions or for binary variables. Additionally, as in Espresso, the number of variables in our program is unlimited and the only constraint is the number of input cubes that are read, so very large functions can be minimized. Based on our new cube operation and new algorithms, the thesis presents a solution to a problem that has not yet been practically solved in the literature: efficient minimization of arbitrary ESOP expressions for multiple-output, multiple-valued input, incompletely specified functions.
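As a minimal illustration of what an ESOP represents (not of the exorlink operation itself), the sketch below evaluates an ESOP given as an array of cubes over {0, 1, -}, the positional cube notation common in this literature.

```python
def esop_eval(cubes, assignment):
    """Evaluate an Exclusive Sum Of Products: XOR (not OR) of product terms.
    Each cube is a string over {'0','1','-'}: negated literal, positive
    literal, or variable absent from the product."""
    result = 0
    for cube in cubes:
        term = 1
        for lit, val in zip(cube, assignment):
            if (lit == '1' and val != 1) or (lit == '0' and val != 0):
                term = 0
        result ^= term
    return result

# f(a, b) = a XOR b written as the two-cube ESOP  a (+) b
esop = ['1-', '-1']
for a in (0, 1):
    for b in (0, 1):
        print(a, b, esop_eval(esop, (a, b)))  # 1 exactly when a != b
```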
7

Dhillon, Adam. "A Complexity of Real Functions based on Analog Computing." Scholarship @ Claremont, 2019. https://scholarship.claremont.edu/hmc_theses/225.

Abstract:
This thesis is focused on analyzing a particular notion of complexity of real valued functions through the lens of analog computers. This report features design changes to Pour-El’s notion of an analog computer that reflect this question of complexity in a concrete way. Additionally, these changes to the analog computer allow an extension of Pour-El’s work in which the complexity of a function can be identified with the order of a differentiably algebraic equation that the function satisfies.
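As an illustrative example of a differentially algebraic function of low order (not drawn from the thesis): e^t satisfies the order-1 equation y' = y, and an analog computer generates it with a single integrator whose output feeds its own input. A numerical sketch of that integrator:

```python
import math

def integrate_exp(t_end=1.0, steps=100_000):
    """Euler-integrate y' = y, y(0) = 1: the order-1 differentially
    algebraic equation satisfied by e^t.  Each loop iteration is one
    step of an integrator fed back into itself."""
    y, dt = 1.0, t_end / steps
    for _ in range(steps):
        y += y * dt  # one integrator step: dy = y dt
    return y

print(abs(integrate_exp() - math.e) < 1e-3)  # True: converges to e
```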
8

Maddock, Thomas III, and Laurel J. Lacher. "MODRSP: a program to calculate drawdown, velocity, storage and capture response functions for multi-aquifer systems." Department of Hydrology and Water Resources, University of Arizona (Tucson, AZ), 1991. http://hdl.handle.net/10150/620142.

Abstract:
MODRSP is a program for calculating drawdown, velocity, storage losses and capture response functions for multi-aquifer ground-water flow systems. Capture is defined as the sum of the increase in aquifer recharge and decrease in aquifer discharge as a result of an applied stress from pumping [Bredehoeft et al., 1982]. The capture phenomena treated by MODRSP are stream-aquifer leakance, reduction of evapotranspiration losses, leakance from adjacent aquifers, flows to and from prescribed head boundaries, and increases or decreases in natural recharge or discharge from head-dependent boundaries. The response functions are independent of the magnitude of the stresses and are dependent on the type of partial differential equation, the boundary and initial conditions and the parameters thereof, and the spatial and temporal location of stresses. The aquifers modeled may have irregular-shaped areal boundaries and non-homogeneous transmissive and storage qualities. For regional aquifers, the stresses are generally pumpages from wells. The utility of response functions arises from their capacity to be embedded in management models. The management models consist of a mathematical expression of a criterion to measure preference, and sets of constraints which act to limit the preferred actions. The response functions are incorporated into constraints that couple the hydrologic system with the management system (Maddock, 1972). MODRSP is a modification of MODFLOW (McDonald and Harbaugh, 1984, 1988). MODRSP uses many of the data input structures of MODFLOW, but there are major differences between the two programs. The differences are discussed in Chapters 4 and 5. An abbreviated theoretical development is presented in Chapter 2; a more complete theoretical development may be found in Maddock and Lacher (1991). The finite difference technique discussion presented in Chapter 3 is a synopsis of that covered more completely in McDonald and Harbaugh (1988).
Subprogram organization is presented in Chapter 4, with the data requirements explained in Chapter 5. Chapter 6 contains three example applications of MODRSP.
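Because the governing equations are linear, responses to unit stresses superpose, which is why a response function computed once serves for any stress magnitude. A minimal sketch of this convolution idea, with hypothetical numbers rather than MODRSP output:

```python
def drawdown(pumping, unit_response):
    """Superpose a unit response over a pumping schedule (discrete
    convolution): s(t) = sum_k Q(k) * r(t - k).  Linearity is what lets
    the response be computed once, independent of stress magnitude."""
    return [sum(pumping[k] * unit_response[t - k]
                for k in range(t + 1) if t - k < len(unit_response))
            for t in range(len(pumping))]

# hypothetical unit response to one unit of pumping (decaying influence)
r = [0.5, 0.25, 0.125]
print(drawdown([10, 0, 0], r))   # [5.0, 2.5, 1.25]: a scaled copy of r
print(drawdown([10, 10, 0], r))  # [5.0, 7.5, 3.75]: two stresses superposed
```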
9

Haug, Mark. "Nonparametric density estimation for univariate and bivariate distributions with applications in discriminant analysis for the bivariate case." Thesis, Kansas State University, 1986. http://hdl.handle.net/2097/9916.

10

Kim, Jongwoo. "A robust hough transform based on validity /." free to MU campus, to others for purchase, 1997. http://wwwlib.umi.com/cr/mo/fullcit?p9842545.

11

Karr, Rosemary McCroskey. "Design, Development, and Implementation of a Computer-Based Graphics Presentation for the Undergraduate Teaching of Functions and Graphing." Thesis, University of North Texas, 1996. https://digital.library.unt.edu/ark:/67531/metadc278093/.

Abstract:
The problems with which this study was concerned were threefold: (a) to design a computer-based graphics presentation on the topics of functions and graphing, (b) to develop the presentation, and (c) to determine the instructional effectiveness of this computer-based graphics instruction. The computerized presentation was written in Authorware for the Macintosh computer. The population of this study consisted of three intermediate algebra classes at Collin County Community College (n = 51). A standardized examination, the Descriptive Tests of Mathematics Skills for Functions and Graphs, was used for pretest and posttest purposes. Means were calculated on these scores and compared using a t-test for correlated means. The level of significance was set at .01. The results of the data analysis indicated: 1. There was a significant difference between pretest and posttest performance after exposure to the computer-based graphics presentation. 2. There was no significant gender difference in performance after exposure to the presentation. 3. There was no significant difference between the performance of traditional-age and nontraditional-age students after exposure to the presentation. Females had a lower mean posttest score than males, but an analysis of the differences showed no significance. Nontraditional-age students had a higher mean posttest score than traditional-age students, but their pretest scores were higher as well; an analysis of the differences showed no significance. In summary, this computer-based graphics presentation was an effective teaching technique for increasing mathematics performance.
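The t-test for correlated (paired) means used in the study has a simple closed form: the test statistic is the mean of the pre/post differences divided by its standard error. A sketch with hypothetical scores, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for a t-test on correlated (paired) means, as used for
    pretest/posttest comparison: t = mean(d) / (stdev(d) / sqrt(n)),
    where d are the per-student score differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# hypothetical scores for five students (not the study's data)
pre = [12, 15, 11, 14, 13]
post = [16, 18, 15, 17, 15]
print(round(paired_t(pre, post), 2))  # 8.55
```

The statistic is then compared against the t distribution with n - 1 degrees of freedom at the chosen significance level.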
12

Wan, Wei. "A New Approach to the Decomposition of Incompletely Specified Functions Based on Graph Coloring and Local Transformation and Its Application to FPGA Mapping." PDXScholar, 1992. https://pdxscholar.library.pdx.edu/open_access_etds/4698.

Abstract:
The thesis presents a new approach to the decomposition of incompletely specified functions and its application to FPGA (Field Programmable Gate Array) mapping. Five methods: Variable Partitioning, Graph Coloring, Bond Set Encoding, CLB Reusing and Local Transformation are developed in order to efficiently perform decomposition and mapping to lookup-table-based FPGAs. 1) Variable Partitioning is a high-quality heuristic used to find the "best" partitions, avoiding the very time-consuming testing of all possible decomposition charts, which is impractical when the input function has many input variables. 2) Graph Coloring is another high-quality heuristic, used to perform a quasi-optimum don't-care assignment; it enables the program to accept incompletely specified functions and assign the unspecified part of the function in a quasi-optimum way. 3) The Bond Set Encoding algorithm is used to simplify the decomposed blocks during the process of decomposition. 4) The CLB Reusing algorithm is used to reduce the number of CLBs used in the final mapped circuit. 5) The Local Transformation concept is introduced to transform non-decomposable functions into decomposable ones, thus making it possible to apply the decomposition method to FPGA mapping. All the above methods are incorporated into a program named TRADE, which performs global optimization over the input functions, whereas most existing methods recursively perform local optimization over some kind of network-like graph, and few of them can handle incompletely specified functions. Cube calculus is used in the TRADE program; the operations are global and very fast. A short description of the TRADE program and an evaluation of the results are provided at the end of the thesis. For many benchmarks the TRADE program gives better results than any program published in the literature.
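A minimal sketch of the greedy family of graph coloring heuristics that method 2 belongs to; the graph here is hypothetical, and TRADE's actual algorithm and data structures differ.

```python
def greedy_coloring(graph):
    """Greedy coloring, highest-degree vertices first: give each vertex
    the smallest color unused by its already-colored neighbors.
    Heuristics of this family drive compatibility-based don't-care
    assignment: vertices sharing a color may share one assignment."""
    colors = {}
    for v in sorted(graph, key=lambda u: -len(graph[u])):
        used = {colors[u] for u in graph[v] if u in colors}
        colors[v] = next(c for c in range(len(graph)) if c not in used)
    return colors

# hypothetical incompatibility graph: an edge joins two items that must differ
g = {'a': ['b', 'c'], 'b': ['a'], 'c': ['a'], 'd': []}
coloring = greedy_coloring(g)
assert all(coloring[u] != coloring[v] for u in g for v in g[u])
print(coloring)
```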
13

Lukenbill, Francis C. "A target/missile engagement scenario using classical proportional navigation." Thesis, Monterey, California : Naval Postgraduate School, 1990. http://handle.dtic.mil/100.2/ADA243123.

Abstract:
Thesis (M.S. in Electrical Engineering)--Naval Postgraduate School, December 1990.
Thesis Advisor(s): Titus, Harold A. Second Reader: Powell, James R. "December 1990." Description based on title screen as viewed on April 1, 2010. DTIC Descriptor(s): Guided Missiles, Simulation, Forward Areas, Optimization, Transfer Functions, Guided Missile Warheads, Dynamics, Two Dimensional, Theses, Targets, Time, Three Dimensional, Solutions (General), Homing Devices, Maneuvers, Evasion, Simplification, Proportional Navigation, Automatic Pilots, Guided Missile Components, Miss Distance. DTIC Identifier(s): Proportional Navigation, Guided Missile Targets, Evasion, Flight Maneuvers, Intercept Trajectories, Guided Missile Trajectories, Antiaircraft Missiles, Aircraft Defense Systems, Miss Distance, Optimization, Adjoint Models, Survivability, Barrel Roll Maneuver, Split S Maneuver, Scenarios, Computer Programs, Theses. Author(s) subject terms: Proportional Navigation, Miss Distance, Adjoint. Includes bibliographical references (p. 111). Also available in print.
14

Fan, Yang, Hidehiko Masuhara, Tomoyuki Aotani, Flemming Nielson, and Hanne Riis Nielson. "AspectKE*: Security aspects with program analysis for distributed systems." Universität Potsdam, 2010. http://opus.kobv.de/ubp/volltexte/2010/4136/.

Abstract:
Enforcing security policies on distributed systems is difficult, in particular when a system contains untrusted components. We designed AspectKE*, a distributed AOP language based on a tuple space, to tackle this issue. In AspectKE*, aspects can enforce access control policies that depend on the future behavior of running processes. One of the key language features is the predicates and functions that extract results of static program analysis, which are useful for defining security aspects that have to know about the future behavior of a program. AspectKE* also provides a novel variable binding mechanism for pointcuts, so that pointcuts can uniformly specify join points based on both static and dynamic information about the program. Our implementation strategy performs the fundamental static analysis at load time, so as to keep runtime overheads minimal. We implemented a compiler for AspectKE*, and demonstrate the usefulness of AspectKE* through a security aspect for a distributed chat system.
15

Araújo, Elpídio de. "A concepção de um software de matemática para auxiliar na aprendizagem dos alunos da primeira série do ensino médio no estudo das funções exponenciais e logarítmicas." Pontifícia Universidade Católica de São Paulo, 2005. https://tede2.pucsp.br/handle/handle/11487.

Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
This research aims to design an educational mathematics software package for first-year high school students. The questions developed in the software are intended to support the learning of exponential and logarithmic functions, providing users with information that contributes to the development of the planned activities. Our main focus is the student; placing students in a computerized environment opens up one more option for their learning. The development of the software was based on research involving elementary and high school teachers from private and public schools in the state of São Paulo. The main theme of the investigation was the difficulty students have in understanding exponential and logarithmic functions. This work seeks to answer the following question: to what extent can the use of software as a teaching tool for mathematical content related to exponential and logarithmic functions contribute to student learning? The activities were designed based on the students' needs identified in the research with teachers. The application of the software promoted in the students a positive attitude toward solving the proposed questions.
16

Uddin, Razack Sheriff. "Geogebra, tool for mediating knowledge in the teaching and learning of transformation of functions in mathematics." Thesis, 2011. http://hdl.handle.net/10413/5685.

Abstract:
As a teacher of mathematics, I always taught the topic of functions (graphs such as linear, quadratic, hyperbola, exponential and trigonometric functions) in the same way for all of my twenty-three years in the profession. I often assumed that the learner understood a concept that had been presented, only to find, in subsequent lessons, that the learner could not recall it or talk about it. I referred to the constant value c in the function f (x) = ax² + c or f (x) = ax² + bx + c as the y-intercept, informing my learners that it is a point on the y-axis of the Cartesian plane. I also taught transformation of functions as the vertical and horizontal shift, without much visual demonstration beyond pen and paper. Whilst using the dynamic mathematics software GeoGebra last year, I realized that this section could be taught more effectively through interaction with the software. GeoGebra is a freely available interactive dynamic software package for the teaching and learning of mathematics that combines geometry and algebra into a single user-friendly package. Within this research I set out to explore, firstly, the function of GeoGebra as a pedagogical tool and mediating artifact in the teaching and learning of transformation of functions in secondary school mathematics; and secondly, whether interaction with these virtual manipulatives enhances the understanding of mathematics concepts. The study is rooted in a social constructivist view of learning and mediated learning, and the approach used is a case study. The research was carried out in an independent school and involved 8 learners. My data consisted of feedback from two sets of student worksheets, the first from prior to using the GeoGebra applets and the other from post engagement with the applets, classroom observations during the practical use of GeoGebra, and finally learner interviews.
On analysis of the data it seems that the introduction of GeoGebra did indeed influence the educational practice, namely through the development of mathematical ideas and concepts in computer-based teaching and the role GeoGebra plays in the understanding and visualization of certain mathematical concepts in high school algebra topics.
Thesis (M.Ed.)-University of KwaZulu-Natal, Edgewood, 2011.
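The vertical and horizontal shifts described above can be stated compactly as g(x) = f(x - h) + k. GeoGebra demonstrates this interactively with sliders for h and k; the small numeric sketch below is only an illustration of the same rule.

```python
def shift(f, h=0, k=0):
    """Return g(x) = f(x - h) + k: horizontal shift by h, vertical shift
    by k -- the transformations of functions explored with sliders."""
    return lambda x: f(x - h) + k

f = lambda x: x ** 2
g = shift(f, h=2, k=3)  # vertex of x^2 moves from (0, 0) to (2, 3)
print(g(2))  # 3: the shifted minimum value
print(g(4))  # 7: f(4 - 2) + 3 = 4 + 3
```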
17

Nawaz, Yassir. "Design of Stream Ciphers and Cryptographic Properties of Nonlinear Functions." Thesis, 2007. http://hdl.handle.net/10012/3447.

Abstract:
Block and stream ciphers are widely used to protect the privacy of digital information. A variety of attacks against block and stream ciphers exist; the most recent being the algebraic attacks. These attacks reduce the cipher to a simple algebraic system which can be solved by known algebraic techniques. These attacks have been very successful against a variety of stream ciphers and major efforts (for example eSTREAM project) are underway to design and analyze new stream ciphers. These attacks have also raised some concerns about the security of popular block ciphers. In this thesis, apart from designing new stream ciphers, we focus on analyzing popular nonlinear transformations (Boolean functions and S-boxes) used in block and stream ciphers for various cryptographic properties, in particular their resistance against algebraic attacks. The main contribution of this work is the design of two new stream ciphers and a thorough analysis of the algebraic immunity of Boolean functions and S-boxes based on power mappings. First we present WG, a family of new stream ciphers designed to obtain a keystream with guaranteed randomness properties. We show how to obtain a mathematical description of a WG stream cipher for the desired randomness properties and security level, and then how to translate this description into a practical hardware design. Next we describe the design of a new RC4-like stream cipher suitable for high speed software applications. The design is compared with original RC4 stream cipher for both security and speed. The second part of this thesis closely examines the algebraic immunity of Boolean functions and S-boxes based on power mappings. We derive meaningful upper bounds on the algebraic immunity of cryptographically significant Boolean power functions and show that for large input sizes these functions have very low algebraic immunity. 
To analyze the algebraic immunity of S-boxes based on power mappings, we focus on calculating the bi-affine and quadratic equations they satisfy. We present two very efficient algorithms for this purpose and give new S-box constructions that guarantee zero bi-affine and quadratic equations. We also examine these S-boxes for their resistance against linear and differential attacks and provide a list of S-boxes based on power mappings that offer high resistance against linear, differential, and algebraic attacks. Finally we investigate the algebraic structure of S-boxes used in AES and DES by deriving their equivalent algebraic descriptions.
18

Panos, Dennis C. "Approximate dynamic programming and aerial refueling." Thesis, 2007. http://hdl.handle.net/10945/2990.

Abstract:
Aerial refueling is an integral part of the United States military's ability to strike targets around the world with an overwhelming and continuous projection of force. However, with an aging fleet of refueling tankers and an indefinite replacement schedule, the optimization of tanker usage is vital to national security. Optimizing tanker and receiver refueling operations is a complicated endeavor, as it can involve over a thousand missions during a 24-hour period, as in Operation Iraqi Freedom and Operation Enduring Freedom. Therefore, a planning model which increases receiver mission capability while reducing demands on tankers can be used by the military to extend the capabilities of the current tanker fleet. Aerial refueling optimization software, created in CASTLE Laboratory, solves the aerial refueling problem through a multi-period approximate dynamic programming approach. The multi-period approach is built around sequential linear programs, which incorporate value functions, to find the optimal refueling tracks for receivers and tankers. The use of value functions allows for a solution which optimizes over the entire horizon of the planning period. This approach varies greatly from the myopic optimization currently in use by the Air Force and produces superior results. The aerial refueling model produces fast, consistent, robust results which require fewer tankers than current planning methods. The results are flexible enough to incorporate stochastic inputs, such as varying refueling times and receiver mission loads, while still meeting all receiver refueling requirements. The model's ability to handle real-world uncertainties while optimizing better than current methods provides a great leap forward in aerial refueling optimization. The aerial refueling model, created in CASTLE Lab, can extend the capabilities of the current tanker fleet.
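The value-function idea behind approximate dynamic programming can be sketched with exact value iteration on a toy problem. The two-state example below is hypothetical and far simpler than the tanker model; the thesis approximates the value function rather than tabulating it, but the backup operation has the same shape.

```python
def value_iteration(states, actions, step, reward, gamma=0.9, iters=100):
    """Tabular value iteration: repeat the dynamic-programming backup
    V(s) <- max_a [ r(s, a) + gamma * V(step(s, a)) ].  Optimizing with
    a value function accounts for the whole planning horizon, unlike a
    myopic one-period optimization."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {s: max(reward(s, a) + gamma * V[step(s, a)] for a in actions)
             for s in states}
    return V

# toy deterministic 2-state problem (hypothetical)
states = [0, 1]
actions = ['stay', 'go']
step = lambda s, a: 1 - s if a == 'go' else s
reward = lambda s, a: 1.0 if (s == 1 and a == 'stay') else 0.0
V = value_iteration(states, actions, step, reward)
print(V[1] > V[0])  # True: state 1 pays 1 per period, worth about 1/(1-gamma)
```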
Contract number: N00244-99-G-0019
US Navy (USN) author.
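The advantage of the value-function lookahead over myopic planning, as described in the abstract above, can be illustrated with a minimal two-period sketch. All numbers here (fuel capacity, mission costs, rewards) are invented for illustration; the actual model uses sequential linear programs over many periods, not this toy enumeration:

```python
# Hypothetical toy: one tanker with 100 units of offload fuel and two periods.
# Period-1 mission options as (fuel needed, reward); one period-2 mission.
PERIOD1 = [(60, 1.0), (30, 1.0)]
PERIOD2_NEED, PERIOD2_REWARD = 70, 2.0

def value_next(fuel_left):
    """Value function V1: reward still obtainable in period 2 from a fuel state."""
    return PERIOD2_REWARD if fuel_left >= PERIOD2_NEED else 0.0

def plan(fuel, lookahead):
    """Choose the period-1 mission greedily (myopic) or with the value term added."""
    score = lambda m: m[1] + (value_next(fuel - m[0]) if lookahead else 0.0)
    best = max(PERIOD1, key=score)
    total = best[1] + value_next(fuel - best[0])
    return best, total

myopic_choice, myopic_total = plan(100, lookahead=False)   # ignores period 2
adp_choice, adp_total = plan(100, lookahead=True)          # optimizes the horizon
```

The myopic planner sees two equally rewarding period-1 missions and burns 60 units of fuel, leaving too little for period 2; the value-function planner spends only 30 and captures both periods' rewards, which is the sense in which optimizing "over the entire horizon" beats period-by-period optimization.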
APA, Harvard, Vancouver, ISO, and other styles
19

Eickhoff-Schachtebeck, Annika. "Thetafunktionen und konjugationsinvariante Funktionen auf Paaren von Matrizen [Theta functions and conjugation-invariant functions on pairs of matrices]." Doctoral thesis, 2008. http://hdl.handle.net/11858/00-1735-0000-0006-B3B6-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
20

Irvine, Allison W. "Computational Analysis of Flow Cytometry Data." 2013. http://hdl.handle.net/1805/3367.

Full text
Abstract:
Indiana University-Purdue University Indianapolis (IUPUI)
The objective of this thesis is to compare automated methods for performing analysis of flow cytometry data. Flow cytometry is an important and efficient tool for analyzing the characteristics of cells. It is used in several fields, including immunology, pathology, marine biology, and molecular biology. Flow cytometry measures light scatter from cells and fluorescent emission from dyes which are attached to cells. There are two main tasks that must be performed. The first is the adjustment of measured fluorescence from the cells to correct for the overlap of the spectra of the fluorescent markers used to characterize a cell's chemical characteristics. The second is to use the amount of each marker present in a cell to identify its phenotype. Several methods are compared for performing these tasks. The Unconstrained Least Squares, Orthogonal Subspace Projection, Fully Constrained Least Squares, and Fully Constrained One Norm methods are compared for performing compensation. The Fully Constrained Least Squares method of compensation gives the overall best results in terms of accuracy and running time. Spectral Clustering, Gaussian Mixture Modeling, Naive Bayes classification, Support Vector Machines, and Expectation Maximization using a Gaussian mixture model are used to classify cells based on the amounts of dyes present in each cell. The generative models created by the Naive Bayes and Gaussian mixture modeling methods performed classification of cells most accurately. These supervised methods may be most useful when online classification is necessary, such as in cell-sorting applications of flow cytometers. Unsupervised methods may be used to completely replace manual analysis when no training data are given. Expectation Maximization combined with a cluster-merging post-processing step gives the best results of the unsupervised methods considered.
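The compensation task described in the abstract above is, at its core, a spectral unmixing problem: each detector reading mixes contributions from several dyes, and the per-dye signals are recovered by inverting the spillover. A minimal sketch of the unconstrained least-squares variant mentioned in the abstract follows; the spillover matrix and abundances are invented for illustration, and the thesis's best-performing method (Fully Constrained Least Squares) would add nonnegativity and sum constraints on top of this:

```python
import numpy as np

# Hypothetical spillover matrix: column j is dye j's emission profile
# across three detectors (off-diagonal entries are spectral overlap).
M = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.1, 1.0]])

true_abundance = np.array([5.0, 2.0, 1.0])   # per-dye signal on one cell
measured = M @ true_abundance                # simulated raw detector readings

# Unconstrained least-squares unmixing: solve M @ a ~= measured for a.
est, *_ = np.linalg.lstsq(M, measured, rcond=None)
```

With noise-free readings the estimate recovers the true abundances exactly; on real data, constrained variants avoid the negative abundances that plain least squares can produce.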
APA, Harvard, Vancouver, ISO, and other styles