To see the other types of publications on this topic, follow the link: Mathematical software problems.

Dissertations / Theses on the topic 'Mathematical software problems'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 33 dissertations / theses for your research on the topic 'Mathematical software problems.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Chang, Tyler Hunter. "Mathematical Software for Multiobjective Optimization Problems." Diss., Virginia Tech, 2020. http://hdl.handle.net/10919/98915.

Full text
Abstract:
In this thesis, two distinct problems in data-driven computational science are considered. The main problem of interest is the multiobjective optimization problem, where the tradeoff surface (called the Pareto front) between multiple conflicting objectives must be approximated in order to identify designs that balance real-world tradeoffs. In order to solve multiobjective optimization problems that are derived from computationally expensive blackbox functions, such as engineering design optimization problems, several methodologies are combined, including surrogate modeling, trust region methods, and adaptive weighting. The result is a numerical software package that finds approximately Pareto optimal solutions that are evenly distributed across the Pareto front, using minimal cost function evaluations. The second problem of interest is the closely related problem of multivariate interpolation, where an unknown response surface representing an underlying phenomenon is approximated by finding a function that exactly matches available data. To solve the interpolation problem, a novel algorithm is proposed for computing only a sparse subset of the elements in the Delaunay triangulation, as needed to compute the Delaunay interpolant. For high-dimensional data, this reduces the time and space complexity of Delaunay interpolation from exponential time to polynomial time in practice. For each of the above problems, both serial and parallel implementations are described. Additionally, both solutions are demonstrated on real-world problems in computer system performance modeling.

Doctor of Philosophy

Science and engineering are full of multiobjective tradeoff problems. For example, a portfolio manager may seek to build a financial portfolio with low risk, high return rates, and minimal transaction fees; an aircraft engineer may seek a design that maximizes lift, minimizes drag force, and minimizes aircraft weight; a chemist may seek a catalyst with low viscosity, low production costs, and high effective yield; or a computational scientist may seek to fit a numerical model that minimizes the fit error while also minimizing a regularization term that leverages domain knowledge. Often, these criteria are conflicting, meaning that improved performance by one criterion must be at the expense of decreased performance in another criterion. The solution to a multiobjective optimization problem allows decision makers to balance the inherent tradeoff between conflicting objectives. A related problem is the multivariate interpolation problem, where the goal is to predict the outcome of an event based on a database of past observations, while exactly matching all observations in that database. Multivariate interpolation problems are equally as prevalent and impactful as multiobjective optimization problems. For example, a pharmaceutical company may seek a prediction for the costs and effects of a proposed drug; an aerospace engineer may seek a prediction for the lift and drag of a new aircraft design; or a search engine may seek a prediction for the classification of an unlabeled image. Delaunay interpolation offers a unique solution to this problem, backed by decades of rigorous theory and analytical error bounds, but does not scale to high-dimensional "big data" problems. In this thesis, novel algorithms and software are proposed for solving both of these extremely difficult problems.
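The central object in the abstract above is the Pareto front, the set of nondominated objective vectors. As a minimal generic illustration (a sketch of the concept only, not the author's surrogate-based software package), the following Python snippet filters a finite set of objective vectors down to its nondominated subset, assuming all objectives are minimized.

```python
import numpy as np

def pareto_front(points):
    """Return the nondominated rows of `points` (all objectives minimized).

    A point p dominates q if p <= q in every objective and p < q in at
    least one. This brute-force check is O(n^2 * d) and only illustrates
    the concept, not the surrogate-based method of the thesis.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i]):
                keep[i] = False          # point i is dominated by point j
                break
    return pts[keep]

# Example: trade-off between two objectives (e.g., drag vs. weight).
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(designs))             # (3.0, 4.0) is dominated by (2.0, 3.0)
```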
APA, Harvard, Vancouver, ISO, and other styles
2

Lawson, Jane. "Towards error control for the numerical solution of parabolic equations." Thesis, University of Leeds, 1989. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.329947.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Vincent, Jill. "Mechanical linkages, dynamic geometry software, and argumentation : supporting a classroom culture of mathematical proof /." Connect to thesis, 2002. http://eprints.unimelb.edu.au/archive/00001399.

Full text
APA, Harvard, Vancouver, ISO, and other styles
4

Graf, Edith Aurora. "Designing a computer tutorial to correct a common student misconception in mathematics /." Thesis, Connect to this title online; UW restricted, 2000. http://hdl.handle.net/1773/9154.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Vasconcelos, Francisco Ricardo Nogueira de. "Resolução de problemas de congruência de triângulos com auxílio do software Geogebra." Universidade Federal do Ceará, 2015. http://www.teses.ufc.br/tde_busca/arquivo.php?codArquivo=15153.

Full text
Abstract:
Our challenge as teachers is to improve the quality of mathematics education, looking for ways to ensure the formation of citizens who are able to recognize their role in society and find clear paths towards a promising professional career. In this sense, we focus our research on pedagogical actions that enable the development of students' cognitive potential in the study of congruence. To this end, we propose the use of the GeoGebra software as a teaching tool for Plane Geometry classes, because we believe that this resource gives the student an environment favourable to learning and places the teacher as a mediator in the process of conceptual systematization of the mathematical ideas needed to develop the students' cognitive structures. The aim of our study is to support students in the Mathematics teaching degree at the Federal Institute of Education, Science and Technology in using the GeoGebra software as an auxiliary teaching tool for solving Plane Geometry problems involving cases of triangle congruence. For data collection and analysis, we studied the pedagogical project of the course and ran one short course on the use of GeoGebra for 21 students regularly enrolled in the Plane Geometry discipline. The research instruments were two diagnostic questionnaires, observation, and photographic records.
The analysis of the results showed that the students were interested in using GeoGebra in the classroom, and the short course and the teaching activities applied were well received by the future mathematics teachers. The conclusions point out that the use of GeoGebra should be understood as an alternative teaching tool for Geometry, providing the student with a dynamic, interactive and playful methodology for learning mathematics.
APA, Harvard, Vancouver, ISO, and other styles
6

Siahaan, Antony. "Defect correction based domain decomposition methods for some nonlinear problems." Thesis, University of Greenwich, 2011. http://gala.gre.ac.uk/7144/.

Full text
Abstract:
Defect correction schemes, as a class of nonoverlapping domain decomposition methods, offer several advantages in the way they split a complex problem into several subdomain problems with less complexity. The schemes need a nonlinear solver to take care of the residual at the interface. The adaptive-α solver can converge locally in the ∞-norm, where the sufficient condition requires a relatively small local neighbourhood and the problem must have a strongly diagonally dominant Jacobian matrix with a very small condition number. Yet its advantage can be of high significance in computational cost, where it simply needs a scalar as the approximation of the Jacobian matrix. Other nonlinear solvers employed for the schemes are a Newton-GMRES method, a Newton method with a finite difference Jacobian approximation, and nonlinear conjugate gradient solvers with Fletcher-Reeves and Polak-Ribière search direction formulas. The schemes are applied to three nonlinear problems. The first problem is heat conduction in a multichip module, where the domain is assembled from many components of different conductivities and physical sizes. Here the implementations of the schemes satisfy the component meshing and gluing concept. A finite difference approximation of the residual of the governing equation turns out to be a better defect equation than the equality of the normal derivative. Of all the nonlinear solvers implemented in the defect correction scheme, the nonlinear conjugate gradient method with the Fletcher-Reeves search direction has the best performance. The second problem is a 2D single-phase fluid flow with heat transfer, where the PHOENICS CFD code is used to run the subdomain computation. The Newton method with a finite difference Jacobian is a reasonable interface solver in coupling these subdomain computations. The final problem is multiphase heat and moisture transfer in a porous textile. The PHOENICS code is also used to solve the system of partial differential equations governing the multiphase process in each subdomain, while the coupling of the subdomain solutions is handled by the defect correction schemes through some FORTRAN code. A scheme using a modified-α method fails to obtain decent solutions in both the single-layer and two-layer cases. On the other hand, the scheme using the above Newton method produces satisfying results for both cases, where it can lead initially distant interface data to a good convergent solution. However, it is found that, in general, the number of nonlinear iterations of the defect correction schemes increases with mesh refinement.
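One of the interface solvers named above is a Newton method with a finite-difference Jacobian approximation. The sketch below shows that generic building block for a small nonlinear system; it is only an illustration of the technique, not the PHOENICS-coupled defect-correction code described in the thesis, and the example system is invented.

```python
import numpy as np

def newton_fd(f, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Solve f(x) = 0 by Newton's method with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        n = len(x)
        J = np.empty((n, n))
        for j in range(n):                 # build the Jacobian column by column
            xh = x.copy()
            xh[j] += h
            J[:, j] = (f(xh) - fx) / h
        x = x - np.linalg.solve(J, fx)     # Newton update
    return x

# Example: intersection of a circle and a line.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
print(newton_fd(f, [1.0, 0.0]))            # approx. [0.7071, 0.7071]
```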
APA, Harvard, Vancouver, ISO, and other styles
7

Koripalli, RadhaShilpa. "Parameter Tuning for Optimization Software." VCU Scholars Compass, 2012. http://scholarscompass.vcu.edu/etd/2862.

Full text
Abstract:
Mixed integer programming (MIP) problems are highly parameterized, and finding parameter settings that achieve high performance for specific types of MIP instances is challenging. This paper presents a method to find information about how CPLEX solver parameter settings perform for different classes of mixed integer linear programs by using designed experiments and statistical models. Fitting a model through design of experiments helps in finding the optimal region across all combinations of parameter settings. The study involves recognizing the parameter settings that result in the best performance for a specific class of instances. Choosing good settings has a large effect on minimizing the solution time and optimality gap.
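Conceptually, the tuning study amounts to evaluating the solver over a designed grid of parameter settings and instance classes and keeping the best-performing setting per class. The sketch below shows only that experimental loop; `solve_instance`, its parameters, and the synthetic timing formula are hypothetical stand-ins, not the CPLEX API or the settings studied in the thesis.

```python
import itertools
import statistics

def solve_instance(instance, mip_emphasis, heuristic_freq):
    """Stand-in for a real MIP solver call; a real study would invoke the
    solver here and return (solve_time_seconds, optimality_gap)."""
    # Purely synthetic timing so the sketch runs end to end.
    return (instance * (1 + mip_emphasis) / (2 + heuristic_freq), 0.0)

design = {"mip_emphasis": [0, 1, 2], "heuristic_freq": [0, 10]}
instances = [1.0, 2.0, 3.0]                # stand-ins for instances of one class

results = {}
for combo in itertools.product(*design.values()):
    params = dict(zip(design.keys(), combo))
    times = [solve_instance(inst, **params)[0] for inst in instances]
    results[combo] = statistics.mean(times)   # mean solve time for this setting

best = min(results, key=results.get)          # best setting for this class
print(dict(zip(design.keys(), best)), results[best])
```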
APA, Harvard, Vancouver, ISO, and other styles
8

Kwanashie, Augustine. "Efficient algorithms for optimal matching problems under preferences." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/6706/.

Full text
Abstract:
In this thesis we consider efficient algorithms for matching problems involving preferences, i.e., problems where agents may be required to list other agents that they find acceptable in order of preference. In particular we mainly study the Stable Marriage problem (SM), the Hospitals / Residents problem (HR) and the Student / Project Allocation problem (SPA), and some of their variants. In some of these problems the aim is to find a stable matching which is one that admits no blocking pair. A blocking pair with respect to a matching is a pair of agents that prefer to be matched to each other than their assigned partners in the matching if any. We present an Integer Programming (IP) model for the Hospitals / Residents problem with Ties (HRT) and use it to find a maximum cardinality stable matching. We also present results from an empirical evaluation of our model which show it to be scalable with respect to real-world HRT instance sizes. Motivated by the observation that not all blocking pairs that exist in theory will lead to a matching being undermined in practice, we investigate a relaxed stability criterion called social stability where only pairs of agents with a social relationship have the ability to undermine a matching. This stability concept is studied in instances of the Stable Marriage problem with Incomplete lists (SMI) and in instances of HR. We show that, in the SMI and HR contexts, socially stable matchings can be of varying sizes and the problem of finding a maximum socially stable matching (MAX SMISS and MAX HRSS respectively) is NP-hard though approximable within 3/2. Furthermore we give polynomial time algorithms for three special cases of the problem arising from restrictions on the social network graph and the lengths of agents' preference lists. We also consider other optimality criteria with respect to social stability and establish inapproximability bounds for the problems of finding an egalitarian, minimum regret and sex equal socially stable matching in the SM context. We extend our study of social stability by considering other variants and restrictions of MAX SMISS and MAX HRSS. We present NP-hardness results for MAX SMISS even under certain restrictions on the degree and structure of the social network graph as well as the presence of master lists. Other NP-hardness results presented relate to the problem of determining whether a given man-woman pair belongs to a socially stable matching and the problem of determining whether a given man (or woman) is part of at least one socially stable matching. We also consider the Stable Roommates problem with Incomplete lists under Social Stability (a non-bipartite generalisation of SMI under social stability). We observe that the problem of finding a maximum socially stable matching in this context is also NP-hard. We present efficient algorithms for three special cases of the problem arising from restrictions on the social network graph and the lengths of agents' preference lists. These are the cases where (i) there exists a constant number of acquainted pairs (ii) or a constant number of unacquainted pairs or (iii) each preference list is of length at most 2. We also present algorithmic results for finding matchings in the SPA context that are optimal with respect to profile, which is the vector whose ith component is the number of students assigned to their ith-choice project.
We present an efficient algorithm for finding a greedy maximum matching in the SPA context, that is, a maximum matching whose profile is lexicographically maximum. We then show how to adapt this algorithm to find a generous maximum matching, that is, a matching whose reverse profile is lexicographically minimum. We demonstrate how this approach can allow additional constraints, such as lecturer lower quotas, to be handled flexibly. We also present results of empirical evaluations carried out on both real world and randomly generated datasets. These results demonstrate the scalability of our algorithms as well as some interesting properties of these profile-based optimality criteria. Practical applications of SPA motivate the investigation of certain special cases of the problem. For instance, it is often desired that the workload on lecturers is evenly distributed (i.e. load balanced). We enforce this by either adding lower quota constraints on the lecturers (which leads to the potential for infeasible problem instances) or adding a load balancing optimisation criterion. We present efficient algorithms in both cases. Another consideration is the fact that certain projects may require a minimum number of students to become viable. This can be handled by enforcing lower quota constraints on the projects (which also leads to the possibility of infeasible problem instances). A technique of handling this infeasibility is the idea of closing projects that do not meet their lower quotas (i.e. leaving such projects completely unassigned). We show that the problem of finding a maximum matching subject to project lower quotas where projects can be closed is NP-hard even under severe restrictions on preference list lengths and project upper and lower quotas. To offset this hardness, we present polynomial time heuristics that find large feasible matchings in practice. We also present IP models for the SPA variants discussed and show results obtained from an empirical evaluation carried out on both real and randomly generated datasets. These results show that our algorithms and heuristics are scalable and provide good matchings with respect to profile-based optimality.
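For readers unfamiliar with the stability concept running through this abstract, the classical Gale-Shapley deferred-acceptance algorithm for the basic Stable Marriage problem (complete lists, no ties) is sketched below. It only illustrates what a stable matching is; the thesis's algorithms for socially stable and profile-optimal matchings go well beyond this routine.

```python
def gale_shapley(men_prefs, women_prefs):
    """Return a stable matching as {woman: man}, with men proposing.

    men_prefs[m] is m's preference list of women (best first);
    women_prefs[w] is w's preference list of men (best first).
    """
    rank = {w: {m: r for r, m in enumerate(prefs)} for w, prefs in women_prefs.items()}
    free = list(men_prefs)                     # men not yet engaged
    next_choice = {m: 0 for m in men_prefs}    # index of next woman to propose to
    engaged = {}                               # woman -> man
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers the new proposer
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m; he stays free
    return engaged

men = {"a": ["x", "y"], "b": ["y", "x"]}
women = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(men, women))                 # a-x and b-y: no blocking pair
```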
APA, Harvard, Vancouver, ISO, and other styles
9

McGinn, Michelle Katherine. "Researching problem solving in software design, mathematics, and statistical consulting, from qualitative case studies to grounded theory." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp03/NQ51899.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
10

Boccardo, Mateus Eduardo [UNESP]. "Sistemas lineares: aplicações e propostas de aula usando a metodologia de resolução de problemas e o software GeoGebra." Universidade Estadual Paulista (UNESP), 2017. http://hdl.handle.net/11449/151922.

Full text
Abstract:
Linear systems, more precisely systems of linear equations, are a useful tool for solving several practical and important problems, for example problems related to vehicle traffic, the balancing of chemical equations, the planning of a balanced daily diet, electrical circuits, and polynomial interpolation. In this work we study linear systems, their solution methods, some of their numerous applications, and the geometric interpretation of the solution set of a linear system in two or three variables. We also present an analysis of how this subject is treated in some official teaching documents. Finally, we present two class proposals designed for Basic Education students: one to be developed using Problem Solving as a teaching methodology (in the approach to problems on linear systems), and another, on the geometric interpretation of the solution set of a linear system, to be carried out in the computer laboratory using the GeoGebra software.
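As a concrete instance of the applications mentioned above (traffic flow, balancing chemical equations, and so on), the snippet below solves a small linear system numerically. It is a plain NumPy illustration with invented coefficients, not part of the GeoGebra classroom material proposed in the dissertation.

```python
import numpy as np

# Three unknowns (e.g., quantities of three products) subject to three
# linear constraints, written as A @ x = b.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, -1.0],
              [1.0, -1.0, 2.0]])
b = np.array([6.0, 1.0, 5.0])

x = np.linalg.solve(A, b)                    # Gaussian elimination under the hood
print(x)                                     # [1. 2. 3.]

# A unique solution exists exactly when A has full rank (is nonsingular).
print(np.linalg.matrix_rank(A) == 3)         # True
```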
APA, Harvard, Vancouver, ISO, and other styles
11

Gorry, Thomas. "Navigation problems for autonomous robots in distributed environments." Thesis, University of Liverpool, 2015. http://livrepository.liverpool.ac.uk/2013959/.

Full text
Abstract:
This thesis studies algorithms for Distributed Computing. More specifically, the project aimed to carry out research on the performance analysis of mobile robots in a variety of settings. In a range of different network and geometric settings we investigate efficient algorithms for the robots to perform given tasks. We looked at a variety of different models when completing this work, but focused mainly on cases where the robots have limited communication mechanisms. Within this framework we investigated scenarios ranging from those where the robots were numerous to those where they were few in number. We also looked at scenarios where the robots involved had different limitations on the maximal speeds at which they could travel. When conducting this work we explored two main tasks carried out by the robots that became the primary theme of the study: Robot Location Discovery and Robot Evacuation. To accomplish these tasks we constructed algorithms that made use of both randomised and deterministic approaches in their solutions.
APA, Harvard, Vancouver, ISO, and other styles
12

Koreňová, L., M. Dillingerová, P. Vankúš, and D. Židová. "Experience with solving real-life math problems in DQME II project." Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2012. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-80425.

Full text
Abstract:
The network "Developing Quality in Mathematics Education II" is a continuation of the associated project "Developing Quality in Mathematics Education" (http://www.dqime.unidortmund. de). In this project participate universities, teacher education institutions and schools from 11 European countries. Cross-cultural cooperation and exchange of ideas, materials, teachers and pupils support developing quality in mathematics education, especially in the area of mathematical modelling. The quality and application of the developed learning materials is also guaranteed by using, comparing and modifying them in eleven different countries. This comparison leads to an agreement about contents of mathematical learning and teaching in eleven European countries. Thus we want to establish a "European Curriculum for the teaching and learning of mathematics" in the 21st century. A special feature of this project is the strong connection between theory and practice and between the research and development of mathematics education. In this project our Faculty of Mathematics, Physics and Informatics of Comenius University Bratislava manage testing of translated teaching materials at the high school „Gymnazium Sturovo“. We know that using ICT and didactical software in schools is almost present and wide spread. So we try to focus on several possibilities in solving real-life tasks using this technologies, regard to the fact technologies are hard upon the young generation of students.
APA, Harvard, Vancouver, ISO, and other styles
13

Rehfeldt, Márcia Jussara Hepp. "A aplicação de modelos matemáticos em situações-problema empresariais, com uso do software LINDO." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2009. http://hdl.handle.net/10183/17255.

Full text
Abstract:
This thesis aims to demonstrate the possibility of observing the existence of meaningful learning arising from the use of mathematical models when business administration students solve corporate problem situations with the help of the LINDO software. The research was carried out with students at the UNIVATES University Center, in the city of Lajeado, Rio Grande do Sul, while they attended the Operational Research course. The theoretical basis lies on Ausubel's (1968, 2003) theory of meaningful learning, on operational research and its solving tools, mainly the LINDO software, as well as on mathematical modelling. Methodologically, instruments for evaluating subsumers related to the ability to model linear programming problems were applied. Due to the lack of some subsumers, advance organizers were used as pedagogical mechanisms to establish relationships between what the students already knew and what they should know. Later, each student developed at least two mathematical models and two concept maps, the first at the beginning of the research and the others at its end.
As a result, it was noted that the mathematical modelling environment suggested by Barbosa (2006) favoured the observation of meaningful learning (AUSUBEL, 2003) of linear programming when the students abstracted and solved corporate problem situations with the help of the LINDO software. In most cases, the final mathematical models evolved, presenting more variables and constraints. Through the mathematical models and concept maps, it was possible to observe some evidence concerning the professional requirements of the business administrator, such as the ability to recognize and define problems and work out solutions, and the ability to think strategically and introduce changes into the productive process. It should be emphasized that the mathematical models illustrate the knowledge the student possesses. Therefore, they are different, have different levels, and reflect the idiosyncrasy of the teaching-learning process, as postulated by Moreira (2005) and Biembengut (2003).
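The LINDO models described above are linear programs. As a hedged generic illustration of the same kind of formulation (a toy production-mix problem, not one of the students' actual models), the sketch below solves a tiny LP with SciPy; `linprog` minimizes, so the profit objective is negated.

```python
from scipy.optimize import linprog

# Maximize profit 3x + 5y subject to resource limits:
#   2x +  y <= 10   (machine hours)
#    x + 3y <= 15   (labour hours)
#   x, y >= 0
c = [-3.0, -5.0]                        # negate to turn the max into a min
A_ub = [[2.0, 1.0], [1.0, 3.0]]
b_ub = [10.0, 15.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                  # optimal plan (3, 4) and its profit 29
```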
APA, Harvard, Vancouver, ISO, and other styles
14

Ewer, John Andrew Clark. "An investigation into the feasibility, problems and benefits of re-engineering a legacy procedural CFD code into an event driven, object oriented system that allows dynamic user interaction." Thesis, University of Greenwich, 2000. http://gala.gre.ac.uk/6165/.

Full text
Abstract:
This research started with questions about how the overall efficiency, reliability and ease-of-use of Computational Fluid Dynamics (CFD) codes could be improved using any available software engineering and Human Computer Interaction (HCI) techniques. Much of this research has been driven by the difficulties experienced by novice CFD users in the area of Fire Field Modelling, where the introduction of performance based building regulations has led to a situation where non CFD experts are increasingly making use of CFD techniques, with varying degrees of effectiveness, for safety critical research. Formerly, such modelling has not been helped by the mode of use, the high degree of expertise required from the user and the complexity of specifying a simulation case. Many of the early stages of this research were channelled by perceived limitations of the original legacy CFD software that was chosen as a framework for these investigations. These limitations included poor code clarity, bad overall efficiency due to the use of batch mode processing, poor assurance that the final results presented from the CFD code were correct and the requirement for considerable expertise on the part of users. The innovative incremental re-engineering techniques developed to reverse-engineer, re-engineer and improve the internal structure and usability of the software were arrived at as a by-product of the research into overcoming the problems discovered in the legacy software. The incremental re-engineering methodology was considered to be of enough importance to warrant inclusion in this thesis. Various HCI techniques were employed to attempt to overcome the efficiency and solution correctness problems. These investigations have demonstrated that the quality, reliability and overall run-time efficiency of CFD software can be significantly improved by the introduction of run-time monitoring and interactive solution control. It should be noted that the re-engineered CFD code is observed to run more slowly than the original FORTRAN legacy code due, mostly, to the changes in calling architecture of the software and differences in compiler optimisation; but it is argued that the overall effectiveness, reliability and ease-of-use of the prototype software are all greatly improved. Investigations into dynamic solution control (made possible by the open software architecture and the interactive control interface) have demonstrated considerable savings when using solution control optimisation. Such investigations have also demonstrated the potential for improved assurance of correct simulation when compared with the batch mode of processing found in most legacy CFD software. Investigations have also been conducted into the efficiency implications of using unstructured group solvers. These group solvers are a derivation of the simple point-by-point Jacobi Over-Relaxation (JOR) and Successive Over-Relaxation (SOR) solvers [CROFT98], and using group solvers allows the computational processing to be more effectively targeted on regions or logical collections of cells that require more intensive computation. Considerable savings have been demonstrated for the use of both static and dynamic group membership when using these group solvers for a complex 3-dimensional fire modelling scenario. Furthermore, the improvements in the system architecture (brought about as a result of software re-engineering) have helped to create an open framework that is both easy to comprehend and extend.
This is in spite of the underlying unstructured nature of the simulation mesh with all of the associated complexity that this brings to the data structures. The prototype CFD software framework has recently been used as the core processing module in a commercial Fire Field Modelling product (called "SMARTFIRE" [EWER99-1]). This CFD framework is also being used by researchers to investigate many diverse aspects of CFD technology including Knowledge Based Solution Control, Gaseous and Solid Phase Combustion, Adaptive Meshing and CAD file interpretation for ease of case specification.
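The group solvers mentioned above are derived from the point-wise Jacobi and Successive Over-Relaxation (SOR) iterations. For context, a minimal point-wise SOR sweep for a linear system Ax = b is sketched below; the thesis applies the same idea to groups of cells on an unstructured mesh, which this toy version does not attempt.

```python
import numpy as np

def sor(A, b, omega=1.25, tol=1e-10, max_sweeps=500):
    """Point-wise Successive Over-Relaxation for A x = b (A diagonally dominant)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b)
    for _ in range(max_sweeps):
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]          # contribution of the other unknowns
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:            # stop once the residual is small
            break
    return x

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
print(sor(A, b))        # approx. [1. 2. 3.], agreeing with np.linalg.solve(A, b)
```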
APA, Harvard, Vancouver, ISO, and other styles
15

Bundala, Daniel. "Algorithmic verification problems in automata-theoretic settings." Thesis, University of Oxford, 2014. https://ora.ox.ac.uk/objects/uuid:60b2d507-153f-4119-a888-56ccd47c3752.

Full text
Abstract:
Problems in formal verification are often stated in terms of finite automata and extensions thereof. In this thesis we investigate several such algorithmic problems. In the first part of the thesis we develop a theory of completeness thresholds in Bounded Model Checking. A completeness threshold for a given model M and a specification φ is a bound k such that, if no counterexample to φ of length k or less can be found in M, then M in fact satisfies φ. We settle a problem of Kroening et al. [KOS+11] in the affirmative, by showing that the linearity problem for both regular and ω-regular specifications (provided as finite automata and Büchi automata respectively) is PSPACE-complete. Moreover, we establish the following dichotomies: for regular specifications, completeness thresholds are either linear or exponential, whereas for ω-regular specifications, completeness thresholds are either linear or at least quadratic in the recurrence diameter of the model under consideration. Given a formula in a temporal logic such as LTL or MTL, a fundamental problem underpinning automata-based model checking is the complexity of evaluating the formula on a given finite word. For LTL, the complexity of this task was recently shown to be in NC [KF09]. In the second part of the thesis we present an NC algorithm for MTL, a quantitative (or metric) extension of LTL, and give an AC¹ algorithm for UTL, the unary fragment of LTL. We then establish a connection between LTL path checking and planar circuits which, among others, implies that the complexity of LTL path checking depends on the Boolean connectives allowed: adding Boolean exclusive or yields a temporal logic with a P-complete path-checking problem. In the third part of the thesis we study the decidability of the reachability problem for parametric timed automata. The problem was introduced over 20 years ago by Alur, Henzinger, and Vardi [AHV93]. It is known that for three or more parametric clocks the problem is undecidable. We translate the problem to reachability questions in certain extensions of parametric one-counter machines. By further reducing to satisfiability in Presburger arithmetic with divisibility, we obtain decidability results for several classes of parametric one-counter machines. As a corollary, we show that, in the case of a single parametric clock (with arbitrarily many nonparametric clocks), the reachability problem is NEXP-complete, improving the nonelementary decision procedure of Alur et al. The case of two parametric clocks is open. Here, we show that reachability is decidable in this case for automata with a single parameter.
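Path checking, discussed above, asks whether a fixed finite word satisfies a temporal formula. The sketch below evaluates a small fragment of LTL (atomic propositions, negation, conjunction, next, until) over a finite trace by straightforward recursion; it only illustrates the problem being classified, not the NC and AC¹ algorithms of the thesis.

```python
def holds(phi, word, i=0):
    """Does the finite word (a list of sets of atomic propositions) satisfy
    phi at position i?  Formulas are nested tuples, e.g. ('U', p, q)."""
    op = phi[0]
    if op == 'ap':                       # atomic proposition
        return phi[1] in word[i]
    if op == 'not':
        return not holds(phi[1], word, i)
    if op == 'and':
        return holds(phi[1], word, i) and holds(phi[2], word, i)
    if op == 'X':                        # next (false at the last position)
        return i + 1 < len(word) and holds(phi[1], word, i + 1)
    if op == 'U':                        # until: phi[2] eventually, phi[1] until then
        return any(holds(phi[2], word, k) and
                   all(holds(phi[1], word, j) for j in range(i, k))
                   for k in range(i, len(word)))
    raise ValueError(op)

trace = [{'p'}, {'p'}, {'q'}]            # p, p, then q
print(holds(('U', ('ap', 'p'), ('ap', 'q')), trace))   # True: "p until q" holds
```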
APA, Harvard, Vancouver, ISO, and other styles
16

Hekimoglu, Ozge. "Comparison Of The Resource Allocation Capabilities Of Project Management Software Packages In Resource Constrained Project Scheduling Problems." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608203/index.pdf.

Full text
Abstract:
In this study, results of a comparison on benchmark test problems are presented to investigate the performance of Primavera V.4.1, with its two resource allocation priority rules, and MS Project 2003. The resource allocation capabilities of the packages are measured in terms of deviation from the upper bound of the minimum makespan. Resource constrained project scheduling problem instances are taken from PSPLIB, which were generated under a factorial design from ProGen. Statistical tests are applied to the results to investigate the significance of the effects of the parameters.
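The comparison metric used above, deviation from an upper bound on the minimum makespan, is a simple percentage. A minimal sketch of that computation follows; the numbers are invented placeholders, not PSPLIB results.

```python
def percent_deviation(makespan, upper_bound):
    """Relative deviation (%) of a schedule's makespan from the best known
    upper bound on the minimum makespan for the same instance."""
    return 100.0 * (makespan - upper_bound) / upper_bound

# Hypothetical results for one instance scheduled by two packages.
print(percent_deviation(makespan=47, upper_bound=43))   # about 9.3 %
print(percent_deviation(makespan=43, upper_bound=43))   # 0.0 %
```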
APA, Harvard, Vancouver, ISO, and other styles
17

McBride, Iain. "Complexity results and integer programming models for hospitals/residents problem variants." Thesis, University of Glasgow, 2015. http://theses.gla.ac.uk/7027/.

Full text
Abstract:
The classical Hospitals / Residents problem (HR) is a many-to-one bipartite matching problem involving preferences, motivated by centralised matching schemes arising in entry level labour markets, the assignment of pupils to schools and higher education admissions schemes, among its many applications. The particular requirements of these matching schemes may lead to generalisations of HR that involve additional inputs or constraints on an acceptable solution. In this thesis we study such variants of HR from an algorithmic and integer programming viewpoint. The Hospitals / Residents problem with Couples (HRC) is a variant of HR that is important in practical applications because it models the case where couples submit joint preference lists over pairs of (typically geographically close) hospitals. It is known that an instance of HRC need not admit a stable matching. We show that deciding whether an instance of HRC admits a stable matching is NP-complete even under some very severe restrictions on the lengths and the structure of the participants’ preference lists. However, we show that under certain restrictions on the lengths of the agents’ preference lists, it is possible to find a maximum cardinality stable matching or report that none exists in polynomial time. Since an instance of HRC need not admit a stable matching, it is natural to seek the ‘most stable’ matching possible, i.e., a matching that admits the minimum number of blocking pairs. We use a gap-introducing reduction to establish an inapproximability bound for the problem of finding a matching in an instance of HRC that admits the minimum number of blocking pairs. Further, we show how this result might be generalised to prove that a number of minimisation problems based on matchings having NP-complete decision versions have the same inapproximability bound. We also show that this result holds for more general minimisation problems having NP-complete decisions versions that are not based on matching problems. Further, we present a full description of the first Integer Programming (IP) model for finding a maximum cardinality stable matching or reporting that none exists in an arbitrary instance of HRC. We present empirical results showing the average size of a maximum cardinality stable matching and the percentage of instances admitting stable matching taken over a number of randomly generated instances of HRC with varying properties. We also show how this model might be generalised to the variant of HRC in which ties are allowed in either the hospitals’ or the residents’ preference lists, the Hospitals / Residents problem with Couples and Ties (HRCT). We also describe and prove the correctness of the first IP model for finding a maximum cardinality ‘most stable’ matching in an arbitrary instance of HRC. We describe empirical results showing the average number of blocking pairs admitted by a most-stable matching as well as the average size of a maximum cardinality ‘most stable’ matching taken over a number of randomly generated instances of HRC with varying properties. Further, we examine the output when the IP model for HRCT is applied to real world instances arising from the process used to assign medical graduates to Foundation Programme places in Scotland in the years 2010-2012. The Hungarian Higher Education Allocation Scheme places a number of additional constraints on the feasibility of an allocation and this gives rise to various generalisations of HR. 
We show how a number of these additional requirements may be modelled using IP techniques by use of an appropriate combination of IP constraints. We present IP models for HR with Stable Score Limits and Ties, HR with Paired Applications, Ties and Stable Score limits, HR with Common Quotas, Ties and Stable Score Limits and also HR with Lower Quotas, Ties and Stable Score limits that model these generalisations of HR. The Teachers’ Allocation Problem (TAP) is a variant of HR that models the allocation of trainee teachers to supervised teaching positions in Slovakia. In TAP teachers express preference lists over pairs of subjects at individual schools. It is known that deciding whether an optimal matching exists that assigns all of the trainee teachers is NP-complete for a number of restricted cases. We describe IP models for finding a maximum cardinality matching in an arbitrary TAP instance and for finding a maximum cardinality stable matching, or reporting that none exists, in a TAP instance where schools also have preferences. We show the results when applying the first model to the real data arising from the allocation of trainee teachers to schools carried out at P.J. Safarik University in Kosice in 2013.
APA, Harvard, Vancouver, ISO, and other styles
18

Koyuncu, Ilhan. "Investigating The Use Of Technology On Pre-service Elementary Mathematics Teachers." Master's thesis, METU, 2013. http://etd.lib.metu.edu.tr/upload/12615588/index.pdf.

Full text
Abstract:
The purpose of this study was to investigate the plane geometry problem solving strategies of pre-service elementary mathematics teachers in technology and paper-and-pencil environments after receiving an instruction with GeoGebra. Qualitative research strategies were used to investigate the teacher candidates' solution strategies. The data was collected and analyzed by means of a multiple case study design. The study was carried out with 7 pre-service elementary mathematics teachers. The main data sources were classroom observations and interviews. After receiving a three-week instructional period, the participants experienced data collection sessions during a week. The data was analyzed by using records of the interviews, answers to the instrument, and transcribing and examining observation records. Results revealed that the participants developed three solution strategies: algebraic, geometric and harmonic. They used mostly algebraic solutions in the paper-and-pencil environment and geometric ones in the technology environment. This means that different environments contribute separately to pre-service teachers' mathematical problem solving abilities. Different from traditional environments, technology contributed to students' mathematical understanding by means of dynamic features. In addition, pre-service teachers saved time, developed alternative strategies, constructed the figures precisely, visualized them easily, and measured accurately and quickly. The participants faced some technical difficulties in using the software at the beginning of the study, but they overcame most of them by the end of the instructional period. The results of this study have useful implications for mathematics teachers to use technology during their problem solving activities, as the educational community encourages the use of technology in the teaching and learning of mathematics.
APA, Harvard, Vancouver, ISO, and other styles
19

Troltzsch, Anke. "Une méthode de région de confiance avec ensemble actif pour l'optimisation non linéaire sans dérivées avec contraintes de bornes appliquée à des problèmes aérodynamiques bruités." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2011. http://tel.archives-ouvertes.fr/tel-00639257.

Full text
Abstract:
Derivative-free optimization (DFO) has seen renewed interest in recent years, driven mainly by the growing need to solve optimization problems defined by functions whose values are computed by simulation (for example, engineering design, medical image restoration, or groundwater modelling). In recent years, a number of derivative-free optimization methods have been developed, and in particular methods based on a trust-region model have proven to obtain good results. In this thesis, we present a new interpolation-based trust-region algorithm that is shown to be efficient and globally convergent (in the sense that its convergence to a stationary point is guaranteed from any arbitrary starting point). The new algorithm builds on the self-correcting geometry technique proposed by Scheinberg and Toint (2010). In their theory, they advanced the understanding of the role of geometry in model-based DFO methods. In our work, we were able to improve the efficiency of their method considerably while maintaining its good convergence properties. Moreover, we examine the influence of different types of interpolation models on the performance of the new algorithm. We have further extended this method to take bound constraints into account by applying an active-set strategy. Considering an active-set method for optimization based on interpolation models offers the possibility of saving a substantial number of function evaluations. It allows the interpolation sets to be kept smaller while the optimization is pursued in lower-dimensional subspaces. The resulting algorithm shows very competitive numerical behaviour. We present results on a set of test problems from the CUTEr collection and compare our method to reference algorithms belonging to different classes of DFO methods. To carry out numerical experiments that incorporate noise, we create a set of noisy test cases by adding perturbations to the noise-free problem set. The choice of noisy problems was guided by the desire to mimic simulation-based optimization problems. Finally, we present results on a real-world application, a wing shape design problem provided by Airbus.
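Since the abstract above concerns derivative-free optimization of noisy, simulation-based functions, the brief sketch below runs a standard derivative-free SciPy method (Nelder-Mead) on a noisy quadratic. It is a generic illustration only, not the interpolation-based trust-region algorithm developed in the thesis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_objective(x):
    """A smooth quadratic plus small noise, mimicking a simulation output."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + 1e-3 * rng.normal()

# Nelder-Mead needs no derivatives, only function values.
res = minimize(noisy_objective, x0=[0.0, 0.0], method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-3})
print(res.x)        # close to the true minimiser (1, -2) despite the noise
```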
APA, Harvard, Vancouver, ISO, and other styles
20

Silva, Maria Celimar da. "Desenvolvimento de um corretor automático de exercícios gerados por software matemático." reponame:Biblioteca Digital de Teses e Dissertações da UFRGS, 2016. http://hdl.handle.net/10183/159529.

Full text
Abstract:
In Mathematics classes we can use various software tools that help students build, deconstruct and visualize concepts, and experiment and simulate by manipulating these tools, which motivates the design of tools that also make it easier for educators to monitor learners' work. In this perspective, the overall aim of this work is to develop an automatic grader for exercises generated by a mathematical software package, presenting on screen the answer key and the result with the percentage of correct and incorrect answers, as an attempt to contribute a new possibility for assessing learning. The automatic grader developed in this work was named SACAEM (Sistema de Avaliação e Correção de Atividades no Ensino de Matemática, a system for evaluating and correcting activities in mathematics teaching). It reads activities generated by GeoGebra, which was chosen because, besides being free software, it allows its results to be converted into the XML (eXtensible Markup Language) format. As to its nature, the research developed in this work is classified as technological; as to its objectives, it is descriptive, since observations, analyses and records are made about the development of the automatic grader; as to its technical procedures, it can be classified as bibliographic and experimental, since it aims to elaborate and formulate new elements, simulate events, and carry out laboratory studies using prototypes.
After describing the development of the proposed automatic grader, detailing its architecture, the reading, storage and correction process diagrams, the tools used to build it, and the database modelling, six activities were prepared for the tests. The efficiency of the automatic grader was measured on these activities and, to demonstrate the acceptance of SACAEM, seven mathematics teachers answered a questionnaire evaluating the tool, making their receptiveness to it clear. The main distinguishing feature of SACAEM is the possibility of correcting activities freely developed by the student, without the teacher having to supply a file containing the teacher's own initial answers. It is also a tool that is accessible even to teachers without an Internet connection, and it can assist in the hard work of grading subjective mathematics questions, analysing the different ways in which students express the path they followed towards learning.
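The grader described above reads activities exported by GeoGebra as XML and compares them against an answer key. The sketch below shows that generic pattern using Python's standard xml module; the element and attribute names are hypothetical placeholders, not the actual GeoGebra file format or the SACAEM database schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical export: each <answer> element carries an id and a value.
student_xml = """
<activity>
  <answer id="q1" value="12"/>
  <answer id="q2" value="7"/>
</activity>
"""
answer_key = {"q1": "12", "q2": "9"}

root = ET.fromstring(student_xml)
answers = {a.get("id"): a.get("value") for a in root.iter("answer")}

correct = sum(answers.get(q) == v for q, v in answer_key.items())
print(f"{correct}/{len(answer_key)} correct "
      f"({100.0 * correct / len(answer_key):.0f}%)")   # 1/2 correct (50%)
```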
APA, Harvard, Vancouver, ISO, and other styles
21

Stutler, Richard A. "Analysis of Perturbation-based Testing Methodology as applied to a Real-Time Control System Problem." VCU Scholars Compass, 2005. http://scholarscompass.vcu.edu/etd/1118.

Full text
Abstract:
Perturbation analysis is a software analysis technique used to study the tail function of a program by inserting an error into an executing program using data state mutation. The impact of this induced error on the output is then measured. This methodology can be used to evaluate the effectiveness of a given test set and in fact can be used as a means to derive a test set which provides coverage for a given program. Previous research has shown that there is a "coupling effect" such that test sets that identify simple errors will also identify more complex errors. Thus the research would indicate that this methodology would facilitate the generation of test sets that would detect a wide range of possible faults. This research applies a perturbation analysis technique to the Cell Pre-selection algorithm as used in the Tomahawk Weapons Control System.
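Perturbation analysis, as described above, injects an error into an intermediate data state and observes whether the output changes. A minimal generic Python illustration of that idea follows; it is not the Tomahawk cell pre-selection code, and the function and perturbation site are invented. Note how the clipping step can mask the injected error, so not every perturbation propagates to the output.

```python
def bounded_max(values, cap=10.0, perturb_at=None, delta=0.0):
    """Maximum of `values`, clipped at `cap`; optionally corrupt one reading."""
    best = float("-inf")
    for i, v in enumerate(values):
        if i == perturb_at:
            v = v + delta             # injected data-state error
        best = max(best, v)
    return min(best, cap)             # the cap can mask an injected error

data = [3.0, 9.5, 4.0]
clean = bounded_max(data)
for site in range(len(data)):
    changed = bounded_max(data, perturb_at=site, delta=2.0) != clean
    # A test set is sensitive to the fault at this site only if the output changed.
    print(f"perturb index {site}: output changed = {changed}")
```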
APA, Harvard, Vancouver, ISO, and other styles
22

Coskun, Sirin. "A multiple case study investigating the effects of technology on students' visual and nonvisual thinking preferences comparing paper-pencil and dynamic software based strategies of algebra word problems." Doctoral diss., University of Central Florida, 2011. http://digital.library.ucf.edu/cdm/ref/collection/ETD/id/4874.

Full text
Abstract:
In this multiple-case study, I developed cases describing three students' (Mary, Ryan and David) solution methods for algebra word problems and investigated the effect of technology on their solution methods by making inferences about their preferences for visual or nonvisual solutions. Furthermore, I examined the students' solution methods when presented with virtual physical representations of the situations described in the problems and attempted to explain the effect of those representations on students' thinking preferences. In this study, the use of technology referred to the use of the dynamic software program Geogebra. Suwarsono's (1982) Mathematical Processing Instrument (MPI) was administered to determine their preferences for visual and nonvisual thinking. During the interviews, students were presented with paper-and-pencil-based tasks (PBTs), Geogebra-based tasks (GBTs) and Geogebra-based tasks with virtual physical representations (GBT-VPRs). Each category included 10 algebra word problems, with similar problems across categories (i.e., PBT 9, GBT 9 and GBT-VPR 9 were similar). By investigating students' methods of solution and their use of representations in solving those tasks, I compared and contrasted their preferences for visual and nonvisual methods when solving problems with and without technology. The comparison between their solutions of PBTs and GBTs revealed how dynamic software influenced their method of solution. Regardless of students' preferences for visual and nonvisual solutions, with the use of dynamic software students employed more visual methods when presented with GBTs. When visual methods were as accessible and easy to use as nonvisual methods, students preferred to use them, thus demonstrating that they possessed a more complete knowledge of problem-solving with dynamic software than their work on the PBTs suggested.
Nowadays, we can construct virtual physical representations of the problems in technology environments that will help students explore the relationships and look for patterns that can be used to solve the problem. Unlike GBTs, GBT-VPRs did not influence students' preferences for visual or nonvisual methods. Students continued to rely on the methods that they preferred, since their preferences for visual or nonvisual solutions regarding GBT-VPRs were similar to their solution preferences for the problems on the MPI that was administered to them to determine their preferences for visual or nonvisual methods. Mary, whose MPI score suggested that she preferred to solve mathematics problems using nonvisual methods, solved GBT-VPRs with nonvisual methods. Ryan, whose MPI score suggested that he preferred to solve mathematics problems using visual methods, solved GBT-VPRs with visual methods. David, whose MPI score suggested that he preferred to solve mathematics problems using both visual and nonvisual methods, solved GBT-VPRs with both visual and nonvisual methods.
APA, Harvard, Vancouver, ISO, and other styles
23

Nirenstein, Shaun. "Fast and Accurate Visibility Preprocessing." Thesis, University of Cape Town, 2003. http://pubs.cs.uct.ac.za/archive/00000101/.

Full text
Abstract:
Visibility culling is a means of accelerating the graphical rendering of geometric models. Invisible objects are efficiently culled to prevent their submission to the standard graphics pipeline. It is advantageous to preprocess scenes in order to determine invisible objects from all possible camera views. This information is typically saved to disk and may then be reused until the model geometry changes. Such preprocessing algorithms are therefore used for scenes that are primarily static. Currently, the standard approach to visibility preprocessing algorithms is to use a form of approximate solution, known as conservative culling. Such algorithms over-estimate the set of visible polygons. This compromise has been considered necessary in order to perform visibility preprocessing quickly. These algorithms attempt to satisfy the goals of both rapid preprocessing and rapid run-time rendering. We observe, however, that there is a need for algorithms with superior performance in preprocessing, as well as for algorithms that are more accurate. For most applications these features are not required simultaneously. In this thesis we present two novel visibility preprocessing algorithms, each of which is strongly biased toward one of these requirements. The first algorithm has the advantage of performance. It executes quickly by exploiting graphics hardware. The algorithm also has the features of output sensitivity (to what is visible), and a logarithmic dependency in the size of the camera space partition. These advantages come at the cost of image error. We present a heuristic guided adaptive sampling methodology that minimises this error. We further show how this algorithm may be parallelised and also present a natural extension of the algorithm to five dimensions for accelerating generalised ray shooting. The second algorithm has the advantage of accuracy. No over-estimation is performed, nor are any sacrifices made in terms of image quality. The cost is primarily that of time. Despite the relatively long computation, the algorithm is still tractable and on average scales slightly superlinearly with the input size. This algorithm also has the advantage of output sensitivity. This is the first known tractable exact solution to the general 3D from-region visibility problem. In order to solve the exact from-region visibility problem, we had to first solve a more general form of the standard stabbing problem. An efficient solution to this problem is presented independently.
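For readers who want a concrete picture of the sampling idea behind the first (hardware-accelerated) algorithm, here is a minimal, self-contained Python sketch of sampled from-region visibility in 2D. It is purely illustrative and not taken from the thesis: the scene, the ray-against-segment test and the grid of sample viewpoints are all simplifying assumptions, and, as the comments note, sampling can miss objects, which is exactly the image error that adaptive sampling tries to reduce.

```python
from itertools import product

def segments_intersect(p, q, a, b):
    """2D segment intersection test (proper crossings only; enough for this sketch)."""
    def orient(o, u, v):
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2 = orient(a, b, p), orient(a, b, q)
    d3, d4 = orient(p, q, a), orient(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def sampled_pvs(cell, objects, occluders, samples_per_axis=4):
    """Approximate the potentially visible set (PVS) of a rectangular view cell
    by ray casting from a grid of sample viewpoints inside the cell.  Like any
    sampling scheme, this can miss objects visible only from unsampled points."""
    (x0, y0), (x1, y1) = cell
    n = samples_per_axis
    xs = [x0 + (x1 - x0) * (i + 0.5) / n for i in range(n)]
    ys = [y0 + (y1 - y0) * (j + 0.5) / n for j in range(n)]
    visible = set()
    for eye in product(xs, ys):
        for name, target in objects.items():
            if name in visible:
                continue
            if not any(segments_intersect(eye, target, a, b) for a, b in occluders):
                visible.add(name)
    return visible

# A view cell, two point objects and one occluding wall between the cell and "B".
cell = ((0.0, 0.0), (1.0, 1.0))
objects = {"A": (3.0, 0.5), "B": (0.5, 3.0)}
occluders = [((-1.0, 2.0), (2.0, 2.0))]          # horizontal wall hides "B"
print(sampled_pvs(cell, objects, occluders))     # {'A'}
```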
APA, Harvard, Vancouver, ISO, and other styles
24

Engström, Lil. "Möjligheter till lärande i matematik : Lärares problemformuleringar och dynamisk programvara." Doctoral thesis, Stockholms universitet, Institutionen för undervisningsprocesser, kommunikation och lärande (UKL), 2006. http://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-943.

Full text
Abstract:
This thesis presents the first Swedish empirical evidence on how teachers employ dynamic mathematical software when teaching mathematics in upper secondary school. The study examines: a) how teachers formulate mathematical problems, b) how they use the experience the students have gained, and c) what use they make of the software's potential. These questions are examined through classroom observations followed up by discussions with the teachers. The study comprises three teachers and shows that they have very different mathematical experiences and teaching skills. A questionnaire was sent to the teachers prior to the classroom visits to collect relevant background information; e.g., the teachers were asked to describe their teacher training, their view of mathematics, and their view of how dynamic software could contribute to their teaching. The results show that the teachers' ability to pose thought-provoking open-ended problems is the most important factor, as it significantly influences what the students learn. The way a mathematical problem is formulated could, in conjunction with dynamic software, actually limit the students' achievement. However, this study confirms that it could also provide an opportunity for students to discover new mathematical relations, draw conclusions, generalise and formulate hypotheses. This could in turn lead to informally proving a mathematical relation. A conclusion of the study is that, to be successful, teachers need a good mathematical background with a firm knowledge base and an understanding of the software's potential, but they also need the skill to formulate open-ended problems that will enable their students to work successfully with dynamic mathematical software.
APA, Harvard, Vancouver, ISO, and other styles
25

Parker, Christopher Alonzo. "K x N Trust-Based Agent Reputation." VCU Scholars Compass, 2006. http://scholarscompass.vcu.edu/etd/702.

Full text
Abstract:
In this research, a multi-agent system called KMAS is presented that models an environment of intelligent, autonomous, rational, and adaptive agents that reason about trust, and adapt trust based on experience. Agents reason and adapt using a modification of the k-Nearest Neighbor algorithm called (k X n) Nearest Neighbor where k neighbors recommend reputation values for trust during each of n interactions. Reputation allows a single agent to receive recommendations about the trustworthiness of others. One goal is to present a recommendation model of trust that outperforms MAS architectures relying solely on direct agent interaction. A second goal is to converge KMAS to an emergent system state where only successful cooperation is allowed. Three experiments are chosen to compare KMAS against a non-(k X n) MAS, and between different variations of KMAS execution. Research results show KMAS converges to the desired state, and in the context of this research, KMAS outperforms a direct interaction-based system.
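As a rough, hedged illustration of the kind of recommendation-based reputation mechanism described above (not the actual KMAS model), the Python sketch below lets an agent blend its own direct trust in a partner with recommendations from its k most trusted peers, updated over n interactions. The 0.7/0.3 blend, the 0.1 learning rate, the reliability values and all agent names are arbitrary illustrative choices.

```python
import random

class Agent:
    """Toy agent keeping direct trust scores in [0, 1] for other agents."""

    def __init__(self, name):
        self.name = name
        self.trust = {}                                  # other agent -> direct trust

    def direct(self, other):
        return self.trust.get(other.name, 0.5)           # 0.5 means "unknown"

    def reputation(self, other, peers, k):
        # Ask the k peers we trust most for their opinion of `other`.
        advisors = sorted(peers, key=lambda p: self.direct(p), reverse=True)[:k]
        opinions = [p.direct(other) for p in advisors]
        return sum(opinions) / len(opinions) if opinions else 0.5

    def decide_and_update(self, other, peers, k):
        score = 0.7 * self.direct(other) + 0.3 * self.reputation(other, peers, k)
        cooperate = score >= 0.5
        if cooperate:
            # Outcome of the interaction, simulated here by `other`'s reliability.
            success = random.random() < other.reliability
            old = self.direct(other)
            self.trust[other.name] = old + 0.1 * ((1.0 if success else 0.0) - old)
        return cooperate

# k recommenders consulted over n interactions, in the spirit of (k X n) NN.
random.seed(0)
agents = [Agent(f"a{i}") for i in range(6)]
for i, a in enumerate(agents):
    a.reliability = 0.9 if i < 4 else 0.2                # two unreliable agents
k, n = 3, 50
for _ in range(n):
    a, b = random.sample(agents, 2)
    a.decide_and_update(b, [p for p in agents if p not in (a, b)], k)
print({a.name: round(a.trust.get("a5", 0.5), 2) for a in agents if a.name != "a5"})
```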
APA, Harvard, Vancouver, ISO, and other styles
26

Fiório, Rafael Carpanedo. "Uma abordagem heurística para o problema de otimização de distrito postal." Universidade Federal do Espírito Santo, 2006. http://repositorio.ufes.br/handle/10/6358.

Full text
Abstract:
Made available in DSpace on 2016-12-23T14:33:35Z (GMT). No. of bitstreams: 1 dissertacao.pdf: 2646193 bytes, checksum: 043989a54d6611e19c06eb6bcd7bba69 (MD5) Previous issue date: 2006-06-23<br>This work proposes a solution strategy for the optimized construction of postal districts. A postal district consists of a set of connected street-axis segments. Given a locality formed by numerous street segments, this work proposes grouping connected subsets of street-axis segments so as to compose a postal district. The strategy is to transform the street system of the locality into a graph and, from this graph, to extract its cyclic subgraphs, which are treated as atomic entities. These atomic entities go through an assembly process until they compose a set of postal districts. The methodology presented here divides the work into two distinct phases: the first comprises the process of obtaining the cyclic subgraphs; the second comprises the process of assembling the postal districts. Obtaining the cyclic subgraphs consists of computing the convex hull of the graph and then extracting the cyclic subgraphs tangent to its edges. This is done sequentially: the first convex hull of the graph is determined and its tangent subgraphs are extracted; then the second convex hull is determined and its subgraphs are extracted, and so on. The determination of the convex hull and the extraction of the cyclic subgraphs are carried out through computational geometry operations. The construction of the postal districts is performed by clustering the cyclic subgraphs, using the Simulated Annealing meta-heuristic as a tool. The Chinese Postman Problem and the Capacitated Chinese Postman Problem are supporting formulations for this work. The main objective is to obtain, in a fast and efficient way, an optimized postal district with the smallest possible unproductive route, providing agility in the process of home delivery of postal items.
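To make the clustering step more concrete, here is a small Python sketch of simulated annealing that assigns atomic blocks (standing in for the cyclic subgraphs) to districts under an illustrative cost combining within-district spread and workload balance. The cost function, weights and cooling schedule are assumptions for illustration only, not the formulation used in the dissertation.

```python
import math, random

def district_cost(blocks, assignment, ndistricts, balance_weight=2.0):
    """Within-district spread (a proxy for unproductive travel) plus a
    workload-imbalance penalty.  Purely illustrative objective."""
    cost = 0.0
    loads = [0.0] * ndistricts
    for d in range(ndistricts):
        members = [blocks[i] for i, a in enumerate(assignment) if a == d]
        if not members:
            continue
        cx = sum(b[0] for b in members) / len(members)
        cy = sum(b[1] for b in members) / len(members)
        cost += sum(math.hypot(b[0] - cx, b[1] - cy) for b in members)
        loads[d] = sum(b[2] for b in members)
    mean = sum(loads) / ndistricts
    return cost + balance_weight * sum(abs(l - mean) for l in loads)

def anneal_districts(blocks, ndistricts, steps=20000, t0=5.0, alpha=0.9995):
    assignment = [random.randrange(ndistricts) for _ in blocks]
    cost = district_cost(blocks, assignment, ndistricts)
    t = t0
    for _ in range(steps):
        i, d = random.randrange(len(blocks)), random.randrange(ndistricts)
        old = assignment[i]
        if d == old:
            continue
        assignment[i] = d
        new_cost = district_cost(blocks, assignment, ndistricts)
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / t):
            cost = new_cost          # accept the move
        else:
            assignment[i] = old      # reject and roll back
        t *= alpha
    return assignment, cost

# Blocks are (x, y, workload) triples standing in for the cyclic subgraphs.
random.seed(1)
blocks = [(random.random() * 10, random.random() * 10, random.randint(1, 5))
          for _ in range(40)]
assignment, cost = anneal_districts(blocks, ndistricts=4)
print(round(cost, 2), assignment)
```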
APA, Harvard, Vancouver, ISO, and other styles
27

Banaszewski, Roni Fabio. "Modelo multiagentes baseado em um protocolo de leilões simultâneos para aplicação no problema de planejamento de transferências de produtos no segmento downstream do sistema logístico brasileiro de petróleo." Universidade Tecnológica Federal do Paraná, 2014. http://repositorio.utfpr.edu.br/jspui/handle/1/822.

Full text
Abstract:
CAPES<br>The downstream segment of the Brazilian oil industry supply chain is composed of production sites (e.g., refineries), storage sites (e.g., terminals), consumption sites (e.g., consumer markets) and transportation modes (e.g., pipelines, ships, trucks and trains). Planning the transfer of oil products over this multimodal network is a complex problem, currently carried out for a three-month horizon based on the experience of professionals and without the aid of a computational decision-support system. Basically, the problem can be seen as a negotiation for the allocation of available resources (such as oil products, tanks and transportation modes) among the different sites that need to send or receive oil products. In the literature, some similar problems, although more oriented toward planning networks formed by a single mode of transportation, have been treated by different approaches, predominantly mathematical programming. These works illustrate the difficult task of modeling large problems with this approach. In general, they consider only a short planning horizon or only part of the original problem, such as a portion of the Brazilian oil network, which imposes important limitations on the resulting models. Given the characteristics of the problem under study, which involves the entire transportation network and exhibits a negotiation profile among the different entities involved, the multi-agent systems paradigm becomes attractive. The agent paradigm has been applied to problems in different contexts, particularly supply chain management problems, due to its natural correspondence with reality, and, more generally, to problems involving competition for resources through auction-based negotiation mechanisms. This work presents a new auction-based negotiation protocol and applies it, in the form of a multi-agent model, to solve the planning problem at hand. The agents that form the solution mainly represent the production, storage and consumption sites and the transportation modes of the Brazilian oil network. The objective of these agents is to maintain a feasible daily inventory level of each product at each site by transferring products through the Brazilian oil network, preferably at reduced transportation cost. Finally, this work shows that these objectives are met through experiments on fictitious and real scenarios of the Brazilian oil network.
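As a hedged illustration of auction-based allocation in this spirit (not the protocol proposed in the thesis), the sketch below runs one round of a sealed-bid auction in which consumer sites bid for volumes offered by production sites, with bids driven by inventory urgency and discounted by transport cost. Site names, volumes and the bid formula are invented for the example.

```python
def run_transfer_auction(offers, demands, transport_cost):
    """One round of a toy sealed-bid auction for product volumes.
    Each consumer bids for volume at each source; bids grow with inventory
    urgency and shrink with route transport cost.  Highest bids are served
    first until the offered volume runs out.  Note: `demands` is updated in
    place so that served consumers become less urgent later in the round."""
    plan = []
    for source, available in offers.items():
        bids = []                                        # (bid value, consumer, need)
        for consumer, (target_level, current_level) in demands.items():
            need = max(0.0, target_level - current_level)
            if need > 0:
                urgency = need / target_level
                bid = urgency - 0.01 * transport_cost[(source, consumer)]
                bids.append((bid, consumer, need))
        for bid, consumer, need in sorted(bids, reverse=True):
            if available <= 0:
                break
            volume = min(need, available)
            available -= volume
            plan.append((source, consumer, round(volume, 1)))
            target, current = demands[consumer]
            demands[consumer] = (target, current + volume)
    return plan

offers = {"refinery_1": 120.0, "refinery_2": 80.0}
demands = {"market_A": (100.0, 20.0), "market_B": (150.0, 90.0), "market_C": (60.0, 55.0)}
transport_cost = {(s, c): 10.0 for s in offers for c in demands}
transport_cost[("refinery_2", "market_B")] = 3.0         # a cheap route
print(run_transfer_auction(offers, demands, transport_cost))
```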
APA, Harvard, Vancouver, ISO, and other styles
28

Rouet, François-Henry. "Problèmes de mémoire et de performance de la factorisation multifrontale parallèle et de la résolution triangulaire à seconds membres creux." Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2012. http://tel.archives-ouvertes.fr/tel-00785748.

Full text
Abstract:
We are interested in the solution of very large sparse linear systems on parallel machines. In this context, memory is a factor that limits, or even prevents, the use of direct solvers, notably those based on the multifrontal method. This study focuses on the memory and performance problems of the two phases of direct methods that are most costly in memory and time: the numerical factorization and the triangular solution. In the first part we consider the solution phase with sparse right-hand sides, and in the second part the memory scalability of the multifrontal factorization. The first part of this study concentrates on the triangular solution with sparse right-hand sides, which arise in numerous applications. In particular, we are interested in computing entries of the inverse of a sparse matrix, where both the right-hand sides and the solution vectors are sparse. We first present several storage schemes that significantly reduce the memory space used during the solution, in both sequential and parallel executions. We then show that the way the right-hand sides are grouped can strongly influence performance, and we consider two different settings: the out-of-core case, where the goal is to reduce the number of accesses to the factors stored on disk, and the in-core case, where the goal is to reduce the number of operations. Finally, we show how to improve parallelism. In the second part, we focus on the parallel multifrontal factorization. We first show that controlling the active memory specific to the multifrontal method is crucial, and that classical mapping techniques cannot provide good memory scalability: the memory cost of the factorization increases sharply with the number of processors. We propose a class of memory-aware mapping and scheduling algorithms that seek to maximize performance while respecting a memory constraint provided by the user. These techniques revealed performance problems in some of the dense parallel kernels used at each step of the factorization, and we proposed several algorithmic improvements. The ideas presented throughout this study have been implemented in the MUMPS solver (MUltifrontal Massively Parallel Solver) and tested on large matrices (several tens of millions of unknowns) and on massively parallel machines (up to a few thousand cores). They improved the performance and robustness of the code and will be available in a future release. Some of the ideas presented in the first part have also been implemented in the PDSLin solver (a hybrid linear solver based on a Schur complement method).
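The sparse-right-hand-side idea in the first part can be illustrated compactly: when b is sparse, only the entries of x lying in the reach of b's nonzero pattern through the structure of L need to be computed. The following Python sketch (a toy under stated assumptions, not MUMPS code) shows the classic symbolic reach followed by forward substitution restricted to that reach.

```python
def solve_sparse_lower(L_cols, diag, b):
    """Solve L x = b with L sparse lower triangular and b sparse.
    L_cols[j] is a list of (i, L_ij) with i > j (strictly lower part),
    diag[j] is L_jj, and b is a dict {index: value}.  Only the entries of x
    that can be nonzero are touched: the nodes reachable from the nonzeros
    of b in the graph with an edge j -> i whenever L_ij != 0."""
    # 1. Symbolic step: compute the reach of the right-hand-side pattern.
    reach, stack = set(), list(b)
    while stack:
        j = stack.pop()
        if j not in reach:
            reach.add(j)
            stack.extend(i for i, _ in L_cols.get(j, []))
    # 2. Numeric step: forward substitution restricted to the reach,
    #    processed in increasing index order (a topological order here).
    x = dict(b)
    for j in sorted(reach):
        xj = x.get(j, 0.0) / diag[j]
        x[j] = xj
        for i, lij in L_cols.get(j, []):
            x[i] = x.get(i, 0.0) - lij * xj
    return x

# Tiny 4x4 example with a single nonzero in the right-hand side.
L_cols = {0: [(2, 0.5)], 1: [(3, -1.0)], 2: [(3, 0.25)]}
diag = {0: 2.0, 1: 1.0, 2: 4.0, 3: 1.0}
print(solve_sparse_lower(L_cols, diag, {0: 2.0}))   # only x0, x2, x3 are computed
```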
APA, Harvard, Vancouver, ISO, and other styles
29

Nuentsa, Wakam Désiré. "Parallélisme et robustesse dans les solveurs hybrides pour grands systèmes linéaires : application à l'optimisation en dynamique des fluides." Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00690965.

Full text
Abstract:
This thesis presents a set of routines for the solution of large sparse linear systems on parallel architectures. The proposed approaches follow a hybrid scheme combining direct and iterative methods through the use of domain decomposition techniques. In such a scheme, the initial problem is divided into subproblems by partitioning the graph of the coefficient matrix of the system. Schwarz methods are then used as preconditioners for Krylov methods based on GMRES. We first consider the scheme using a multiplicative Schwarz preconditioner. We define two levels of parallelism: the first is associated with preconditioned GMRES on the global system, and the second is used to solve the subsystems with a parallel direct method. We show that this splitting guarantees a certain robustness of the method by limiting the total number of subdomains. Moreover, this approach makes it possible to use all the processors allocated on a compute node more efficiently. We then study the convergence and parallelism of GMRES, which is used as the global accelerator in the hybrid approach. The general observation is that the total number of iterations, and hence the total computation time, increases with the number of partitions. To reduce this effect, we propose several deflation-based versions of GMRES. The proposed deflation techniques use either an adaptive preconditioner or an augmented basis. We show the usefulness of these approaches through their ability to limit the influence of the choice of the Krylov basis size, and thus to avoid stagnation of the global hybrid method. Moreover, they considerably reduce the memory cost, the computation time and the number of messages exchanged by the different processors. The performance of these methods is demonstrated numerically on large linear systems arising from several application fields, mainly from the optimization of certain design parameters in fluid dynamics.
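To illustrate the overall structure, one level of such a hybrid scheme can be mimicked with restarted, preconditioned GMRES in which the preconditioner solves subdomain problems with a direct method. The sketch below uses SciPy's gmres with a block Jacobi preconditioner (non-overlapping additive Schwarz, rather than the multiplicative variant studied in the thesis); the test matrix, block count and restart length are illustrative choices only.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson2d(n):
    """Standard 5-point 2D Poisson matrix of order n*n (test problem only)."""
    T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
    I = sp.identity(n)
    return (sp.kron(I, T) + sp.kron(T, I)).tocsr()

def block_jacobi_preconditioner(A, nblocks):
    """One-level additive Schwarz without overlap (block Jacobi): factor each
    diagonal block once with a direct method, then apply block-wise solves."""
    n = A.shape[0]
    cuts = np.linspace(0, n, nblocks + 1, dtype=int)
    blocks = [(cuts[k], cuts[k + 1],
               spla.splu(A[cuts[k]:cuts[k + 1], cuts[k]:cuts[k + 1]].tocsc()))
              for k in range(nblocks)]
    def apply(v):
        out = np.empty_like(v)
        for lo, hi, lu in blocks:
            out[lo:hi] = lu.solve(v[lo:hi])
        return out
    return spla.LinearOperator(A.shape, matvec=apply)

A = poisson2d(40)                                   # 1600 unknowns
b = np.ones(A.shape[0])
M = block_jacobi_preconditioner(A, nblocks=8)
x, info = spla.gmres(A, b, M=M, restart=30)         # restarted, preconditioned GMRES
print("converged" if info == 0 else f"info={info}", np.linalg.norm(A @ x - b))
```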
APA, Harvard, Vancouver, ISO, and other styles
30

Seo, You-Jin 1974. "Effects of multimedia software on word problem-solving performance for students with mathematics difficulties." 2008. http://hdl.handle.net/2152/17998.

Full text
Abstract:
Computer-Assisted Instruction (CAI) offers the potential to deliver cognitive and meta-cognitive strategies in mathematical word problem-solving for students with mathematics difficulties. However, there is a lack of commercially available CAI programs with cognitive and meta-cognitive strategies for mathematical word problem-solving that pay particular attention to the critical design features for students with mathematics difficulties. Therefore, empirical evidence regarding the effects of CAI programs with cognitive and meta-cognitive strategies on the word problem-solving of students with mathematics difficulties has not been found. Considering the imperative need for a CAI program with cognitive and meta-cognitive strategies for students with mathematics difficulties, an interactive multimedia software program, 'Math Explorer,' was designed, developed, and implemented to teach one-step addition and subtraction word problem-solving skills to students with mathematics difficulties. Math Explorer incorporates: (a) four-step cognitive strategies and corresponding three-step meta-cognitive strategies adapted from the research on cognitive and meta-cognitive strategies, and (b) instruction, interface, and interaction design features of CAI identified as crucial for the successful delivery of cognitive and meta-cognitive strategies for students with mathematics difficulties. The purpose of this study was to investigate the effectiveness of Math Explorer, which was designed to be a potential tool to deliver cognitive and meta-cognitive strategy instruction in one-step addition and subtraction word problem-solving. Three research questions guided this study: (a) To what extent does the use of Math Explorer affect the accuracy performance of students with mathematics difficulties in grades 2-3 on computer-based tasks with one-step addition and subtraction word problem-solving? (b) To what extent does the use of Math Explorer generalize to the accuracy performance of students with mathematics difficulties in grades 2-3 on paper/pencil-based tasks with one-step addition and subtraction word problem-solving? (c) To what extent does the use of Math Explorer maintain the accuracy performance of students with mathematics difficulties in grades 2-3 on computer- and paper/pencil-based tasks with one-step addition and subtraction word problem-solving? A multiple probe across subjects design was used for the study. Four students with mathematics difficulties participated in the pre-experimental (i.e., introduction, screening test, and computer training I) and experimental (i.e., baseline, computer training II, intervention, and follow-up) sessions over an 18-week period. Each week of the intervention phase, the students received an individual 20- to 30-minute Math Explorer intervention on at most five days. After each intervention, they took a 10-minute computer- or paper/pencil-based test developed by the researcher. The intervention phase for each student lasted five to seven weeks. Two weeks after termination of the intervention phase, their accuracy performance on the computer- and paper/pencil-based tests was examined during the follow-up phase. The findings of the study revealed that all four students were able to use the cognitive and meta-cognitive strategies to solve the addition and subtraction word problems and improved their accuracy performance on the computer-based tests. Their improved accuracy performance on the computer-based tests transferred successfully to the paper/pencil-based tests. About two weeks after termination of the intervention phase, except for one student who had many absences and behavioral problems during the extended intervention phase, the three remaining students successfully maintained their improved accuracy performance during the follow-up phase. Taken together, the findings of the study clearly provide evidence that Math Explorer is an effective method for teaching one-step addition and subtraction word problem-solving skills to students with mathematics difficulties, and suggest that the instruction, interface, and interaction design features of a CAI program must be carefully designed to produce successful mathematical performance in students with mathematics difficulties. Limitations of the research and implications for practice and future research are discussed.
APA, Harvard, Vancouver, ISO, and other styles
31

Lin, Wan-Huei, and 林婉惠. "The Effect of Applying Gcompris Software with Problem-Based Learning in Teaching Mathematics for Elementary School Students." Thesis, 2012. http://ndltd.ncl.edu.tw/handle/94240472864429802602.

Full text
Abstract:
Master's<br>National Yunlin University of Science and Technology<br>Master's Program, Graduate School of Technological and Vocational Education<br>100<br>This research aims to study the effect of applying open source software with a Problem-Based Learning (PBL) approach on teaching mathematics and science to elementary school students. Adopting a quasi-experimental research method, the research subjects were 35 first-grade students from two elementary schools in Yunlin County. The research tools adopted in the study were a math and science achievement examination and the open source software GCompris. One class was the experimental group, which received the 'GCompris software with PBL model' materials, while the other class was the control group, which received only the 'PBL model' teaching materials. The experiment lasted twelve class periods. Before and after the experiment, both classes took the math and science achievement examination. With the pretest as covariate and the posttest as dependent variable, an analysis of covariance was performed. A month later, a learning-intention examination was administered to evaluate whether the students had achieved the learning goals. The results are as follows: 1. The immediate effect for the experimental group receiving the 'GCompris software with PBL model' was higher than for the control group. 2. There were no significant differences between the 'GCompris software with PBL model' and the 'PBL model' alone in learning intention. 3. The immediate effect for lower-achieving students in the experimental group was better than for lower-achieving students in the control group. 4. The immediate effect for boys in the experimental group was better than for boys in the control group. 5. There were no significant differences among students of different achievement levels receiving the 'GCompris software with PBL model' or the 'PBL model' in learning intention. 6. There were no significant differences between girls and boys receiving the 'GCompris software with PBL model' or the 'PBL model' in learning intention.
APA, Harvard, Vancouver, ISO, and other styles
32

Eblen, John David. "The Maximum Clique Problem: Algorithms, Applications, and Implementations." 2010. http://trace.tennessee.edu/utk_graddiss/793.

Full text
Abstract:
Computationally hard problems are routinely encountered during the course of solving practical problems. This is commonly dealt with by settling for less than optimal solutions, through the use of heuristics or approximation algorithms. This dissertation examines the alternate possibility of solving such problems exactly, through a detailed study of one particular problem, the maximum clique problem. It discusses algorithms, implementations, and the application of maximum clique results to real-world problems. First, the theoretical roots of the algorithmic method employed are discussed. Then a practical approach is described, which separates out important algorithmic decisions so that the algorithm can be easily tuned for different types of input data. This general and modifiable approach is also meant as a tool for research so that different strategies can easily be tried for different situations. Next, a specific implementation is described. The program is tuned, by use of experiments, to work best for two different graph types, real-world biological data and a suite of synthetic graphs. A parallel implementation is then briefly discussed and tested. After considering implementation, an example of applying these clique-finding tools to a specific case of real-world biological data is presented. Results are analyzed using both statistical and biological metrics. Then the development of practical algorithms based on clique-finding tools is explored in greater detail. New algorithms are introduced and preliminary experiments are performed. Next, some relaxations of clique are discussed along with the possibility of developing new practical algorithms from these variations. Finally, conclusions and future research directions are given.
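For a concrete sense of the exact approach, the following is a minimal Carraghan-Pardalos-style branch-and-bound sketch in Python. It is only a compact illustration of pruning with a simple upper bound, not the tuned implementation described in the dissertation, which adds vertex ordering, colouring-based bounds, preprocessing and parallelism on top.

```python
def max_clique(adj):
    """Basic branch and bound for maximum clique.
    adj maps each vertex to the set of its neighbours."""
    best = []

    def expand(clique, cand):
        nonlocal best
        cand = list(cand)
        while cand:
            # Bound: even taking every remaining candidate cannot beat best.
            if len(clique) + len(cand) <= len(best):
                return
            v = cand.pop()
            new_clique = clique + [v]
            new_cand = [u for u in cand if u in adj[v]]
            if not new_cand:
                if len(new_clique) > len(best):
                    best = new_clique
            else:
                expand(new_clique, new_cand)

    expand([], list(adj))
    return best

# Small example graph: a triangle {0, 1, 2} plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(max_clique(adj))   # one maximum clique, e.g. [2, 1, 0]
```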
APA, Harvard, Vancouver, ISO, and other styles
33

Klimanis, Nils. "Generic Programming and Algebraic Multigrid for Stabilized Finite Element Methods." Doctoral thesis, 2006. http://hdl.handle.net/11858/00-1735-0000-0006-B38C-5.

Full text
APA, Harvard, Vancouver, ISO, and other styles
