Selection of scholarly literature on the topic "Multithreaded application"

Cite a source in APA, MLA, Chicago, Harvard and other citation styles

Select a type of source:

Consult the lists of current articles, books, dissertations, reports and other scholarly sources on the topic "Multithreaded application".

Next to every work in the list of references there is an "Add to bibliography" option. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read its online abstract, provided the relevant parameters are available in the metadata.

Journal articles on the topic "Multithreaded application"

1

Giebas, Damian, and Rafał Wojszczyk. "Deadlocks Detection in Multithreaded Applications Based on Source Code Analysis." Applied Sciences 10, no. 2 (2020): 532. http://dx.doi.org/10.3390/app10020532.

Abstract:
This paper extends the multithreaded application source code model and shows how to use it to detect deadlocks in C language applications. Four deadlock scenarios known from the literature can be detected using our model. For every scenario we created theorems and proofs whose fulfillment guarantees the occurrence of deadlocks in multithreaded applications. The paper also contains a comparison of the multithreaded application source code model with Petri nets and describes the advantages and disadvantages of both.
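For orientation, the kind of deadlock such models aim to detect can be reproduced in a few lines. The following is a minimal, hedged sketch in Java (the paper itself targets C applications and its own source-code model, which this snippet does not reproduce) of the classic lock-ordering scenario:

```java
public class LockOrderDeadlock {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        // Thread 1 acquires lockA, then lockB.
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                sleep(50); // widen the window in which both threads hold one lock
                synchronized (lockB) {
                    System.out.println("t1 acquired both locks");
                }
            }
        });
        // Thread 2 acquires the same locks in the opposite order: lockB, then lockA.
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                sleep(50);
                synchronized (lockA) {
                    System.out.println("t2 acquired both locks");
                }
            }
        });
        t1.start();
        t2.start();
        // If each thread holds its first lock, both wait forever for the other's lock.
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

Running it usually hangs: each thread holds one monitor and waits for the other, which is exactly the circular-wait condition that deadlock detectors look for.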
2

Giebas, Damian, and Rafał Wojszczyk. "Graphical Representations of Multithreaded Applications." Applied Computer Science 14, no. 2 (2018): 20–37. http://dx.doi.org/10.35784/acs-2018-10.

Abstract:
This article contains a brief description of existing graphical methods for presenting multithreaded applications, i.e. the Control Flow Graph and Petri nets. These methods will be discussed, and then a way to represent multithreaded applications using the concurrent process system model will be presented. All these methods will be used to present the idea of a multithreaded application that includes the race condition phenomenon. In the summary, all three methods will be compared and evaluated, depending on whether the given representation makes it possible to find the mentioned phenomenon.
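A minimal illustration of the race-condition phenomenon that the compared representations are meant to expose, written as a hedged Java sketch (not code from the paper):

```java
public class CounterRace {
    private static int counter = 0; // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // read-modify-write without synchronization: a data race
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but lost updates typically make the result smaller.
        System.out.println("counter = " + counter);
    }
}
```

The two unsynchronized read-modify-write sequences interleave, so updates are lost and the printed total is usually below 200000.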
3

Kundan, Shivam, Theodoros Marinakis, Iraklis Anagnostopoulos, and Dimitri Kagaris. "A Pressure-Aware Policy for Contention Minimization on Multicore Systems." ACM Transactions on Architecture and Code Optimization 19, no. 3 (2022): 1–26. http://dx.doi.org/10.1145/3524616.

Abstract:
Modern Chip Multiprocessors (CMPs) are integrating an increasing number of cores to address the continually growing demand for high application performance. The cores of a CMP share several components of the memory hierarchy, such as the Last-Level Cache (LLC) and main memory. This allows for considerable gains in multithreaded applications while also helping to maintain architectural simplicity. However, sharing resources can also result in a performance bottleneck due to contention among concurrently executing applications. In this work, we formulate a fine-grained application characterization methodology that leverages Performance Monitoring Counters (PMCs) and Cache Monitoring Technology (CMT) in Intel processors. We utilize this characterization methodology to develop two contention-aware scheduling policies, one static and one dynamic, that co-schedule applications based on their resource-interference profiles. Our approach focuses on minimizing contention on both the main-memory bandwidth and the LLC by monitoring the pressure that each application inflicts on these resources. We achieve performance benefits for diverse workloads, outperforming Linux and three state-of-the-art contention-aware schedulers in terms of system throughput and fairness for both single and multithreaded workloads. Compared with Linux, our policy achieves up to 16% greater throughput for single-threaded and up to 40% greater throughput for multithreaded applications. Additionally, the policies increase fairness by up to 65% for single-threaded and up to 130% for multithreaded ones.
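As a rough intuition for pressure-aware co-scheduling, the toy Java sketch below pairs memory-intensive applications with light ones using assumed, pre-measured pressure scores; it only illustrates the general idea and is not the PMC/CMT-driven policy evaluated in the paper (the benchmark names and scores are invented):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PressurePairing {
    /** An application with an assumed, pre-measured pressure score
     *  (e.g. derived from memory-bandwidth or LLC-occupancy counters). */
    static final class App {
        final String name;
        final double pressure;
        App(String name, double pressure) { this.name = name; this.pressure = pressure; }
        @Override public String toString() { return name + "(" + pressure + ")"; }
    }

    /** Pair the most memory-intensive application with the least intensive one,
     *  so that no pair of co-scheduled applications overloads the shared LLC/bandwidth. */
    static List<List<App>> pair(List<App> apps) {
        List<App> sorted = new ArrayList<>(apps);
        sorted.sort(Comparator.comparingDouble(a -> a.pressure));
        List<List<App>> pairs = new ArrayList<>();
        int lo = 0, hi = sorted.size() - 1;
        while (lo < hi) {
            pairs.add(List.of(sorted.get(lo++), sorted.get(hi--)));
        }
        if (lo == hi) pairs.add(List.of(sorted.get(lo))); // an odd application runs alone
        return pairs;
    }

    public static void main(String[] args) {
        List<App> apps = List.of(new App("streamcluster", 0.9), new App("blackscholes", 0.2),
                                 new App("canneal", 0.7), new App("swaptions", 0.1));
        pair(apps).forEach(System.out::println);
    }
}
```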
4

Muralidhara, Sai Prashanth, Mahmut Kandemir, and Padma Raghavan. "Intra-application shared cache partitioning for multithreaded applications." ACM SIGPLAN Notices 45, no. 5 (2010): 329–30. http://dx.doi.org/10.1145/1837853.1693498.

5

Molchanov, Viktor. "Implementation of multithreaded calculations in educational web applications." Development Management 17, no. 2 (2019): 1–7. http://dx.doi.org/10.21511/dm.17(2).2019.01.

Abstract:
The increasing complexity of the logic of educational web applications raises the question of how effectively their execution is organized. Efficiency, including pedagogical efficiency, depends among other factors on the technology used to run the programs. When the browser is used as the main execution environment, its features must be taken into account, in particular the single-threaded execution mode of programs (scripts). Implementing more complex algorithms in web applications delays the response of the application interface to user actions. This creates discomfort for the user and, as a result, reduces the effectiveness of their work. The expanding range of devices from which users access the Internet means that mobile devices are increasingly used for learning as well. Therefore, another side of the problem is the impact of the quality of the connection to the server: the program must keep working when the connection is interrupted, or the impact of such interruptions must be reduced. A solution may be to perform part of the calculations in the background. The article deals with the use of calculations in background threads of the browser and with caching control for educational web applications. Various ways of creating such threads and the peculiarities of their use are analyzed.
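The article concerns background threads in the browser (Web Workers); as a hedged analogue in Java (the language used for the other sketches on this page, not the article's setting), the same offloading pattern keeps an interactive thread responsive while a heavy calculation runs elsewhere:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BackgroundCalculation {
    public static void main(String[] args) throws Exception {
        ExecutorService background = Executors.newSingleThreadExecutor();

        // Offload the long-running calculation so the "interface" thread stays responsive.
        CompletableFuture<Long> result = CompletableFuture.supplyAsync(() -> {
            long sum = 0;
            for (long i = 0; i < 50_000_000L; i++) sum += i; // stand-in for a heavy computation
            return sum;
        }, background);

        // The main thread keeps serving the user while the computation runs.
        while (!result.isDone()) {
            System.out.println("interface thread still responsive...");
            Thread.sleep(100);
        }
        System.out.println("background result: " + result.get());
        background.shutdown();
    }
}
```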
6

Makarov, Igor Sergeevich, Denis Vyacheslavovich Larin, Evgeniia Grigor'evna Vorobeva, Daniil Pavlovich Emelin, and Dmitry Aleksandrovich Kartashov. "The impact of asynchronous and multithreaded query processing models on the performance of server-side web applications." Программные системы и вычислительные методы, no. 1 (January 2025): 13–20. https://doi.org/10.7256/2454-0714.2025.1.73665.

Abstract:
The object of the study is server-side web applications and their performance when processing a large number of simultaneous requests. The study considers asynchronous technologies (Node.js, Python Asyncio, Go, Kotlin Coroutines) and multithreaded models (Java Threading, Python Threading). The authors analyze asynchronous event loops, goroutines, coroutines, and classical multithreaded approaches in detail, evaluating their effectiveness in tasks with intensive use of I/O and computing resources. An experiment is conducted in which APIs are developed in three languages (Java, Node.js, Go) and tested using the hey utility. The study also explores scalability, performance optimization, caching, error handling, load tests, and implementation features of parallel computing. The purpose of the study is to determine which approaches provide the highest performance in server applications. Research methods include load testing, collection of metrics (response time, bandwidth, and server resource consumption), and analysis of the results. The scientific novelty lies in comparing asynchronous and multithreaded methods in real-world web development scenarios. The main conclusions of the study are recommendations on the use of asynchronous technologies in high-load I/O tasks and multithreading in computationally complex scenarios. The results obtained will help developers optimize the performance of server applications depending on their tasks and workload. Additionally, the study examines the difficulty of debugging asynchronous applications, the impact of thread pools on the performance of multithreaded solutions, and scenarios in which asynchronous and multithreaded approaches can complement each other. Special attention is paid to server resource management under scalable loads, which will allow IT specialists to select tools and technologies for specific tasks more precisely. In conclusion, possible ways to optimize the operation of server applications are discussed, including the use of new approaches and algorithms, as well as the prospects for the development of asynchronous and multithreaded technologies in the context of highly loaded systems, their impact on the overall application architecture, and on improving fault tolerance and security.
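A compact, hedged Java sketch of the two request-handling models being compared, with simulated I/O delays standing in for real database or network calls (illustrative code, not the benchmark APIs from the study):

```java
import java.util.concurrent.*;

public class RequestModels {
    // Multithreaded model: each request occupies a pool thread for the whole blocking I/O call.
    static void threadPoolModel() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 32; i++) {
            final int id = i;
            pool.submit(() -> {
                sleep(100); // simulated blocking I/O (database, downstream HTTP call)
                System.out.println("threaded: handled request " + id);
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }

    // Asynchronous model: the I/O wait is a future completion; no thread is parked per request.
    static void asyncModel() {
        CompletableFuture<?>[] futures = new CompletableFuture[32];
        for (int i = 0; i < 32; i++) {
            final int id = i;
            futures[i] = simulatedAsyncIo(100)
                    .thenAccept(v -> System.out.println("async: handled request " + id));
        }
        CompletableFuture.allOf(futures).join();
    }

    static CompletableFuture<Void> simulatedAsyncIo(long millis) {
        // A delayed executor plays the role of a non-blocking I/O completion event.
        Executor delayed = CompletableFuture.delayedExecutor(millis, TimeUnit.MILLISECONDS);
        return CompletableFuture.runAsync(() -> {}, delayed);
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    public static void main(String[] args) throws InterruptedException {
        threadPoolModel();
        asyncModel();
    }
}
```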
7

Ryabev, A. V. "Overview and classification of advanced schemes of multithreaded combined energy transmissions based on their kinematic analysis." Izvestiya MGTU MAMI 10, no. 2 (2016): 55–65. http://dx.doi.org/10.17816/2074-0530-66932.

Abstract:
The article deals with existing and promising modern automotive multithreaded combined energy transmissions, based on the principle of power separation into electrical and mechanical streams. Thanks to the continuously variable electric transmission included in their design, these combined energy transmissions make it possible to obtain an arbitrary gear ratio from the engine to the wheels while maintaining the high efficiency inherent to a manual transmission. This suggests that multithreaded combined energy transmissions are promising for use in hybrid vehicles, as evidenced by the successful operation of the Toyota Prius. The article describes 16 different schemes of electromechanical transmissions. Some of them are actually applied in practice, while others exist only as prototypes or theoretical projects. On the basis of a kinematic analysis, including the determination of the number of operating modes and degrees of freedom as well as the construction of kinematic plans for the different operating modes, a classification of multithreaded combined energy transmissions by type of differential mechanism (the mechanical part of the transmission) was proposed. Single-mode and multi-mode multithreaded combined energy transmissions were distinguished. The latter were divided into three classes, depending on the method of obtaining the different modes: stepped, variable and combined. Moreover, within each class, transmissions with a differential at the input, with a differential at the output, and with complex power division were identified. This review makes it possible to get acquainted with the possibilities of applying multithreaded combined energy transmissions in road transport, to understand their strengths and weaknesses, and to identify promising areas of application for multithreaded electromechanical transmissions of various types.
8

Shen, Hua, Guo Shun Zhou, and Hui Qi Yan. "A Study of Parallelization and Performance Optimizations Based on OpenMP." Applied Mechanics and Materials 321-324 (June 2013): 2933–37. http://dx.doi.org/10.4028/www.scientific.net/amm.321-324.2933.

Abstract:
The primary consequence of the transition to multicore processors is that applications will increasingly need to be parallelized to improve their throughput, responsiveness and latency. Multithreading is becoming increasingly important for modern programming. Unfortunately, parallel programming is without doubt much more tedious and error-prone than serial programming. Although modern compilers can manage threads well, in practice synchronization errors (such as data races and deadlocks) require careful management and good optimization methods. This paper presents a preliminary study of the usability of the Intel threading tools for multicore programming. This work compares the performance of a single-threaded application with multithreaded applications, using Intel® VTune Performance Analyzer, Intel® Thread Checker and OpenMP to efficiently optimize multithreaded applications.
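The paper works with OpenMP and the Intel threading tools; as a hedged analogue in Java rather than C/OpenMP, the following sketch contrasts the kind of data-race error mentioned above with a correct parallel reduction:

```java
import java.util.stream.LongStream;

public class ParallelReduction {
    static long racyTotal = 0; // shared accumulator: updates from the parallel loop race

    public static void main(String[] args) {
        long n = 10_000_000L;

        // Racy version: analogous to forgetting a reduction/critical clause in an OpenMP loop.
        LongStream.range(0, n).parallel().forEach(i -> racyTotal += i);
        System.out.println("racy total    = " + racyTotal); // usually wrong

        // Correct version: a reduction keeps each worker's partial sum private, then combines them.
        long total = LongStream.range(0, n).parallel().sum();
        System.out.println("reduced total = " + total + " (expected " + (n * (n - 1) / 2) + ")");
    }
}
```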
9

Settle, Alex, Dan Connors, Enric Gibert, and Antonio González. "A dynamically reconfigurable cache for multithreaded processors." Journal of Embedded Computing 2, no. 2 (2006): 221–33. https://doi.org/10.3233/emc-2006-00027.

Abstract:
Chip multi-processors (CMP) are rapidly emerging as an important design paradigm for both high performance and embedded processors. These machines provide an important performance alternative to increasing the clock frequency. In spite of the increase in potential performance, several issues related to resource sharing on the chip can negatively impact the performance of embedded applications. In particular, the shared on-chip caches make each job's memory access times dependent on the behavior of the other jobs sharing the cache. If not adequately managed, this can lead to problems in meeting hard real-time scheduling constraints. This work explores adaptable caching strategies which balance the resource demands of each application and in turn lead to improvements in throughput for the collective workload. Experimental results demonstrate speedups of up to 1.47X for workloads of two co-scheduled applications compared against a fully-shared two-level cache hierarchy. Additionally, the adaptable caching scheme is shown to achieve an average speedup of 1.10X over the leading cache partitioning model. By dynamically managing cache storage for multiple application threads at runtime, sizable performance levels are achieved, which provides chip designers the opportunity to maintain high performance as cache size and power budgets become a concern in the CMP design space.
10

Mao, Li Na, and Lin Yan Tang. "The Design and Application of Monitoring Framework Based on AOP." Applied Mechanics and Materials 685 (October 2014): 671–75. http://dx.doi.org/10.4028/www.scientific.net/amm.685.671.

Abstract:
This article applies AOP technology to multithreaded monitoring, presenting an implementation scheme for a multithreaded monitoring platform that uses a database to store thread information; the thread monitoring module is completely independent of the original system.
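As a hedged illustration of the monitoring idea (recording thread information around task execution without touching the application code), a plain decorator can stand in for the AOP advice; the database storage described in the article is replaced here by console output:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MonitoredTasks {
    /** Wraps a task so that the thread name and elapsed time are recorded
     *  without changing the task itself (the records could be written to a
     *  database instead of standard output). */
    static Runnable monitored(String taskName, Runnable task) {
        return () -> {
            long start = System.nanoTime();
            try {
                task.run();
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                System.out.printf("task=%s thread=%s elapsed=%dms%n",
                        taskName, Thread.currentThread().getName(), elapsedMs);
            }
        };
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 4; i++) {
            final int id = i;
            pool.submit(monitored("job-" + id, () -> {
                try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }));
        }
        pool.shutdown();
    }
}
```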

Dissertations on the topic "Multithreaded application"

1

Stridh, Fredrik. "A Simple Throttling Concept for Multithreaded Application Servers." Thesis, Blekinge Tekniska Högskola, Sektionen för datavetenskap och kommunikation, 2009. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-2840.

Abstract:
Multithreading is today a very common technology for achieving concurrency within software. Three threading strategies are commonly used for multithreaded application servers: thread per client, thread per request and thread pool. Earlier studies have shown that the choice of threading strategy is not that important. Our measurements show that the choice of threading architecture becomes more important when the application comes under high load. In this study we present a throttling concept which can give thread per client almost as good qualities as the thread pool strategy when it comes to performance. No architecture change is required. This concept has been evaluated on three types of hardware, ranging from 1 to 64 CPUs, using six alternative loads, in both C and Java. We have also identified a high correlation between average response times and the length of the run-time queue. This can be used to construct a self-tuning throttling algorithm that makes the introduction of the throttle concept even simpler, since it does not require any configuration.
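The throttling concept can be approximated with a counting semaphore in front of the request-processing code: the thread-per-client architecture is kept, but only a bounded number of clients are processed concurrently. The sketch below is a hedged Java illustration with an assumed fixed limit; the thesis additionally suggests tuning such a limit automatically from the run-time queue length and response times, which this snippet does not implement:

```java
import java.util.concurrent.Semaphore;

public class ThrottledHandler {
    // Admission throttle: at most `limit` requests are processed concurrently,
    // even though each client still gets its own thread.
    private final Semaphore permits;

    ThrottledHandler(int limit) {
        this.permits = new Semaphore(limit);
    }

    void handleClient(int clientId) throws InterruptedException {
        permits.acquire();        // excess clients wait here instead of overloading the CPUs
        try {
            Thread.sleep(20);     // stand-in for request processing
            System.out.println("served client " + clientId + " on " + Thread.currentThread().getName());
        } finally {
            permits.release();
        }
    }

    public static void main(String[] args) {
        // The limit could be adapted at run time, e.g. lowered when response times grow.
        ThrottledHandler handler = new ThrottledHandler(Runtime.getRuntime().availableProcessors());
        for (int i = 0; i < 100; i++) {
            final int id = i;
            new Thread(() -> {    // the thread-per-client architecture is kept unchanged
                try { handler.handleClient(id); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
        }
    }
}
```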
2

Guitart, Fernández Jordi. "Performance Improvement of Multithreaded Java Applications Execution on Multiprocessor Systems." Doctoral thesis, Universitat Politècnica de Catalunya, 2005. http://hdl.handle.net/10803/5989.

Abstract:
The design of the Java language, which includes important aspects such as its portability and architecture neutrality, its multithreading facilities, its familiarity (due to its resemblance to C/C++), its robustness, its security capabilities and its distributed nature, makes it a potentially interesting language to be used in parallel environments such as high performance computing (HPC) environments, where applications can benefit from the Java multithreading support for performing parallel calculations, or e-business environments, where multithreaded Java application servers (i.e. following the J2EE specification) can take advantage of Java multithreading facilities to handle a large number of requests concurrently.

However, the use of Java for parallel programming has to face a number of problems that can easily offset the gain due to parallel execution. The first problem is the large overhead incurred by the threading support available in the JVM when threads are used to execute fine-grained work, when a large number of threads are created to support the execution of the application, or when threads interact closely through synchronization mechanisms. The second problem is the performance degradation that occurs when these multithreaded applications are executed in multiprogrammed parallel systems. The main issue that causes these problems is the lack of communication between the execution environment and the applications, which can cause these applications to make an uncoordinated use of the available resources.

This thesis contributes with the definition of an environment to analyze and understand the behavior of multithreaded Java applications. The main contribution of this environment is that all levels involved in the execution (application, application server, JVM and operating system) are correlated. This is very important to understand how this kind of application behaves when executed on environments that include servers and virtual machines, because the origin of performance problems can reside in any of these levels or in their interaction.

In addition, and based on the understanding gathered using the proposed analysis environment, this thesis contributes with scheduling mechanisms and policies oriented towards the efficient execution of multithreaded Java applications on multiprocessor systems, considering the interactions and coordination between scheduling mechanisms and policies at the different levels involved in the execution. The basic idea consists of allowing cooperation between the applications and the execution environment in the resource management by establishing a bi-directional communication path between the applications and the underlying system. On one side, the applications request from the execution environment the amount of resources they need. On the other side, the execution environment can be queried at any time by the applications to inform them about their resource assignments.

This thesis proposes that applications use the information provided by the execution environment to adapt their behavior to the amount of resources allocated to them (self-adaptive applications). This adaptation is accomplished in this thesis for HPC environments through the malleability of the applications, and for e-business environments with an overload control approach that performs admission control based on SSL connection differentiation for preventing throughput degradation and maintaining Quality of Service (QoS).

The evaluation results demonstrate that providing resources dynamically to self-adaptive applications on demand improves the performance of multithreaded Java applications both in HPC environments and in e-business environments. While having self-adaptive applications avoids performance degradation, dynamic provision of resources allows meeting the requirements of the applications on demand and adapting to their changing resource needs. In this way, better resource utilization is achieved because the resources not used by some application may be distributed among other applications.
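A hedged sketch of the self-adaptive idea in plain Java: the application periodically asks the runtime how many processors it may use and resizes its worker pool to match. The query here is only a stand-in for the bi-directional application/execution-environment interface proposed in the thesis, which this snippet does not implement:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SelfAdaptivePool {
    public static void main(String[] args) {
        ThreadPoolExecutor workers = (ThreadPoolExecutor)
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // Periodically ask how many processors the application may use and adapt the pool.
        ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
        monitor.scheduleAtFixedRate(() -> {
            int granted = Runtime.getRuntime().availableProcessors(); // stand-in for the runtime's answer
            int current = workers.getCorePoolSize();
            if (granted > current) {            // growing: raise the maximum first
                workers.setMaximumPoolSize(granted);
                workers.setCorePoolSize(granted);
            } else if (granted < current) {     // shrinking: lower the core size first
                workers.setCorePoolSize(granted);
                workers.setMaximumPoolSize(granted);
            }
            if (granted != current) {
                System.out.println("adapted worker pool to " + granted + " threads");
            }
        }, 1, 1, TimeUnit.SECONDS);

        // Submit parallel work units; they run on however many threads are currently granted.
        for (int i = 0; i < 1_000; i++) {
            workers.submit(() -> { /* one unit of parallel work */ });
        }
        // Note: the monitor keeps running; a real application would shut it down on exit.
    }
}
```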
3

Rokos, Georgios. "Scalable multithreaded algorithms for mutable irregular data with application to anisotropic mesh adaptivity." Thesis, Imperial College London, 2014. http://hdl.handle.net/10044/1/24812.

Abstract:
Anisotropic mesh adaptation is a powerful way to directly minimise the computational cost of mesh based simulation. It is particularly important for multi-scale problems where the required number of floating-point operations can be reduced by orders of magnitude relative to more traditional static mesh approaches. Increasingly, finite element/volume codes are being optimised for modern multicore architectures. Inter-node parallelism for mesh adaptivity has been successfully implemented by a number of groups using domain decomposition methods. However, thread-level parallelism using programming models such as OpenMP is significantly more challenging because the underlying data structures are extensively modified during mesh adaptation and a greater degree of parallelism must be realised while keeping the code race-free. In this thesis we describe a new thread-parallel implementation of four anisotropic mesh adaptation algorithms, namely edge coarsening, element refinement, edge swapping and vertex smoothing. For each of the mesh optimisation phases we describe how safe parallel execution is guaranteed by processing workitems in batches of independent sets and using a deferred-operations strategy to update the mesh data structures in parallel without data contention. Scalable execution is further assisted by creating worklists using atomic operations, which provides a synchronisation-free alternative to reduction-based worklist algorithms. Additionally, we compare graph colouring methods for the creation of independent sets and present an improved version which can run up to 50% faster than existing techniques. Finally, we describe some early work on an interrupt-driven work-sharing for-loop scheduler which is shown to perform better than existing work-stealing schedulers. Combining all aforementioned novel techniques, which are generally applicable to other unordered irregular problems, we show that despite the complex nature of mesh adaptation and inherent load imbalances, we achieve a parallel efficiency of 60% on an 8-core Intel(R) Xeon(R) Sandy Bridge and 40% using 16 cores on a dual-socket Intel(R) Xeon(R) Sandy Bridge ccNUMA system.
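The batching pattern described above (process independent work items in parallel, buffer structural updates, apply them race-free afterwards) can be sketched generically; the following hedged Java example uses toy integer work items rather than the thesis' mesh entities:

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class DeferredUpdates {
    public static void main(String[] args) {
        // Batches of items known not to conflict with each other (e.g. from graph colouring).
        List<List<Integer>> independentSets = List.of(List.of(1, 4, 7), List.of(2, 5), List.of(3, 6));

        Queue<String> deferredOps = new ConcurrentLinkedQueue<>();

        for (List<Integer> batch : independentSets) {
            // Items within one independent set can be processed in parallel without locking.
            batch.parallelStream().forEach(item -> {
                // ...compute the local modification, but do not touch shared structures yet...
                deferredOps.add("update(" + item + ")");
            });
            // Apply the buffered modifications after the batch, outside the parallel phase.
            deferredOps.forEach(op -> System.out.println("applying " + op));
            deferredOps.clear();
        }
    }
}
```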
4

Martin, Rovira Julia, and Fructoso Melero Francisco Manuel. "Micro-Network Processor : A Processor Architecture for Implementing NoC Routers." Thesis, Jönköping University, JTH, Computer and Electrical Engineering, 2007. http://urn.kb.se/resolve?urn=urn:nbn:se:hj:diva-941.

Abstract:
Routers are probably the most important component of a NoC, as the performance of the whole network is driven by the routers' performance. The cost of the whole network in terms of area will also be minimised if the router design is kept small. A new application-specific processor architecture for implementing NoC routers, called µNP (Micro-Network Processor), is proposed in this master thesis. The aim is to offer a solution that trades off the high performance of routers implemented in hardware against the high level of flexibility that could be achieved by loading software that routes packets onto a GPP. Therefore, a study including the design of a hardware-based router and a GPP-based router has been conducted. In this project the first version of the µNP has been designed, and a complete instruction set, along with some sample programs, is also proposed. The results show that, in the best case for all implementation options, µNP was 7.5 times slower than the hardware-based router. It also behaved more than 100 times faster than the GPP-based router, keeping almost the same degree of flexibility for routing purposes within the NoC.
5

Tallam, Sriraman Madapusi. "Fault Location and Avoidance in Long-Running Multithreaded Applications." Diss., The University of Arizona, 2007. http://hdl.handle.net/10150/194927.

Abstract:
Faults are common-place and inevitable in complex applications. Hence, automated techniques are necessary to analyze failed executions and debug the application to locate the fault. For locating faults in programs, dynamic slices have been shown to be very effective in reducing the effort of debugging. The user needs to inspect only a small subset of program statements to get to the root cause of the fault. While prior work has primarily focussed on single-threaded programs, this dissertation shows how dynamic slicing can be used for fault location in multithreaded programs. This dissertation also shows that dynamic slices can be used to track down faults due to data races in multithreaded programs by incorporating additional data dependences that arise in the presence of many threads. In order to construct the dynamic slices, dependence traces are collected and processed. However, program runs generate traces in the order of Gigabytes in a few seconds. Hence, for multithreaded program runs that are long-running, the process of collecting and storing these traces poses a significant challenge. This dissertation proposes two techniques to overcome this challenge. Experiments indicate that the techniques combined can reduce the size of the traces by 3 orders of magnitude. For applications that are critical and for which down time is highly detrimental, techniques for surviving software failures and letting the execution continue are desired. This dissertation proposes one such technique to recover applications from a class of faults that are caused by the execution environment and prevent the fault in future runs. This technique has been successfully used to avoid faults in a variety of applications caused due to thread scheduling, heap overflow, and malformed user requests. Case studies indicate that, for most environment bugs, the point in the execution where the environment modification is necessary can be clearly pin-pointed by using the proposed system and the fault can be avoided in the first attempt. The case studies also show that the patches needed to prevent the different faults are simple and the overhead induced by the system during the normal run of the application is less than 10%, on average.
6

Bechara, Charly. "Study and design of a manycore architecture with multithreaded processors for dynamic embedded applications." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00713536.

Abstract:
Embedded systems are getting more complex and require more intensive processing capabilities. They must be able to adapt to the rapid evolution of high-end embedded applications, which are characterized by their computation-intensive workloads (on the order of TOPS: Tera Operations Per Second) and their high level of parallelism. Moreover, since the dynamism of the applications is becoming more significant, powerful computing solutions should be designed accordingly. By exploiting the dynamism efficiently, the load is balanced between the computing resources, which greatly improves the overall performance. To tackle the challenges of these future high-end massively-parallel dynamic embedded applications, we have designed the AHDAM architecture, which stands for "Asymmetric Homogeneous with Dynamic Allocator Manycore architecture". Its architecture makes it possible to process applications with large data sets by efficiently hiding the processors' stall time using multithreaded processors. Besides, it exploits the parallelism of the applications at multiple levels so that they are accelerated efficiently on dedicated resources, hence improving the overall performance. The AHDAM architecture tackles the dynamism of these applications by dynamically balancing the load between its computing resources using a central controller to increase their utilization rate. The AHDAM architecture has been evaluated using a relevant embedded application from the telecommunication domain called "spectrum radio-sensing". With 136 cores running at 500 MHz, the AHDAM architecture reaches a peak performance of 196 GOPS and meets the computation requirements of the application.
7

Curtis-Maury, Matthew. "Improving the Efficiency of Parallel Applications on Multithreaded and Multicore Systems." Diss., Virginia Tech, 2008. http://hdl.handle.net/10919/26697.

Abstract:
The scalability of parallel applications executing on multithreaded and multicore multiprocessors is often quite limited due to large degrees of contention over shared resources on these systems. In fact, negative scalability frequently occurs such that a non-negligible performance loss is observed through the use of more processors and cores. In this dissertation, we present a prediction model for identifying efficient operating points of concurrency in multithreaded scientific applications in terms of both performance as a primary objective and power secondarily. We also present a runtime system that uses live analysis of hardware event rates through the prediction model to optimize applications dynamically. We discuss a dynamic, phase-aware performance prediction model (DPAPP), which combines statistical learning techniques, including multivariate linear regression and artificial neural networks, with runtime analysis of data collected from hardware event counters to locate optimal operating points of concurrency. We find that the scalability model achieves accuracy approaching 95%, sufficiently accurate to identify improved concurrency levels and thread placements from within real parallel scientific applications. Using DPAPP, we develop a prediction-driven runtime optimization scheme, called ACTOR, which throttles concurrency so that power consumption can be reduced and performance can be set at the knee of the scalability curve of each parallel execution phase in an application. ACTOR successfully identifies and exploits program phases where limited scalability results in a performance loss through the use of more processing elements, providing simultaneous reductions in execution time by 5%-18% and power consumption by 0%-11% across a variety of parallel applications and architectures. Further, we extend DPAPP and ACTOR to include support for runtime adaptation of DVFS, allowing for the synergistic exploitation of concurrency throttling and DVFS from within a single, autonomically-acting library, providing improved energy-efficiency compared to either approach in isolation.
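A hedged, much simplified Java sketch of concurrency throttling: measure throughput at a few thread counts and stop adding threads once the gain becomes marginal. The dissertation instead predicts the operating point from hardware event rates (DPAPP/ACTOR), which this toy loop does not reproduce:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.LongStream;

public class ConcurrencyThrottle {
    // Run a fixed amount of work with `threads` workers and return tasks per second.
    static double measureThroughput(int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        int tasks = 64;
        for (int i = 0; i < tasks; i++) {
            pool.submit(() -> LongStream.range(0, 2_000_000).map(x -> x * x).sum());
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return tasks / ((System.nanoTime() - start) / 1e9);
    }

    public static void main(String[] args) throws InterruptedException {
        int maxThreads = Runtime.getRuntime().availableProcessors();
        double best = 0;
        int knee = 1;
        for (int t = 1; t <= maxThreads; t *= 2) {
            double throughput = measureThroughput(t);
            System.out.printf("%2d threads -> %.1f tasks/s%n", t, throughput);
            if (throughput > best * 1.05) {   // keep adding threads only while the gain is significant
                best = throughput;
                knee = t;
            }
        }
        System.out.println("operating point: " + knee + " threads");
    }
}
```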
8

Urban, Martin. "Práce s historickými mapami na mobilním zařízení." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2014. http://www.nusl.cz/ntk/nusl-235414.

Abstract:
The goal of this thesis is to experiment with the latest web technologies and to design a new process for creating mobile applications. The proposed procedures make it possible to create multiplatform applications that are almost indistinguishable from native applications. The thesis focuses on performance and the native behaviour of the user interface. The described practices are demonstrated on an application designed for working with historical maps, which can display maps from historical archives from all over the world in real time. A significant speed-up was shown for the demonstration application compared to the standard process of creating web applications.
9

Rico, Carro Alejandro. "Raising the level of abstraction : simulation of large chip multiprocessors running multithreaded applications." Doctoral thesis, Universitat Politècnica de Catalunya, 2013. http://hdl.handle.net/10803/134743.

Abstract:
The number of transistors on an integrated circuit keeps doubling every two years. This increasing number of transistors is used to integrate more processing cores on the same chip. However, due to power density and ILP diminishing returns, the single-thread performance of such processing cores does not double every two years, but doubles every three years and a half. Computer architecture research is mainly driven by simulation. In computer architecture simulators, the complexity of the simulated machine increases with the number of available transistors. The more transistors, the more cores, the more complex is the model. However, the performance of computer architecture simulators depends on the single-thread performance of the host machine and, as we mentioned before, this is not doubling every two years but every three years and a half. This increasing difference between the complexity of the simulated machine and simulation speed is what we call the simulation speed gap. Because of the simulation speed gap, computer architecture simulators are increasingly slow. The simulation of a reference benchmark may take several weeks or even months. Researchers are conscious of this problem and have been proposing techniques to reduce simulation time. These techniques include the use of reduced application input sets, sampled simulation and parallelization. Another technique to reduce simulation time is raising the level of abstraction of the simulated model. In this thesis we advocate for this approach. First, we decide to use trace-driven simulation because it does not require providing functional simulation and thus allows raising the level of abstraction beyond the instruction-stream representation. However, trace-driven simulation has several limitations, the most important being the inability to reproduce the dynamic behavior of multithreaded applications. In this thesis we propose a simulation methodology that employs a trace-driven simulator together with a runtime system that allows the proper simulation of multithreaded applications by reproducing the timing-dependent dynamic behavior at simulation time. Having this methodology, we evaluate the use of multiple levels of abstraction to reduce simulation time, from a high-speed application-level simulation mode to a detailed instruction-level mode. We provide a comprehensive evaluation of the impact on accuracy and simulation speed of these abstraction levels and also show their applicability and usefulness depending on the target evaluations. We also compare these levels of abstraction with the existing ones in popular computer architecture simulators. Also, we validate the highest abstraction level against a real machine. One of the interesting levels of abstraction for the simulation of multi-cores is the memory mode. This simulation mode is able to model the performance of a superscalar out-of-order core using memory-access traces. At this level of abstraction, previous works have used filtered traces that do not include L1 hits, and allow simulating only L2 misses for single-core simulations. However, simulating multithreaded applications using filtered traces as in previous works has inherent inaccuracies. We propose a technique to reduce such inaccuracies and evaluate the speed-up, applicability, and usefulness of memory-level simulation. All in all, this thesis contributes to knowledge with techniques for the simulation of chip multiprocessors with hundreds of cores using traces. It states and evaluates the trade-offs of using varying degrees of abstraction in terms of accuracy and simulation speed.
10

Pop, Ruxandra. "Mapping Concurrent Applications to Multiprocessor Systems with Multithreaded Processors and Network on Chip-Based Interconnections." Licentiate thesis, Linköpings universitet, Institutionen för datavetenskap, 2011. http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-64256.

Abstract:
Network on Chip (NoC) architectures provide scalable platforms for designing Systems on Chip (SoC) with large number of cores. Developing products and applications using an NoC architecture offers many challenges and opportunities. A tool which can map an application or a set of applications to a given NoC architecture will be essential. In this thesis we first survey current techniques and we present our proposals for mapping and scheduling of concurrent applications to NoCs with multithreaded processors as computational resources. NoC platforms are basically a special class of Multiprocessor Embedded Systems (MPES). Conventional MPES architectures are mostly bus-based and, thus, are exposed to potential difficulties regarding scalability and reusability. There has been a lot of research on MPES development including work on mapping and scheduling of applications. Many of these results can also be applied to NoC platforms. Mapping and scheduling are known to be computationally hard problems. A large range of exact and approximate optimization algorithms have been proposed for solving these problems. The methods include Branch-and-Bound (BB), constructive and transformative heuristics such as List Scheduling (LS), Genetic Algorithms (GA) and various types of Mathematical Programming algorithms. Concurrent applications are able to capture a typical embedded system which is multifunctional. Concurrent applications can be executed on an NoC which provides a large computational power with multiple on-chip computational resources. Improving the time performances of concurrent applications which are running on Network on Chip (NoC) architectures is mainly correlated with the ability of mapping and scheduling methodologies to exploit the Thread Level Parallelism (TLP) of concurrent applications through the available NoC parallelism. Matching the architectural parallelism to the application concurrency for obtaining good performance-cost tradeoffs is another aspect of the problem. Multithreading is a technique for hiding long latencies of memory accesses, through the overlapped execution of several threads. Recently, Multi-Threaded Processors (MTPs) have been designed providing the architectural infrastructure to concurrently execute multiple threads at hardware level which, usually, results in a very low context switching overhead. Simultaneous Multi-Threaded Processors (SMTPs) are superscalar processor architectures which adaptively exploit the coarse grain and the fine grain parallelism of applications, by simultaneously executing instructions from several thread contexts. In this thesis we make a case for using SMTPs and MTPs as NoC resources and show that such a multiprocessor architecture provides better time performances than an NoC with solely General-purpose Processors (GP). We have developed a methodology for task mapping and scheduling to an NoC with mixed SMTP, MTP and GP resources, which aims to maximize the time performance of concurrent applications and to satisfy their soft deadlines. The developed methodology was evaluated on many configurations of NoC-based platforms with SMTP, MTP and GP resources. The experimental results demonstrate that the use of SMTPs and MTPs in NoC platforms can significantly speed-up applications.

Books on the topic "Multithreaded application"

1

Reich, David E. Designing High-Powered OS/2 Warp Applications: The Anatomy of Multithreaded Programs. Wiley, 1995.

2

Dadyan, Eduard. Modern programming technologies. The C# language. Volume 1. For novice users. INFRA-M Academic Publishing LLC., 2021. http://dx.doi.org/10.12737/1196552.

Abstract:
Volume 1 of the textbook is addressed to novice users who want to learn the popular object-oriented programming language C#. The tutorial provides complete information about the C# language and the .NET platform. Basic data types, variables, functions, and arrays are considered. Working with dates and enumerations is shown. The elements and constructs of the language are described: classes, interfaces, assemblies, manifests, namespaces, collections, generalizations, delegates, events, etc. It provides information about Windows processes and threads, as well as examples of organizing work in multithreaded mode. The creation of console applications, Windows Forms applications and applications for working with databases is described, along with topics for deeper and more advanced study of the material. The Visual Studio .NET environment is considered as the development environment. All sample programs are given in C#. The book meets the requirements of the latest generation of federal state educational standards for higher education. It is intended for students in the degree programme 09.03.03 "Applied Informatics", undergraduate and graduate students of all specialties, as well as graduate students and students of the IPC.
3

Tazehkandi, Amin Ahmadi. Computer Vision with OpenCV 3 and Qt5: Build visually appealing, multithreaded, cross-platform computer vision applications. Packt Publishing, 2018.

4

Gonzalez, Javier Fernandez. Mastering Concurrency Programming with Java 8: Master the Principles and Techniques of Multithreaded Programming with the Java 8 Concurrency API. Packt Publishing, Limited, 2016.


Book chapters on the topic "Multithreaded application"

1

Giebas, Damian, and Rafał Wojszczyk. "Multithreaded Application Model." In Advances in Intelligent Systems and Computing. Springer International Publishing, 2019. http://dx.doi.org/10.1007/978-3-030-23946-6_11.

2

Korndörfer, Jonas H. Müller, Ahmed Eleliemy, Osman Seckin Simsek, Thomas Ilsche, Robert Schöne, and Florina M. Ciorba. "How Do OS and Application Schedulers Interact? An Investigation with Multithreaded Applications." In Euro-Par 2023: Parallel Processing. Springer Nature Switzerland, 2023. http://dx.doi.org/10.1007/978-3-031-39698-4_15.

Abstract:
Scheduling is critical for achieving high performance for parallel applications executing on high performance computing (HPC) systems. Scheduling decisions can be taken at batch system, application, and operating system (OS) levels. In this work, we investigate the interaction between the Linux scheduler and various OpenMP scheduling options during the execution of three multithreaded codes on two types of computing nodes. When threads are unpinned, we found that OS scheduling events significantly interfere with the performance of compute-bound applications, aggravating their inherent load imbalance or overhead (by additional context switches). While the Linux scheduler balances system load in the absence of application-level load balancing, we also found it decreases performance via additional context switches and thread migrations. We observed that performing load balancing operations both at the OS and application levels is advantageous for the performance of concurrently executing applications. These results show the importance of considering the role of OS scheduling in the design of application scheduling techniques and vice versa. This work motivates further research into coordination of scheduling within multithreaded applications and the OS.
3

Grelck, Clemens, and Frank Penczek. "Implementation Architecture and Multithreaded Runtime System of S-Net." In Implementation and Application of Functional Languages. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-24452-0_4.

4

Bagchi, Susmit, and Mads Nygaard. "Application Controlled IPC Synchrony — An Event Driven Multithreaded Approach." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2003. http://dx.doi.org/10.1007/3-540-44862-4_109.

5

Barbu, Guillaume, and Hugues Thiebeauld. "Synchronized Attacks on Multithreaded Systems - Application to Java Card 3.0 -." In Smart Card Research and Advanced Applications. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-27257-8_2.

6

Theobald, Kevin B., Rishi Kumar, Gagan Agrawal, Gerd Heber, Ruppa K. Thulasiram, and Guang R. Gao. "Developing a Communication Intensive Application on the EARTH Multithreaded Architecture." In Euro-Par 2000 Parallel Processing. Springer Berlin Heidelberg, 2000. http://dx.doi.org/10.1007/3-540-44520-x_88.

7

Lafortune, Stéphane, Yin Wang, and Spyros Reveliotis. "Eliminating Concurrency Bugs in Multithreaded Software: An Approach Based on Control of Petri Nets." In Application and Theory of Petri Nets and Concurrency. Springer Berlin Heidelberg, 2013. http://dx.doi.org/10.1007/978-3-642-38697-8_2.

8

Cassagnabère, Christophe, François Rousselle, and Christophe Renaud. "CPU-GPU Multithreaded Programming Model: Application to the Path Tracing with Next Event Estimation Algorithm." In Advances in Visual Computing. Springer Berlin Heidelberg, 2006. http://dx.doi.org/10.1007/11919629_28.

9

Marowka, Ami. "On Performance Analysis of a Multithreaded Application Parallelized by Different Programming Models Using Intel VTune." In Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-23178-0_28.

10

Troelsen, Andrew. "Building Multithreaded Applications." In Pro C# 2008 and the .NET 3.5 Platform. Apress, 2007. http://dx.doi.org/10.1007/978-1-4302-0422-0_18.


Conference papers on the topic "Multithreaded application"

1

Muralidhara, Sai Prashanth, Mahmut Kandemir, and Padma Raghavan. "Intra-application shared cache partitioning for multithreaded applications." In the 15th ACM SIGPLAN symposium. ACM Press, 2010. http://dx.doi.org/10.1145/1693453.1693498.

2

Chinh, Nguyen Duc, Ettikan Kandasamy, and Lam Yoke Khei. "Efficient Development Methodology for Multithreaded Network Application." In 2007 5th Student Conference on Research and Development. IEEE, 2007. http://dx.doi.org/10.1109/scored.2007.4451394.

3

Scherrer, Chad, Nathaniel Beagley, Jarek Nieplocha, Andres Marquez, John Feo, and Daniel Chavarria-Miranda. "Probability Convergence in a Multithreaded Counting Application." In 2007 IEEE International Parallel and Distributed Processing Symposium. IEEE, 2007. http://dx.doi.org/10.1109/ipdps.2007.370688.

4

Lupin, S., M. Nestiurkina, M. Puschin, and M. Skvortsova. "Multithreaded application for work distribution in hierarchical systems." In XLIII ACADEMIC SPACE CONFERENCE: dedicated to the memory of academician S.P. Korolev and other outstanding Russian scientists – Pioneers of space exploration. AIP Publishing, 2019. http://dx.doi.org/10.1063/1.5133203.

5

Eickemeyer, Richard J., Ross E. Johnson, Steven R. Kunkel, Mark S. Squillante, and Shiafun Liu. "Evaluation of multithreaded uniprocessors for commercial application environments." In the 23rd annual international symposium. ACM Press, 1996. http://dx.doi.org/10.1145/232973.232994.

6

Li, Jiawen, Xuhao Chen, Li Shen, Xinbiao Gan, and Zhong Zheng. "Evaluating Multithreaded Workloads in CMP Virtualization Environment." In 2012 Second International Conference on Intelligent System Design and Engineering Application (ISDEA). IEEE, 2012. http://dx.doi.org/10.1109/isdea.2012.481.

7

Zagan, Ionel, and Vasile Gheorghita Gaitan. "Schedulability analysis of nMPRA processor based on multithreaded execution." In 2016 International Conference on Development and Application Systems (DAS). IEEE, 2016. http://dx.doi.org/10.1109/daas.2016.7492561.

8

Ma, Pei-Jun, Ling-Fang Zhu, Kang Li, Jia-Liang Zhao, and Jiang-Yi Shi. "The application and optimization of SDRAM controller in multicore multithreaded SoC." In 2010 10th IEEE International Conference on Solid-State and Integrated Circuit Technology (ICSICT). IEEE, 2010. http://dx.doi.org/10.1109/icsict.2010.5667737.

9

Li, Sheng, Shannon Kuntz, Peter Kogge, and Jay Brockman. "Memory model effects on application performance for a lightweight multithreaded architecture." In Distributed Processing Symposium (IPDPS). IEEE, 2008. http://dx.doi.org/10.1109/ipdps.2008.4536356.

10

Leon, Hernan Ponce de, Olli Saarikivi, Kari Kahkonen, Keijo Heljanko, and Javier Esparza. "Unfolding Based Minimal Test Suites for Testing Multithreaded Programs." In 2015 15th International Conference on Application of Concurrency to System Design (ACSD). IEEE, 2015. http://dx.doi.org/10.1109/acsd.2015.12.


Organizational reports on the topic "Multithreaded application"

1

Pfeiffer, Wayne, Larry Carter, Allan Snavely, Robert Leary, and Amit Majumdar. Evaluation of a Multithreaded Architecture for Defense Applications. Defense Technical Information Center, 1999. http://dx.doi.org/10.21236/ada369107.

2

Amela, R., R. Badia, S. Böhm, R. Tosi, C. Soriano, and R. Rossi. D4.2 Profiling report of the partner’s tools, complete with performance suggestions. Scipedia, 2021. http://dx.doi.org/10.23967/exaqute.2021.2.023.

Abstract:
This deliverable focuses on the profiling activities developed in the project with the partners' applications. To perform these profiling activities, a couple of benchmarks were defined in collaboration with WP5. The first benchmark is an embarrassingly parallel benchmark that performs a read and then multiple writes of the same object, with the objective of stressing the memory and storage systems and evaluating the overhead when these reads and writes are performed in parallel. A second benchmark is defined based on the Continuation Multi Level Monte Carlo (C-MLMC) algorithm. While this algorithm is normally executed using multiple levels, for the profiling and performance analysis objectives the execution of a single level was enough, since the forthcoming levels have similar performance characteristics. Additionally, while the simulation tasks can be executed as parallel (multi-threaded) tasks, in the benchmark single-threaded tasks were executed to increase the number of simulations to be scheduled and stress the scheduling engines. A set of experiments based on these two benchmarks has been executed on the MareNostrum 4 supercomputer, using PyCOMPSs as the underlying programming model and dynamic scheduler of the tasks involved in the executions. While the first benchmark was executed several times in a single iteration, the second benchmark was executed in an iterative manner, with cycles of 1) execution and trace generation; 2) performance analysis; 3) improvements. This enabled several improvements to be made in the benchmark and in the PyCOMPSs scheduler. The initial iterations focused on the C-MLMC structure itself, refactoring the code to remove fine-grain and sequential tasks and merging them into larger-granularity tasks. The next iterations focused on improving the PyCOMPSs scheduler, removing existing bottlenecks and increasing its performance by making the scheduler a multithreaded engine. While the results can still be improved, we are satisfied with them, since the granularity of the simulations run in this evaluation step is much finer than that which will be used for the real scenarios. The deliverable finishes with some recommendations that should be followed along the project in order to obtain good performance in the execution of the project codes.