
Journal articles on the topic 'Mainframe Migration'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the journal articles below for your research on the topic 'Mainframe Migration.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles from a wide variety of disciplines and organise your bibliography correctly.

1

Kalohia, Ankur. "Mainframe Modernization: Cost Analysis and ROI for Migration." International Journal for Research in Applied Science and Engineering Technology 12, no. 12 (2024): 267–74. https://doi.org/10.22214/ijraset.2024.65742.

Abstract:
As businesses strive to enhance operational efficiency, reduce costs, and leverage the latest technological innovations, many are transitioning from legacy mainframe systems to distributed, open-source environments. Mainframes, once the cornerstone of enterprise IT infrastructure, have become costly and less flexible than modern computing paradigms. This paper explores the cost analysis of migrating from mainframe legacy systems to distributed systems utilizing open-source technologies. It examines the key factors influencing migration costs, identifies opportunities for cost savings, and builds a business case for migration, emphasizing the return on investment (ROI). By carefully analyzing the cost structure, the paper outlines strategies for effective migration, demonstrates the long-term benefits, and presents a compelling argument for organizations to modernize their IT infrastructures.
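The cost-and-ROI case the abstract above describes rests on simple arithmetic. A minimal sketch in Python, with entirely hypothetical cost figures (none taken from the paper):

```python
def migration_roi(upfront_cost, annual_legacy_cost, annual_target_cost, years):
    """Return (net_savings, roi_percent, payback_years) for a migration.

    All inputs are hypothetical planning figures; a real analysis would
    also account for risk, training, and parallel-run costs.
    """
    annual_savings = annual_legacy_cost - annual_target_cost
    net_savings = annual_savings * years - upfront_cost
    roi_percent = 100.0 * net_savings / upfront_cost
    payback_years = upfront_cost / annual_savings
    return net_savings, roi_percent, payback_years

# Illustrative only: $2M migration, $1.5M/yr mainframe vs $0.7M/yr target, 5 years
net, roi, payback = migration_roi(2_000_000, 1_500_000, 700_000, 5)
```

A fuller business case would also discount future savings and price in migration risk and transition-period double running.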
2

Vanaparthi, Narasimha Rao. "The Roadmap to Mainframe Modernization: Bridging Legacy Systems with the Cloud." International Journal of Scientific Research in Computer Science, Engineering and Information Technology 11, no. 1 (2025): 125–33. https://doi.org/10.32628/cseit25111214.

Abstract:
The modernization of mainframe systems represents a critical inflection point for financial institutions seeking to remain competitive in an increasingly digital marketplace. This comprehensive technical article presents a strategic framework for transitioning from legacy mainframe infrastructure to cloud-native solutions, emphasizing risk mitigation and business continuity. Through detailed analysis of implementation methodologies, data migration strategies, and security architectures, financial organizations can leverage modern cloud platforms while preserving the reliability and security inherent in traditional mainframe systems. The article examines real-world case studies and empirical evidence demonstrating how institutions have successfully navigated common challenges, including regulatory compliance, system integration, and operational transformation. Special attention is given to the practical aspects of maintaining business operations during migration, implementing robust security controls, and establishing scalable cloud operations models. By providing a comprehensive roadmap encompassing technical and organizational considerations, this article serves as an essential resource for technology leaders and architects tasked with modernizing mission-critical mainframe systems while ensuring minimal disruption to core business functions.
3

Kazanavičius, Justas, Dalius Mažeika, and Diana Kalibatienė. "An Approach to Migrate a Monolith Database into Multi-Model Polyglot Persistence Based on Microservice Architecture: A Case Study for Mainframe Database." Applied Sciences 12, no. 12 (2022): 6189. http://dx.doi.org/10.3390/app12126189.

Abstract:
Migration from a monolithic architecture to a microservice architecture is a complex challenge involving issues such as microservice identification, code decomposition, communication between microservices, and independent deployment. One of the key issues is adapting data storage to a microservice architecture. A monolithic architecture interacts with a single database, while in a microservice architecture data storage is decentralized: each microservice works independently and has its own private data storage. A viable option for fulfilling the differing persistence requirements of microservices is polyglot persistence, in which the data storage technology is selected according to the needs of each microservice. This research aims to propose and evaluate an approach for migrating a monolith database into multi-model polyglot persistence based on a microservice architecture. The novelty and relevance of the proposed approach are twofold: it provides a general approach for conducting database migration from a monolithic to a microservice architecture, and it allows the data model to be transformed into multi-model polyglot persistence. Migration from a mainframe monolith database to multi-model polyglot persistence was performed as a proof of concept for the proposed migration approach. Quality attributes defined in the ISO/IEC 25012:2008 standard were used to evaluate and compare the data quality of the microservice with multi-model polyglot persistence against the existing monolith mainframe database. Results of the research showed that the proposed approach can be used to conduct data storage migration from a monolith to a microservice architecture and improves the quality of the consistency, understandability, availability, and portability attributes. Moreover, we expect that our results could inspire researchers and practitioners toward further work aimed at improving and automating the proposed approach.
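The database decomposition this abstract describes can be illustrated in miniature. In the sketch below, plain dicts stand in for a document store and a key-value store; the row layout and service names are hypothetical, not from the paper:

```python
# In-memory stand-ins for per-microservice data stores (polyglot persistence).
# A real system would use, e.g., a document database for orders and a
# key-value store for sessions; the dicts only illustrate the split.

monolith_rows = [
    {"id": 1, "customer": "ACME", "order_total": 120.0, "session": "tok-a"},
    {"id": 2, "customer": "Globex", "order_total": 80.0, "session": "tok-b"},
]

def migrate(rows):
    """Split each monolith row into service-private stores."""
    order_store = {}    # document-style store owned by the Order service
    session_store = {}  # key-value store owned by the Session service
    for row in rows:
        order_store[row["id"]] = {"customer": row["customer"],
                                  "total": row["order_total"]}
        session_store[row["customer"]] = row["session"]
    return order_store, session_store

orders, sessions = migrate(monolith_rows)
```

In a real migration each store would be a separate database chosen per microservice (the "polyglot" in polyglot persistence), reachable only through its owning service's API.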
4

Cao, Yuan, Wen Ke Wang, Tie Liang Wang, and Ying Jie Wang. "Parameter Inversion of Tritium Migration Based on Parallel Genetic Algorithm." Advanced Materials Research 610-613 (December 2012): 1883–88. http://dx.doi.org/10.4028/www.scientific.net/amr.610-613.1883.

Abstract:
To calibrate model parameters of tritium migration at a test site in China, an intelligent parameter inversion model based on a parallel genetic algorithm is built, a coupled forward-inverse program for radionuclide migration is designed, and the values of key parameters such as hydraulic conductivity, dispersivity, and porosity are inverted automatically on a mainframe computer using abundant observation data of tritium concentration. The inversion results agree well with the observation data overall. Compared with the manual adjustment method, this method has better overall convergence, higher computational precision and efficiency, and lower manpower cost. The results show that the parallel genetic algorithm is feasible and valid for parameter inversion of tritium migration.
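The inversion loop described above can be sketched as a simple genetic algorithm. This is a serial toy version (the paper parallelizes fitness evaluation), and the linear forward model is a stand-in for a real tritium-transport simulation:

```python
import random

random.seed(42)

def forward_model(k, d):
    """Toy stand-in for a transport simulation: predicted concentrations
    as a simple function of conductivity k and dispersivity d."""
    return [k * t + d for t in range(1, 6)]

# Synthetic "observations" generated with true parameters k=2.0, d=0.5
observed = forward_model(2.0, 0.5)

def misfit(params):
    """Sum of squared errors between predictions and observations."""
    pred = forward_model(*params)
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

def genetic_inversion(pop_size=30, generations=60):
    pop = [(random.uniform(0, 5), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        parents = pop[: pop_size // 2]  # selection: keep the fitter half
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            k = (a[0] + b[0]) / 2 + random.gauss(0, 0.1)   # crossover + mutation
            d = (a[1] + b[1]) / 2 + random.gauss(0, 0.05)
            children.append((k, d))
        pop = parents + children
    return min(pop, key=misfit)

best_k, best_d = genetic_inversion()
```

Because the fitter half of each generation is carried over unchanged, the best misfit never worsens; a parallel variant would evaluate the population's fitness concurrently, which is where the paper's mainframe parallelism pays off.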
5

Srinivas, Adilapuram. "The Roadmap to Legacy System Modernization: Phased Approach to Mainframe Migration and Cloud Adoption." Journal of Scientific and Engineering Research 7, no. 9 (2020): 252–57. https://doi.org/10.5281/zenodo.14770551.

Abstract:
Legacy systems often present significant challenges for organizations seeking to modernize their IT infrastructure. Migrating these systems to contemporary platforms requires careful planning and execution to mitigate risks and maximize benefits. This paper addresses the challenges of migrating legacy mainframe systems to modern cloud platforms. It proposes a phased approach to mitigate risks such as performance issues, skill gaps, and high costs. The study outlines three phases: assessment and planning, rehosting and refactoring, and application modernization. Recommendations include employing CI/CD pipelines, upskilling teams, and utilizing performance monitoring tools.
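The three phases outlined above lend themselves to a gated pipeline, in which each phase must satisfy an exit criterion before the next begins. A minimal sketch, with hypothetical exit criteria:

```python
# Each phase pairs a name with a hypothetical exit criterion (a predicate
# over migration state); these gates are illustrative, not from the paper.
PHASES = [
    ("assessment and planning", lambda state: state["inventory_done"]),
    ("rehosting and refactoring", lambda state: state["tests_pass"]),
    ("application modernization", lambda state: state["perf_ok"]),
]

def run_migration(state):
    """Advance phase by phase, stopping at the first unmet exit criterion."""
    completed = []
    for name, gate in PHASES:
        if not gate(state):
            return completed, name  # halted at this phase
        completed.append(name)
    return completed, None

completed, halted_at = run_migration(
    {"inventory_done": True, "tests_pass": True, "perf_ok": False}
)
```

Halting at the first unmet gate is what makes the phased approach risk-mitigating: rehosting never starts on an un-assessed portfolio, and modernization never starts on an unverified rehost.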
6

Govindaraj, Vasanthi. "Cloud Migration Strategies for Mainframe Modernization: A Comparative Study of AWS, Azure, and GCP." International Journal of Computer Trends and Technology 72, no. 10 (2024): 57–65. http://dx.doi.org/10.14445/22312803/ijctt-v72i10p110.

7

Matana, Tony. "Origin & History of ASP SaaS, PaaS and Cloud Computing." International Journal of Scientific Research and Management (IJSRM) 12, no. 10 (2024): 1571–607. http://dx.doi.org/10.18535/ijsrm/v12i10.ec08.

Abstract:
This research paper outlines the evolution of cloud computing and software-as-a-service (SaaS) from their origins in 1960s time-sharing systems to microkernel-based systems in the late 1980s. It traces key developments during the period from 1987 to 1988, including the emergence of an Application Service Provider (ASP) system, and uses methods and models to confirm the accuracy of these claims. The paper highlights how the migration from mainframes to microkernels overcame the limitations that had hindered public adoption, ultimately leading to the dominance of SaaS and PaaS as primary models for delivering software and ASP services. This progression represents a shift towards more flexible, scalable, and accessible computing models, delivered over the network through management platforms as software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS).
8

Tai, K. L. "Si-on-Si MCM Technology and Its Applications." International Journal of High Speed Electronics and Systems 2, no. 4 (1991): 251–61. http://dx.doi.org/10.1142/s0129156491000120.

Abstract:
Multichip Module (MCM) packaging has been used in high-end systems, such as mainframes and supercomputers, for some time. Rapid advances in VLSI technology and novel system architecture concepts have presented both challenges and opportunities for MCM technologists. We should not just try to find a solution, but also take a long-term view and plan the technological development. We would like to develop MCM technology with a broad range of applications, from consumer products to supercomputers. The technology should focus on low cost, high performance, compact size, and high reliability. We believe it is most attractive to leverage IC technology and surface mount technology (SMT). Therefore we select a Si wafer as the substrate, Al as the metallization, polyimide as the dielectric, Ta-Si as the resistor material, and Si oxide and nitride as the capacitor dielectrics. Flip-chip solder attachment is used to assemble chips on the substrate. We view our version of the MCM as a “giant chip” rather than a miniaturized printed wiring board. This “giant chip” contains mixed device technologies which cannot be obtained with current device technology. The migration path should be from small to large modules. The infrastructure of the CAD system and the testing system is critical for the development of MCM technology. Potential applications and implementations of MCM technology are given in this paper.
9

Lovis, Christian, François Borst, and Jean-Raoul Scherrer. "DIOGENE 2, a distributed Hospital Information System with an emphasis on its Medical Information Content." Yearbook of Medical Informatics 4, no. 1 (1995): 86–97. http://dx.doi.org/10.1055/s-0038-1638023.

Abstract:
DIOGENE 1 has been a mainframe-based centralised HIS with a star communication network, operating on a daily basis with 120 nursing ward units since 1978. The limited and costly growth capabilities of such a system, together with its extreme difficulty in cooperating with other heterogeneous medical systems and the need for faster networking expansion, led to the design of a new distributed architecture called DIOGENE 2. In 1989, a migration process between DIOGENE 1 and DIOGENE 2 was initiated and is now on the verge of completion. During this new expansion of the HIS, it has been easy to cooperate with the decentralisation process of the new hospital organisation, as well as to facilitate the integration of new functionalities such as a new WIS architecture, medical office patient histories, and integration based upon PCs with UNIX-based client/server platforms. That approach combines the handling of paragraph-structured patient records with the use of medical natural language processing and semi-automatic encoding. Among these new functionalities, the PACS is associated with image manipulation platforms called OSIRIS for X-ray images, as well as other tools devoted to molecular biology and genetics, up to the ExPASy server on the Internet using WWW Mosaic, which is accessible from all over the world. The distributed architecture appears well suited not only for integrating these new functionalities but also for keeping them growing as smoothly as possible.
10

Schaffer, M. D., and J. C. Biendit. "Migrating resource commitment from mainframe to PC." IEEE Computer Applications in Power 7, no. 1 (1994): 14–19. http://dx.doi.org/10.1109/67.251310.

11

Venkat, Sumanth Guduru. "Salesforce Data Migration Strategies: From Legacy Systems to Cloud." Journal of Scientific and Engineering Research 5, no. 12 (2018): 382–87. https://doi.org/10.5281/zenodo.14168062.

Abstract:
The transfer of data from legacy systems to cloud-based applications such as Salesforce is a complex process that comes with many challenges and considerations. This paper provides a comprehensive analysis of Salesforce data migration approaches and outlines the challenges of moving from legacy systems to the Salesforce environment. It analyses the significant issues of data ingestion, protection, correlation, and conversion, and sets out a detailed guide to overcoming these hurdles. The Big Bang, Trickle, and Hybrid migration types are described in detail, each with its strengths and weaknesses. Pseudocode for the data extraction, transformation, and loading (ETL) migration process is provided, along with migration architecture and workflow diagrams, a detailed flowchart for migrating without downtime, and a discussion of migration concerns. The techniques and procedures for migrating with high quality and a low risk of technical problems are examined, along with recommendations such as data backup, incremental testing, automation, and documentation. This work aims to equip IT practitioners with the key knowledge and tools required to implement robust and effective data migration to Salesforce, spurring organizational productivity while unlocking the richness of advanced cloud-based CRM solutions.
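The extract-transform-load process referenced in this abstract can be sketched briefly. The legacy export format, field mapping, and batch size below are all hypothetical; a real migration would push each batch through a bulk API rather than merely collecting it:

```python
import csv
import io

# Hypothetical legacy export; real migrations read from a database or flat files.
legacy_csv = """ACCT_NO,CUST_NM,BAL
001,ACME Corp,1200.50
002,Globex,870.00
"""

# Hypothetical mapping from legacy column names to target object fields.
FIELD_MAP = {"ACCT_NO": "AccountNumber", "CUST_NM": "Name", "BAL": "Balance"}

def extract(text):
    """Read legacy rows into dicts keyed by the legacy column names."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Rename fields, convert types, and normalize keys."""
    out = []
    for row in rows:
        rec = {FIELD_MAP[k]: v for k, v in row.items()}
        rec["Balance"] = float(rec["Balance"])                   # type conversion
        rec["AccountNumber"] = rec["AccountNumber"].lstrip("0")  # key normalization
        out.append(rec)
    return out

def load(records, batch_size=200):
    """Stand-in for batched upserts via a bulk API; here we only batch."""
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]

batches = load(transform(extract(legacy_csv)))
```

The transform step is the natural home for the correlation and conversion work the paper discusses, and batching matters because bulk-load interfaces typically enforce per-request record limits.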
12

Thilmany, Jean. "Information Aging." Mechanical Engineering 130, no. 03 (2008): 22–25. http://dx.doi.org/10.1115/1.2008-mar-1.

Abstract:
This paper emphasizes the importance of keeping aging information in a format that can be readily used and understood by everyone. To get up-to-the-minute access to older engineering information, managers need to be ever vigilant about ensuring that legacy data exists in a format which can be easily understood and accessed. Today a number of new software applications and technologies can help even those companies with seemingly the most outdated of computers, the mainframe. In recent years, software developers have introduced innovative ways for companies to speedily retrieve legacy information, whether it is stored in a format for a desktop computer or a mainframe. These methods involve migrating or upgrading information or installing middleware, all at some cost. Some companies may choose to move legacy data to the open XML format and use a number of software tools, such as database query methods, to quickly retrieve information. Legacy documents that can be written into XML include blueprints, CAD designs, change orders, materials specifications, assembly instructions, and cost estimates.
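Moving a legacy record into XML, as the abstract suggests, takes only a few lines with standard tools. The record fields below are hypothetical:

```python
import xml.etree.ElementTree as ET

# Hypothetical engineering record flattened from a legacy mainframe dataset.
legacy_record = {"part_no": "A-1042", "material": "6061-T6", "cost_estimate": "18.40"}

def to_xml(record, root_tag="changeOrder"):
    """Serialize a flat record as one XML element per field."""
    root = ET.Element(root_tag)
    for key, value in record.items():
        ET.SubElement(root, key).text = value
    return ET.tostring(root, encoding="unicode")

xml_text = to_xml(legacy_record)
# Once in XML, standard query tools can retrieve fields directly:
material = ET.fromstring(xml_text).findtext("material")
```

Once the data is in XML, generic query tools (XPath, database query methods) can retrieve individual fields without any knowledge of the original mainframe layout.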
13

Prieto, Beatriz, Juan José Escobar, Juan Carlos Gómez-López, Antonio F. Díaz, and Thomas Lampert. "Energy Efficiency of Personal Computers: A Comparative Analysis." Sustainability 14, no. 19 (2022): 12829. http://dx.doi.org/10.3390/su141912829.

Abstract:
The demand for electricity related to Information and Communications Technologies is constantly growing and significantly contributes to the increase in global greenhouse gas emissions. To reduce this harmful growth, it is necessary to address this problem from different perspectives. Among these is changing the computing scale, such as migrating, if possible, algorithms and processes to the most energy efficient resources. In this context, this paper explores the possibility of running scientific and engineering programs on personal computers and compares the obtained power efficiency on these systems with that of mainframe computers and even supercomputers. Anecdotally, this paper also shows how the power efficiency obtained for the same workloads on personal computers is similar to that obtained on supercomputers included in the Green500 ranking.
14

Sadanandam, Latha. "Enabling Mainframe Assets to Services for SOA." International Journal of Computer and Communication Technology, October 2010, 282–86. http://dx.doi.org/10.47893/ijcct.2010.1058.

Abstract:
Service-oriented architecture (SOA) is a mechanism for achieving interoperability between heterogeneous systems. SOA enables existing legacy systems to expose their functionality as services, without making significant changes to the legacy systems. Migration towards a service-oriented approach not only standardizes interaction, but also allows for more flexibility in the existing process. Web services technology is an ideal choice for implementing an SOA, and Web services can be implemented in any programming language. The functionality of Web services ranges from simple request-reply to full business processes. These services can be newly developed applications or simply wrapper programs that make existing business functions network-enabled. The strategy is to form a framework to integrate z/OS assets in a distributed environment using an SOA approach, enabling optimal business agility and flexibility. Mainframe applications run the business and contain critical business logic that is unique, difficult, and costly to replicate. Enabling existing applications allows critical business assets to be reused and leveraged as services that can be invoked in heterogeneous environments.
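The wrapper pattern this abstract describes — exposing an unchanged legacy function through a service interface — can be sketched as follows. The ledger data and JSON contract are hypothetical, and a real deployment would sit behind an HTTP or SOAP endpoint:

```python
import json

def legacy_get_balance(account_no: str) -> float:
    """Stand-in for an existing mainframe business function (e.g. a COBOL
    routine reached through a z/OS connector); the logic itself is unchanged."""
    ledger = {"001": 1200.50, "002": 870.00}
    return ledger[account_no]

def balance_service(request_body: str) -> str:
    """Thin service wrapper: parse the request, delegate to the legacy
    function, and return a JSON response a distributed caller can consume."""
    request = json.loads(request_body)
    balance = legacy_get_balance(request["account"])
    return json.dumps({"account": request["account"], "balance": balance})

response = balance_service('{"account": "002"}')
```

The legacy logic is untouched; only a thin parsing-and-serialization layer is added, which is what makes SOA enablement of mainframe assets cheaper than replicating the business logic.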