
Journal articles on the topic 'DB2, database performance, database workload management'



Consult the top 19 journal articles for your research on the topic 'DB2, database performance, database workload management.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.

1

Suryana, N., M. S. Rohman, and F. S. Utomo. "Prediction Based Workload Performance Evaluation for Disaster Management Spatial Database." ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences XLII-4/W10 (September 12, 2018): 187–92. http://dx.doi.org/10.5194/isprs-archives-xlii-4-w10-187-2018.

Abstract:
This paper discusses a prediction-based workload performance evaluation implemented during disaster management, especially in the response phase, to handle large spatial data in the event of an eruption of the Merapi volcano in Indonesia. The complexity associated with a large spatial database is not the same as that of a conventional database: incoming complex workloads are difficult for humans to handle, need longer processing time, and may lead to failure and resource starvation. Based on the incoming workload, this study predicts whether it belongs to the OLTP or DSS workload performance type. From the SQL statements, the DBMS can obtain and record the process, measure the analysed performance, and feed the workload classifier in the form of DBMS snapshots. Case-Based Reasoning (CBR) optimised with a Hash Search technique is adopted in this study to evaluate and predict the workload performance of PostgreSQL. The proposed CBR with Hash Search achieved better prediction accuracy than other machine learning algorithms such as Neural Networks and Support Vector Machines, and an evaluation using a confusion matrix showed very good accuracy as well as improved execution time. Additionally, the prediction model was applied to workload data from shortest path analysis computed with the Dijkstra algorithm, and it can predict the incoming workload based on the status of predetermined DBMS parameters. In this way, information about the incoming workload, which is crucial to the smooth operation of PostgreSQL, is delivered to the DBMS.
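To make the classification idea concrete, here is a minimal sketch: past DBMS snapshots are stored as cases, hashed into buckets for fast retrieval, and a new snapshot is labelled OLTP or DSS from its nearest stored case. The feature names, discretization scheme, and example values are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of Case-Based Reasoning with a hash-based case lookup
# for classifying a DBMS workload snapshot as OLTP or DSS. Feature names and
# the discretization scheme are hypothetical, not the paper's.

CASE_BASE = {}  # hash bucket -> list of (features, label)

def bucket(features):
    # Discretize each metric so similar snapshots hash to the same bucket.
    return tuple(round(v, 1) for v in features)

def add_case(features, label):
    CASE_BASE.setdefault(bucket(features), []).append((features, label))

def classify(features):
    # O(1) bucket lookup replaces a linear scan of the whole case base.
    candidates = CASE_BASE.get(bucket(features), [])
    if not candidates:  # fall back to scanning all cases if the bucket is empty
        candidates = [c for cases in CASE_BASE.values() for c in cases]
    best = min(candidates,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(c[0], features)))
    return best[1]

# (avg_query_runtime_s, rows_read_per_stmt, write_ratio) -- hypothetical metrics
add_case((0.01, 10, 0.6), "OLTP")
add_case((4.20, 1e6, 0.02), "DSS")
print(classify((0.02, 15, 0.5)))  # short, write-heavy snapshot -> OLTP
```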
2

Zheng, Shuai, Fusheng Wang, and James Lu. "Enabling Ontology Based Semantic Queries in Biomedical Database Systems." International Journal of Semantic Computing 08, no. 01 (March 2014): 67–83. http://dx.doi.org/10.1142/s1793351x14500032.

Abstract:
There is a lack of tools to ease the integration and ontology-based semantic querying of biomedical databases, which are often annotated with ontology concepts. We aim to provide a middle layer between ontology repositories and semantically annotated databases to support semantic queries directly in the databases with expressive standard database query languages. We have developed a semantic query engine that provides semantic reasoning and query processing, and translates the queries into ontology repository operations on NCBO BioPortal. Semantic operators are implemented in the database as user-defined functions extended to the database engine, so semantic queries can be directly specified in standard database query languages such as SQL and XQuery. The system provides caching management to boost query performance. The system is highly adaptable to support different ontologies through easy customizations. We have implemented the system, DBOntoLink, as open source software, which supports major ontologies hosted at BioPortal. DBOntoLink supports a set of common ontology-based semantic operations and has them fully integrated with the IBM DB2 database management system. The system has been deployed and evaluated with an existing biomedical database for managing and querying image annotations and markups (AIM). Our performance study demonstrates the high expressiveness of semantic queries and their high efficiency.
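The pattern described here, semantic operators exposed as database UDFs so that ontology reasoning can be invoked from plain SQL, can be sketched as follows. SQLite stands in for IBM DB2, and the tiny is-a hierarchy stands in for an NCBO BioPortal lookup; both substitutions are assumptions for illustration only.

```python
# Sketch: a semantic operator registered as a SQL user-defined function,
# so ontology reasoning is callable from standard SQL (as DBOntoLink does
# in DB2). SQLite and the toy hierarchy are stand-ins, not the paper's stack.
import sqlite3

SUBCLASS = {"adenocarcinoma": "carcinoma", "carcinoma": "neoplasm"}

def is_subclass_of(term, ancestor):
    # Walk the (toy) ontology upward; a real system would query BioPortal.
    while term is not None:
        if term == ancestor:
            return 1
        term = SUBCLASS.get(term)
    return 0

con = sqlite3.connect(":memory:")
con.create_function("IS_SUBCLASS_OF", 2, is_subclass_of)
con.execute("CREATE TABLE annotation (image_id INTEGER, concept TEXT)")
con.executemany("INSERT INTO annotation VALUES (?, ?)",
                [(1, "adenocarcinoma"), (2, "fibrosis")])
# The semantic operator is used directly inside standard SQL:
rows = con.execute(
    "SELECT image_id FROM annotation WHERE IS_SUBCLASS_OF(concept, 'neoplasm')"
).fetchall()
print(rows)  # -> [(1,)]
```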
3

Van Aken, Dana, Dongsheng Yang, Sebastien Brillard, Ari Fiorino, Bohan Zhang, Christian Bilien, and Andrew Pavlo. "An inquiry into machine learning-based automatic configuration tuning services on real-world database management systems." Proceedings of the VLDB Endowment 14, no. 7 (March 2021): 1241–53. http://dx.doi.org/10.14778/3450980.3450992.

Abstract:
Modern database management systems (DBMS) expose dozens of configurable knobs that control their runtime behavior. Setting these knobs correctly for an application's workload can improve the performance and efficiency of the DBMS. But because of their complexity, tuning a DBMS often requires considerable effort from experienced database administrators (DBAs). Recent work on automated tuning methods using machine learning (ML) has shown that they can achieve better performance than expert DBAs. These ML-based methods, however, were evaluated on synthetic workloads with limited tuning opportunities, and thus it is unknown whether they provide the same benefit in a production environment. To better understand ML-based tuning, we conducted a thorough evaluation of ML-based DBMS knob tuning methods on an enterprise database application. We use the OtterTune tuning service to compare three state-of-the-art ML algorithms on an Oracle installation with a real workload trace. Our results with OtterTune show that these algorithms generate knob configurations that improve performance by 45% over enterprise-grade configurations. We also identify deployment and measurement issues that were overlooked by previous research in automated DBMS tuning services.
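A hedged sketch of the loop such tuning services automate: propose a knob configuration, replay the workload, measure, keep the best. OtterTune itself uses Gaussian-process and neural models rather than the random proposals used here, and the knob names, ranges, and benchmark function below are invented for illustration.

```python
# Skeleton of an automated knob-tuning loop. Random search stands in for the
# ML models (GP regression, DNN, RL) compared in the paper; knob names and
# the synthetic benchmark are hypothetical.
import random

KNOBS = {"buffer_pool_mb": (128, 8192), "io_threads": (1, 64)}

def benchmark(config):
    # Stand-in for applying the config and running the real workload;
    # returns a throughput-like score (higher is better).
    return (config["buffer_pool_mb"] ** 0.5) * (config["io_threads"] ** 0.3)

def propose():
    return {k: random.randint(lo, hi) for k, (lo, hi) in KNOBS.items()}

best_cfg, best_score = None, float("-inf")
for _ in range(50):                      # tuning iterations (observations)
    cfg = propose()
    score = benchmark(cfg)
    if score > best_score:
        best_cfg, best_score = cfg, score
print(best_cfg, round(best_score, 1))
```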
4

Raza, Basit, Yogan Jaya Kumar, Ahmad Kamran Malik, Adeel Anjum, and Muhammad Faheem. "Performance prediction and adaptation for database management system workload using Case-Based Reasoning approach." Information Systems 76 (July 2018): 46–58. http://dx.doi.org/10.1016/j.is.2018.04.005.

5

Memon, Muhammad Qasim, Jingsha He, Aasma Memon, Khurram Gulzar Rana, and Muhammad Salman Pathan. "Query Processing for Time Efficient Data Retrieval." Indonesian Journal of Electrical Engineering and Computer Science 9, no. 3 (March 1, 2018): 784. http://dx.doi.org/10.11591/ijeecs.v9.i3.pp784-788.

Abstract:
<p class="TTPAbstract">In database management system (DBMS) retrieving data through structure query language is an essential aspect to find better execution plan for performance. In this paper, we incorporated database objects to optimize query execution time and its cost by vanishing poorly SQL statements. We proposed a method of evolving and inserting database constraints as database objects embedded with queries either to add them for the sake of transactions required by user to detect those queries for the betterment of performance. We took analysis on several databases while processing queries itself and assimilate real time database workload with the bunch of transactions are invoked in comparison with tuning approaches. These database objects are coded in procedural language environment pertaining rules to make it worth and are merged into queries offering improved execution plan.</p>
6

Çelikyürek, Hasan, Kadir Karakuş, and Murat Kara. "Hayvancılık İşletmelerinde Kayıtların Veri Tabanlarında Saklanması ve Değerlendirilmesi" [Storage and Evaluation of Livestock Enterprise Records in Databases]. Turkish Journal of Agriculture - Food Science and Technology 7, no. 12 (December 14, 2019): 2089. http://dx.doi.org/10.24925/turjaf.v7i12.2089-2094.2793.

Abstract:
Data stored over the long term in livestock enterprises plays a crucial role in increasing productivity in animal production, revealing animal breeding values, meeting the need for qualified breeding stock, running effective breeding organizations, obtaining high income, and determining which animals to keep as breeders. Important technical data kept in livestock enterprises include records on rams, bulls, and goats and their reproduction, growth and development, and yields (animal weight and wool yield in small ruminants, body weight gain, feed consumption, lactation, and milk yield), reproductive performance measures, slaughter and carcass dimensions and characteristics such as meat quality, and animal diseases and vaccination practices. Tracking animals and storing their identifying information in a database were made compulsory, as part of Turkey's European Union conformity programme, by regulation number 27137, the "Regulation on the identification, registration and monitoring of sheep and goat type of animals", published in the Official Gazette by the Ministry of Agriculture and Forestry on 10.02.2009. Nowadays, database software such as MySQL, MS SQL, PostgreSQL, Oracle, Firebird, IBM DB2 and MS Access is used to obtain sound data and store it safely. Knowledge of the use and cost of this database software and of Database Management Systems (DBMS) is important for the enterprise. This study aims to give information about software that adds value to the enterprise and the costs of operating it.
7

Gorbenko, Anatoliy, and Olga Tarasyuk. "Exploring Timeout as a Performance and Availability Factor of Distributed Replicated Database Systems." Radioelectronic and Computer Systems, no. 4 (November 27, 2020): 98–105. http://dx.doi.org/10.32620/reks.2020.4.09.

Abstract:
Distributed replicated data stores like Cassandra, HBase, and MongoDB were proposed to effectively manage Big Data sets whose volume, velocity, and variability are difficult to handle with traditional Relational Database Management Systems. Trade-offs between consistency, availability, partition tolerance, and latency are intrinsic to such systems. Although the relations between these properties were identified by the well-known CAP theorem in qualitative terms, it is still necessary to quantify how different consistency and timeout settings affect system latency. The paper reports the results of Cassandra's performance evaluation using the YCSB benchmark and experimentally demonstrates how read latency depends on the consistency settings and the current database workload. These results clearly show that stronger data consistency increases system latency, which is in line with the qualitative implication of the CAP theorem. Moreover, Cassandra latency and its variation depend considerably on the system workload. The distributed nature of such a system does not always guarantee that the client receives a response from the database within a finite time; when this happens, the result is a so-called timing failure: the response is received too late or not at all. We also examine the role of the application timeout, a fundamental part of all distributed fault-tolerance mechanisms working over the Internet and the main error-detection mechanism here, as the key determinant in the interplay between system availability and responsiveness. It is quantitatively shown how different timeout settings affect system availability and the average servicing and waiting time. Although many modern distributed systems, including Cassandra, use static timeouts, the most promising approach is to set timeouts dynamically at run time to balance performance and availability and to improve the efficiency of the fault-tolerance mechanisms.
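A sketch of the kind of measurement the paper reports: read latency under different consistency levels, with a client-side timeout acting as the error detector. The keyspace, table, and the 500 ms static timeout are illustrative assumptions; the snippet requires the DataStax driver (pip install cassandra-driver) and a running cluster.

```python
# Measure how read latency varies with the consistency level in Cassandra,
# using a static application timeout as the timing-failure detector.
# Keyspace/table names and the timeout value are illustrative only.
import time
from cassandra import ConsistencyLevel, OperationTimedOut
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("ycsb")            # hypothetical YCSB keyspace

for level in (ConsistencyLevel.ONE, ConsistencyLevel.QUORUM,
              ConsistencyLevel.ALL):
    stmt = SimpleStatement("SELECT * FROM usertable LIMIT 100",
                           consistency_level=level)
    start = time.perf_counter()
    try:
        session.execute(stmt, timeout=0.5)   # static 500 ms application timeout
        ms = (time.perf_counter() - start) * 1000
        print(level, "read latency:", round(ms, 2), "ms")
    except OperationTimedOut:
        # A timing failure: the response arrived too late or not at all.
        print(level, "timed out; counted against availability")
```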
8

Fan, Yimeng, Yu Liu, Haosong Chen, and Jianlong Ma. "Data Mining-based Design and Implementation of College Physical Education Performance Management and Analysis System." International Journal of Emerging Technologies in Learning (iJET) 14, no. 06 (March 29, 2019): 87. http://dx.doi.org/10.3991/ijet.v14i06.10159.

Abstract:
The purpose of this paper was to effectively apply data mining technology to scientifically analyze students' physical education (PE) performance so as to serve physical teaching. The methodology was to apply an ASP.NET three-layer architecture and to design and implement a college PE performance management and analysis system, after fully analyzing the system requirements, on the Visual Studio 2008 software development platform with the SQL Server 2005 database platform. Based on data mining technology, students' PE performances were analyzed, and a decision tree algorithm was used to make valuable judgments on student performance. The results indicated that applying computer technology to the management and analysis of college PE performance can effectively reduce the teaching and management workload of PE teachers so that they can concentrate more on the quality of physical education.
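A minimal sketch of the decision-tree step the abstract mentions, using scikit-learn rather than the paper's ASP.NET/SQL Server stack; the fitness features and grade labels below are fabricated for illustration.

```python
# Toy decision-tree classification of PE performance; features and labels
# are invented, and scikit-learn stands in for the paper's own implementation.
from sklearn.tree import DecisionTreeClassifier

# (sprint_seconds, situps_per_minute, endurance_run_seconds)
X = [[8.1, 40, 230], [7.2, 52, 205], [9.5, 28, 280], [7.8, 45, 215]]
y = ["pass", "excellent", "fail", "pass"]    # PE performance labels

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(model.predict([[8.0, 42, 225]]))       # predicted grade for a new student
```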
9

Wang, Chenxiao, Zach Arani, Le Gruenwald, Laurent d'Orazio, and Eleazar Leal. "Re-optimization for Multi-objective Cloud Database Query Processing using Machine Learning." International Journal of Database Management Systems 13, no. 1 (February 28, 2021): 21–40. http://dx.doi.org/10.5121/ijdms.2021.13102.

Abstract:
In cloud environments, hardware configurations, data usage, and workload allocations are continuously changing. These changes make it difficult for the query optimizer of a cloud database management system (DBMS) to select an optimal query execution plan (QEP). In order to optimize a query with a more accurate cost estimation, performing query re-optimizations during the query execution has been proposed in the literature. However, some of these re-optimizations may not provide any performance gain in terms of query response time or monetary costs, which are the two optimization objectives for cloud databases, and may also have negative impacts on the performance due to their overheads. This raises the question of how to determine when a re-optimization is beneficial. In this paper, we present a technique called ReOptML that uses machine learning to enable effective re-optimizations. This technique executes a query in stages, employs a machine learning model to predict whether a query re-optimization is beneficial after a stage is executed, and invokes the query optimizer to perform the re-optimization automatically. The experiments comparing ReOptML with existing query re-optimization algorithms show that ReOptML improves query response time from 13% to 35% for skew data and from 13% to 21% for uniform data, and improves monetary cost paid to cloud service providers from 17% to 35% on skew data.
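The decision step described here can be sketched as a small binary classifier consulted after each execution stage. The features, training data, and the optimizer hook are invented stand-ins, not ReOptML's actual model.

```python
# Sketch of the "should we re-optimize?" decision after each query stage.
# A logistic model stands in for ReOptML's learned predictor; features and
# the optimizer callback are hypothetical.
from sklearn.linear_model import LogisticRegression

# (estimated/actual row-count ratio, fraction_of_query_done, stage_runtime_s)
X_train = [[0.9, 0.8, 1.0], [5.0, 0.2, 9.0], [1.1, 0.5, 2.0], [8.0, 0.1, 20.0]]
y_train = [0, 1, 0, 1]                 # 1 = re-optimization was beneficial

model = LogisticRegression().fit(X_train, y_train)

def after_stage(features, reoptimize):
    # Invoke the optimizer hook only when the model predicts the overhead
    # of re-optimizing the remaining plan will be repaid.
    if model.predict([features])[0] == 1:
        reoptimize()

after_stage([6.0, 0.3, 12.0], lambda: print("re-optimizing remaining stages"))
```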
10

Tiwari, Rajeev, Shuchi Upadhyay, Gunjan Lal, and Varun Tanwar. "Project Workflow Management: A Cloud based Solution-Scrum Console." International Journal of Engineering & Technology 7, no. 4 (September 20, 2018): 2457. http://dx.doi.org/10.14419/ijet.v7i4.15799.

Abstract:
Today, there is a data workload that needs to be managed efficiently. There are many ways to manage and schedule processes, which can impact the performance and quality of the product, and highly available, scalable web hosting can be a complex and expensive proposition. Traditional web architectures don't offer reliability. In this work, a Scrum Console is designed for managing a process; it is hosted on Amazon Web Services (AWS) [2], which provides a reliable, scalable, highly available and high-performance web application infrastructure. The Scrum Console Platform facilitates the collaboration of various members of a team to manage projects together. The platform has been developed using JSP, Hibernate and the Oracle 12c Enterprise Edition Database, and is deployed as a web application on AWS Elastic Beanstalk, which automates the deployment, management and monitoring of the application while relying on underlying AWS resources such as EC2, S3, RDS, CloudWatch, autoscaling, etc.
11

Guo, Shengnan, and Jianqiu Xu. "CPRQ: Cost Prediction for Range Queries in Moving Object Databases." ISPRS International Journal of Geo-Information 10, no. 7 (July 8, 2021): 468. http://dx.doi.org/10.3390/ijgi10070468.

Abstract:
Predicting query cost plays an important role in moving object databases. Accurate predictions help database administrators effectively schedule workloads and achieve optimal resource allocation strategies. There are some works focusing on query cost prediction, but most of them employ analytical methods to obtain an index-based cost prediction model. The accuracy can be seriously challenged as the workload of the database management system becomes more and more complex. Differing from the previous work, this paper proposes a method called CPRQ (Cost Prediction of Range Query) which is based on machine-learning techniques. The proposed method contains four learning models: the polynomial regression model, the decision tree regression model, the random forest regression model, and the KNN (k-Nearest Neighbor) regression model. Using R-squared and MSE (Mean Squared Error) as measurements, we perform an extensive experimental evaluation. The results demonstrate that CPRQ achieves high accuracy and the random forest regression model obtains the best predictive performance (R-squared is 0.9695 and MSE is 0.154).
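The evaluation loop described here (fit a regressor on query features, then score it with R-squared and MSE) is easy to sketch. The synthetic features standing in for the paper's real range-query descriptors are assumptions for illustration.

```python
# Sketch of the CPRQ-style evaluation: train a random forest regressor on
# range-query features and report R-squared and MSE. The synthetic data
# (query window area, object density) is a stand-in for the real descriptors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 2))                 # query features
y = 5 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, 500)  # synthetic query cost

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R-squared:", round(r2_score(y_te, pred), 4))
print("MSE:", round(mean_squared_error(y_te, pred), 4))
```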
12

Ferro, Gustavo, Carlos A. Romero, and Exequiel Romero-Gómez. "Efficient courts? A frontier performance assessment." Benchmarking: An International Journal 25, no. 9 (November 29, 2018): 3443–58. http://dx.doi.org/10.1108/bij-09-2017-0244.

Abstract:
Purpose: The purpose of this paper is to build performance indicators to assess efficiency for First Instance Federal Courts in Argentina and to study the determinants of efficiency in Criminal Instruction Courts.
Design/methodology/approach: The efficiency scores were determined using data envelopment analysis with a database for the period 2006–2010. Then, a search for the efficiency determinants in the Criminal Instruction Courts was performed. Four output-oriented models were developed based on various explanatory and environmental variables.
Findings: Workload is an environmental variable that significantly increased the average levels of efficiency. When analyzing explanatory factors of the efficiency levels of the Criminal Instruction Courts, surrogate judges and temporary staff are more efficient on average than tenured judges and staff.
Research limitations/implications: The method chosen permits flexibility in the analysis. It would be interesting for future research to develop the underlying economic model using econometric methods.
Practical implications: This paper's contribution is twofold: first, to estimate the relative efficiency of all First Instance Federal Courts in every jurisdiction; and second, to explain the differences in efficiency in the Criminal Instruction Courts.
Social implications: This study has the potential to greatly impact the discussion of how to structure judicial procedures (from benchmarking between different branches of Federal justice) and the design of incentives in a judicial career (e.g. tenured vs temporary judges and clerical employees, the role of seniority of judges and clerical employees, and the impact of gender on performance).
Originality/value: To the authors' knowledge, this paper is the first scholarly article to measure efficiency in the Argentine justice system using mathematical programming and econometric methods. It has academic interest since it advances the comprehension of the underlying production function of justice service provision. The paper also has social and practical implications since it contributes to institutional design and opens the discussion for further sequels with other methods and complementary purposes.
13

Martin, Scott A., and Patricia E. Styer. "Assessing Performance, Productivity, and Staffing Needs in Pathology Groups: Observations From the College of American Pathologists PathFocus Pathology Practice Activity and Staffing Program." Archives of Pathology & Laboratory Medicine 130, no. 9 (September 1, 2006): 1263–68. http://dx.doi.org/10.5858/2006-130-1263-appasn.

Abstract:
Context.—The PathFocus program affords the opportunity for participating pathology practices to be compared with other practices that have similar characteristics. Objectives.—To demonstrate variability in workload among different pathology practice settings and to determine practice characteristics that influence staffing levels. Design.—Among 228 group practices in the PathFocus database, group practice settings were analyzed. The practice characteristics that were highly correlated with staffing levels are presented. Results.—Activities that showed significant variation include surgical pathology (P = .003), cytopathology (P = .006), miscellaneous (P = .006), and professional development (P = .003). Group practices report up to 4% of hours devoted to clinical pathology consultation, on average, and from 20% to 25% to administration and management. There are strong positive associations with staffing levels for lower-complexity Current Procedural Terminology code volumes (P < .001) and higher-complexity Current Procedural Terminology code volumes (P = .006). Conclusion.—The settings of pathology practices carry specific commitments of time that are different and not equally distributed among all practice settings and strongly influence staffing requirements.
14

Ohnemus, Kenneth R., and David W. Biers. "Retrospective versus Concurrent Thinking-Out-Loud in Usability Testing." Proceedings of the Human Factors and Ergonomics Society Annual Meeting 37, no. 17 (October 1993): 1127–31. http://dx.doi.org/10.1177/154193129303701701.

Abstract:
This study examined the effect of concurrent and retrospective thinking-out-loud (TOL) on the frequency and value of user verbalizations during a software usability test. Three groups of users first learned to use an off-the-shelf database management package by means of a short tutorial and then engaged in six structured tasks. Users in the Concurrent condition thought-out-loud while performing the six tasks, whereas those in the Retrospective-Immediate and Retrospective-Delayed conditions thought-out-loud while watching the videotape of their interaction with the software. Results indicated that there were no significant differences among the three conditions in performance or subjective evaluation of the software. More importantly, a verbal protocol analysis revealed that users in the Retrospective conditions spent more time making statements which had high value for designers than in the Concurrent condition. The value of verbalizations generated by the Retrospective conditions was not impacted by the 24 hour delay. The results were interpreted within the context of differences in workload and in terms of the trade-off between the increased value gained by using the retrospective paradigm versus the increased cost of additional time to conduct the usability test.
15

Jung, Josephine, Jignesh Tailor, Emma Dalton, Laurence J Glancz, Joy Roach, Rasheed Zakaria, Simon P Lammy, et al. "Management Evaluation of Metastasis in the Brain (MEMBRAIN) – A UK & Ireland prospective, multicentre observational study." Neuro-Oncology 21, Supplement 4 (October 2019): iv4. http://dx.doi.org/10.1093/neuonc/noz167.016.

Abstract:
Background: In recent years an increasing number of patients with brain metastasis have been referred to the neuro-oncology multi-disciplinary team (NMDT). Our aim was to determine whether referrals of this group of patients to the NMDT in the UK & Ireland comply with NICE guidelines and to assess referral volume, the quality of information provided, and its impact on NMDT decision-making.
Methods: Prospective multicentre observational study including all adult patients referred with ≥1 cerebral metastasis. Data were collected in neurosurgical units from 11/2017 to 02/2018. Demographics, primary disease, Karnofsky performance status (KPS), imaging and treatment recommendation were entered into an online database.
Results: 1049 patients were analysed from 24 neurosurgical units. Median age was 63 [range 21–93] years, with a median of 3 [range 1–17] referrals per NMDT. The most common primary malignancies were lung (36.5%, n=383), breast (18.5%, n=194) and melanoma (12.0%, n=126). 51.6% (n=541) of the referrals to the NMDT were within the NICE 2006 guidelines and resulted in specialist intervention being offered in 68.8% of cases. 41.2% (n=197) of patients referred outside of the NICE 2006 guidelines were offered specialist treatment. NMDT decision-making was influenced by the number of metastases, age, KPS, primary disease status and extent of extracranial disease (univariate logistic regression, p<0.0001) as well as metastasis location/histology (p<0.05).
Conclusions: This study confirmed a national change in the culture of referral patterns. We identified a delay in NMDT decision-making in ~20% of cases, contributing to increased NMDT workload. New stratification tools may be needed to reflect advancements in diagnostics and treatment modalities.
16

Butman, Boris S. "Soviet Shipbuilding: Productivity improvement Efforts." Journal of Ship Production 2, no. 04 (November 1, 1986): 225–37. http://dx.doi.org/10.5957/jsp.1986.2.4.225.

Abstract:
Constant demand for new naval and commercial vessels has created special conditions for the Government-owned Soviet shipbuilding industry, which has remained practically unaffected by the world shipbuilding crisis. On the other hand, such chronic diseases of the centralized economy as lack of incentive, material shortage and poor workmanship cause specific problems for ship construction. Being technically and financially unable to rapidly improve the overall technology level and performance of the entire industry, the Soviets concentrate their efforts on certain important areas and have achieved significant results, especially in welding and cutting titanium and aluminum alloys, modular production methods, standardization, etc. All productivity improvement efforts are supported by an army of highly educated engineers and scientists at shipyards, in multiple scientific, research and design institutions.

Discussion. Edwin J. Petersen, Todd Pacific Shipyards: Three years ago I addressed the Ship Production Symposium as chairman of the Ship Production Committee and outlined some major factors which had contributed to the U.S. shipbuilding industry's remarkable achievements in building and maintaining the world's largest naval and merchant fleets during the five-year period starting just before World War II. The factors were as follows: (1) there was a national commitment to get the job done; (2) the shipbuilding industry was recognized as a needed national resource; (3) there was a dependable workload; (4) standardization was extensively and effectively utilized; (5) shipbuilding work was effectively organized. Although these lessons appear to have been lost by our Government since World War II, the paper indicates that the Soviet Union has picked up these principles and has applied them very well to its current shipbuilding program. The paper also gives testimony to the observation that the Soviet Government recognizes the strategic and economic importance of a strong merchant fleet as well as a powerful naval fleet. In reviewing the paper, I found great similarity between the Soviet shipbuilding productivity improvement efforts and our own efforts or goals under the National Shipbuilding Research Program in the following areas: welding technology, flexible automation (robotics), application of group technology, standardization, facilities development, and education and training. In some areas, the Soviet Union appears to be well ahead of the United States in improving the shipbuilding process. Most noteworthy among these is the stable long- and medium-range planning that is possible by virtue of the use of and adherence to the "Table of Vessel Classes." It will be obvious to most who hear and read these comments what a vast and significant improvement in shipbuilding costs and schedules could be achieved with a relatively dependable 15-year master ship procurement plan for the U.S. naval and merchant fleets. Another area where the Soviet Union appears to lead the United States is in the integration of ship component suppliers into the shipbuilding process. This has been recognized as a vital step by the National Shipbuilding Research Program, but so far we have not made significant progress. A necessary prerequisite for this "supplier integration" is extensive standardization of ship components, yet another area in which the Soviets have achieved significantly greater progress than we have.
Additional areas of Soviet advantage are the presence of a multilevel research and development infrastructure well supported by highly educated scientists, engineering and technical personnel; and better integration of formally educated engineering and technical personnel into the ship production process. In his conclusion, the author lists a number of problems facing the Soviet economy that adversely affect shipbuilding productivity. Perhaps behind this listing we can delve out some potential U.S. shipbuilding advantages. First, production systems in U.S. shipyards (with the possible exception of naval shipyards) are probably more flexible and adjustable to meet new circumstances as a consequence of not being constrained by a burdensome centralized bureaucracy, as is the case with Soviet shipyards. Next, such initiatives as the Ship Production Committee's "Human Resources Innovation" projects stand a better chance of achieving a product-oriented "production team" relationship among labor, management, and technical personnel than the more rigid Soviet system, especially in view of the ability of U.S. shipyard management to offer meaningful financial incentives without the kind of bureaucratic constraints imposed in the Soviet system. Finally, the current U.S. Navy/shipbuilding industry cooperative effort to develop a common engineering database should lead to a highly integrated and disciplined ship design, construction, operation, and maintenance system for naval ships (and subsequently for commercial ships) that will ultimately restore the U.S. shipbuilding process to a leadership position in the world marketplace (additional references [16] and [17]). On that tentatively positive note, it seems fitting to close this discussion with a question: Is the author aware of any similar Soviet effort to develop an integrated computer-aided design, production and logistics support system? The author is to be congratulated on an excellent, comprehensive insight into the Soviet shipbuilding process and productivity improvement efforts that should give us all adequate cause not to be complacent in our own efforts.

Peter M. Palermo, Naval Sea Systems Command: The author presents an interesting paper that unfortunately leaves this reader with a number of unanswered questions. The paper is a paradox. It depicts a system consisting of a highly educated work force, advanced fabrication processes including the use of standardized hull modules, and sophisticated materials and welding processes, and yet in the author's words they suffer from "low productivity, poor product quality, . . . and the rigid production systems which resists the introduction of new ideas." Is it possible that incentive, motivation, and morale play an equally significant role in achieving quality and producibility advances? Can the author discuss underlying reasons for quality problems in particular, or can we assume that the learning curves of Figs. 5 and 6 are representative of quality improvement curves? It has been my general impression that quality will improve with the application of high-tech fabrication procedures, enclosed fabrication ways, and the availability of highly educated welding engineers on the building ways, and that productivity would improve with the implementation of modular or zone outfitting techniques coupled with the quality improvements. Can the author give his impressions of the impact of these innovations in the U.S. shipbuilding industry vis-a-vis the Soviet industry?
Many of the welding processes cited in the paper are also familiar to the free world, with certain notable exceptions concerning application in Navy shipbuilding. For example, (1) electroslag welding is generally confined to single-pass welding of heavy plates; application to thinner plates, 1¼ in. and less when certified, would permit its use in more applications than heretofore. (2) Electron beam welding is generally restricted to high-technology machinery parts; vacuum chamber size restricts its use for larger components (thus it must be assumed that the Soviets have solved the vacuum chamber problem or have much larger chambers). (3) Likewise, laser welding has had limited use in U.S. shipbuilding. An interesting theme that runs throughout the paper, but is not explicitly addressed, is the quality of Soviet ship fitting. The use of high-tech welding processes and the mention of "remote controlled tooling for welding and X-ray testing the butt, and for following painting" imply significant ship fitting capabilities for fitting and positioning. This is particularly true if modules are built in one facility, then outfitted and assembled elsewhere depending on the type of ship required. Any comments concerning Soviet ship fitting capabilities would be appreciated. The discussion on modular construction seems to indicate that the Soviets have a "standard hull module" that is used for different types of vessels, and if the use of these hull modules permits increasing hull length without changes to the fore and aft ends, it can be assumed that they are based on a standard structural design. That being the case, the midship structure will be overdesigned for many applications and optimally designed for very few. Recognizing that the initial additional cost for such a piece of hull structure is relatively minimal, it cannot be forgotten that the lifecycle costs of transporting unnecessary hull weight around can have significant fuel cost impacts. If I have perceived the modular construction approach correctly, then I am truly intrigued concerning the methods for handling the distributive systems. In particular, during conversion when the ship is lengthened, how are the electrical, fluid, communications, and other distributive systems broken down, reassembled and tested? "Quick connect couplings" for these types of systems at the module breaks are one particular area where economies can be achieved when zone construction methods become the order of the day in U.S. Navy ships. The author's comments in this regard would be most welcome. The design process as presented is somewhat different from U.S. Navy practice. In U.S. practice, Preliminary and Contract design are developed by the Navy. Detail design, the development of the working drawings, is conducted by the lead shipbuilder. While the detail design drawings can be used by follow shipbuilders, flexibility is permitted to facilitate unique shipbuilding or outfitting procedures. Even the contract drawings supplied by the Navy can be modified, upon Navy approval, to permit application of unique shipbuilder capabilities. The large number of college-trained personnel entering the Soviet shipbuilding and allied fields annually is mind-boggling. According to the author's estimation, a minimum of about 6500 college graduates, 5000 of whom have M.S. degrees, enter these fields each year. It would be most interesting to see a breakdown of these figures, in particular, how many naval architects and welding engineers are included in them?
These are disciplines with relatively few personnel entering the Navy design and shipbuilding field today. For example, in 1985 in all U.S. colleges and universities, there were only 928 graduates (B.S., M.S. and Ph.D.) in marine, naval architecture and ocean engineering and only 1872 graduates in materials and metallurgy. The number of these graduates that entered the U.S. shipbuilding field is unknown. Again, the author is to be congratulated for providing a very thought-provoking paper.

Frank J. Long, Win/Win Strategies: This paper serves not only as a chronicle of some of the productivity improvement efforts in Soviet shipbuilding but also as an important reminder of the fruits of those efforts. While most Americans have an appreciation of the strengths of the Russian Navy, this paper serves to bring into clearer focus the Russians' entire maritime might in their naval, commercial, and fishing fleets. Indeed, no other nation on earth has a greater maritime capability. It is generally acknowledged that the Soviet Navy is the largest in the world. When considering the fact that the commercial and fishing fleets are, in many military respects, arms of the naval fleet, we can more fully appreciate how awesome Soviet maritime power truly is. The expansion of its maritime capabilities is simply another but highly significant aspect of Soviet worldwide ambitions. The development and updating of "Setka Typov Sudov" (Table of Vessel Classes), which the author describes, is a classic example of the Soviet planning process. As the author states, "A mighty fishing and commercial fleet was built in accordance with a 'Setka' which was originally developed in the 1960's. And an even more impressive example is the rapid expansion of the Soviet Navy." In my opinion it is not mere coincidence that the Russians embarked on this course in the 1960's. That was the beginning of the coldest of cold war periods: Francis Gary Powers's U-2 plane was downed by the Russians on May 1, 1960; the mid-May 1960 Four Power Geneva Summit was a bust; the Berlin Wall was erected in 1961; and, in 1962, we had the Cuban Missile Crisis. The United States' maritime embargo capability in that crisis undoubtedly influenced the Soviets' planning process. It is a natural and normal function of a state-controlled economy with its state-controlled industries to act to bring about controlled productivity improvement developments in exactly the key areas discussed in the author's paper. As the author states, "All innovations at Soviet shipyards have originated at two main sources: domestic development and adaptation of new ideas introduced by leading foreign yards, or most likely a combination of both. Soviet shipbuilders are very fast learners; moreover, their own experience is quite substantial." The Ship Production Committee of SNAME has organized its panels to conduct research in many of these same areas for productivity improvement purposes. For example, addressing the areas of technology and equipment are Panels SP-1 and 3, Shipbuilding Facilities and Environmental Effects, and Panel SP-7, Shipbuilding Welding. Shipbuilding methods are the province of SP-2; outfitting and production aids and engineering and scientific support are the province of SP-4, Design Production Integration.
As I read through the descriptions of the processes that led to the productivity improvements, I was hoping to learn more about the organizational structure of Soviet shipyards, the managerial hierarchy and how work is organized by function or by craft in the shipyard. (I would assume that for all intents and purposes, all Russian yards are organized in the same way.) American shipyard management is wedded to the notion that American shipbuilding suffers immeasurably from a productivity standpoint because of limitations on management's ability to assign workers across craft lines. It is unlikely that this limitation exists in Soviet shipyards. If it does not, how is the unfettered right of assignment optimized? What are the tangible, measurable results? I believe it would have been helpful, also, for the author to have dedicated some of the paper to one of the most important factors in improvement in the labor-intensive shipbuilding industry—the shipyard worker. There are several references to worker problems—absenteeism, labor shortage, poor workmanship, and labor discipline. The reader is left with the impression that the Russians believe that either those are unsolvable problems or have a priority ranking significantly inferior to the organizational, technical, and design efforts discussed. As a case in point, the author devotes a complete section to engineering education and professional training but makes no mention of education or training programs for blue-collar workers. It would seem that a paper on productivity improvement efforts in Soviet shipbuilding would address this most important element. My guess is that the Russians have considerable such efforts underway and it would be beneficial for us to learn of them.
17

Zhang, Ji, Ke Zhou, Guoliang Li, Yu Liu, Ming Xie, Bin Cheng, and Jiashu Xing. "CDBTune+: An Efficient Deep Reinforcement Learning-Based Automatic Cloud Database Tuning System." VLDB Journal, June 5, 2021. http://dx.doi.org/10.1007/s00778-021-00670-9.

Abstract:
Configuration tuning is vital to optimize the performance of a database management system (DBMS). It becomes more tedious and urgent for cloud databases (CDB) due to diverse database instances and query workloads, which make the job of a database administrator (DBA) very difficult. Existing solutions for automatic DBMS configuration tuning have several limitations. Firstly, they adopt a pipelined learning model but cannot optimize the overall performance in an end-to-end manner. Secondly, they rely on large-scale high-quality training samples which are hard to obtain. Thirdly, existing approaches cannot recommend reasonable configurations for a large number of knobs to tune whose potential values live in such a high-dimensional continuous space. Lastly, in cloud environments, existing approaches can hardly cope with changes of hardware configurations and workloads, and have poor adaptability. To address these challenges, we design an end-to-end automatic CDB tuning system, CDBTune+, using deep reinforcement learning (RL). CDBTune+ utilizes the deep deterministic policy gradient method to find the optimal configurations in a high-dimensional continuous space. CDBTune+ adopts a trial-and-error strategy to learn knob settings with a limited number of samples to accomplish the initial training, which alleviates the necessity of collecting a massive amount of high-quality samples. CDBTune+ adopts the reward-feedback mechanism in RL instead of traditional regression, which enables end-to-end learning, accelerates the convergence speed of our model, and improves the efficiency of online tuning. Besides, we propose effective techniques to improve the training and tuning efficiency of CDBTune+ for practical usage in a cloud environment. We conducted extensive experiments under 7 different workloads on real cloud databases to evaluate CDBTune+. Experimental results showed that CDBTune+ adapts well to a new hardware environment or workload, and significantly outperformed the state-of-the-art tuning tools and DBA experts.
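A skeletal sketch of the reward-feedback loop described here: an agent maps DBMS state to continuous knob settings and learns from a reward computed from throughput and latency changes. CDBTune+ trains a DDPG actor-critic network; the trivial hill-climbing "agent" and the synthetic benchmark below are stand-ins invented for illustration.

```python
# Trial-and-error knob tuning driven by an RL-style reward signal.
# The hill-climbing agent below is a toy stand-in for the DDPG actor-critic
# CDBTune+ actually trains; metrics and knob semantics are hypothetical.
import random

def apply_knobs_and_run(knobs):
    # Stand-in for applying the configuration and replaying the workload;
    # returns (throughput_tps, p99_latency_ms).
    quality = sum(knobs) / len(knobs)
    return 1000 * quality, 100 * (1.5 - quality)

def reward(tps, lat, base_tps, base_lat):
    # Reward rises with throughput gain and falls with latency growth.
    return (tps - base_tps) / base_tps - (lat - base_lat) / base_lat

knobs = [0.5] * 8                                  # normalized knob vector
base_tps, base_lat = apply_knobs_and_run(knobs)
for step in range(20):                             # trial-and-error episodes
    candidate = [min(1.0, max(0.0, k + random.gauss(0, 0.1))) for k in knobs]
    tps, lat = apply_knobs_and_run(candidate)
    if reward(tps, lat, base_tps, base_lat) > 0:   # keep improving actions
        knobs, base_tps, base_lat = candidate, tps, lat
print("tuned knobs:", [round(k, 2) for k in knobs])
```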
18

Raza, Basit, Abdul Mateen, Muhammad Sher, and Mian Muhammad Awais. "Self-Prediction of Performance Metrics for the Database Management System Workload." International Journal of Computer Theory and Engineering, 2012, 198–201. http://dx.doi.org/10.7763/ijcte.2012.v4.450.

19

Wangdi, Kinley, Haribondu Sarma, John Leaburi, Emma McBryde, and Archie C. A. Clements. "Evaluation of the malaria reporting system supported by the District Health Information System 2 in Solomon Islands." Malaria Journal 19, no. 1 (October 17, 2020). http://dx.doi.org/10.1186/s12936-020-03442-y.

Abstract:
Background: The District Health Information System 2 (DHIS2) is used to support health information management in 67 countries, including Solomon Islands. However, there have been few published evaluations of the performance of DHIS2-enhanced disease reporting systems, in particular for monitoring infectious diseases such as malaria. The aim of this study was to evaluate DHIS2-supported malaria reporting in Solomon Islands and to develop recommendations for improving the system.
Methods: The evaluation was conducted in three administrative areas of Solomon Islands: Honiara City Council, and Malaita and Guadalcanal Provinces. Records of nine malaria indicators, including report submission date, total malaria cases, Plasmodium falciparum case records, Plasmodium vivax case records, clinical malaria, malaria diagnosed with microscopy, malaria diagnosed with rapid diagnostic test (RDT), records of drug stocks and records of RDT stocks, from 1st January to 31st December 2016 were extracted from the DHIS2 database. The indicators permitted assessment in four core areas: availability, completeness, timeliness and reliability. To explore the perceptions and points of view of stakeholders on the performance of the malaria case reporting system, focus group discussions were conducted with health centre nurses, whilst in-depth interviews were conducted with stakeholder representatives from government (province and national) staff and World Health Organization officials who were users of DHIS2.
Results: Data were extracted from nine health centres in Honiara City Council and 64 health centres in Malaita Province. Across the two provinces, the completeness and timeliness of all nine indicators were 28.2% and 5.1%, respectively. The most reliable indicator in DHIS2 was 'clinical malaria' (i.e. numbers of clinically diagnosed malaria cases) with 62.4% reliability. Challenges to completeness were a lack of supervision, limited feedback, high workload, and a lack of training and refresher courses. Health centres located in geographically remote areas, a lack of regular transport, high workload and too many variables in the reporting forms led to delays in timely reporting. Reliability of reports was impacted by a lack of technical professionals such as statisticians and the unavailability of tally sheets and reporting forms.
Conclusion: The availability, completeness, timeliness and reliability of the nine malaria indicators collected in DHIS2 were variable within the study area, but generally low. Continued onsite support, supervision, feedback and additional enhancements, such as electronic reporting, will be required to further improve the malaria reporting system.
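The two core indicators the study computes from DHIS2 exports, completeness (share of expected reports submitted) and timeliness (share submitted by the deadline), reduce to simple counting, as in this sketch. The reporting deadline, expected report count, and submission records below are fabricated for illustration.

```python
# Back-of-the-envelope computation of completeness and timeliness from
# monthly report submission dates. Deadline, expectations, and records
# are invented; a real evaluation would read them from a DHIS2 export.
from datetime import date

DEADLINE_DAY = 15       # assumed day-of-month reporting deadline
EXPECTED = 12           # monthly reports expected per health centre in 2016

records = {             # health centre -> submission dates actually received
    "HC-A": [date(2016, m, 10) for m in range(1, 13)],
    "HC-B": [date(2016, m, 25) for m in range(1, 7)],
}

for hc, subs in records.items():
    completeness = len(subs) / EXPECTED
    timeliness = sum(1 for d in subs if d.day <= DEADLINE_DAY) / EXPECTED
    print(f"{hc}: completeness {completeness:.1%}, timeliness {timeliness:.1%}")
```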
