Academic literature on the topic 'Proprietary Database'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'Proprietary Database.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online, whenever these are available in the metadata.

Journal articles on the topic "Proprietary Database"

1

Armahizer, Michael J., Sandra L. Kane-Gill, Pamela L. Smithburger, Ananth M. Anthes, and Amy L. Seybert. "Comparing Drug-Drug Interaction Severity Ratings between Bedside Clinicians and Proprietary Databases." ISRN Critical Care 2013 (November 26, 2013): 1–6. http://dx.doi.org/10.5402/2013/347346.

Abstract:
Purpose. The purpose of this project was to compare DDI severity ratings assigned by clinicians, who considered the patient’s clinical status, with the severity ratings of proprietary databases. Methods. This was a single-center, prospective evaluation of DDIs in a 10-bed cardiac intensive care unit (CCU) at a large, tertiary care academic medical center. A pharmacist identified DDIs using two proprietary databases. The physicians and pharmacists caring for the patients evaluated the DDIs for severity while incorporating their clinical knowledge of the patient. Results. A total of 61 patients were included in the evaluation and experienced 769 DDIs. The most common DDIs included aspirin/clopidogrel, aspirin/insulin, and aspirin/furosemide. Pharmacists ranked the DDIs identically 73.8% of the time, compared to the physicians, who agreed 42.2% of the time. Pharmacists agreed with the more severe proprietary database scores for 14.8% of DDIs versus physicians at 7.3%. Overall, clinicians agreed with the proprietary database 20.6% of the time, while clinicians ranked the DDIs lower than the database 77.3% of the time. Conclusions. Proprietary DDI databases generally label DDIs with a higher severity rating than bedside clinicians do. Developing a DDI knowledge base for clinical decision support systems (CDSS) requires consideration of the severity information source and should include the clinician.
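The agreement percentages reported above reduce to simple comparisons of paired severity ratings. A minimal sketch, assuming a hypothetical ordinal severity scale and hypothetical ratings (not the study's actual data):

```python
def agreement_stats(clinician, database):
    """Compare paired severity ratings (e.g., 1 = minor ... 4 = severe).

    Returns the fraction of DDIs where the clinician agrees with the
    database, and the fraction the clinician rates lower than the database.
    """
    assert len(clinician) == len(database)
    n = len(clinician)
    agree = sum(c == d for c, d in zip(clinician, database)) / n
    lower = sum(c < d for c, d in zip(clinician, database)) / n
    return agree, lower

# Hypothetical ratings for five drug-drug interactions
clin = [2, 1, 3, 2, 1]
db = [3, 2, 3, 4, 2]
agree, lower = agreement_stats(clin, db)
print(agree, lower)  # 0.2 0.8: the database rates most DDIs higher
```

This mirrors the study's headline finding: when the `lower` fraction dominates, the proprietary database is systematically more severe than the bedside clinician.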
2

Dubey, Shail, and Kumari Nita. "Migration from Proprietary to Open Source Database for eHealth Domain." International Journal of Computer Applications 87, no. 16 (2014): 36–40. http://dx.doi.org/10.5120/15295-3990.

3

O’Kane, K. C. "Migration of Legacy Mumps Applications to Relational Database Servers." Methods of Information in Medicine 40, no. 03 (2001): 225–28. http://dx.doi.org/10.1055/s-0038-1634157.

Abstract:
An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
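The core idea of hosting hierarchical Mumps globals on a relational server can be sketched as follows. This is one common mapping (each node `^NAME(s1,s2,...)=value` becomes a row keyed by the global name and a serialized subscript list), not necessarily the paper's actual schema; the table layout and helper names are assumptions:

```python
import sqlite3

# Illustrative schema: one row per global node, subscripts serialized
# with an unlikely separator so composite keys compare correctly.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE globals (
    name TEXT, subscripts TEXT, value TEXT,
    PRIMARY KEY (name, subscripts))""")

def set_node(name, subs, value):
    """Equivalent of Mumps SET ^name(subs...)=value."""
    con.execute("INSERT OR REPLACE INTO globals VALUES (?,?,?)",
                (name, "\x01".join(map(str, subs)), str(value)))

def get_node(name, subs):
    """Equivalent of Mumps $GET(^name(subs...))."""
    row = con.execute(
        "SELECT value FROM globals WHERE name=? AND subscripts=?",
        (name, "\x01".join(map(str, subs)))).fetchone()
    return row[0] if row else None

# SET ^PATIENT(42,"name")="Smith"
set_node("PATIENT", [42, "name"], "Smith")
print(get_node("PATIENT", [42, "name"]))  # Smith
```

Because the storage layer is plain SQL, any RDBMS reachable over standard network protocols could back the same interface, which is the vendor-neutrality point the abstract makes.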
4

Verschelde, Jean-Luc, Mariana Casella Dos Santos, Tom Deray, Barry Smith, and Werner Ceusters. "Ontology-Assisted Database Integration to Support Natural Language Processing and Biomedical Data-mining." Journal of Integrative Bioinformatics 1, no. 1 (2004): 1–10. http://dx.doi.org/10.1515/jib-2004-1.

Abstract:
Successful biomedical data mining and information extraction require a complete picture of biological phenomena such as genes, biological processes, and diseases, which exist on different levels of granularity. To realize this goal, several freely available heterogeneous databases as well as proprietary structured datasets have to be integrated into a single global customizable schema. We present a tool that integrates different biological data sources by mapping them to a proprietary biomedical ontology that has been developed to make computers understand medical natural language.
5

Romero-Oraá, Roberto, María García, Javier Oraá-Pérez, María I. López-Gálvez, and Roberto Hornero. "Effective Fundus Image Decomposition for the Detection of Red Lesions and Hard Exudates to Aid in the Diagnosis of Diabetic Retinopathy." Sensors 20, no. 22 (2020): 6549. http://dx.doi.org/10.3390/s20226549.

Abstract:
Diabetic retinopathy (DR) is characterized by the presence of red lesions (RLs), such as microaneurysms and hemorrhages, and bright lesions, such as exudates (EXs). Early DR diagnosis is paramount to prevent serious sight damage. Computer-assisted diagnostic systems are based on the detection of those lesions through the analysis of fundus images. In this paper, a novel method is proposed for the automatic detection of RLs and EXs. As the main contribution, the fundus image was decomposed into various layers, including the lesion candidates, the reflective features of the retina, and the choroidal vasculature visible in tigroid retinas. We used a proprietary database containing 564 images, randomly divided into a training set and a test set, and the public database DiaretDB1 to verify the robustness of the algorithm. Lesion detection results were computed per pixel and per image. Using the proprietary database, 88.34% per-image accuracy (ACCi), 91.07% per-pixel positive predictive value (PPVp), and 85.25% per-pixel sensitivity (SEp) were reached for the detection of RLs. Using the public database, 90.16% ACCi, 96.26% PPVp, and 84.79% SEp were obtained. As for the detection of EXs, 95.41% ACCi, 96.01% PPVp, and 89.42% SEp were reached with the proprietary database. Using the public database, 91.80% ACCi, 98.59% PPVp, and 91.65% SEp were obtained. The proposed method could be useful to aid in the diagnosis of DR, reducing the workload of specialists and improving the attention to diabetic patients.
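The per-pixel metrics quoted above follow the standard confusion-matrix definitions. A minimal sketch with hypothetical confusion counts (the actual per-image counts are not given in the abstract):

```python
def pixel_metrics(tp, fp, tn, fn):
    """Standard detection metrics: accuracy, positive predictive
    value, and sensitivity from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    ppv = tp / (tp + fp)                   # positive predictive value
    se = tp / (tp + fn)                    # sensitivity
    return acc, ppv, se

# Hypothetical pixel counts for red-lesion detection in one image
acc, ppv, se = pixel_metrics(tp=850, fp=90, tn=9000, fn=160)
print(round(acc, 4), round(ppv, 4), round(se, 4))
```

Note that per-pixel accuracy is dominated by the large true-negative background, which is why the paper reports PPV and sensitivity alongside it.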
6

Villordon, Arthur, Wambui Njuguna, Simon Gichuki, Philip Ndolo, and Don Labonte. "Using Open Source Software in Developing a Web-accessible Database of Sweetpotato Germplasm Collections in Kenya." HortTechnology 17, no. 4 (2007): 567–70. http://dx.doi.org/10.21273/horttech.17.4.567.

Abstract:
Web-accessible germplasm databases allow stakeholders to interactively search and locate information in real time. These databases can also be configured to permit designated users to remotely add, delete, or update information. These resources assist in decision-making activities that are related to germplasm documentation, conservation, and management. We report the development of a web-accessible database of Kenyan sweetpotato (Ipomoea batatas) varieties using open source software. Kenya is located in eastern Africa, a region that is considered one of the centers of diversity for sweetpotato. We describe the software applications used in developing the germplasm database as well as the web interface for displaying and interactively searching records. This report demonstrates that open source software can be used in developing a web-enabled database with management features similar to those found in proprietary or commercial applications.
7

Johnstone, Peter A. S., Tim Crenshaw, Diane G. Cassels, and Timothy H. Fox. "Automated Data Mining of A Proprietary Database System for Physician Quality Improvement." International Journal of Radiation Oncology*Biology*Physics 70, no. 5 (2008): 1537–41. http://dx.doi.org/10.1016/j.ijrobp.2007.08.056.

8

Emoto, Masahiko, Izumi Murakami, Daiji Kato, Masanobu Yoshida, Masatoshi Kato, and Setsuo Imazu. "Improvement of the NIFS Atom and Molecular Database." Atoms 7, no. 3 (2019): 91. http://dx.doi.org/10.3390/atoms7030091.

Abstract:
The NIFS (National Institute for Fusion Science) Atom and Molecular Database, which has been available online since 1997, is a numerical atomic and molecular database of collision processes that is important for fusion research. This database provides the following: (1) the cross-sections and rate coefficients for ionization, excitation, and recombination caused by electron impact; (2) the charge transfer caused by heavy particle collision and collision processes of molecules; and (3) the sputtering yields of solids and backscattering coefficients from solids. It also offers a bibliographic database. We recently reconstructed the database system. The main purpose of the reconstruction was to migrate the database into an open-source architecture to make the system more flexible and extensible. The previous system used proprietary software and was difficult to customize. The new system consists of open-source software, including PostgreSQL database and Ruby on Rails. New features were also added to the system. The most important improvement is the interface with the Virtual Atomic and Molecular Data Center (VAMDC) portal. Using this interface, researchers can search for data in the NIFS database as well as in various other online databases simultaneously.
9

Stein, Guido, Manuel Gallego, and Marta Cuadrado. "CEO succession and proprietary directors: evidence from Spanish listed firms." Corporate Ownership and Control 11, no. 1 (2013): 140–46. http://dx.doi.org/10.22495/cocv11i1conf2p5.

Abstract:
This study advances research on CEO succession and board monitoring of senior executives by examining how proprietary directors can affect the probability of CEO dismissal. Drawing on our newly developed database covering all CEO successions occurring in all Spanish listed firms during the period 2007–2010, we propose that proprietary directors may increase the board’s monitoring efforts over the chief executive, forcing him to resign in situations of poor performance. Hypotheses are tested longitudinally, using CEO succession data taken from 111 publicly traded firms in the Spanish ‘mercado continuo’ over a four-year period.
10

Dhanny, Dhanny, and Sandi Badiwibowo Atiim. "Free Open-Source High – Availability Solution for Java Web Application Using Tomcat And MySQL." ACMIT Proceedings 7, no. 1 (2021): 68–77. http://dx.doi.org/10.33555/acmit.v7i1.108.

Abstract:
With the growth of the internet, the number of web applications is also growing. Many web applications have become so important to their stakeholders that they cannot afford downtime, which can cause loss of revenue, loss of productivity, etc. In the past, only big organizations with deep pockets could afford to implement high availability for their web applications, but nowadays free open-source software programs supporting high availability are available to everyone. This research studied the feasibility of implementing high availability for a Java web application system without using commercial software. It compared the capability of proprietary and free open-source high-availability solutions for Java web applications based on a simple high-availability design: a test Java web application was deployed into environments based on the proprietary and the free open-source solution, and each solution was tested for how well it performs when problems occur. The results showed that the free open-source high-availability solution worked, but not as well as the proprietary one. However, the proprietary high-availability solution for the database did not perform well, and neither did the open-source one. This research concludes that the free open-source high-availability solution works and thus makes high availability much more affordable, especially for individuals or small organizations with budget constraints.
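The failover behavior tested above can be sketched as a health-check-driven server choice. This is an illustrative toy, not the thesis's actual Tomcat/MySQL configuration; the server names and helper are hypothetical:

```python
def choose_server(servers, is_healthy):
    """Return the first healthy server in preference order, mimicking
    the failover step of a high-availability front end."""
    for server in servers:
        if is_healthy(server):
            return server
    raise RuntimeError("no healthy server available")

# Simulated health states: the primary is down, so traffic fails over
health = {"tomcat-primary": False, "tomcat-backup": True}
print(choose_server(["tomcat-primary", "tomcat-backup"], health.get))
# tomcat-backup
```

Real HA stacks put this logic in a load balancer or cluster manager rather than application code, but the decision rule is the same.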

Dissertations / Theses on the topic "Proprietary Database"

1

Rodrigues, Lanny Anthony, and Srujan Kumar Polepally. "Creating Financial Database for Education and Research: Using WEB SCRAPING Technique." Thesis, Högskolan Dalarna, Mikrodataanalys, 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:du-36010.

Abstract:
The objective of this thesis is to expand the university's microdata database of publicly available corporate information using a web scraping mechanism. The tool for this thesis is a web scraper that can access and extract information from websites, using a web application as an interface for client interaction. In our comprehensive work we have demonstrated that the GRI text files consist of approximately 7,227 companies; from this total, the data were filtered to "listed" companies. Among the 2,252 filtered companies, some do not have income statement data. Hence, we have finally collected data on 2,112 companies across 36 different sectors and 13 different countries in this thesis. The publicly available income statement information for 2016 to 2020 was collected by the GRI microdata department. Collecting such data from a proprietary database may cost more than $24,000 a year, whereas collecting the same from a public database costs almost nothing, which we discuss further in the thesis. Our motivation is to collect financial data from the annual financial statements or reports of businesses, which can be used to measure and investigate the trading costs and changes of securities, common assets, futures, cryptocurrencies, and so forth. Stock exchanges, official statements, and various business-related news are additional sources of financial data that individuals may scrape.
We aim to help small investors and students who require financial statements from numerous companies over several years to assess the condition of the economy and of company finances when deciding whether or not to invest, which is not feasible in a conventional way; hence they can use the web scraping mechanism to extract financial statements from diverse websites and base investment decisions on further research and analysis. In this thesis work, the outcome of the web scraping is to keep the extracted data in a database. The gathered data in the resulting database can be used for further research, education, and other purposes with continued use of the web scraping technique.
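The extraction step of such a scraper can be sketched with the standard-library HTML parser. A minimal stand-in for pulling (label, value) pairs from a simple two-column income statement table; the HTML snippet is invented for illustration, and real report pages need site-specific selectors:

```python
from html.parser import HTMLParser

class IncomeStatementParser(HTMLParser):
    """Collect (label, value) pairs from a simple two-column HTML table."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_cell = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_cell = False
        elif tag == "tr" and self._row:
            self.rows.append(tuple(self._row))
            self._row = []

    def handle_data(self, data):
        # Keep only text that appears inside a table cell
        if self._in_cell and data.strip():
            self._row.append(data.strip())

html = """<table>
<tr><td>Revenue</td><td>1200</td></tr>
<tr><td>Net income</td><td>300</td></tr>
</table>"""
p = IncomeStatementParser()
p.feed(html)
print(p.rows)  # [('Revenue', '1200'), ('Net income', '300')]
```

In a full scraper, the fetched pages would come from HTTP requests and the extracted rows would be written to the database, as the thesis describes.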
2

Pehal, Petr. "Systém řízení báze dat v operační paměti [In-memory database management system]." Master's thesis, Vysoké učení technické v Brně. Fakulta informačních technologií, 2013. http://www.nusl.cz/ntk/nusl-236348.

Abstract:
The focus of this thesis is a proprietary database interface for managing tables in memory. It begins with a short introduction to databases, then presents the concept of in-memory database systems and discusses the main advantages and disadvantages of this approach. The theoretical introduction ends with a brief overview of existing systems. After that, basic information about the energy management system RIS is presented, together with the system's in-memory database interface. The work then specifies and designs the required modifications and extensions of the interface, and presents implementation details and test results. In conclusion, the results are summarized and future development is discussed.
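The notion of an interface for managing tables entirely in memory can be sketched as follows. This is an illustrative toy, not the RIS system's actual interface; the class and method names are assumptions:

```python
class InMemoryTable:
    """A minimal in-memory table: rows are dicts indexed by primary key."""
    def __init__(self, key):
        self.key = key
        self.rows = {}

    def insert(self, row):
        # Last write wins on duplicate primary keys
        self.rows[row[self.key]] = dict(row)

    def select(self, predicate=lambda r: True):
        return [r for r in self.rows.values() if predicate(r)]

    def delete(self, key_value):
        self.rows.pop(key_value, None)

t = InMemoryTable(key="id")
t.insert({"id": 1, "sensor": "A", "reading": 20.5})
t.insert({"id": 2, "sensor": "B", "reading": 21.0})
print(t.select(lambda r: r["reading"] > 20.7))
```

Keeping the whole table in a hash map is what gives in-memory systems their speed; the trade-off, as the thesis notes, is durability and capacity limited by RAM.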

Books on the topic "Proprietary Database"

1

Ernst, Carl R. The sourcebook of online public record experts: Featuring proprietary databases, gateway vendors, CD-ROM providers, national and regional public record search firms, online proficient PI's, pre-employment & tenant screening specialist. BRB Publications, 1996.

2

The propriety of the taxpayer-funded White House data base: Hearing before the Subcommittee on National Economic Growth, Natural Resources, and Regulatory Affairs of the Committee on Government Reform and Oversight, House of Representatives, One Hundred Fourth Congress, second session, September 10, 1996. U.S. G.P.O., 1997.


Book chapters on the topic "Proprietary Database"

1

Adler, Aaron, Jake Beal, Mary Lancaster, and Daniel Wyschogrod. "Cyberbiosecurity and Public Health in the Age of COVID-19." In NATO Science for Peace and Security Series C: Environmental Security. Springer Netherlands, 2021. http://dx.doi.org/10.1007/978-94-024-2086-9_7.

Abstract:
Cyberbiosecurity, the aspect of biosecurity involving the digital representation of biological data, had already been emerging as a matter of public concern even prior to the onset of the COVID-19 pandemic. Key issues of concern include, among others, the privacy of patient data, the security of public health databases, the integrity of diagnostic test data, the integrity of public biological databases, the security implications of automated laboratory systems, and the security of proprietary biological engineering advances.
2

Udoh, Emmanuel. "Open Source Database Technologies." In Database Technologies. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-058-5.ch050.

Abstract:
In the contemporary software landscape, the open source movement can no longer be overlooked by any major players in the industry, as the movement portends a paradigm shift and is forcing a major rethinking of strategy in the software business. For instance, companies like Oracle, Microsoft, and IBM now offer the lightweight versions of their proprietary flagship products to small-to-medium businesses at no cost for product trial (Samuelson, 2006). These developments are signs of the success of the OSS movement.
3

Udoh, Emmanuel. "Open Source Database Technologies." In Encyclopedia of Multimedia Technology and Networking, Second Edition. IGI Global, 2009. http://dx.doi.org/10.4018/978-1-60566-014-1.ch150.

Abstract:
The free or open source software (OSS) movement, pioneered by Richard Stallman in 1983, is gaining mainstream acceptance and challenging the established order of the commercial software world. The movement is taking root in various aspects of software development, namely operating systems (Linux), Web servers (Apache), databases (MySQL), and scripting languages (PHP), to mention but a few. The basic tenet of the movement is that the underlying code of any open source software should be freely viewable, modifiable, or redistributable by any interested party, as enunciated under the copyleft concept (Stallman, 2002). This is in sharp contrast to proprietary (closed source) software, in which the code is controlled under the copyright laws. In the contemporary software landscape, the open source movement can no longer be overlooked by any major players in the industry, as the movement portends a paradigm shift and is forcing a major rethinking of strategy in the software business. For instance, companies like Oracle, Microsoft, and IBM now offer lightweight versions of their proprietary flagship products to small-to-medium businesses at no cost for product trial (Samuelson, 2006). These developments are signs of the success of the OSS movement. Reasons abound for the success of OSS, viz. the collective effort of many volunteer programmers, flexible and quick release rates, code availability, and security. On the other hand, one of the main disadvantages of OSS is the limited technical support, as it may be difficult to find an expert to help an organization with system setup or maintenance. Due to the extensive nature of OSS, this article will only focus on the database aspects. A database is one of the critical components of the application stack for an organization or a business.
Increasingly, open-source databases (OSDBs) such as MySQL, PostgreSQL, MaxDB, Firebird, and Ingres are coming up against the big three commercial proprietary databases: Oracle, SQL Server, and IBM DB2 (McKendrick, 2006; Paulson, 2004; Shankland, 2004). Big companies like Yahoo and Dell are now embracing OSDBs for enterprise-wide applications. According to an Independent Oracle Users Group (IOUG) survey, 37% of enterprise database sites are running at least one of the major brands of open source databases (McKendrick, 2006). The survey further finds that the OSDBs are mostly used for single-function systems, followed by custom home-grown applications and Web sites. But critics maintain that these OSDBs are used for non-mission-critical purposes, because IT organizations still have concerns about support, security, and management tools (Harris, 2004; Zhao & Elbaum, 2003).
4

Deka, Ganesh Chandra. "NoSQL Databases." In Advances in Data Mining and Database Management. IGI Global, 2014. http://dx.doi.org/10.4018/978-1-4666-5864-6.ch008.

Abstract:
NoSQL databases are designed to meet the huge data storage requirements of cloud computing and big data processing. NoSQL databases have many advanced features in addition to the conventional RDBMS features. Hence, "NoSQL" databases are popularly known as "Not only SQL" databases. A variety of NoSQL databases, having different features to deal with exponentially growing data-intensive applications, are available with open source and proprietary options. This chapter discusses some of the popular NoSQL databases and their features in light of the CAP theorem.
5

Synodinou, Tatiani-Eleni. "The Protection of Digital Libraries as Databases." In Digital Rights Management. IGI Global, 2013. http://dx.doi.org/10.4018/978-1-4666-2136-7.ch058.

Abstract:
This chapter explains the application of EU Directive 96/9 to digital libraries. Digital libraries correspond largely to the broad definition of databases established by Directive 96/9. The application of the database copyright and sui generis regime to digital libraries provides safe and solid legal protection to digital libraries that fulfill the conditions of originality and investment set by the Directive. The chapter examines in detail the conditions for protection, the subject matter, the content, and the extent of Directive 96/9’s two-tier legal protection regime as applied to digital libraries. While the protection of the structure of a digital library by copyright law has not provoked any reactions, either in Europe or in the U.S.A., the possibility of protecting a digital library’s contents by the quasi-proprietary database sui generis right has been a highly controversial issue since the adoption of Directive 96/9. Defenders of the Internet dogma of free and open flow of information consider the sui generis right an inappropriate and unbalanced legal mechanism that promotes the monopolization of digital knowledge to the detriment of the public interest. The chapter also demonstrates the conflict between the proprietary interests of the digital library’s maker and the interests of the lawful user of a digital library. Furthermore, a critical overview of the regime of exceptions to the database sui generis right is provided. In order to justify and balance the attribution of the proprietary sui generis right, the author argues that the regime of database sui generis exceptions should be enriched and strengthened, especially when the purposes of education, research, and information are served by the exceptions.
6

Synodinou, Tatiani-Eleni. "The Protection of Digital Libraries as Databases." In E-Publishing and Digital Libraries. IGI Global, 2011. http://dx.doi.org/10.4018/978-1-60960-031-0.ch012.

Abstract:
This chapter explains the application of EU Directive 96/9 to digital libraries. Digital libraries correspond largely to the broad definition of databases established by Directive 96/9. The application of the database copyright and sui generis regime to digital libraries provides safe and solid legal protection to digital libraries that fulfill the conditions of originality and investment set by the Directive. The chapter examines in detail the conditions for protection, the subject matter, the content, and the extent of Directive 96/9’s two-tier legal protection regime as applied to digital libraries. While the protection of the structure of a digital library by copyright law has not provoked any reactions, either in Europe or in the U.S.A., the possibility of protecting a digital library’s contents by the quasi-proprietary database sui generis right has been a highly controversial issue since the adoption of Directive 96/9. Defenders of the Internet dogma of free and open flow of information consider the sui generis right an inappropriate and unbalanced legal mechanism that promotes the monopolization of digital knowledge to the detriment of the public interest. The chapter also demonstrates the conflict between the proprietary interests of the digital library’s maker and the interests of the lawful user of a digital library. Furthermore, a critical overview of the regime of exceptions to the database sui generis right is provided. In order to justify and balance the attribution of the proprietary sui generis right, the author argues that the regime of database sui generis exceptions should be enriched and strengthened, especially when the purposes of education, research, and information are served by the exceptions.
7

Adhikari, Mainak, and Sukhendu Kar. "NoSQL Databases." In Handbook of Research on Securing Cloud-Based Databases with Biometric Applications. IGI Global, 2015. http://dx.doi.org/10.4018/978-1-4666-6559-0.ch006.

Abstract:
NoSQL databases provide a mechanism for storage and access of data across multiple storage clusters. NoSQL databases are finding significant and growing industry adoption to meet the huge data storage requirements of big data, real-time applications, and cloud computing. NoSQL databases have many advantages over conventional RDBMS features. NoSQL systems are also referred to as "Not only SQL" to emphasize that they may in fact allow structured languages like SQL and, additionally, semi-structured as well as unstructured data. A variety of NoSQL databases, having different features to deal with exponentially growing data-intensive applications, are available with open source and proprietary options, mostly promoted and used by social networking sites. This chapter discusses some features and challenges of NoSQL databases and some of the popular NoSQL databases with their features in light of the CAP theorem.
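The schema flexibility that distinguishes many NoSQL systems from RDBMS tables can be sketched with a toy document store; this is an illustrative sketch, not any particular product's API:

```python
class DocumentStore:
    """A toy document store: documents need not share a schema."""
    def __init__(self):
        self._docs = {}

    def put(self, doc_id, doc):
        self._docs[doc_id] = doc

    def get(self, doc_id):
        return self._docs.get(doc_id)

    def find(self, **criteria):
        # Match documents whose fields equal all given criteria
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
store.put("u1", {"name": "Ada", "followers": 120})
store.put("u2", {"name": "Alan", "city": "London"})  # different fields
print(store.find(name="Ada"))  # [{'name': 'Ada', 'followers': 120}]
```

A real distributed NoSQL system adds replication across clusters, which is where the CAP theorem's trade-off between consistency and availability under partitions comes in.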
8

Küçük, Dilek. "Sentiment, Stance, and Intent Detection in Turkish Tweets." In Advances in Data Mining and Database Management. IGI Global, 2021. http://dx.doi.org/10.4018/978-1-7998-8061-5.ch011.

Abstract:
Sentiment analysis, stance detection, and intent detection on social media texts are all significant research problems with several application opportunities. In this chapter, the authors explore the possible contribution of sentiment and intent information to machine learning-based stance detection on tweets. They first annotate a Turkish tweet dataset with sentiment and proprietary intent labels, where the dataset was already annotated with stance labels. Next, they perform stance detection experiments on the dataset using sentiment and intent labels as additional features. The experiments with SVM classifiers show that using sentiment and intent labels as additional features improves stance detection performance considerably. The final form of the dataset is made publicly available for research purposes. The findings reveal the contribution of sentiment and intent information to the solution of the stance detection task on the Turkish tweet dataset employed. Yet, further studies on other datasets are needed to confirm that the findings are generalizable to other languages and other topics.
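Using sentiment and intent labels as additional features typically means appending them, one-hot encoded, to each tweet's feature vector before training the SVM. A minimal sketch; the label sets below are illustrative, not the dataset's actual tag inventory:

```python
def augment_features(text_vec, sentiment, intent,
                     sentiments=("neg", "neu", "pos"),
                     intents=("none", "promo")):
    """Append one-hot sentiment and intent labels to a text feature vector."""
    return (list(text_vec)
            + [1.0 if s == sentiment else 0.0 for s in sentiments]
            + [1.0 if i == intent else 0.0 for i in intents])

# A two-dimensional text vector grows by one slot per possible label
vec = augment_features([0.2, 0.7], "pos", "none")
print(vec)  # [0.2, 0.7, 0.0, 0.0, 1.0, 1.0, 0.0]
```

The augmented vectors can then be fed to any classifier (an SVM in the chapter's experiments) exactly as the plain text features would be.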
9

Yee, David P., and Tim Hunkapiller. "Overview: A System for Tracking and Managing the Results from Sequence Comparison Programs." In Pattern Discovery in Biomolecular Data. Oxford University Press, 1999. http://dx.doi.org/10.1093/oso/9780195119404.003.0017.

Abstract:
The Human Genome Project was launched in the early 1990s to map, sequence, and study the function of genomes derived from humans and a number of model organisms such as mouse, rat, fruit fly, worm, yeast, and Escherichia coli. This ambitious project was made possible by advances in high-speed DNA sequencing technology (Hunkapiller et al., 1991). To date, the Human Genome Project and other large-scale sequencing projects have been enormously successful. The complete genomes of several microbes (such as Haemophilus influenzae Rd, Mycoplasma genitalium, and Methanococcus jannaschii) have been completely sequenced. The genome of bacteriophage T4 is complete, and the 4.6-megabase sequence of E. coli and the 13-megabase genome of Saccharomyces cerevisiae have just recently also been completed. There are 71 megabases of the nematode Caenorhabditis elegans available. Six megabases of mouse and 60 megabases of human genomic sequence have been finished, which represent 0.2% and 2% of their respective genomes. Finally, more than 1 million expressed sequence tags derived from human and mouse complementary DNA expression libraries are publicly available. These public data, in addition to private and proprietary DNA sequence databases, represent an enormous information-processing challenge and data-mining opportunity. The need for common interfaces and query languages to access heterogeneous sequence databases is well documented, and several good systems are well underway to provide those interfaces (Woodsmall and Benson, 1993; Marr, 1996). Our own work on database and program interoperability in this domain and in computational chemistry (Cushing, 1995) has shown, however, that providing the interface is but the first step toward making these databases fully useful to the researcher. (Here, the term "database" means a collection of data in electronic form, which may not necessarily be physically deposited in a database management system [DBMS]; a scientist's database could thus be a collection of flat files. We use "database" to mean "data stored in a DBMS" only where that sense is clear from the context.) Deciphering the genomes of sequenced organisms falls into the realm of analysis; there is now plenty of sequence data. The most common form of sequence analysis involves the identification of homologous relationships among similar sequences.
APA, Harvard, Vancouver, ISO, and other styles
10

Westland, J. Christopher. "Ten Lessons that Internet Auction Markets Can Learn from Securities Market Automation." In Advances in Global Information Management. IGI Global, 2002. http://dx.doi.org/10.4018/978-1-930708-43-3.ch008.

Full text
Abstract:
Internet auction markets offer customers a compelling new model for price discovery. This model places much more power in the hands of the consumer than a retail model that assumes price taking, while giving consumers choice of vendor and product. Models of auction market automation have been evolving for some time. Securities markets in most countries have, over the past decade, invested significantly in automating various components with database and communications technologies. This paper explores the automation of three emerging market exchanges (the Commercial Exchange of Santiago, the Moscow Central Stock Exchange, and Shanghai’s Stock Exchange) with the intention of drawing parallels between new Internet models of retailing and the older proprietary networked markets for financial securities.
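The price-discovery mechanism the chapter contrasts with fixed retail pricing can be sketched as an open ascending (English) auction, where the price ticks upward until one bidder remains. The valuations and fixed bid increment below are illustrative, not drawn from the exchanges discussed in the chapter.

```python
def english_auction(valuations, increment=1.0):
    """Open ascending auction over a non-empty list of private valuations.
    The price rises by a fixed increment until only one bidder would stay in;
    the discovered price converges near the second-highest valuation."""
    price = 0.0
    active = list(range(len(valuations)))
    # Keep raising the price while more than one bidder would accept the next tick.
    while sum(1 for i in active if valuations[i] >= price + increment) > 1:
        price += increment
        active = [i for i in active if valuations[i] >= price]
    winner = max(active, key=lambda i: valuations[i])
    return winner, price
```

With valuations of 10, 7, and 4, bidding stops once the price passes 7: the highest-valuation bidder wins at roughly the runner-up's valuation, which is the price-discovery property auction markets exploit.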

Conference papers on the topic "Proprietary Database"

1

Miller, Perry L., and James H. Oliver. "Extensible Architecture for Geometric-Model Database Translation." In ASME 2003 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2003. http://dx.doi.org/10.1115/detc2003/cie-48235.

Full text
Abstract:
This paper introduces an extensible software architecture for geometric model translation based on the concept of “interface” querying. Translation occurs by instantiating a “source” and a “target” module which are able to determine each other’s capabilities by querying for necessary interfaces. Data transfer is initiated between these uncoupled modules using a neutral protocol that they agree upon. This dynamic interoperation continues until the transfer of information is complete. Since it does not require conformance to any specific set of geometric functionality, the architecture accommodates evolving geometric formats. Communication between decoupled modules exclusively through software interfaces facilitates the intended extensibility. In support of ongoing virtual prototyping research at the Virtual Reality Applications Center (VRAC), an example implementation of the architecture in software is presented for conversion of 3D models from a proprietary large model visualization format, EDS/PLM DirectModel, into several alternative formats.
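The "interface querying" idea can be sketched as follows: uncoupled source and target modules advertise capability interfaces and negotiate a neutral transfer protocol before any data moves. The module and interface names here ("brep/v1", "mesh/v1") are hypothetical examples, not the paper's actual API.

```python
class Module:
    """A translation endpoint that can be queried for supported interfaces."""
    def __init__(self, name, interfaces):
        self.name = name
        self._interfaces = set(interfaces)

    def query_interface(self, iface):
        """Return True if this module implements the named interface."""
        return iface in self._interfaces

def negotiate(source, target, preferences):
    """Pick the richest protocol both endpoints support, in preference order."""
    for iface in preferences:
        if source.query_interface(iface) and target.query_interface(iface):
            return iface
    return None  # no common interface: translation cannot proceed
```

Because neither module needs to know the other's concrete type, a new geometric format only has to publish its interfaces to join the translation graph, which is the extensibility claim of the architecture.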
2

Astrakova, Anna S., Dmitry Yu Kushnir, Nikolay N. Velker, and Gleb V. Dyatlov. "2D electromagnetic inversion using ANN solver for three–layer model with wall." In Недропользование. Горное дело. Направления и технологии поиска, разведки и разработки месторождений полезных ископаемых. Экономика. Геоэкология. Федеральное государственное бюджетное учреждение науки Институт нефтегазовой геологии и геофизики им. А.А. Трофимука Сибирского отделения Российской академии наук, 2020. http://dx.doi.org/10.18303/b978-5-4262-0102-6-2020-031.

Full text
Abstract:
We propose an approach to the inversion of induction LWD measurements based on calculation of the synthetic signals by artificial neural networks (ANNs) trained on a purpose-built database. The database for ANN training is generated by means of the proprietary 2D solver Pie2d. Validation of the proposed approach and estimation of its computation time are performed for the problem of reconstructing the three-layer model with a wall. We also carry out an uncertainty analysis of the reconstructed model parameters for two tool configurations.
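The surrogate-based inversion loop can be sketched as below. A one-parameter toy forward model stands in for Pie2d, and nearest-neighbour lookup in the precomputed database stands in for the trained ANN; both substitutions are assumptions made only so the sketch is self-contained.

```python
def forward(p):
    """Toy 'expensive' forward model: synthetic signal at three receivers.
    (Stand-in for the proprietary 2D solver; not the real physics.)"""
    return [p * x + 0.1 * p * p for x in (1.0, 2.0, 3.0)]

# 1) Generate the training database from forward-model runs, as the paper
#    does with Pie2d before training the ANN.
database = [(p / 10.0, forward(p / 10.0)) for p in range(101)]

def surrogate(p):
    """Cheap stand-in for the trained ANN: return the nearest database signal."""
    return min(database, key=lambda row: abs(row[0] - p))[1]

def invert(observed, candidates):
    """Grid-search inversion: the candidate whose surrogate signal fits best."""
    def misfit(p):
        return sum((s - o) ** 2 for s, o in zip(surrogate(p), observed))
    return min(candidates, key=misfit)
```

The key trade visible even in this toy: the costly solver runs only offline to build the database, while each inversion step queries the cheap surrogate.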
3

Southern, Carolyne, Joseph Wong, and Keith Bladon. "Challenges of Integrating Multidisciplinary Wayside Databases." In ASME 2012 Rail Transportation Division Fall Technical Conference. American Society of Mechanical Engineers, 2012. http://dx.doi.org/10.1115/rtdf2012-9446.

Full text
Abstract:
A single, integrated database to store inputs from multiple, multidisciplinary wayside systems is a prerequisite for cross-correlation of data and for the development of intelligent algorithms that determine alarm levels and automate decision making. Australian rail operators run on three track gauges, operate a mix of American, European and uniquely Australian rolling stock, and lack a unified set of interchange standards, making the development of operational and condition-monitoring rules a complex task. Over the years, wayside equipment vendors have adopted different database architectures and data structures for their proprietary systems. Recognizing the need for an industry-wide standard, Pacific National and track owners in Australia have initiated a project to develop the architecture for an integrated, open database to capture and store data feeds from multiple wayside systems from different suppliers. This paper describes the objectives, constraints, challenges and projected benefits of the project for the track owner and the rail operator, and the planned implementation of an integrated condition-monitoring database in the Australian rail environment.
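A minimal sketch of such an integrated, vendor-neutral store, using an in-memory SQLite database: one measurement table keyed to a detector registry, so readings from different vendors' systems can be cross-queried. The table and column names are invented for illustration and are not taken from the Pacific National project.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE detector (
    detector_id INTEGER PRIMARY KEY,
    site        TEXT NOT NULL,      -- wayside site location
    vendor      TEXT NOT NULL,      -- originating proprietary system
    kind        TEXT NOT NULL       -- e.g. hot-bearing, wheel-impact
);
CREATE TABLE measurement (
    detector_id INTEGER REFERENCES detector(detector_id),
    vehicle_id  TEXT NOT NULL,
    measured_at TEXT NOT NULL,      -- ISO-8601 timestamp
    value       REAL NOT NULL
);
""")
conn.execute("INSERT INTO detector VALUES (1, 'SiteA', 'VendorX', 'hot-bearing')")
conn.execute("INSERT INTO measurement VALUES "
             "(1, 'WAG-042', '2012-05-01T10:00:00', 71.5)")

# Cross-vendor query: latest reading per vehicle and detector kind.
rows = conn.execute("""
    SELECT m.vehicle_id, d.kind, MAX(m.measured_at), m.value
    FROM measurement m JOIN detector d USING (detector_id)
    GROUP BY m.vehicle_id, d.kind
""").fetchall()
```

Because every vendor's feed lands in the same two tables, alarm rules can be written once against the schema rather than once per proprietary system.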
4

Heister, Reinhard, and Reiner Anderl. "Federative Data Management Based on Unified XML Data Scheme to Support Prosthetic Dentistry Workflows." In ASME 2013 International Mechanical Engineering Congress and Exposition. American Society of Mechanical Engineers, 2013. http://dx.doi.org/10.1115/imece2013-62615.

Full text
Abstract:
The laboratory side of a digital dental workflow consists of heterogeneous software tools for digitization (scanning), modeling (CAD), production planning (CAM) and production. The heterogeneity can be structured in two dimensions: ‘various partial systems’, composing the dental product development system, and ‘various vendors’, offering software solutions for these partial systems. As a result, the value creation process lacks efficiency and different input/output data streams are still necessary. The STL format has been established as a standard for the representation of geometric data, whereas for additional information, such as organizational and administrative data as well as requirements and design data, the XML (eXtensible Markup Language) format is considered appropriate. However, a variety of proprietary XML data formats have been developed by system vendors, so incompatibilities are a significant source of errors. Data flow structures as available today only allow a unidirectional flow of information ‘downstream’. A new approach is based on federative workflow data management. The basic concept is a unified XML scheme that represents data about all activities and states of dental objects created throughout the whole dental process cycle. The unified XML scheme provides a data structure that can be adapted for the respective input/output data streams of all partial systems, and it allows both vertical (within a certain partial system class) and horizontal (along the digital dental workflow and independent of system vendor) data usage. Each dental system supplier only needs to create one input and one output filter for the neutral XML interface. The system architecture is based on a web server to which an XML database server is connected. The XML database server manages project-specific XML databases. Data can be made available through REST as well as WebDAV interfaces on LAN or WAN. With the help of XPath and XQuery, the required data can be extracted from the database. Redundant data input as well as incompatibility errors can be avoided by this approach. The innovative core is a unified workflow data format in which a bidirectional data flow can be provided both downstream and upstream along the digital dental workflow.
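A minimal sketch of querying a unified workflow document with the XPath subset supported by Python's standard library. The element and attribute names (dentalCase, step, system, state) are invented for illustration and are not the scheme proposed in the paper.

```python
import xml.etree.ElementTree as ET

# A toy unified workflow document: one case, with one step per partial system.
doc = ET.fromstring("""
<dentalCase id="C-17">
  <workflow>
    <step system="scan" state="done"/>
    <step system="cad"  state="in-progress"/>
    <step system="cam"  state="pending"/>
  </workflow>
</dentalCase>
""")

# XPath-style query: every step not yet finished, which any downstream
# partial system could consume regardless of vendor.
open_steps = [s.get("system") for s in doc.findall(".//step")
              if s.get("state") != "done"]
```

Because every partial system reads and writes the same document structure, a CAM tool can discover pending work with the same query a CAD tool uses, which is the horizontal data usage the paper argues for.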
5

Chin, S. B., W. A. Bullough, G. E. Cardew, and J. Kinsella. "Computational Fluid Mechanics Studies of Conical Diffuser Flow." In ASME 1995 15th International Computers in Engineering Conference and the ASME 1995 9th Annual Engineering Database Symposium collocated with the ASME 1995 Design Engineering Technical Conferences. American Society of Mechanical Engineers, 1995. http://dx.doi.org/10.1115/cie1995-0768.

Full text
Abstract:
Abstract Values of the pressure recovery coefficient (PRC) are compared for flow through and along conical diffusers. Overall analytical, experimental and Computational Fluid Mechanics (CFM) characterisations are presented for a range of included angles 2θ at industrially representative Reynolds numbers and, for the 7° diffuser, over a range of Reynolds numbers. The 7° diffuser is also examined for variation in PRC along its length, at one Reynolds number. All diffusers have the same diameter ratio. The results were verified against data from the internal aerodynamics literature. These examinations were carried out in order to build up confidence in the use of a proprietary CFM package as a basis for the study of flows in diffusers and, in particular, the pressure distributions therein. Computation and experiments were done by undergraduate students acting in concert on a class mini-project. Problems of modelling, boundary location and the specification of numerical boundary conditions vis-à-vis real diffuser situations with and without inlet duct and tailpipe are described. The relationship between pressures measured at wall tappings and those from the numerical solution is investigated.
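The quantity being compared can be written down directly. The sketch below uses the standard incompressible definitions, Cp = (p_out - p_in) / (0.5 * rho * V_in^2) for the measured coefficient and the ideal (loss-free) value 1 - 1/AR^2 from Bernoulli plus continuity; the inputs in the usage note are illustrative, not the paper's data.

```python
def prc(p_in, p_out, rho, v_in):
    """Measured pressure recovery coefficient from wall-tapping pressures:
    static pressure rise normalized by inlet dynamic pressure."""
    return (p_out - p_in) / (0.5 * rho * v_in ** 2)

def prc_ideal(area_ratio):
    """Ideal PRC for incompressible, loss-free flow through a diffuser of
    outlet/inlet area ratio AR: 1 - 1/AR**2 (Bernoulli + continuity)."""
    return 1.0 - 1.0 / area_ratio ** 2
```

For example, an area ratio of 2 gives an ideal PRC of 0.75; a measured static rise of 180 Pa in air (rho = 1.2 kg/m^3) at 20 m/s inlet velocity happens to match it, and real diffusers fall short of the ideal by their loss coefficient.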
6

Sprenger, Florian, Vahid Hassani, Adolfo Maron, et al. "Establishment of a Validation and Benchmark Database for the Assessment of Ship Operation in Adverse Conditions." In ASME 2016 35th International Conference on Ocean, Offshore and Arctic Engineering. American Society of Mechanical Engineers, 2016. http://dx.doi.org/10.1115/omae2016-54865.

Full text
Abstract:
The Energy Efficiency Design Index (EEDI), introduced by the IMO [1], has been applicable to various types of new-built ships since January 2013. Despite the release of an interim guideline [2], concerns were raised regarding the sufficiency of propulsion power and steering devices to maintain the manoeuvrability of ships in adverse conditions. This was the motivation for the EU research project SHOPERA (Energy Efficient Safe SHip OPERAtion, 2013–2016 [3–6]). The aim of the project is the development of suitable methods, tools and guidelines to effectively address these concerns and to enable safe and green shipping. Within the framework of SHOPERA, a comprehensive test program consisting of more than 1,300 different model tests for three ship hulls of different geometry and hydrodynamic characteristics has been conducted by four of the leading European maritime experimental research institutes: MARINTEK, CEHIPAR, Flanders Hydraulics Research and Technische Universität Berlin. The hull types encompass two public domain designs, namely the KVLCC2 tanker (KRISO VLCC, developed by KRISO) and the DTC container ship (Duisburg Test Case, developed by Universität Duisburg-Essen), as well as a RoPax ferry design, which is a proprietary hull design of a member of the SHOPERA consortium. The tests have been distributed among the four research institutes to benefit from the unique possibilities of each facility and to gain added value by establishing data sets for the same hull model and test type at different under-keel clearances (ukc). This publication presents the scope of the SHOPERA model test program for the two public domain hull models, the KVLCC2 and the DTC. The main particulars and loading conditions for the two vessels, as well as the experimental setup, are provided to support the interpretation of the examples of experimental data that are discussed. The focus lies on added resistance at moderate speed and drift force tests in high and steep regular head, following and oblique waves. These climates have been selected to check the applicability of numerical models in adverse wave conditions and to cover possible non-linear effects. The test results obtained with the KVLCC2 model in deep water at CEHIPAR are discussed and compared against the results obtained in shallow water at Flanders Hydraulics Research. The DTC model has been tested at MARINTEK in deep water and at Technische Universität Berlin and Flanders Hydraulics Research in intermediate/shallow water in different set-ups. Added resistance and drift force measurements from these facilities are discussed and compared. Examples of experimental data are also presented for manoeuvring in waves. At MARINTEK, turning circle and zig-zag tests have been performed with the DTC in regular waves. The parameters of variation are the initial heading, the wave period and the wave height.
7

Zawislak, M. S., D. J. Cerantola, and A. M. Birk. "Estimating the Drag Developed by a High Bypass Ratio Turbofan Engine." In ASME Turbo Expo 2018: Turbomachinery Technical Conference and Exposition. American Society of Mechanical Engineers, 2018. http://dx.doi.org/10.1115/gt2018-75204.

Full text
Abstract:
A high bypass ratio turbofan engine capable of powering the Boeing 757 was considered for thrust and drag analysis. A quasi-2D engine model applying the fundamental thermodynamic conservation equations and practical constraints determined engine performance and provided cross-sectional areas in the low-pressure system. Coupled with suggestions on boat-tail angle and curvature from the literature, a representative bypass duct and primary exhaust nozzle were created. 3D steady-RANS simulations using Fluent® 18 were performed on a 1/8th axisymmetric section of the geometry. A modified 3D fan zone model enforcing radial equilibrium was used to model the fan and bypass stator. Takeoff-speed and cruise operating conditions were simulated to identify changes in thrust composition and intake sensitivity. Net thrust predictions by the engine model and those measured in CFD agreed within grid uncertainty and model sensitivity at cruise. Trends observed in a published database were satisfied and the calculations coincided with GasTurb™ 8.0. Verifying thrust in this manner gave confidence in the aerodynamic performance predictions of this modest CFD model. Obtaining a baseline bypass design would allow rapid testing of aftermarket components and integration techniques in a realistic flow field without reliance on proprietary engine data.
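The control-volume bookkeeping behind the net-thrust comparison can be sketched as below, assuming ideally expanded nozzle streams so the pressure-area terms vanish: gross momentum thrust of the core and bypass streams minus the ram drag of the ingested air. The mass flows and velocities in the usage note are illustrative round numbers, not the paper's engine data.

```python
def net_thrust(mdot_core, v_core, mdot_bypass, v_bypass, v_flight):
    """F_net = sum over exhaust streams of (mdot_e * V_e) minus ram drag
    (mdot_total * V_0), for ideally expanded nozzles (p_exit = p_ambient)."""
    gross = mdot_core * v_core + mdot_bypass * v_bypass   # momentum thrust
    ram_drag = (mdot_core + mdot_bypass) * v_flight       # ingested momentum
    return gross - ram_drag
```

With 100 kg/s of core flow at 500 m/s, 500 kg/s of bypass flow at 300 m/s, and 250 m/s flight speed, the bookkeeping gives 50 kN net; at static conditions the ram-drag term drops out, which is why takeoff and cruise thrust compositions differ so strongly.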
8

Ruiz, Janneth, Antonio Ardila, Bernardo Rueda, et al. "A Proposed Procedure for Identifying the Predominant Heat Transfer Modes Along the Length of Large Nickel Laterite Ore Rotary Kiln: Experimental Validation in an Industrial Process." In ASME 2021 Heat Transfer Summer Conference collocated with the ASME 2021 15th International Conference on Energy Sustainability. American Society of Mechanical Engineers, 2021. http://dx.doi.org/10.1115/ht2021-64016.

Full text
Abstract:
Abstract Nickel is essential in many consumer, industrial, military, transport, aerospace, marine, and architectural products due to its outstanding physical and chemical properties. This work focuses on the calcination and pre-reduction of laterite nickel ore to produce ferronickel, an alloy of nickel (about 30% wt.) and iron used for manufacturing stainless steel. Calcination and pre-reduction entail removing chemically bonded water from partially dried ore and removing oxygen from mineral oxides in the calcine. Here we combine a proprietary database with operation data from two rotary kilns and model predictions of mean residence time, shell losses, intraparticle evaporation, and intraparticle temperature distribution. The kilns feature notable differences in length, inclination angle, and excess air, but the predicted mean residence times are similar. A fitted profile of the experimental solids bed temperature represented the particle surface temperature. The model considered slab-like mineral particles with surface-to-center distances of 13, 25, and 38 mm. Results show notable differences in the drying zone length and in the average surface-to-center temperature differences. Surface-to-center distances greater than 25 mm result in average surface-to-center temperature differences higher than 80°C. The next steps are improvements to the particle model and its coupling with the gas and wall temperature profiles.
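The intraparticle temperature calculation can be sketched as an explicit 1D finite-difference conduction solve over the slab half-width, with the surface held at the fitted bed temperature and a symmetry condition at the particle center. The thermal diffusivity and temperatures below are illustrative placeholders, not the ore properties used in the paper.

```python
def slab_center_temp(half_width, t_surf, t_init, alpha, t_end, nodes=21):
    """March dT/dt = alpha * d2T/dx2 from the particle center (x = 0) to the
    surface (x = half_width) with an explicit scheme; returns the center
    temperature after t_end seconds."""
    dx = half_width / (nodes - 1)
    dt = 0.4 * dx * dx / alpha          # grid Fourier number 0.4 < 0.5: stable
    temp = [t_init] * nodes
    temp[-1] = t_surf                   # surface pinned to the bed temperature
    for _ in range(int(t_end / dt)):
        new = temp[:]
        for i in range(1, nodes - 1):
            new[i] = temp[i] + alpha * dt / dx**2 * (
                temp[i + 1] - 2 * temp[i] + temp[i - 1])
        new[0] = new[1]                 # symmetry (zero flux) at the center
        temp = new
    return temp[0]
```

Run for a long soak the center converges to the surface temperature, while at short times it has barely moved, which is the surface-to-center lag the abstract quantifies for the thicker particles.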
9

Rangan, Ravi M., Kenneth Maddux, and William Duwe. "A Practical Methodology to Automate Routine and Data Intensive Design Activities." In ASME 1994 International Computers in Engineering Conference and Exhibition and the ASME 1994 8th Annual Database Symposium collocated with the ASME 1994 Design Technical Conferences. American Society of Mechanical Engineers, 1994. http://dx.doi.org/10.1115/edm1994-0506.

Full text
Abstract:
Abstract Many engineering enterprises are involved in routine design operations. Routine design, in such cases, is concerned with varying the size and/or configuration of the chosen system using established solution principles; no new solution principles are introduced. However, in most engineering environments that deliver custom products to customers, the processes followed in ensuring that these routine designs are viable depend on complex interactions within the organization. For example, customer delivery requirements may dictate material selection based on timely availability and scheduling logistics, manufacturability, adherence to codes and standards, and test procedures, in addition to technical design issues such as stress and fit calculations. This paper explores a practical and generic approach to help understand the design process, and then to prioritize the appropriate automation potential from a business standpoint. Using a structured methodology that equally emphasizes business process modeling, design process modeling, information modeling and final process implementation, the engineering enterprise is able to identify a technical focus area and an associated implementation plan. Upon identifying the technical focus, structured methodologies are again applied to develop and implement the design automation function. We illustrate this methodology with a production-level automation capability developed within the framework of routine design operations at TDW, a designer and manufacturer of pipeline fittings that serves the major oil, gas and utility companies.
The complete program was carried out over a two year period and the company has successfully reduced the tedious manual design process of complex pressure vessel systems to a streamlined automated process that has resulted in vast improvements in time to market, product quality and consistency, and significantly shorter design cycle times. This automated system was developed on the UNIX platform, and integrates TDW proprietary algorithms and rules with SDRC’s I-DEAS (SDRC, 1994), a geometric modeling system, ORACLE (ORACLE, 1992), a database management system, and KES (KES, 1988), a commercial expert system shell.
10

Nguyen, Quang-Viet. "Measurements of Equivalence Ratio Fluctuations in a Lean Premixed Prevaporized (LPP) Combustor and Its Correlation to Combustion Instability." In ASME Turbo Expo 2002: Power for Land, Sea, and Air. ASMEDC, 2002. http://dx.doi.org/10.1115/gt2002-30060.

Full text
Abstract:
Experimental evidence correlating equivalence ratio fluctuations with combustion instabilities and NOx emissions in a jet-A fueled lean premixed prevaporized (LPP) combustor utilizing a non-proprietary ‘generic’ fuel injector is presented. Real-time laser absorption measurements of equivalence ratio, together with dynamic combustor pressure, flame luminosity and fuel pressure, were obtained at inlet air conditions up to 16.7 atm and 817 K. From these data, an extensive database of real-time variables was obtained for the purpose of providing validation data for future studies of LPP combustion modeling. In addition, time- and frequency-domain analysis of the data revealed measurable levels of acoustic coupling between all variables. Cross-correlations of equivalence ratio and dynamic pressure were found to predict the level of combustion instability. Furthermore, NOx production was found to follow the root-mean-square (RMS) flame luminosity and RMS combustor dynamic pressure. However, the unmixedness of the fuel-air mixture was not found to predict NOx production in this combustor. The generic LPP injector, although not optimized for low emissions or combustion stability, provides some of the essential features of real injectors for the purposes of studying the relationship between fluctuations in equivalence ratio and combustion instability. In particular, the fuel premixer advection time was found to have a significant and direct impact on the level of combustion instability. The results of this work support the time-lag concept for avoiding combustion instability when designing injector/premixers in LPP combustors.
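The cross-correlation diagnostic can be sketched on synthetic signals: a sine-wave "equivalence ratio" and a copy delayed by ten samples standing in for the premixer advection (time-lag) delay. The measured signals in the paper are of course not clean sinusoids; this only illustrates how the correlation peak recovers the delay.

```python
import math

def xcorr(x, y, lag):
    """Normalized cross-correlation of x[t] with y[t + lag], for lag >= 0,
    over equal-length zero-mean signals."""
    n = len(x) - lag
    num = sum(x[t] * y[t + lag] for t in range(n))
    den = math.sqrt(sum(a * a for a in x) * sum(b * b for b in y))
    return num / den

# Synthetic stand-ins: a periodic equivalence-ratio signal and a copy of it
# delayed by 10 samples, mimicking the convective premixer delay.
phi = [math.sin(2 * math.pi * t / 50) for t in range(500)]
pressure = [phi[(t - 10) % 500] for t in range(500)]

# The lag of the correlation peak recovers the advection delay.
best_lag = max(range(25), key=lambda k: xcorr(phi, pressure, k))
```

The correlation is near unity at the true ten-sample delay and much weaker at zero lag, which is the sense in which equivalence-ratio/pressure cross-correlations predict instability and support the time-lag design concept.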
