Academic literature on the topic 'AWS compute instances'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the lists of relevant articles, books, theses, conference reports, and other scholarly sources on the topic 'AWS compute instances.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.

Journal articles on the topic "AWS compute instances"

1

Chilukoori, Vishnu Vardhan Reddy, and Srikanth Gangarapu. "Strategic Cost Management in Cloud-Based Big Data Processing: An AWS Case Study from Amazon." International Journal for Research in Applied Science and Engineering Technology 12, no. 9 (2024): 164–71. http://dx.doi.org/10.22214/ijraset.2024.64155.

Full text
Abstract:
This article presents a comprehensive case study on optimizing big data pipelines within the Amazon Web Services (AWS) ecosystem to achieve cost efficiency. We examine the implementation of various cost-saving strategies at Amazon, including right-sizing EC2 instances, leveraging spot instances, intelligent data lifecycle management, and strategic reserved instance purchasing. Through quantitative analysis of real-world scenarios, we demonstrate significant reductions in AWS compute costs while maintaining performance and scalability. The article reveals that a combination of these approaches led to a 37% decrease in overall operational expenses for Amazon's big data processing infrastructure. Furthermore, we discuss the challenges encountered during optimization, the trade-offs between cost and performance, and provide actionable insights for organizations seeking to maximize the value of their AWS investments. Our findings contribute to the growing body of knowledge on cloud resource optimization and offer practical guidelines for enterprises managing large-scale data processing workloads in cloud environments.
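Editor's note: the article itself contains no code, but its spot-instance tactic maps onto standard AWS SDK calls. A minimal boto3 sketch, assuming configured AWS credentials and a hypothetical region and instance type, that surveys recent spot prices so they can be weighed against on-demand rates:

```python
# Illustrative sketch only (not from the cited article): inspect recent spot
# prices for a candidate instance type before choosing purchasing options.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

response = ec2.describe_spot_price_history(
    InstanceTypes=["m5.xlarge"],                    # hypothetical candidate type
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
)
for price in response["SpotPriceHistory"]:
    print(price["AvailabilityZone"], price["InstanceType"], price["SpotPrice"])
```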
2

Guha, Sayan. "A Novel Optimization of Cloud Instances with Inventory Theory Applied on Real Time IoT Data of Stochastic Nature." International Journal on Cloud Computing: Services and Architecture (IJCCSA) 11, no. 1/2/3/4/5/6 (2022): 19. https://doi.org/10.5281/zenodo.7432870.

Full text
Abstract:
Horizontal scaling is a cloud architectural strategy by which the number of nodes or computers is increased to meet the demand of a continuously growing workload. The cost of compute instances increases with workload, and this research aims to optimize the number of reserved cloud instances using principles of inventory theory applied to IoT datasets of variable stochastic nature. With a structured solution architecture laid down for the business problem to identify the checkpoints of compute instances, the range of approximate reserved compute instances has been optimized and pinpointed by analysing the probability distribution curves of the IoT datasets. Inventory theory applied to the distribution curves of the data provides the optimized number of compute instances required, within the range prescribed by the solution architecture. The solution would help cloud solution architects and project sponsors in planning the compute power required on the AWS® Cloud platform in any business situation where ingesting and processing data of stochastic nature is a business need.
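Editor's note: the paper's method is not reproduced here, but the underlying inventory-theoretic idea (size reserved capacity at the critical-fractile quantile of the demand distribution, as in the newsvendor model) can be sketched in a few lines; the rates and demand distribution below are hypothetical.

```python
# Hedged sketch of the inventory-theory idea (our illustration, not the
# paper's code): reserve the capacity given by the critical-fractile quantile
# of stochastic instance demand, balancing under- and over-reservation costs.
import numpy as np

rng = np.random.default_rng(0)
hourly_demand = rng.poisson(lam=40, size=10_000)  # hypothetical instances needed per hour

on_demand_rate = 0.192  # hypothetical $/hour paid when short of reserved capacity
reserved_rate = 0.120   # hypothetical effective $/hour of a reserved instance

under_cost = on_demand_rate - reserved_rate  # marginal cost of reserving one too few
over_cost = reserved_rate                    # marginal cost of reserving one too many
fractile = under_cost / (under_cost + over_cost)

reserved_instances = int(np.quantile(hourly_demand, fractile))
print(f"critical fractile {fractile:.2f} -> reserve {reserved_instances} instances")
```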
3

Guha, Sayan. "A Novel Optimization of Cloud Instances with Inventory Theory Applied on Real Time IoT Data of Stochastic Nature." International Journal on Cloud Computing: Services and Architecture 11, no. 6 (2021): 19–33. http://dx.doi.org/10.5121/ijccsa.2021.11603.

Full text
Abstract:
Horizontal scaling is a cloud architectural strategy by which the number of nodes or computers is increased to meet the demand of a continuously growing workload. The cost of compute instances increases with workload, and this research aims to optimize the number of reserved cloud instances using principles of inventory theory applied to IoT datasets of variable stochastic nature. With a structured solution architecture laid down for the business problem to identify the checkpoints of compute instances, the range of approximate reserved compute instances has been optimized and pinpointed by analysing the probability distribution curves of the IoT datasets. Inventory theory applied to the distribution curves of the data provides the optimized number of compute instances required, within the range prescribed by the solution architecture. The solution would help cloud solution architects and project sponsors in planning the compute power required on the AWS® Cloud platform in any business situation where ingesting and processing data of stochastic nature is a business need.
4

Tharwani, Jay, and Arnab Purkayastha. "Cost-Performance Evaluation of General Compute Instances: AWS, Azure, GCP, and OCI." International Journal of Computer Trends and Technology 72, no. 11 (2024): 248–55. https://doi.org/10.14445/22312803/ijctt-v72i11p127.

Full text
5

Chirumamilla, Sai Krishna. "Dynamic Scaling and Cost Optimization: Best Practices for Software Engineers Leveraging AWS Spot Instances and Reserved Capacity." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 12, no. 4 (2024): 1–12. https://doi.org/10.5281/zenodo.15107565.

Full text
Abstract:
Flexibility and cost management are increasingly valuable attributes for developers of cloud-based software solutions. As cloud usage grows, cost optimization is becoming as much of a consideration as performance. AWS offers multiple choices of compute instances, including Spot Instances and Reserved Instances, which have quite different savings profiles. Spot Instances give access to spare capacity at markedly cheaper rates, with the disadvantage that they can be interrupted; small to medium-scale, non-critical, or fault-tolerant applications are best suited to them. Reserved Instances, on the other hand, provide sizeable cost reductions in return for a long-term commitment, which makes them best suited to steady, predictable use. To get the most from these options, users must accurately determine their organization's workload requirements and understand the AWS pricing structure.
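Editor's note: as a rough worked example of the trade-off described above (ours, not the article's; both rates are hypothetical), the break-even utilization at which a Reserved Instance beats paying on-demand is simply the ratio of the two rates.

```python
# Hedged back-of-the-envelope sketch: reserved capacity is billed every hour,
# on-demand only for hours actually used, so reserved wins once utilization
# exceeds the price ratio. All prices are hypothetical placeholders.
HOURS_PER_MONTH = 730

on_demand_rate = 0.192           # hypothetical $/hour on-demand
reserved_effective_rate = 0.120  # hypothetical $/hour effective reserved rate

break_even_utilization = reserved_effective_rate / on_demand_rate
print(f"Reserved is cheaper above {break_even_utilization:.0%} utilization "
      f"(about {break_even_utilization * HOURS_PER_MONTH:.0f} hours per month)")
```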
6

Kanumuri, Venkata Sasidhar (Sasi). "Compute Cost Optimization: Unleashing Cost Savings and Efficiency Wins with Current-Gen AWS EC2 Instances/Virtual Machines (VMs)." European Journal of Advances in Engineering and Technology 9, no. 7 (2022): 51–58. https://doi.org/10.5281/zenodo.12758890.

Full text
Abstract:
The ever-growing cloud landscape demands strategic cost management practices, and non-containerized workloads often pose a significant cost challenge. This article explores how new-generation EC2 instances (C, R, M families) offer a transformative approach to optimizing and empowering these workloads. We delve into the limitations of traditional methods, highlighting the performance, pricing, and feature advancements offered by new-gen instances. Concrete comparisons and quantifiable results showcased through real-world case studies demonstrate substantial cost savings, performance gains, and enhanced efficiency. Finally, the article provides a roadmap for a successful migration, empowering businesses to unlock the potential of new-gen EC2 instances for their non-containerized deployments and achieve a more sustainable and cost-effective cloud strategy.
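Editor's note: one hedged illustration of the right-sizing step such a migration starts from (not the article's code; instance types, region, and credentials are assumptions) is to pull the published specs of an older- and a current-generation type from the EC2 API.

```python
# Illustrative boto3 sketch: compare specs of an old- and a new-generation
# instance from the same family before migrating workloads.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region
response = ec2.describe_instance_types(InstanceTypes=["c4.xlarge", "c6i.xlarge"])

for itype in response["InstanceTypes"]:
    print(
        itype["InstanceType"],
        itype["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        itype["MemoryInfo"]["SizeInMiB"], "MiB RAM",
    )
```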
7

Bai, Jinbing, Ileen Jhaney, and Jessica Wells. "Developing a Reproducible Microbiome Data Analysis Pipeline Using the Amazon Web Services Cloud for a Cancer Research Group: Proof-of-Concept Study." JMIR Medical Informatics 7, no. 4 (2019): e14667. http://dx.doi.org/10.2196/14667.

Full text
Abstract:
Background: Cloud computing for microbiome data sets can significantly increase working efficiencies and expedite the translation of research findings into clinical practice. The Amazon Web Services (AWS) cloud provides an invaluable option for microbiome data storage, computation, and analysis. Objective: The goals of this study were to develop a microbiome data analysis pipeline by using AWS cloud and to conduct a proof-of-concept test for microbiome data storage, processing, and analysis. Methods: A multidisciplinary team was formed to develop and test a reproducible microbiome data analysis pipeline with multiple AWS cloud services that could be used for storage, computation, and data analysis. The microbiome data analysis pipeline developed in AWS was tested by using two data sets: 19 vaginal microbiome samples and 50 gut microbiome samples. Results: Using AWS features, we developed a microbiome data analysis pipeline that included Amazon Simple Storage Service for microbiome sequence storage, Linux Elastic Compute Cloud (EC2) instances (ie, servers) for data computation and analysis, and security keys to create and manage the use of encryption for the pipeline. Bioinformatics and statistical tools (ie, Quantitative Insights Into Microbial Ecology 2 and RStudio) were installed within the Linux EC2 instances to run microbiome statistical analysis. The microbiome data analysis pipeline was performed through command-line interfaces within the Linux operating system or in the Mac operating system. Using this new pipeline, we were able to successfully process and analyze 50 gut microbiome samples within 4 hours at a very low cost (a c4.4xlarge EC2 instance costs $0.80 per hour). Gut microbiome findings regarding diversity, taxonomy, and abundance analyses were easily shared within our research team. Conclusions: Building a microbiome data analysis pipeline with AWS cloud is feasible. This pipeline is highly reliable, computationally powerful, and cost effective. Our AWS-based microbiome analysis pipeline provides an efficient tool to conduct microbiome data analysis.
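Editor's note: a minimal boto3 sketch of the two AWS building blocks the pipeline combines, S3 for sequence storage and an EC2 server for computation. This is our illustration, not the study's code; the bucket, AMI ID, and key pair are hypothetical placeholders.

```python
# Illustrative sketch: store a sequence file in S3 and launch a compute server.
import boto3

s3 = boto3.client("s3")
s3.upload_file("gut_samples.fastq.gz", "my-microbiome-bucket",
               "raw/gut_samples.fastq.gz")  # hypothetical bucket and key

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Linux AMI
    InstanceType="c4.4xlarge",        # the instance class costed in the study ($0.80/hour)
    KeyName="my-key-pair",            # hypothetical key pair for SSH access
    MinCount=1,
    MaxCount=1,
)
```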
8

Jaybhaye, S. M., and V. Z. Attar. "Resource Provisioning for Scientific Workflow Applications using AWS Cloud." International Journal of Engineering and Advanced Technology (IJEAT) 9, no. 3 (2020): 1122–26. https://doi.org/10.35940/ijeat.B4502.029320.

Full text
Abstract:
Cloud computing plays a very important role in everyone's day-to-day life, and in recent years cloud services have become very popular for hosting applications. Virtual Machine instances are images of physical machines, described by specifications and configurations such as the number of microprocessor (CPU) cycles, memory capacity, and network bandwidth. Cloud providers must take special care when designing and implementing IaaS, as quality and service performance are crucial aspects of application execution. Scientific workflow-based applications are both compute- and data-intensive, and can take advantage of cloud features. Resource provisioning approaches vary based on the user's requirements and the metric used to allocate resources.
9

Victoire, T. Amalraj. "Dynamic Auto-Scaling and Load-Balanced Web Application Deployment in AWS." International Journal of Scientific Research in Engineering and Management 9, no. 6 (2025): 1–9. https://doi.org/10.55041/ijsrem49936.

Full text
Abstract:
Web applications must be fast, dependable, and able to manage evolving user needs without collapsing or becoming overly costly to maintain in today's digital environment. Traditional deployment approaches that rely on manual server management or ad hoc traffic-spike handling often result in downtime, inadequate performance, or high costs. This project, "Dynamic Auto-Scaling and Load-Balanced Web Application Deployment in AWS," therefore emphasizes creating a cloud-based infrastructure capable of automatically adjusting to demand, remaining available, and operating effectively without constant human intervention. The project bases its deployment on Amazon Web Services (AWS). Combining key services, including Elastic Load Balancer (ELB), Amazon RDS (Relational Database Service), Auto Scaling, and Elastic Compute Cloud (EC2), results in a dynamic and dependable system. By automatically increasing or lowering the number of EC2 instances depending on real-time usage, Auto Scaling guarantees the application has just the right level of computing capacity. The Load Balancer controls traffic by spreading it evenly, preventing any one server from becoming overwhelmed. Amazon RDS offers a scalable, secure, managed database solution supporting backup, replication, and failover mechanisms. The setup consisted of launching EC2 instances as web servers, integrating them with an ELB, configuring Auto Scaling policies triggered by CPU usage, and configuring RDS to host the application's database. Security measures, including appropriate IAM roles, security groups, and encryption, were added for protection. Keywords: Amazon Web Services (AWS), EC2, Auto Scaling, RDS, VPC, IAM, Security Groups, Scalable deployment, Web application, Auto Scaling, Load Balancing, Amazon RDS, Cloud computing, high availability, fault tolerance, elastic infrastructure, financial maximization
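Editor's note: the CPU-triggered Auto Scaling policy described above reduces to a single API call. The boto3 sketch below is our hedged illustration (the group name is a hypothetical placeholder), using target tracking to hold average CPU near 50%.

```python
# Illustrative sketch: a target-tracking scaling policy that adjusts the
# number of EC2 instances in an existing Auto Scaling group by average CPU.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-app-asg",  # hypothetical existing group
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,  # keep fleet-wide average CPU near 50%
    },
)
```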
10

Adelusi, Bamidele Samuel, Favour Uche Ojika, and Abel Chukwuemeke Uzoka. "A Conceptual Model for Cost-Efficient Data Warehouse Management in AWS, GCP, and Azure Environments." International Journal of Multidisciplinary Research and Growth Evaluation 3, no. 2 (2022): 831–46. https://doi.org/10.54660/.ijmrge.2022.3.2.831-846.

Full text
Abstract:
As enterprises increasingly migrate to cloud platforms, managing the cost-efficiency of data warehouses in environments such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure has become a critical concern. This paper proposes a conceptual model for optimizing the financial and operational management of cloud-based data warehouses. Through a synthesis of recent peer-reviewed studies, whitepapers, and real-world implementation reports from 2015 to 2024, the model integrates strategic design principles, workload optimization techniques, and governance frameworks across multi-cloud ecosystems. The proposed model emphasizes dynamic workload management, tiered storage optimization, intelligent scaling policies, and metadata-driven governance to ensure cost control without compromising performance. Key architectural components include serverless and autoscaling compute layers, storage lifecycle management, query optimization strategies, and automated performance tuning mechanisms. Particular focus is placed on the unique features and pricing models of AWS Redshift, GCP BigQuery, and Azure Synapse Analytics, detailing how organizations can exploit platform-specific capabilities to enhance cost-efficiency. Furthermore, the model incorporates modern innovations such as FinOps practices, usage-based cost allocation, predictive scaling powered by machine learning, and real-time cost observability dashboards. It also outlines potential pitfalls, such as overprovisioning, inefficient data partitioning, and underutilized reserved instances, and provides mitigation strategies to address them. By aligning technical architecture decisions with proactive financial operations, this conceptual model offers a pathway for organizations to balance performance, scalability, and budget constraints effectively. The study concludes by recommending future directions, including AI-driven autonomous warehouse management, unified billing optimization across multi-cloud deployments, and frameworks for continuous cost-performance evaluation. Mastering cost-efficient warehouse management is increasingly essential for organizations seeking to maximize the value of their data assets while maintaining fiscal responsibility in complex, distributed cloud environments.
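Editor's note: among the levers the model names, tiered storage optimization is the simplest to sketch. The boto3 example below is our illustration, not the paper's model; the bucket and prefix are hypothetical.

```python
# Illustrative sketch: an S3 lifecycle rule that migrates aging warehouse
# exports to progressively cheaper storage classes.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="warehouse-exports",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-cold-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "exports/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after 30 days
                {"Days": 90, "StorageClass": "GLACIER"},      # archive after 90 days
            ],
        }]
    },
)
```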

Dissertations / Theses on the topic "AWS compute instances"

1

Cherukuri, Prudhvi Nath Naidu, and Sree Kavya Ganja. "Comparison of GCP and AWS using usability heuristic and cognitive walkthrough while creating and launching Virtual Machine instances in Virtual Private Cloud." Thesis, Blekinge Tekniska Högskola, 2021. http://urn.kb.se/resolve?urn=urn:nbn:se:bth-21896.

Full text
Abstract:
Cloud computing has become increasingly important over the years, as the need for computational resources, data storage, and networking capabilities in the field of information technology has increased. Several large corporations offer these services to small companies or to end-users, such as GCP, AWS, Microsoft Azure, IBM Cloud, and many more. The main aim of this thesis is to compare the GCP and AWS consoles in terms of the user interface while performing tasks related to the compute engine. A cognitive walkthrough has been performed on tasks such as the creation of a VPC, the creation of VM instances, and launching them; from the results, both interfaces are compared using usability heuristics. Background: As the usage of cloud computing has increased over the years, the companies offering these services have grown with it. Though there are many cloud services available in the market, users will always choose the services that are more flexible and efficient to use. For this reason, our research compares cloud services in terms of user interaction and user experience. As we dig deeper into the topic of user interaction and experience, evaluation techniques and principles such as the cognitive walkthrough and usability heuristics are suitable for our research. Here the comparison is made between the GCP and AWS user interfaces while performing some tasks related to the compute engine. Objectives: The main objectives of this thesis are to create a VPC and VM instances, and to launch VM instances, in two different cloud services, GCP and AWS, and to find out the better user interface among these two cloud services from the perspective of the user. Method: The process of finding the better user interface among the GCP and AWS cloud services is based on a cognitive walkthrough and comparison with usability heuristics. The cognitive walkthrough is performed on chosen tasks in both services, which are then compared using usability heuristics to obtain the results of our research. Results: The results obtained from the cognitive walkthrough and comparison with usability heuristics are shown in graphical formats such as bar graphs and pie charts, and the comparison results are shown in tabular form. The results cannot be universal, as they are observational results from a cognitive walkthrough and usability heuristic evaluation. Conclusion: After performing the above-mentioned methods, it is observed that the user interface of GCP is more flexible and efficient in terms of user interaction and experience. Though the user experience may vary based on users' experience level with cloud services, in our research the novice and moderate users chose GCP as the better interactive system over AWS. Keywords: Cloud computing, VM instance, Cognitive walkthrough, Usability heuristics, User interface.
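Editor's note: the AWS half of the task the thesis evaluates (create a VPC, then launch a VM instance inside it) can be scripted as follows. This is a hedged sketch, not the thesis procedure; the AMI ID is a placeholder.

```python
# Illustrative boto3 sketch: create a VPC and subnet, then launch an instance.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
subnet = ec2.create_subnet(VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24")["Subnet"]

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Linux AMI
    InstanceType="t3.micro",
    SubnetId=subnet["SubnetId"],
    MinCount=1,
    MaxCount=1,
)
```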

Books on the topic "AWS compute instances"

1

Shagrir, Oron. The Nature of Physical Computation. Oxford University Press, 2022. http://dx.doi.org/10.1093/oso/9780197552384.001.0001.

Full text
Abstract:
Computing systems are everywhere today. Even the brain is thought to be a sort of computing system. But what does it mean to say that a given organ or system computes? What is it about laptops, smartphones, and nervous systems that they are deemed to compute, and why does it seldom occur to us to describe stomachs, hurricanes, rocks, or chairs that way? The book provides an extended argument for the semantic view of computation, which states that semantic properties are involved in the nature of computing systems. Laptops, smartphones, and nervous systems compute because they are accompanied by representations. Stomachs, hurricanes, and rocks, for instance, which do not have semantic properties, do not compute. The first part of the book argues that the linkage between the mathematical theory of computability and the notion of physical computation is weak. Theoretical notions such as algorithms, effective procedure, program, and automaton play only a minor role in identifying physical computation. The second part of the book reviews three influential accounts of physical computation and argues that while none of these accounts is satisfactory, each of them highlights certain key features of physical computation. The final part of the book develops and argues for a semantic account of physical computation and offers a characterization of computational explanations.
2

Gelernter, David. Mirror Worlds. Oxford University Press, 1991. http://dx.doi.org/10.1093/oso/9780195068122.001.0001.

Full text
Abstract:
Technology doesn't flow smoothly; it's the big surprises that matter, and Yale computer expert David Gelernter sees one such giant leap right on the horizon. Today's small scale software programs are about to be joined by vast public software works that will revolutionize computing and transform society as a whole. One such vast program is the "Mirror world." Imagine looking at your computer screen and seeing reality--an image of your city, for instance, complete with moving traffic patterns, or a picture that sketches the state of an entire far-flung corporation at this second. These representations are called Mirror worlds, and according to Gelernter they will soon be available to everyone. Mirror worlds are high-tech voodoo dolls: by interacting with the images, you interact with reality. Indeed, Mirror worlds will revolutionize the use of computers, transforming them from (mere) handy tools to crystal balls which will allow us to see the world more vividly and see into it more deeply. Reality will be replaced gradually, piece-by-piece, by a software imitation; we will live inside the imitation; and the surprising thing is--this will be a great humanistic advance. We gain control over our world, plus a huge new measure of insight and vision. In this fascinating book--part speculation, part explanation--Gelernter takes us on a tour of the computer technology of the near future. Mirror worlds, he contends, will allow us to explore the world in unprecedented depth and detail without ever changing out of our pajamas. A hospital administrator might wander through an entire medical complex via a desktop computer. Any citizen might explore the performance of the local schools, chat electronically with teachers and other Mirror world visitors, plant software agents to report back on interesting topics; decide to run for the local school board, hire a campaign manager, and conduct the better part of the campaign itself--all by interacting with the Mirror world. Gelernter doesn't just speculate about how this amazing new software will be used--he shows us how it will be made, explaining carefully and in detail how to build a Mirror world using technology already available. We learn about "disembodied machines," "trellises," "ensembles," and other computer components which sound obscure, but which Gelernter explains using familiar metaphors and terms. (He tells us that a Mirror world is a microcosm just like a Japanese garden or a Gothic cathedral, and that a computer program is translated by the computer in the same way a symphony is translated by a violinist into music.) Mirror worlds offers a lucid and humanistic account of the coming software revolution, told by a computer scientist at the cutting edge of his field.
3

Succi, Sauro. Flows at Moderate Reynolds Numbers. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780199592357.003.0018.

Full text
Abstract:
This chapter presents the application of LBE to flows at moderate Reynolds numbers, typically hundreds to thousands. This is an important area of theoretical and applied fluid mechanics, one that relates, for instance, to the onset of nonlinear instabilities and their effects on the transport properties of the unsteady flow configuration. The regime of Reynolds numbers at which these instabilities take place is usually not very high, of the order of thousands, hence basically within reach of present day computer capabilities. Nonetheless, following the full evolution of these transitional flows requires very long-time integrations with short time-steps, which command substantial computational power. Therefore, efficient numerical methods are in great demand. Also of major interest are steady-state or pulsatile flows at moderate Reynolds numbers in complex geometries, such as they occur, for instance, in hemodynamic applications. The application of LBE to such flows will also briefly be mentioned.
4

Cousins, Alita J., and Theresa Porter. Darwinian Perspectives on Women’s Progenicide. Edited by Maryanne L. Fisher. Oxford University Press, 2016. http://dx.doi.org/10.1093/oxfordhb/9780199376377.013.33.

Full text
Abstract:
Evolutionary perspectives on infanticide suggest that women kill their offspring under specific circumstances: for instance, when children have low fitness, when women are young and unpartnered, when they have older children, and when the birth spacing is too close. Infanticide may also serve as a way to increase women’s ability to compete for access to mates, especially when the mating market has a surplus of males. Under these circumstances, to stay intrasexually competitive, unpartnered women are more likely to commit infanticide, indicating that women may sometimes kill their infants as a mechanism to be able to compete for access to better mates. This may be especially true for young women. Stepmothers may also abuse or kill stepchildren to increase access to a mate and increase intrasexual competition. This chapter addresses the circumstances under which mothers abuse, neglect, and kill their offspring, including how intrasexual competition may increase infanticide.
5

Pfeiffer, Christian. Body in Categories 6. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198779728.003.0005.

Full text
Abstract:
This chapter expands on the basic theory, which is presented in the Categories. It offers a treatment of the mereotopological properties of bodies, for instance, what belongs to them insofar as they are bodies of physical substances. Bodies are complete and perfect in virtue of being three‐dimensional. Body is prior to surfaces and lines and, because bodies are complete, there cannot be a four‐dimensional magnitude. The explanation offered is that certain topological properties are linked to and determined by the nature of the object in question. Body is a composite of the boundary and the interior or extension. A formal characterization of boundaries as limit entities is offered and it is argued that boundaries are dependent particulars. Similarly, the extension is ontologically dependent on bodies. Aristotle’s argument that the extension of objects is divisible into ever‐divisibles is revisited.
6

Schmidt, Kjeld. Practice and Technology. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198733249.003.0003.

Full text
Abstract:
The emergence of practice-centered computing (e.g., Computer-Supported Cooperative Work, or CSCW) raises the crucial question: How can we conceptualize the practices into which the prospective technology is to be integrated? How can we, reasonably, say of two observed activities or events that they are, or are not, instances of the same type? These are crucial questions. This chapter therefore attempts to clarify the concepts of “practice” and “technique.” First, since our ordinary concepts of “practice” and “technique” developed as part of the evolution of modern technology, as tools for practitioners’ and scholars’ reflections on the role of technical knowledge in work, the chapter outlines the major turning points in the evolution of these concepts, from Aristotle (via the scholastics), to enlightenment thinkers such as Diderot and Kant, and finally to Marx and Marxism. The chapter thereafter moves on to analyze the concepts as we use them today in ordinary discourse.
7

Joseph, Oliver. Independence in Electoral Management: Electoral Processes Primer 1. International Institute for Democracy and Electoral Assistance (International IDEA), 2021. http://dx.doi.org/10.31752/idea.2021.103.

Full text
Abstract:
Elections are the cornerstone of democratic political processes, serving as a mechanism for political parties or candidates to compete for public office under equal conditions before the electorate. For an election to be credible, the competition must be fair, requiring impartial management of the process. As described in International IDEA’s Handbook on Electoral Management Design (Catt et al. 2014), electoral management bodies (EMBs) are the state institution or institutions established and mandated to organize or, in some instances, supervise the essential (or core) elements of this process. This Primer focuses on the establishment of EMBs as institutions normatively, structurally and functionally independent from government. It examines the benefits and limitations of the legal and institutional framework, governance structure, remit, autonomy over their resources, and contextual approaches that facilitate their functional independence, applicable to different legal and political contexts. The Primer discusses EMB independence and EMB relations with all stakeholders engaged in an electoral process at the national level.
8

Patterson, Caroline, and Derek Bell. Causes and diagnosis of chest pain. Oxford University Press, 2016. http://dx.doi.org/10.1093/med/9780199600830.003.0144.

Full text
Abstract:
Differentiating life-threatening from benign causes of chest pain in the critical care setting is a challenge when the symptoms and signs overlap, and patients are unable to communicate fully. A high index of suspicion is required for occult disease. Once the clinician has ensured the patient is haemodynamically stable, it is imperative to rule out myocardial infarction in the first instance. Where possible, a thorough history and a full examination should be undertaken. Electrocardiogram, chest X-ray, and routine observations are often diagnostic. Targeted investigation such as computed tomography, or transthoracic or transoesophageal ultrasonography may be required to confirm these diagnoses. Timely intervention optimizes survival benefit. Treatment may be necessary prior to confirmation of diagnosis, based on high clinical suspicion and risk scoring. Other causes of chest pain should be considered once the immediately life-threatening conditions are excluded.
9

Sorensen, Roy. Lying to Mindless Machines. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198743965.003.0015.

Full text
Abstract:
We routinely lie to mindless machines, such as present-day computers, but they cannot lie to us. A mindless machine cannot lie because it cannot assert. One of the ways we can assert is by going on the record. The recorder need only make the assertion accessible to hearers. This is compatible with the speaker knowing that no one will actually access the recorded assertion. For instance, you lied when you last checked the box affirming that you read the service agreement to your computer’s new software. But you did not intend to deceive anyone. How did you manage to lie in such psychologically barren circumstances? An answer is suggested by the legal concept of estoppel.
10

Brunsson, Nils. When Sellers Create Markets. Edited by Anna Tyllström. Oxford University Press, 2018. http://dx.doi.org/10.1093/oso/9780198815761.003.0006.

Full text
Abstract:
Using empirical examples of two new markets for professional service, coaching services and public relations consultancy, we discuss how prospective sellers of a new product can engage in market creation. For example, sellers must create fundamental market components, such as a good that is defined and perceived as new, buyers who can be convinced that the new good can be a commodity in a market, competitors, and forms for exchange. In so doing sellers face a specific set of market dilemmas and challenges. For instance, how can they strike a balance between presenting a good as new or old? How do they control whether buyers are individuals or organizations? How can they support the creation of an optimal number of competitors? And how do they decide what to compete with? The handling of such dilemmas and challenges is dependent on the type of good exchanged.

Book chapters on the topic "AWS compute instances"

1

Dobraunig, Christoph, Krystian Matusiewicz, Bart Mennink, and Alexander Tereschenko. "Efficient Instances of Docked Double Decker with AES, and Application to Authenticated Encryption." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2025. https://doi.org/10.1007/978-3-031-91107-1_3.

Full text
2

Teng, Teck-Hou, Hoong Chuin Lau, and Aldy Gunawan. "Instance-Specific Selection of AOS Methods for Solving Combinatorial Optimisation Problems via Neural Networks." In Lecture Notes in Computer Science. Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-030-05348-2_9.

Full text
3

Zhang, Han, Qing Li, and Xin Yao. "Knowledge-Guided Optimization for Complex Vehicle Routing with 3D Loading Constraints." In Lecture Notes in Computer Science. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-70055-2_9.

Full text
Abstract:
The split delivery vehicle routing problem with three-dimensional loading constraints (3L-SDVRP) intertwines complex routing and packing challenges. The current study addresses 3L-SDVRP using intelligent optimization algorithms, which iteratively evolve towards optimal solutions. A pivotal aspect of these algorithms is search operators that determine the search direction and the search step size. Effective operators significantly improve algorithmic performance. Traditional operators like swap, shift, and 2-opt fall short in complex scenarios like 3L-SDVRP, mainly due to their limited capacity to leverage domain knowledge. Additionally, the search step size is crucial: smaller steps enhance fine-grained search (exploitation), while larger steps facilitate exploring new areas (exploration). However, optimally balancing these step sizes remains an unresolved issue in 3L-SDVRP. To address this, we introduce an adaptive knowledge-guided insertion (AKI) operator. This innovative operator uses node distribution characteristics for adaptive node insertion, enhancing search abilities through domain knowledge integration and larger step sizes. Integrating AKI with the local search framework, we develop an adaptive knowledge-guided search (AKS) algorithm, which effectively balances exploitation and exploration by combining traditional neighbourhood operators for detailed searches with the AKI operator for broader exploration. Our experiments demonstrate that the AKS algorithm significantly outperforms the state-of-the-art method in solving various 3L-SDVRP instances.
4

Galimberti, Andrea. "FPGA-Based Design and Implementation of a Code-Based Post-quantum KEM." In Special Topics in Information Technology. Springer Nature Switzerland, 2024. http://dx.doi.org/10.1007/978-3-031-51500-2_3.

Full text
Abstract:
Post-quantum cryptography aims to design cryptosystems that can be deployed on traditional computers and resist attacks from quantum computers, which are widely expected to break the currently deployed public-key cryptography solutions in the upcoming decades. Providing effective hardware support is crucial to ensuring a wide adoption of post-quantum cryptography solutions, and it is one of the requirements set by the USA’s National Institute of Standards and Technology within its ongoing standardization process. This research delivers a configurable FPGA-based hardware architecture to support BIKE, a post-quantum QC-MDPC code-based key encapsulation mechanism. The proposed architecture is configurable through a set of architectural and code parameters, which make it efficient, providing good performance while using the resources available on FPGAs effectively, flexible, allowing to support different large QC-MDPC codes defined by the designers of the cryptosystem, and scalable, targeting the whole Xilinx Artix-7 FPGA family. Two separate modules target the cryptographic functionality of the client and server nodes of the quantum-resistant key exchange, respectively, and a complexity-based heuristic that leverages the knowledge of the time and space complexity of the configurable hardware components steers the design space exploration to identify their best parameterization. The proposed architecture outperforms the state-of-the-art reference software that exploits the Intel AVX2 extension and runs on a desktop-class CPU by 1.77 and 1.98 times, respectively, for AES-128- and AES-192-equivalent security instances of BIKE, and it provides a speedup of more than six times compared to the fastest reference state-of-the-art hardware architecture, which targets the same FPGA family.
5

Dubey, Parul, Arvind Kumar Tiwari, and Rohit Raja. "Amazon EC2 Instance." In Amazon Web Services: the Definitive Guide for Beginners and Advanced Users. BENTHAM SCIENCE PUBLISHERS, 2023. http://dx.doi.org/10.2174/9789815165821123010004.

Full text
Abstract:
This chapter offers comprehensive coverage of EC2 (Elastic Compute Cloud) in AWS, encompassing instance types, pricing, and other essential aspects. It provides detailed step-by-step instructions for setting up an EC2 instance and initiating work on the AWS platform. By following these procedures, readers gain the necessary knowledge and practical guidance to effectively utilize EC2 and leverage other capabilities offered by AWS. This chapter serves as a valuable resource for individuals seeking to navigate the intricacies of EC2 and fully harness the potential of AWS services.
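Editor's note: as a hedged companion to the chapter's walkthrough (our sketch, not the book's; the AMI ID is a placeholder), the basic EC2 lifecycle of launch, inspect, and stop looks like this in boto3.

```python
# Illustrative sketch of the EC2 instance lifecycle: launch, check state, stop.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical Amazon Linux AMI
    InstanceType="t2.micro",          # free-tier-eligible type often used when learning
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

state = ec2.describe_instances(InstanceIds=[instance_id])
print(state["Reservations"][0]["Instances"][0]["State"]["Name"])

ec2.stop_instances(InstanceIds=[instance_id])  # stop (not terminate) the instance
```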
6

Dachselt, Raimund, Sarah Alice Gaggl, Markus Krötzsch, Julián Méndez, Dominik Rusovac, and Mei Yang. "NEXAS: A Visual Tool for Navigating and Exploring Argumentation Solution Spaces." In Computational Models of Argument. IOS Press, 2022. http://dx.doi.org/10.3233/faia220146.

Full text
Abstract:
Recent developments on solvers for abstract argumentation frameworks (AFs) made them capable to compute extensions for many semantics efficiently. However, for many input instances these solution spaces can become very large and incomprehensible. So far, for the further exploration and investigation of the AF solution space the user needs to use post-processing methods or handcrafted tools. To compare and explore the solution spaces of two selected semantics, we propose an approach that visually supports the user, via a combination of dimensionality reduction of argumentation extensions and a projection of extensions to sets of accepted or rejected arguments. We introduce the novel web-based visualization tool NEXAS that allows for an interactive exploration of the solution space together with a statistical analysis of the acceptance of individual arguments for the selected semantics, as well as provides an interactive correlation matrix for the acceptance of arguments. We validate the tool with a walk-through along three use cases.
7

Lehtonen, Tuomo, Andreas Niskanen, and Matti Järvisalo. "SAT-Based Approaches to Adjusting, Repairing, and Computing Largest Extensions of Argumentation Frameworks." In Frontiers in Artificial Intelligence and Applications. IOS Press, 2018. https://doi.org/10.3233/978-1-61499-906-5-193.

Full text
Abstract:
We present a computational study of effectiveness of declarative approaches for three optimization problems in the realm of abstract argumentation. In the largest extension problem, the task is to compute a σ-extension of largest cardinality (rather than, e.g., a subset-maximal extension) among the σ-extensions of a given argumentation framework (AF). The two other problems considered deal with a form of dynamics in AFs: given a subset S of arguments of an AF, the task is to compute a closest σ-extension within a distance-based setting, either by repairing S into a σ-extension of the AF, or by adjusting S to be a σ-extension containing (or not containing) a given argument. For each of the problems, we consider both iterative Boolean satisfiability (SAT) based approaches as well as directly solving the problems via Boolean optimization using maximum satisfiability (MaxSAT) solvers. We present results from an extensive empirical evaluation under several AF semantics σ using the ICCMA 2017 competition instances and several state-of-the-art solvers. The results indicate that the choice of the approach can play a significant role in the ability to solve these problems, and that a specific MaxSAT approach yields quite generally good results. Furthermore, with impact on SAT-based AF reasoning systems more generally, we demonstrate that, especially on dense AFs, taking into account the local structure of AFs can have a significant positive effect on the overall solving efficiency.
8

Nobile, Drew. "Harmonic Syntax." In Form as Harmony in Rock Music. Oxford University Press, 2020. http://dx.doi.org/10.1093/oso/9780190948351.003.0001.

Full text
Abstract:
This chapter adapts traditional Schenkerian analytical methodology to form a theory of rock harmony rooted in the concept of prolongation. The chapter begins with the premise that focusing on small-scale chord-to-chord successions, as many existing theories do, tells us little about rock’s harmonic organization. After describing a new, syntactically based approach to harmonic function, the chapter defines the functional circuit: a large-scale harmonic trajectory spanning at least one complete song section and comprising the functional succession from tonic to pre-dominant to dominant and back to tonic. This trajectory is familiar from centuries of theoretical work on harmonic function, but its adaptation to the rock style is not trivial. In particular, it requires disentangling the notion that only certain chords can carry certain functions. For instance, dominant function can arise not only from the standard V chord but also from IV, ii, or sometimes even I chords.
9

Thiam, Khadimou Rassoul. "La Réforme de l’orthographe du français de 1990 : portée et incidences dans la Francophonie." In Écoles, langues et cultures d’enseignement en contexte plurilingue africain. Observatoire européen du plurilinguisme, 2018. http://dx.doi.org/10.3917/oep.agbef.2018.01.0029.

Full text
Abstract:
Since 1990, the Conseil supérieur de la langue française has undertaken a reform of French spelling bearing, according to its proponents, on points that require harmonization and standardization for didactic and normative purposes. These rectifications, recorded in the Journal officiel de la République française no. 100 of 6 December 1990, which received the unanimous approval of the Académie française as well as the agreement of the Conseil de la langue française of Quebec and of the Conseil de la langue of the French Community of Belgium, both consulted during the process, have been in effect in France since the start of the 2016 school year. But what of the rest of the French-speaking world, and in particular francophone Africa? If French today has an international dimension and counts more speakers outside metropolitan France than within it, it seems paradoxical that most of the countries that share French are not associated with this national reform concerning a common object. Does such a posture not reflect vertical relations between France and the other francophone countries in matters of linguistic standardization? Should Franco-French institutions, the Académie française and the Conseil supérieur de la langue française, be the bodies that decide the functioning and future of French on an international scale, given the historical and social foundations and the nationalist policies that led to their creation? Does this reform, commissioned by the French government, bind, or should it bind, the rest of the francophone world, and notably francophone Africa?
10

Golse, Bernard, and Sylvain Missonnier. "1 as Jornadas Internacionales en Psiperinatalidad de Conecta Perinatal y ASMI WAIMH-España 2021." In 1 as Jornadas Internacionales en Psiperinatalidad de Conecta Perinatal y ASMI WAIMH-España 2021. ASMI-WAIMH España, 2021. https://doi.org/10.3917/s.asmi.waimh.2021.02.0036.

Full text
Abstract:
The two topographies refer to a conception of the psyche as organized into psychic places, or agencies, which are the product of a process of intrapsychic differentiation. These two topographies are evidently still current, and we know the heuristic value they hold from a clinical, technical, and theoretical point of view when working with established subjects (children, adolescents, and adults). On the other hand, when working in perinatal care, with fetuses/babies or with subjects who are still poorly or barely differentiated, these two topographies belong to a metapsychology that is intrapsychic in essence, and their use is inevitably open to question. What is needed here is a metapsychology of the link, which opens onto a "third topography" (Brusset, 1988, 2006; Dejours, 1986, 2002; Kaës, 2009) that makes it possible to overcome the split between the interpersonal and the intrapsychic. The preobjectal investment of the link accounts for the movement toward the outside (the intransitive demand) even before the other is recognized as such, as the treatment of autistic children illustrates. The investment of the link refers to the "feeling of being," while the investment of the object refers to the "feeling of existing," the feeling of being and the feeling of existing constituting the two facets, narcissistic and objectal, of the Winnicottian "sense of being." The intransitive demand would not be addressed to the object but would already testify to an investment of this intersubjective preobjectal link, whose intrapsychic representation we try to trace through the concept of the third topography. The setting of joint therapies offers a fertile paradigm for testing and clinically legitimizing the concept of the third topography. ©2020 Association In Analysis. Published by Elsevier Masson SAS. All rights reserved.

Conference papers on the topic "AWS compute instances"

1

Pinto, Vinicius Garcia, João V. F. Lima, Vanderlei Munhoz, Daniel Cordeiro, Emilio Francesquini, and Márcio Castro. "Performance Evaluation of Dense Linear Algebra Kernels using Chameleon and StarPU on AWS." In Simpósio em Sistemas Computacionais de Alto Desempenho. Sociedade Brasileira de Computação, 2024. http://dx.doi.org/10.5753/sscad.2024.244405.

Full text
Abstract:
Due to recent advances and investments in cloud computing, public cloud providers now offer GPU-accelerated and compute-optimized Virtual Machine (VM) instances, allowing researchers to execute parallel workloads in virtual heterogeneous clusters in the cloud. This paper evaluates the performance and monetary costs of running dense linear algebra algorithms extracted from the Chameleon package implemented using StarPU on Amazon Elastic Compute Cloud (EC2) instances. We evaluated these metrics with a single powerful/costly instance with four NVIDIA GPUs (fat node) and with a cluster of five less powerful/cheaper instances with a single NVIDIA GPU in each node. Our results showed that most of the linear algebra algorithms achieved better performance and lower monetary costs on the fat node scenario even with one less GPU.
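Editor's note: the paper's cost metric can be illustrated with simple arithmetic. The sketch below uses hypothetical hourly prices and runtimes, not the paper's measurements.

```python
# Hedged sketch of the cost comparison: one 4-GPU "fat" instance versus a
# cluster of five 1-GPU instances, costed by wall-clock runtime.
fat_node_rate = 12.24   # hypothetical $/hour for a 4-GPU instance
small_node_rate = 3.06  # hypothetical $/hour for a 1-GPU instance
cluster_size = 5

def run_cost(hourly_rate: float, nodes: int, runtime_hours: float) -> float:
    """Monetary cost of a run: price per hour x nodes x wall-clock hours."""
    return hourly_rate * nodes * runtime_hours

print(f"fat node: ${run_cost(fat_node_rate, 1, 0.5):.2f}")
print(f"cluster:  ${run_cost(small_node_rate, cluster_size, 0.4):.2f}")
```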
2

Ramapuram, Vinay Goud, Jayshri Dhar, and Mallikarjuna Reddy Munaiahgari. "ADAS Simulation on Cloud." In 11th SAEINDIA International Mobility Conference (SIIMC 2024). SAE International, 2024. https://doi.org/10.4271/2024-28-0166.

Full text
Abstract:
The rapid evolution of new technologies in the automotive sector is driving the demand for advanced simulation solutions, enabling faster software development cycles. Developers often encounter challenges in managing the vast amounts of data generated during testing. For example, a single Advanced Driver Assistance System (ADAS) test vehicle can produce several terabytes of data daily. Efficiently handling and distributing this data across multiple locations can introduce delays in the development process. Moreover, the large volume of test cases required for simulation and validation further exacerbates these delays. On-premises simulation setups, especially those dependent on High-Performance Computing (HPC) systems, pose several challenges, including limited computational resources, scalability issues, high capital and maintenance costs, resource management inefficiencies, and compatibility problems between GPU drivers and servers, all of which can impact both performance and costs. To address these issues, transitioning to cloud-based simulations using AWS services—such as S3, Batch, Auto Scaling Group (ASG), and Elastic Container Registry (ECR)—allows for parallel execution of Software-In-the-Loop (SIL) tests. Multiple compute instances can be deployed simultaneously, significantly reducing testing times. Additionally, GPU-based simulations can be executed on cost-effective instances, potentially reducing simulation costs by up to threefold. The Docker container used in these simulations packages the necessary software algorithms, simulation tools, and test recordings. Key driving functions, including Blind Spot Detection (BSD), Lane Change Assist (LCA), and Occupant Safety Exit (OSE), are tested using real-world driving scenario data to ensure their effectiveness. The workbench offers an innovative environment for developing software solutions using virtual ECUs, even before the physical hardware becomes available. Furthermore, the automotive operating system utilized for software development provides robust middleware, ensuring secure and standardized communication between computers and the cloud.
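Editor's note: on the API side, the parallel SIL-test pattern described above reduces to submitting an array job. The boto3 example below is our hedged illustration; the queue, job definition, and array size are hypothetical placeholders.

```python
# Illustrative sketch: one AWS Batch array job whose container (pulled from
# ECR by the job definition) processes many test recordings in parallel.
import boto3

batch = boto3.client("batch", region_name="eu-central-1")  # hypothetical region
batch.submit_job(
    jobName="sil-regression-run",
    jobQueue="adas-sim-queue",        # hypothetical queue backed by EC2/GPU compute
    jobDefinition="sil-container:3",  # hypothetical definition referencing an ECR image
    arrayProperties={"size": 500},    # one child job per test recording
)
```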
3

da Costa Marques, Luciana, and Alfredo Goldman. "A Preliminary Review of Function as a Service Platform Running with AWS Spot Instances." In 2023 International Symposium on Computer Architecture and High Performance Computing Workshops (SBAC-PADW). IEEE, 2023. http://dx.doi.org/10.1109/sbac-padw60351.2023.00024.

Full text
4

Frois, João, Lucas Padrão, Johnatan Oliveira, Laerte Xavier, and Cleiton Tavares. "Terraform and AWS CDK: A Comparative Analysis of Infrastructure Management Tools." In Simpósio Brasileiro de Engenharia de Software. Sociedade Brasileira de Computação, 2024. http://dx.doi.org/10.5753/sbes.2024.3577.

Full text
Abstract:
Infrastructure as Code is a fundamental concept in DevOps that automates infrastructure management processes using code. Several tools, such as Terraform and CDK, support this environment. Selecting the appropriate tool is crucial to a project’s success, yet there is ambiguity about the circumstances in which developers should choose between these tools. Therefore, this study aims to compare Terraform and CDK across four aspects: abstraction, scalability, maintainability, and performance. Our findings indicate that each tool performs particularly well in specific scenarios. For instance, Terraform is better suited for experienced teams focused on rapid implementations, while CDK is more appropriate for less experienced teams prioritizing resource efficiency during implementation.
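Editor's note: for concreteness, a minimal AWS CDK stack in Python (our sketch, not the study's benchmark code) declaring the kind of resource both tools manage, an EC2 instance in a new VPC.

```python
# Illustrative AWS CDK (Python) sketch; requires aws-cdk-lib and a
# bootstrapped AWS account. All construct IDs are hypothetical.
from aws_cdk import App, Stack
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class DemoStack(Stack):
    def __init__(self, scope: Construct, construct_id: str) -> None:
        super().__init__(scope, construct_id)
        # A small VPC and a single instance inside it.
        vpc = ec2.Vpc(self, "DemoVpc", max_azs=1)
        ec2.Instance(
            self, "DemoInstance",
            vpc=vpc,
            instance_type=ec2.InstanceType("t3.micro"),
            machine_image=ec2.AmazonLinuxImage(),  # stock Amazon Linux AMI lookup
        )

app = App()
DemoStack(app, "DemoStack")
app.synth()  # emits the CloudFormation template, as `cdk synth` would
```

An equivalent Terraform configuration would declare the same VPC and instance in HCL; the comparison in the paper turns on how each tool abstracts, scales, and maintains such declarations.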
5

Amar, Mohamed Abdellahi, and Soumaya Sassi Mahfoudh. "A Hybrid Partitioning Algorithm to Solve Big Instances of the Probabilistic TSP." In 2020 IEEE/ACS 17th International Conference on Computer Systems and Applications (AICCSA). IEEE, 2020. http://dx.doi.org/10.1109/aiccsa50499.2020.9316489.

Full text
6

Leung, Enoch, Nesrin Sarigul-Klijn, and Rolando F. Roberto. "Characterization of Irregular Stress Distributions Induced by Klippel Feil Syndrome." In ASME 2011 International Mechanical Engineering Congress and Exposition. ASMEDC, 2011. http://dx.doi.org/10.1115/imece2011-63343.

Full text
Abstract:
Klippel Feil Syndrome (KFS) is a congenital disorder characterized by failure of segmentation of cervical vertebrae, resulting in “fusions” at any level of the cervical spine. Clinical diagnosis of KFS occurs at a mean age of 7.1 years, with children diagnosed with KFS often exhibiting reduced motion and function characterized by reduction of upward and downward motions of the head on the neck (flexion/extension), axial rotation, and tilting of the head side to side (lateral bending). More importantly, however, previous KFS studies have acknowledged possible compromises to the structural integrity and overall health of the cervical spine in the presence of abnormal fusion. Instances of instabilities such as fracture and large amounts of mobility at vertebral segments adjacent to fusion have been recorded, both posing significant neurological and physiological dangers to an individual afflicted with KFS. While fusion and instability appear to be interrelated, more intrinsic evaluation of KFS-related instabilities is needed. Current KFS studies, relying predominantly on static radiographic modalities, have been unsuccessful in identifying factors contributing to craniocervical (CC) destabilization in the presence of congenital vertebral fusion. It has been hypothesized that fusion of vertebral bodies induces abnormal stress distributions that catalyze instances of fracture along any KFS spine segment. Using Finite Element (FE) Modeling and Analysis to characterize motion alterations and irregular stress patterns associated with vertebral fusion, a high fidelity computational representation of a KFS affected cervical spine segment spanning the base of the occiput to C6 was constructed. Computed Tomography (CT) images were used for vertebral reconstruction, with soft tissue components such as intervertebral discs (IVDs), articular cartilages (ACs), and the transverse ligament modeled as homogeneous solid components.
7

Horva´th, Imre, Zolta´n Rusa´k, Eva Hernando Martin, et al. "An Information Technological Specification of Abstract Prototyping for Artifact and Service Combinations." In ASME 2011 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. ASMEDC, 2011. http://dx.doi.org/10.1115/detc2011-47079.

Full text
Abstract:
Various early prototyping techniques have been proposed for specific purposes and products, for instance for user-centered design of software tools, or interface design of consumer durables. Our research focuses on the development of a comprehensive approach, called abstract prototyping, to support a rich and complete prototyping of artifact-service combinations (ASCs). In this paper we present the concept and implementation of abstract prototypes (APs) from an information system point of view, and discuss both the general information structure and the specific information constructs used in our approach. First, the main constituents of APs are identified. Then, formal definitions of the involved information constructs are introduced. Afterwards, the practical implementation of the information constructs is discussed. As an information processing activity, abstract prototyping decomposes to four stages: (i) aggregation of information about the innovated ASCs, (ii) compilation and testing of the technical contents for abstract prototype(s), (iii) demonstration of the abstract prototype(s) to stakeholders, and (iv) refinement of the contents towards a final abstract prototype. It is assumed that ideation and elaboration of the concepts of the new artifact-service combinations precedes and produces input for abstract prototyping. It is proposed that APs should demonstrate real life manifestation of all characteristic operation and interaction/use processes, including the operation of the conceptualized artifact-service combination, the actions of the human actors, and the happenings in the surrounding environment. This can be achieved through the inclusion and proper instantiation of the necessary information constructs in the APs. The real life processes established by the existence and operations of ASCs is modeled and represented by scenarios. The contents of the abstract prototype are designed and demonstrated taking the interests and needs of the stakeholders into consideration. Eventually, an abstract prototype consists of two main constituents, namely narration and enactment, which enable the presentation of the technical contents. The former conveys a story about the manifestation of the ASCs and highlights the accompanying processes, and the latter visualizes the components, actors, arrangements, procedures, and happenings involved in them. The presented approach of information content development has been tested in master graduation projects, certain cycles of PhD research, and a company orientated process innovation project. The follow up research focuses on the development of a dedicated tool for abstract prototyping, and on the validation of proposed development and application methodology in complex industrial cases.
APA, Harvard, Vancouver, ISO, and other styles
8

Slater, Michael R. S., and Douglas L. Van Bossuyt. "Toward a Dedicated Failure Flow Arrestor Function Methodology." In ASME 2015 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2015. http://dx.doi.org/10.1115/detc2015-46270.

Full text
Abstract:
Risk analysis in engineering design is of paramount importance when developing complex systems or upgrading existing systems. In many complex systems, new generations of systems are expected to have decreased risk and increased reliability when compared with previous designs. For instance, within the American civilian nuclear power industry, the Nuclear Regulatory Commission (NRC) has progressively increased requirements for reliability and driven down the chance of radiological release beyond the plant site boundary. However, many ongoing complex system design efforts analyze risk after early major architecture decisions have been made. One promising method of bringing risk considerations earlier into the conceptual stages of the complex system design process is functional failure modeling. Function Failure Identification and Propagation (FFIP) and related methods began the push toward assessing risk using the functional modeling taxonomy. This paper advances the Dedicated Failure Flow Arrestor Function (DFFAF) method which incorporates dedicated Arrestor Functions (AFs) whose purpose is to stop failure flows from propagating along uncoupled failure flow pathways, as defined by Uncoupled Failure Flow State Reasoner (UFFSR). By doing this, DFFAF provides a new tool to the functional failure modeling toolbox for complex system engineers. This paper introduces DFFAF and provides an illustrative simplified civilian Pressurized Water Reactor (PWR) nuclear power plant case study.
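The arrestor-function idea lends itself to a simple graph reading: failure flows propagate along pathways between functions unless a dedicated arrestor function absorbs them. The sketch below is a loose illustration under our own assumptions; the network, node names, and blocking rule are invented for demonstration and are not the DFFAF method itself.

    from collections import deque

    # Hypothetical function network: edges are potential failure flow pathways,
    # with one pathway routed through a dedicated arrestor function (all names invented).
    EDGES = {
        "pump": ["pipe", "controller"],
        "pipe": ["arrestor_valve"],   # failure flow from the pipe is routed into an arrestor
        "controller": ["reactor"],
        "arrestor_valve": [],
        "reactor": [],
    }
    ARRESTORS = {"arrestor_valve"}

    def propagate_failure(origin: str) -> set:
        """Breadth-first failure flow propagation; arrestor nodes absorb the flow instead of forwarding it."""
        affected, queue = {origin}, deque([origin])
        while queue:
            node = queue.popleft()
            if node in ARRESTORS:
                continue  # the dedicated arrestor function stops the failure flow here
            for nxt in EDGES.get(node, []):
                if nxt not in affected:
                    affected.add(nxt)
                    queue.append(nxt)
        return affected

    # The reactor is still reachable via the controller, but the pipe pathway is arrested:
    print(propagate_failure("pump"))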
APA, Harvard, Vancouver, ISO, and other styles
9

Maråk, Rasmus, Emmanuel Blazquez, and Pablo Gómez. "Trajectory Optimization of a Swarm Orbiting 67P/Churyumov-Gerasimenko Maximising Gravitational Signal." In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-057.

Full text
Abstract:
Proper modelling of the gravitational fields of irregular asteroids and comets is an essential yet difficult part of any spacecraft visit and flyby to these bodies. Accurate density representations provide crucial information, e.g., for proximity operations of spacecraft near such bodies which rely heavily on it to design safe and efficient trajectories. [2] Recently, so-called neural density fields [1] have emerged as a versatile tool that can provide an accurate description of the density distribution of a body’s mass, internal and external shape with few prior requirements. This representation has several advantages as it requires no prior information on the body, converges even inside the Brillouin sphere, and is extensible even to heterogeneous density distributions of the celestial bodies. [1,3] However, it remains an open question whether there are feasible, achievable trajectories that provide sufficient information, i.e. gravitational signal, to model the gravity and density field of an irregular body with high fidelity using neural density fields. For instance, a previous study demonstrated that the planned trajectory of the OSIRIS-REx spacecraft around Bennu produced a gravitational signal that proved to be too sparse for the task [4]. This difficulty could be circumvented using a distributed data acquisition and learning approach, where a swarm of spacecraft instead of a single one would work to acquire the gravity signal and learn a body’s density field. In this work, we explore maximising the gravitational signal in a hypothetical mission around the comet 67P/Churyumov-Gerasimenko by using a swarm of spacecraft. Sets of spacecraft trajectories are simultaneously optimised to maximise overall signal return while minimising propellant budget for orbital manoeuvres. This proves to be a challenging optimization problem due to the complex topology of 67P’s gravitational field and its sidereal rotation. [5] In contrast to a single spacecraft scenario, this mission context allows us to improve the acquisition of the gravitational signal through multiple, simultaneous relative observation angles. Orbit propagation is based on an open-source polyhedral gravity model [6] using a detailed mesh of 67P and takes the comet’s sidereal rotation into account. Trajectory optimization routines rely on the open-source pygmo framework maintained by ESA’s Advanced Concepts Team to formulate the problem as a constrained, multi-objective optimization problem. The developed code is designed independently of the celestial body of interest and provided online to allow follow-up studies with related models on other bodies. Constraints considered for this application include partial line of sight with the rest of the swarm, absence of collision with the comet, spacecraft power generation and telemetry. A parameter of particular interest for this study is the number of spacecraft constituting the swarm, in order to investigate the potential benefits and signal return thresholds when using a distributed approach. We compare results on different formation flying scenarios with varying complexity of the imposed constraints. Further, we consider heterogeneous measures of the gravitational signal characterised by different view angles and altitudes with respect to the celestial body. Additionally, we can directly correlate richness of the obtained dataset of trajectories to the duration of the mission. 
Based on a dataset of points and accelerations for the swarm after varying amounts of time, we investigate the training of a geodesyNet to model 67P's mass density field. We compare the obtained fidelity in detail with the established synthetic training using randomly sampled points as in the original work on geodesyNets [1]. Thus, we can directly relate the signal obtainable in a real mission scenario with an ideal one. Overall, this work takes the next step in bringing neural density fields to an onboard mission scenario, where they can be a useful and potent tool complementing existing approaches such as polyhedral or mascon models. In practice, the developed, open-source code can serve as a testbed to evaluate whether a hypothetical mission scenario can reasonably rely on geodesyNets and neural density fields. It also serves as a first step in investigating the potential of autonomously learning small-body gravitational fields in a distributed fashion. The tools used for this study are fully available online and designed to be extended to more general distributed learning applications with spacecraft swarms around small celestial bodies. References [1] Izzo, D. and Gómez, P., 2021. Geodesy of irregular small bodies via neural density fields: geodesyNets. arXiv preprint arXiv:2105.13031. [2] Leonard, J.M., Geeraert, J.L., Page, B.R., French, A.S., Antreasian, P.G., Adam, C.D., Wibben, D.R., Moreau, M.C. and Lauretta, D.S., 2019, August. OSIRIS-REx orbit determination performance during the navigation campaign. In 2019 AAS/AIAA Astrodynamics Specialist Conference (pp. 1-20). [3] Cui, Pingyuan, and Dong Qiao. "The present status and prospects in the research of orbital dynamics and control near small celestial bodies." Theoretical and Applied Mechanics Letters 4.1 (2014): 013013. [4] von Looz, M., Gómez, P. and Izzo, D., 2021. Study of the asteroid Bennu using geodesyANNs and Osiris-Rex data. arXiv preprint arXiv:2109.14427. [5] Keller, H.U., Mottola, S., Skorov, Y. and Jorda, L., 2015. The changing rotation period of comet 67P/Churyumov-Gerasimenko controlled by its activity. Astronomy & Astrophysics, 579, p.L5. [6] Schuhmacher, J. (2022) Efficient Polyhedral Gravity Modeling in Modern C++. Technical University of Munich.
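The abstract frames trajectory design as a constrained, multi-objective problem posed in ESA's open-source pygmo framework. The fragment below is only a schematic of how such a problem is typically expressed in pygmo (the fitness/get_bounds/get_nobj interface is pygmo's standard user-defined-problem protocol); the toy decision vector, the signal and propellant proxies, and the bounds are stand-ins invented for illustration, not the study's dynamics or constraints.

    import numpy as np
    import pygmo as pg

    class SwarmTrajectoryProblem:
        """Toy stand-in for the study's optimisation problem (objectives invented)."""

        def fitness(self, x):
            # x: toy decision vector of orbital parameters for one spacecraft.
            signal = -np.sum(np.cos(x))     # invented proxy: maximise signal -> minimise its negative
            propellant = np.sum(np.abs(x))  # invented proxy for manoeuvre propellant budget
            return [signal, propellant]

        def get_nobj(self):
            return 2  # two objectives: signal return vs. propellant use

        def get_bounds(self):
            return ([-1.0] * 4, [1.0] * 4)  # 4 toy decision variables

    prob = pg.problem(SwarmTrajectoryProblem())
    algo = pg.algorithm(pg.nsga2(gen=50))   # a standard multi-objective evolutionary solver in pygmo
    pop = pg.population(prob, size=40)
    pop = algo.evolve(pop)
    print(pop.get_f()[:5])                  # a slice of the Pareto front approximation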
APA, Harvard, Vancouver, ISO, and other styles
10

St-Amour, Amélie, Pamela Woo, Ludwik A Sobiesiak, et al. "ALTIUS Attitude and Orbit Control System Software and System-Level Test Results." In ESA 12th International Conference on Guidance Navigation and Control and 9th International Conference on Astrodynamics Tools and Techniques. ESA, 2023. http://dx.doi.org/10.5270/esa-gnc-icatt-2023-084.

Full text
Abstract:
The Atmospheric Limb Tracker for Investigation of the Upcoming Stratosphere (ALTIUS) project is ESA's ozone-monitoring mission developed as part of ESA's Earth Watch programme and is planned for launch in 2025. The small satellite will use a high-resolution spectral imager and limb-sounding techniques to profile ozone and other trace gases within the upper atmosphere, while supporting weather forecasting and monitoring long-term ozone trends. To carry out these mission objectives, the Attitude and Orbit Control System (AOCS) software must provide the spacecraft with high agility, high autonomy and fine pointing accuracy. The ALTIUS satellite platform and its AOCS software are an evolution of the ESA Project for On-Board Autonomy (PROBA) series of satellites, which have nearly 45 combined years of successful in-orbit experience from PROBA-1 (launched in October 2001), PROBA-2 (launched in November 2009) and PROBA-V (launched in May 2013). Building upon its PROBA predecessors, ALTIUS ensures a high level of on-board autonomy, minimising the need for ground station commands. To complete the mission objectives, ALTIUS requires several AOCS software innovations compared to the previous PROBA missions. The autonomous novelties implemented for ALTIUS include the following:
• Limb Looking Mode: an Earth atmosphere observation mode during which the AOCS software autonomously orients the spacecraft such that the payload line of sight points to a desired tangent point at a fixed altitude above the Earth surface. It includes nine submodes dictating the desired direction of the tangent point, including for instance Backwards Limb Looking (BLL) for anti-velocity limb pointing and Pole Limb Looking (PLL) for observation towards the North or South poles.
• Augmented Bdot Safe Mode: the previous PROBA satellites offer two Safe modes: Bdot and 3-Axis Magnetic. The Bdot mode uses the magnetic field to reduce angular rates and to roughly align, along the orbit normal, a momentum bias generated by maintaining the reaction wheels at a constant speed. The 3-Axis Magnetic mode is based on the Bdot algorithm with an additional pointing of a third axis to nadir; it requires spacecraft position knowledge. As a novelty on ALTIUS, an additional Safe mode is included: the Augmented Bdot mode. This mode is similar to the Bdot mode but additionally roughly aligns a user-specified spacecraft axis (perpendicular to the reaction wheel momentum bias axis) with the Earth's magnetic field. It offers 3-axis pointing with respect to the magnetic field and the orbit normal, and is more robust than the 3-Axis Magnetic mode because it does not require spacecraft position knowledge.
• Thruster Management with Off-Modulation: the ALTIUS spacecraft is equipped with 4 thrusters aligned in the same direction. The AOCS software is designed to command the 4 thrusters (falling back to a 2-thruster configuration in case of thruster failure) for ΔV manoeuvres while performing off-modulation of the thrusters to contribute to attitude control and avoid wheel saturation due to disturbing torques.
• Occultation Event Prediction: time and orientation (in terms of azimuth and elevation angles) prediction of the next occultation entry and exit for up to 10 celestial targets and the Sun, where occultation is defined as the event during which a celestial target is visible from the spacecraft within lower (-10 km) and upper (+100 km) tangent altitude limits with respect to the Earth's horizon.
• Occultation Event Monitoring: continuous monitoring of the elevation angle for up to 10 celestial targets and the Sun.
• Stellar Occultation Mode:
- Standby Submode: aligns the spacecraft with the anticipated orientation of the next occultation start and avoids blinding of the star trackers and payload.
- Tracking Submode: tracks a celestial target such as the Sun, a star, a planet, or the Moon through the Earth's atmosphere during occultation.
After an overview of the ALTIUS mission, its AOCS software and its novelties, this paper focuses primarily on three of the above-mentioned novelties: the Limb Looking mode, the Augmented Bdot Safe mode and thruster management with off-modulation. It discusses the challenges associated with their design and presents their performance based on the AOCS Software System-Level Tests (SST) results. The SST campaign was performed on a closed-loop MATLAB/Simulink simulator and completed in January 2023. The SST results for the other novelties, namely occultation event prediction and monitoring and the Stellar Occultation mode, are not detailed in the present paper as they are the subject of another paper to be published and presented at the 45th Annual AAS Guidance & Control Conference in Breckenridge, Colorado, USA in February 2023.
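The Bdot law referenced above is a standard magnetic detumbling technique: the magnetorquer dipole is commanded against the measured rate of change of the Earth's magnetic field in the body frame, so the resulting torque opposes body rates. The sketch below is a generic textbook Bdot loop, not ALTIUS flight code; the gain, sampling interval, and field samples are illustrative assumptions of ours.

    import numpy as np

    K_BDOT = 5.0e4   # control gain (illustrative value, not a flight parameter)
    DT = 0.1         # magnetometer sampling interval in seconds (assumed)

    def bdot_dipole(b_now: np.ndarray, b_prev: np.ndarray) -> np.ndarray:
        """Classic Bdot detumbling: command a magnetic dipole m = -k * dB/dt.

        The torque m x B then opposes the body rates, bleeding off angular momentum.
        """
        b_dot = (b_now - b_prev) / DT   # finite-difference estimate of dB/dt (body frame, tesla)
        return -K_BDOT * b_dot          # commanded magnetorquer dipole (A*m^2)

    # Two consecutive (made-up) body-frame field measurements in tesla:
    b_prev = np.array([1.2e-5, -3.0e-5, 2.2e-5])
    b_now = np.array([1.4e-5, -2.8e-5, 2.1e-5])
    print(bdot_dipole(b_now, b_prev))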
APA, Harvard, Vancouver, ISO, and other styles

Reports on the topic "AWS compute instances"

1

Huang, Haohang, Erol Tutumluer, Jiayi Luo, Kelin Ding, Issam Qamhia, and John Hart. 3D Image Analysis Using Deep Learning for Size and Shape Characterization of Stockpile Riprap Aggregates—Phase 2. Illinois Center for Transportation, 2022. http://dx.doi.org/10.36501/0197-9191/22-017.

Full text
Abstract:
Riprap rock and aggregates are extensively used in structural, transportation, geotechnical, and hydraulic engineering applications. Field determination of morphological properties of aggregates such as size and shape can greatly facilitate the quality assurance/quality control (QA/QC) process for proper aggregate material selection and engineering use. Many aggregate imaging approaches have been developed to characterize the size and morphology of individual aggregates by computer vision. However, 3D field characterization of aggregate particle morphology is challenging both during the quarry production process and at construction sites, particularly for aggregates in stockpile form. This research study presents a 3D reconstruction-segmentation-completion approach based on deep learning techniques by combining three developed research components: field 3D reconstruction procedures, 3D stockpile instance segmentation, and 3D shape completion. The approach was designed to reconstruct aggregate stockpiles from multi-view images, segment the stockpile into individual instances, and predict the unseen side of each instance (particle) based on the partial visible shapes. Based on the dataset constructed from individual aggregate models, a state-of-the-art 3D instance segmentation network and a 3D shape completion network were implemented and trained, respectively. The application of the integrated approach was demonstrated on re-engineered stockpiles and field stockpiles. The validation of results using ground-truth measurements showed satisfactory algorithm performance in capturing and predicting the unseen sides of aggregates. The algorithms are integrated into a software application with a user-friendly graphical user interface. Based on the findings of this study, this stockpile aggregate analysis approach is envisioned to provide efficient field evaluation of aggregate stockpiles by offering convenient and reliable solutions for on-site QA/QC tasks of riprap rock and aggregate stockpiles.
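The approach chains three stages: multi-view 3D reconstruction of the stockpile, instance segmentation into individual particles, and shape completion of each particle's occluded side. A minimal sketch of that data flow is below; the function bodies are deliberately trivial placeholders (the real system uses trained deep networks), and every name is ours, not the authors'.

    import numpy as np

    def reconstruct_stockpile(images: list) -> np.ndarray:
        """Placeholder for multi-view 3D reconstruction: returns an (N, 3) point cloud."""
        rng = np.random.default_rng(0)
        return rng.uniform(0.0, 1.0, size=(1000, 3))  # dummy cloud standing in for photogrammetry output

    def segment_instances(cloud: np.ndarray) -> list:
        """Placeholder for 3D instance segmentation: split the cloud into per-particle subsets."""
        # Crude spatial split standing in for a learned instance segmentation network.
        return [cloud[cloud[:, 0] < 0.5], cloud[cloud[:, 0] >= 0.5]]

    def complete_shape(partial: np.ndarray) -> np.ndarray:
        """Placeholder for shape completion: mirror points to fake the unseen side."""
        mirrored = partial * np.array([1.0, 1.0, -1.0])
        return np.vstack([partial, mirrored])

    # End-to-end flow mirroring the reconstruction-segmentation-completion pipeline:
    particles = [complete_shape(p) for p in segment_instances(reconstruct_stockpile(images=[]))]
    sizes = [np.ptp(p, axis=0) for p in particles]  # bounding-box extents as a crude size metric
    print(len(particles), sizes[0])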
APA, Harvard, Vancouver, ISO, and other styles
2

Baader, Franz, Francesco Kriegel, Adrian Nuradiansyah, and Rafael Peñaloza. Computing Compliant Anonymisations of Quantified ABoxes w.r.t. EL Policies (Extended Version). Technische Universität Dresden, 2020. http://dx.doi.org/10.25368/2022.263.

Full text
Abstract:
We adapt existing approaches for privacy-preserving publishing of linked data to a setting where the data are given as Description Logic (DL) ABoxes with possibly anonymised (formally: existentially quantified) individuals and the privacy policies are expressed using sets of concepts of the DL EL. We provide a characterization of compliance of such ABoxes w.r.t. EL policies, and show how optimal compliant anonymisations of ABoxes that are noncompliant can be computed. This work extends previous work on privacy-preserving ontology publishing, in which a very restricted form of ABoxes, called instance stores, had been considered, but restricts the attention to compliance. The approach developed here can easily be adapted to the problem of computing optimal repairs of quantified ABoxes.
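As a concrete illustration of the compliance notion, consider the following toy example (ours, constructed in the spirit of the paper's setting rather than taken from it):

\[ \mathcal{A} = \{\mathsf{Patient}(a),\; \mathsf{seenBy}(a,b),\; \mathsf{Doctor}(b)\}, \qquad \mathcal{P} = \{\mathsf{Patient} \sqcap \exists \mathsf{seenBy}.\mathsf{Doctor}\}. \]

Here \(\mathcal{A}\) is not compliant with the policy \(\mathcal{P}\), since it entails that \(a\) is an instance of the policy concept. Anonymising \(b\) into an existentially quantified individual and dropping \(\mathsf{Doctor}(b)\) yields \(\{\mathsf{Patient}(a),\; \exists y.\,\mathsf{seenBy}(a,y)\}\), which no longer entails the policy concept and is therefore compliant.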
APA, Harvard, Vancouver, ISO, and other styles
3

Baader, Franz, Stefan Borgwardt, and Marcel Lippmann. On the Complexity of Temporal Query Answering. Technische Universität Dresden, 2013. http://dx.doi.org/10.25368/2022.191.

Full text
Abstract:
Ontology-based data access (OBDA) generalizes query answering in databases towards deduction since (i) the fact base is not assumed to contain complete knowledge (i.e., there is no closed world assumption), and (ii) the interpretation of the predicates occurring in the queries is constrained by axioms of an ontology. OBDA has been investigated in detail for the case where the ontology is expressed by an appropriate Description Logic (DL) and the queries are conjunctive queries. Motivated by situation awareness applications, we investigate an extension of OBDA to the temporal case. As query language we consider an extension of the well-known propositional temporal logic LTL where conjunctive queries can occur in place of propositional variables, and as ontology language we use the prototypical expressive DL ALC. For the resulting instance of temporalized OBDA, we investigate both data complexity and combined complexity of the query entailment problem.
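To make the query language concrete, here is an invented example in the formalism described above (not drawn from the paper): conjunctive queries take the place of propositional variables inside LTL operators, as in

\[ \Box\bigl(\exists y.\,\mathsf{finding}(x,y) \wedge \mathsf{Critical}(y) \;\rightarrow\; \Diamond\,\exists z.\,\mathsf{raises}(x,z) \wedge \mathsf{Alarm}(z)\bigr), \]

evaluated over a temporal sequence of ABoxes constrained by an ALC TBox: whenever a patient \(x\) has a critical finding, an alarm for \(x\) must eventually be raised.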
APA, Harvard, Vancouver, ISO, and other styles
4

Baader, Franz, and Francesco Kriegel. Pushing Optimal ABox Repair from EL Towards More Expressive Horn-DLs: Extended Version. Technische Universität Dresden, 2022. http://dx.doi.org/10.25368/2022.131.

Full text
Abstract:
Ontologies based on Description Logic (DL) represent general background knowledge in a terminology (TBox) and the actual data in an ABox. DL systems can then be used to compute consequences (such as answers to certain queries) from an ontology consisting of a TBox and an ABox. Since both human-made and machine-learned data sets may contain errors, which manifest themselves as unintuitive or obviously incorrect consequences, repairing DL-based ontologies in the sense of removing such unwanted consequences is an important topic in DL research. Most of the repair approaches described in the literature produce repairs that are not optimal, in the sense that they do not guarantee that only a minimal set of consequences is removed. In a series of papers, we have developed an approach for computing optimal repairs, starting with the restricted setting of an EL instance store, extending this to the more general setting of a quantified ABox (where some individuals may be anonymous), and then adding a static EL TBox. Here, we extend the expressivity of the underlying DL considerably, by adding nominals, inverse roles, regular role inclusions and the bottom concept to EL, which yields a fragment of the well-known DL Horn-SROIQ. The ideas underlying our repair approach still apply to this DL, though several non-trivial extensions are needed to deal with the new constructors and axioms. The developed repair approach can also be used to treat unwanted consequences expressed by certain conjunctive queries or regular path queries, and to handle Horn-ALCOI TBoxes with regular role inclusions.
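For intuition, a toy example of ours (not from the paper): with

\[ \mathcal{T} = \{\exists \mathsf{treats}.\mathsf{Infection} \sqsubseteq \mathsf{RiskPatient}\}, \qquad \mathcal{A} = \{\mathsf{treats}(a,b),\; \mathsf{Infection}(b)\}, \]

the ontology entails \(\mathsf{RiskPatient}(a)\). If that consequence is unwanted, a classical repair deletes \(\mathsf{treats}(a,b)\) or \(\mathsf{Infection}(b)\) outright, whereas an optimal repair in the quantified-ABox sense can keep the weaker assertion \(\exists y.\,\mathsf{treats}(a,y)\), removing only as much information as is needed to break the entailment.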
APA, Harvard, Vancouver, ISO, and other styles
5

Balza, Lenin, Nicolás Gómez Parra, Jorge Cuartas, and Tomás Serebrisky. Infrastructure Services and Early Childhood Development in Latin America and the Caribbean: Water, Sanitation, and Garbage Collection. Inter-American Development Bank, 2024. http://dx.doi.org/10.18235/0012998.

Full text
Abstract:
Access to essential infrastructure services such as water, sanitation, and garbage collection can considerably affect children's environment and may play a significant role in shaping early childhood developmental and health outcomes. Using data from the Multiple Indicator Cluster Surveys (MICS) and the Demographic and Health Surveys (DHS) for 18 countries in Latin America and the Caribbean (LAC), we show a significant positive association between access to water and sanitation and early childhood development, as well as reduced instances of stunting. In addition, we identify a negative association between access to improved garbage collection services and the rates of stunting and underweight among children under five. Our findings are robust after using alternative measures for access and controlling for individual, maternal, and household factors, alongside considerations of household wealth and caregiver's stimulation activities. Similarly, the economic relevance of the relationship is highlighted by the substantial gap relative to the size of the vulnerable groups, persisting even after adjusting for confounding variables. Our results also suggest that households may be able to lessen the potential impact of pollutants through mitigation measures such as treating water to make it safe for consumption, using handwashing cleansers, and storing household trash in lidded containers. The current findings underscore the importance of investing in basic infrastructure services as a critical component of comprehensive strategies to enhance early childhood development and health in low- and middle-income countries. We emphasize the importance of considering the quality and type of infrastructure services alongside their availability. Future research should incorporate more complete and detailed data to improve understanding of the causal relationship between water, sanitation, and garbage collection and early childhood development, as well as the mechanisms underlying the observed associations.
APA, Harvard, Vancouver, ISO, and other styles
6

Panek, Jeffrey, Adrian Huth, and Benjamin Shwaiko. PR-312-22200-Z01 Isolation Valve - Improved GHG Leak Detection Summary of Initial Testing Results. Pipeline Research Council International, Inc. (PRCI), 2024. http://dx.doi.org/10.55274/r0000077.

Full text
Abstract:
This project investigated and evaluated commercially available optical IR and acoustic technologies. The IR cameras were used to detect a temperature differential across the valve indicating a Joule-Thomson (JT) pressure drop and leak through the valve. Direct acoustically coupled instruments were used to detect "noise" generated from turbulence associated with through-valve leakage. In addition, other instruments were explored that had the potential to detect turbulence-induced vibrations. During the instrumentation evaluation, fugitive leak screening and detection methods for assessing through-valve leakage were also explored. IES completed a one-week laboratory and yard testing exercise on a single two- and eight-inch valve at the SoCal Gas Situation City facility in Pico Rivera, CA in November 2022. Noteworthy findings included the inability to detect a leak from valves that were previously in service and known leakers. The reason for this has been hypothesized as improper valve stop position and/or debris in the valve that was removed to protect flow-rate measurement instrumentation in the test apparatus. Several instances of newly commissioned leaking valves have been shown to suffer from incorrect valve positioning and/or electronic transducer signal set points. Additional testing and data collection are needed to complete the initial test campaign. Outdoor testing could not be completed during the week due to resource limitations that precluded testing more than one eight-inch valve. The initial laboratory testing included one 2-inch test valve that had no discernible usage. An additional 2-inch valve was screened prior to lab testing; however, neither valve produced a leak under the conditions in the lab (both valves failed prior to commissioning).
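The IR-camera approach relies on the Joule-Thomson effect: gas leaking through the valve undergoes an isenthalpic pressure drop and cools, producing the detectable temperature differential. A back-of-the-envelope estimate is sketched below; the JT coefficient value is an assumption of ours (roughly representative of pipeline natural gas near ambient conditions), not a figure from the report.

    # Rough Joule-Thomson temperature-drop estimate for through-valve gas leakage.
    # Assumption: mu_JT ~ 0.4 K/bar for natural gas near ambient conditions
    # (an order-of-magnitude textbook value, not taken from the PRCI report).
    MU_JT_K_PER_BAR = 0.4

    def jt_temperature_drop(p_upstream_bar: float, p_downstream_bar: float) -> float:
        """Approximate dT = mu_JT * dP for an isenthalpic expansion across the valve."""
        dp = p_upstream_bar - p_downstream_bar
        return MU_JT_K_PER_BAR * dp

    # Example: a 40 bar line leaking to near-atmospheric pressure downstream.
    dt = jt_temperature_drop(40.0, 1.0)
    print(f"Expected cooling: ~{dt:.1f} K")  # ~15.6 K across the leaking valve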
APA, Harvard, Vancouver, ISO, and other styles
7

White, Rickie, Carl Nordman, Lindsey Smart, et al. Forest vegetation monitoring protocol for the Cumberland Piedmont Network: Protocol Narrative—Version 2.1. National Park Service, 2024. http://dx.doi.org/10.36967/2302353.

Full text
Abstract:
In 2003, “vegetation communities” were selected as one of the highest priority vital signs of importance across the Cumberland Piedmont Network (CUPN) parks (Leibfreid et al. 2005). The protocol described in this document will address all aspects of monitoring this vital sign. The primary monitoring goal is to assess status and trends of ecological health for park forest vegetation communities, including key communities of management concern where possible. By assessing status and trends for key metrics, we can obtain a more complete picture of the status of forest vegetation communities in the parks and the trends in ecological health for these communities. For instance, by repeatedly measuring one key metric of plant species composition within the same plots over the years, we can determine how quickly and profoundly invasive plant species are affecting the native vegetation and whether specific species are appearing or disappearing from plots. This information can be critical in helping land managers detect trends before they are visible qualitatively and set up more specific studies to determine the root causes of such trends. This protocol is intended for use by the Cumberland Piedmont Network and was designed to efficiently collect, analyze, and disseminate scientifically credible information to help park managers and researchers understand how the forest vegetation communities of the parks are changing. To fully implement this protocol, read the entire document and all appendices, as well as the standard operating procedures (SOPs; published separately). The appendices and SOPs contain detailed information on implementation not provided in the main body of the document. Datasheets have been designed so data collection is sequential, starting with SOP 3: Site Selection and Plot Establishment—Version 1.48 (CUPN 2023a) and ending with SOP 10: Soil Measurements—Version 1.1 (CUPN 2023b). The SOPs are intended to serve as a reference for field teams implementing this protocol. It is anticipated field teams will be knowledgeable of their content and maintain a copy for reference as part of the required field equipment.
APA, Harvard, Vancouver, ISO, and other styles
8

Hanbali, Layth, Elliot Hannon, Susanna Lehtimaki, Christine McNab, and Nina Schwalbe. Independent Monitoring Mechanism for the Pandemic Accord: Accountability for a safer world. United Nations University International Institute of Global Health, 2022. http://dx.doi.org/10.37941/rr/2022/1.

Full text
Abstract:
To address the challenges in pandemic preparedness and response (PPR), the World Health Assembly (WHA), at a special session in November 2021, established an Intergovernmental Negotiating Body (the INB) and tasked it with drafting a new legal instrument for PPR. During its second meeting in July 2022, the INB decided to develop the accord under Article 19 of the WHO Constitution, which grants the WHO the authority to negotiate a legally-binding Convention or Agreement and requires ratification by countries according to their local laws to enter into force. The aim is to complete negotiations and adopt a new pandemic instrument at the WHA in May 2024. The new legally binding agreement aims to address many of the failures exposed by the COVID-19 pandemic. However, the adoption of such an agreement is not the end of the process but the beginning. The negotiations on the instrument must establish a mechanism to monitor countries' compliance with the accord, particularly on the legally-binding elements. In this paper, we recommend creating such a mechanism as part of the accord: an independent committee of experts that monitors state parties' compliance with the pandemic accord and the timeliness, completeness, and robustness of states’ reports on their obligations. Its primary purpose would be to verify state self-reports by triangulating them with a range of publicly available information, making direct inquiries, and accepting confidential submissions. It would report its findings to a body consisting of or that is directly accountable to heads of state, with a particular focus on elevating instances of non-compliance or inadequate reporting. Its reports would also be available to the public. The proposed design builds on the analysis of strengths and weaknesses of existing monitoring approaches to 11 international treaties and mechanisms within and outside of health, a review of the literature, and interviews and input from more than 40 experts from around the world.
APA, Harvard, Vancouver, ISO, and other styles
9

Tow Leong, Tiang, Mohd Saufi Ahmad, Ang Qian Yee, et al. HANDBOOK OF ELECTRICAL SYSTEM DESIGN FOR NON-DOMESTIC BUILDING. Penerbit Universiti Malaysia Perlis, 2023. http://dx.doi.org/10.58915/techrpt2023.001.

Full text
Abstract:
This technical report presents the electrical system installation design for the development of a factory with one storey and two storeys of offices. First, the general methodology for designing the electrical system is elaborated in this report. Overall, the methodologies for designing the components of the electrical system are explained, covering: (a) load and maximum demand estimation; (b) miniature circuit breaker (MCB) selection; (c) moulded case circuit breaker (MCCB) selection; (d) air circuit breaker (ACB) selection; (e) residual current device (RCD) selection; (f) protection relay selection; (g) current transformer (CT) selection; (h) sizing of cables and live conductors; (i) capacitor bank selection for power factor correction (PFC); and (j) selection of the distribution transformer and its protection devices. The electrical system of this project is then computed and designed using these methodologies. First, the electrical system of the various distribution boards (DBs), with protection/metering devices and the phase and earthing cables for every final circuit, is designed and installed in the factory. Next, the installation proceeds with the electrical system of the main switchboard (MSB), with protection/metering devices and the phase and earthing cables for every DB. The electrical system for PFC, using a detuned capacitor bank with various protection/metering devices, is also designed and built in the plant. Apart from that, the factory is equipped with the electrical system of the high-tension (HT) room, which includes the distribution power transformer with its protection/metering devices and its phase and earthing cables. Lastly, the methodologies and the computational design of the electrical system installation, in the context of connected load, load currents, maximum demand, MCB, MCCB, ACB, RCD, protection relay, metering CTs, live cable, protection conductor/earth cable, detuned capacitor bank, and distribution transformer, are prepared according to several important standards, for instance MS IEC 60364 (Electrical Installations for Buildings), the Suruhanjaya Tenaga (ST) Non-Domestic Electrical Installation Safety Code, and the Electricity Supply Application Handbook of Tenaga Nasional Berhad (TNB).
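One of the calculations listed above, sizing a capacitor bank for power factor correction, follows the standard relation Q = P(tan φ1 − tan φ2). The snippet below applies it with made-up load figures; the load, initial power factor, and target power factor are illustrative assumptions, not values from the handbook.

    import math

    def pfc_capacitor_kvar(p_kw: float, pf_initial: float, pf_target: float) -> float:
        """Required reactive compensation: Q = P * (tan(phi1) - tan(phi2))."""
        phi1 = math.acos(pf_initial)   # load angle before correction
        phi2 = math.acos(pf_target)    # load angle after correction
        return p_kw * (math.tan(phi1) - math.tan(phi2))

    # Illustrative load only: 800 kW at 0.75 lagging, corrected to 0.95.
    q = pfc_capacitor_kvar(800.0, 0.75, 0.95)
    print(f"Detuned capacitor bank rating: ~{q:.0f} kvar")  # ~443 kvar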
APA, Harvard, Vancouver, ISO, and other styles
10

Henderson, Tim, Vincent Santucci, Tim Connors, and Justin Tweet. National Park Service geologic type section inventory: Mojave Desert Inventory & Monitoring Network. National Park Service, 2021. http://dx.doi.org/10.36967/nrr-2289952.

Full text
Abstract:
A fundamental responsibility of the National Park Service (NPS) is to ensure that park resources are preserved, protected, and managed in consideration of the resources themselves and for the benefit and enjoyment by the public. Through the inventory, monitoring, and study of park resources, we gain a greater understanding of the scope, significance, distribution, and management issues associated with these resources and their use. This baseline of natural resource information is available to inform park managers, scientists, stakeholders, and the public about the conditions of these resources and the factors or activities that may threaten or influence their stability and preservation. There are several different categories of geologic or stratigraphic units (supergroup, group, formation, member, bed) that represent a hierarchical system of classification. The mapping of stratigraphic units involves the evaluation of lithologies, bedding properties, thickness, geographic distribution, and other factors. Mappable geologic units may be described and named through a rigorously defined process that is standardized and codified by the professional geologic community (North American Commission on Stratigraphic Nomenclature 2005). In most instances when a new geologic unit such as a formation is described and named in the scientific literature, a specific and well-exposed section or exposure area of the unit is designated as the type section or other category of stratotype (see “Definitions” below). The type section is an important reference exposure for a named geologic unit which presents a relatively complete and representative example for this unit. Geologic stratotypes are important both historically and scientifically, and should be available for other researchers to evaluate in the future. The inventory of all geologic stratotypes throughout the 423 units of the NPS is an important effort in documenting these locations in order that NPS staff recognize and protect these areas for future studies. The focus adopted for completing the baseline inventories throughout the NPS was centered on the 32 inventory and monitoring networks (I&M) established during the late 1990s. The I&M networks are clusters of parks within a defined geographic area based on the ecoregions of North America (Fenneman 1946; Bailey 1976; Omernik 1987). These networks share similar physical resources (e.g., geology, hydrology, climate), biological resources (e.g., flora, fauna), and ecological characteristics. Specialists familiar with the resources and ecological parameters of the network, and associated parks, work with park staff to support network-level activities such as inventory, monitoring, research, and data management. Adopting a network-based approach to inventories worked well when the NPS undertook paleontological resource inventories for the 32 I&M networks. The planning team from the NPS Geologic Resources Division who proposed and designed this inventory selected the Greater Yellowstone Inventory & Monitoring Network (GRYN) as the pilot network for initiating this project. Through the research undertaken to identify the geologic stratotypes within the parks of the GRYN, methodologies for data mining and reporting on these resources were established. Methodologies and reporting adopted for the GRYN have been used in the development of this report for the Mojave Desert Inventory & Monitoring Network (MOJN).
The goal of this project is to consolidate information pertaining to geologic type sections that occur within NPS-administered areas, in order that this information is available throughout the NPS to inform park managers and to promote the preservation and protection of these important geologic landmarks and geologic heritage resources. The review of stratotype occurrences for the MOJN shows there are currently no designated stratotypes for Joshua Tree National Park (JOTR) or Manzanar National Historic Site (MANZ); Death Valley...
APA, Harvard, Vancouver, ISO, and other styles