
Journal articles on the topic "Deployment models"


Consult the top 50 journal articles for your research on the topic "Deployment models".

Next to every work in the bibliography, the option "Add to bibliography" is available. Use it, and the bibliographic reference for the selected work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).

You can also download the full text of the scholarly publication as a PDF and read an online annotation of the work, if the relevant parameters are available in the metadata.

Browse journal articles from a wide range of disciplines and compile your bibliography correctly.

1

Ravi, Shankar Koppula. "Databricks MLflow." Journal of Scientific and Engineering Research 8, no. 11 (November 30, 2021): 134–45. https://doi.org/10.5281/zenodo.11232369.

Annotation:
This paper examines MLflow, an open-source platform specifically designed to simplify the management of the machine learning lifecycle. It covers various aspects, such as experiment tracking, code packaging, and sharing and deployment of models. The paper focuses on the integration of MLflow with Databricks, emphasizing how this collaboration enhances automatic experiment tracking and provides easier access to data and models. This integration ultimately leads to more efficient and reproducible machine learning workflows. The paper thoroughly explores the four main components of MLflow: MLflow Tracking, MLflow Projects, MLflow Models, and MLflow Model Registry. It highlights the platform's ability to address common challenges in machine learning projects, including experiment management, reproducibility, and model deployment across different environments. Furthermore, it delves into the crucial roles of MLflow Models and MLflow Model Registry in enabling flexible deployment options, such as batch, streaming, real-time, and edge deployments. These components also ensure centralized model management, compliance through audit logs, and facilitate collaboration among team members. In conclusion, the paper states that MLflow greatly contributes to overcoming the complexities of deploying real-time machine learning systems. It achieves this by offering streamlined workflows, centralized management, and comprehensive model deployment strategies. As a result, MLflow is seen as an invaluable resource for data scientists, machine learning practitioners, and researchers who are eager to enhance the efficiency and reproducibility of their machine learning projects.
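
To make the workflow described in this paper concrete, the sketch below uses MLflow's public Python API to track a run and register the resulting model. It is only an illustration: the experiment name, model choice, and registered model name are invented for the example, and registering a model assumes a tracking server with a registry backend (for instance, a Databricks workspace).

```python
# Illustrative MLflow workflow: track an experiment run (MLflow Tracking)
# and register the resulting model (MLflow Models + Model Registry).
# "churn-classifier" and the parameters are placeholders, not from the paper.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, random_state=0)

mlflow.set_experiment("deployment-models-demo")
with mlflow.start_run():
    mlflow.log_param("n_estimators", 100)
    model = RandomForestClassifier(n_estimators=100).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Logging and registering in one call requires a registry-capable backend.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="churn-classifier")
```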
2

Reddy, Vonteru Srikanth, and Kumar Debasis. "Statistical Review of Health Monitoring Models for Real-Time Hospital Scenarios." International Journal on Recent and Innovation Trends in Computing and Communication 11, no. 7s (July 13, 2023): 465–81. http://dx.doi.org/10.17762/ijritcc.v11i7s.7025.

Annotation:
Health Monitoring System Models (HMSMs) need speed, efficiency, and security to work. Cascading components ensure data collection, storage, communication, retrieval, and privacy in these models. Researchers propose many methods to design such models, varying in scalability, multidomain efficiency, flexibility, usage and deployment, computational complexity, cost of deployment, security level, feature usability, and other performance metrics. Thus, HMSM designers struggle to find the best models for their application-specific deployments. They must test and validate different models, which increases design time and cost, affecting deployment feasibility. This article discusses secure HMSMs' application-specific advantages, feature-specific limitations, context-specific nuances, and deployment-specific future research scopes to reduce model selection ambiguity. The models based on the Internet of Things (IoT), Machine Learning Models (MLMs), Blockchain Models, Hashing Methods, Encryption Methods, Distributed Computing Configurations, and Bioinspired Models have better Quality of Service (QoS) and security than their counterparts. Researchers can find application-specific models. This article compares the above models in deployment cost, attack mitigation performance, scalability, computational complexity, and monitoring applicability. This comparative analysis helps readers choose HMSMs for context-specific application deployments. This article also devises performance measuring metrics called Health Monitoring Model Metrics (HM3) to compare the performance of various models based on accuracy, precision, delay, scalability, computational complexity, energy consumption, and security.
3

Howick, R. S., and M. Pidd. "Sales force deployment models." European Journal of Operational Research 48, no. 3 (October 1990): 295–310. http://dx.doi.org/10.1016/0377-2217(90)90413-6.

4

Patel, Hiral B., and Nirali Kansara. "Cloud Computing Deployment Models: A Comparative Study." International Journal of Innovative Research in Computer Science & Technology 9, no. 2 (March 2021): 45–50. http://dx.doi.org/10.21276/ijircst.2021.9.2.8.

5

Rimbaud, Loup, Frédéric Fabre, Julien Papaïx, Benoît Moury, Christian Lannou, Luke G. Barrett, and Peter H. Thrall. "Models of Plant Resistance Deployment." Annual Review of Phytopathology 59, no. 1 (August 25, 2021): 125–52. http://dx.doi.org/10.1146/annurev-phyto-020620-122134.

Annotation:
Owing to their evolutionary potential, plant pathogens are able to rapidly adapt to genetically controlled plant resistance, often resulting in resistance breakdown and major epidemics in agricultural crops. Various deployment strategies have been proposed to improve resistance management. Globally, these rely on careful selection of resistance sources and their combination at various spatiotemporal scales (e.g., via gene pyramiding, crop rotations and mixtures, landscape mosaics). However, testing and optimizing these strategies using controlled experiments at large spatiotemporal scales are logistically challenging. Mathematical models provide an alternative investigative tool, and many have been developed to explore resistance deployment strategies under various contexts. This review analyzes 69 modeling studies in light of specific model structures (e.g., demographic or demogenetic, spatial or not), underlying assumptions (e.g., whether preadapted pathogens are present before resistance deployment), and evaluation criteria (e.g., resistance durability, disease control, cost-effectiveness). It highlights major research findings and discusses challenges for future modeling efforts.
6

Sriningsih, Riry, Muhammad Subhan, and Minora Longgom Nasution. "Analysis of torch deployment models." Journal of Physics: Conference Series 1317 (October 2019): 012013. http://dx.doi.org/10.1088/1742-6596/1317/1/012013.

7

BUSHEHRIAN, OMID. "SOFTWARE PERFORMANCE ENGINEERING BY SIMULATED-BASED OBJECT DEPLOYMENT." International Journal of Software Engineering and Knowledge Engineering 23, no. 02 (March 2013): 211–21. http://dx.doi.org/10.1142/s0218194013500058.

Annotation:
The object deployment of a distributed software system has a great impact on its performance. In this paper, an analytical model for the performance evaluation of different object deployments is presented. The key advantage of the proposed model over traditional Queuing Network models is its usefulness in deployment optimization when the search space is huge and the automatic instantiation of queuing performance models for each object deployment is costly. Since the model produces an optimal deployment for each input load separately, the runtime behavior of the software under each input load must first be profiled using simulation. A translation scheme for generating simulatable Labeled Transition Systems (LTS) from scenarios is also presented. Moreover, two deployment algorithms (one GA-based and one INLP-based) are implemented and their results compared.
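
The GA-based deployment algorithm mentioned above can be illustrated with a toy genetic search that assigns objects to hosts so as to minimise remote-communication cost under a simple per-host load limit. The cost model, penalty weight, and GA settings below are invented for the illustration and are not taken from the paper.

```python
# Toy genetic algorithm for an object-deployment problem: place N_OBJECTS on
# N_HOSTS so that traffic between objects on different hosts (remote calls)
# is minimised while no host exceeds its load capacity. Illustrative only.
import random

random.seed(1)
N_OBJECTS, N_HOSTS, CAPACITY = 8, 3, 12
comm = [[0] * N_OBJECTS for _ in range(N_OBJECTS)]        # pairwise traffic
for i in range(N_OBJECTS):
    for j in range(i):
        comm[i][j] = comm[j][i] = random.randint(0, 9)
loads = [random.randint(1, 4) for _ in range(N_OBJECTS)]  # CPU demand per object

def cost(deployment):
    remote = sum(comm[i][j] for i in range(N_OBJECTS) for j in range(i)
                 if deployment[i] != deployment[j])        # remote calls are expensive
    host_load = [0] * N_HOSTS
    for obj, host in enumerate(deployment):
        host_load[host] += loads[obj]
    overload = sum(max(0, load - CAPACITY) for load in host_load)
    return remote + 100 * overload                         # heavy penalty for overload

def evolve(pop_size=40, generations=200, mutation_rate=0.2):
    pop = [[random.randrange(N_HOSTS) for _ in range(N_OBJECTS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]                     # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_OBJECTS)
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(N_OBJECTS)] = random.randrange(N_HOSTS)
            children.append(child)
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
print("best deployment:", best, "cost:", cost(best))
```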
8

Vijayan, Naveen Edapurath. "Building Scalable MLOps: Optimizing Machine Learning Deployment and Operations." International Journal of Scientific Research in Engineering and Management 08, no. 10 (October 18, 2024): 1–9. http://dx.doi.org/10.55041/ijsrem37784.

Annotation:
As machine learning (ML) models become increasingly integrated into mission-critical applications and production systems, the need for robust and scalable MLOps (Machine Learning Operations) practices has grown significantly. This paper explores key strategies and best practices for building scalable MLOps pipelines to optimize the deployment and operation of machine learning models at an enterprise scale. It delves into the importance of automating the end-to-end lifecycle of ML models, from data ingestion and model training to testing, deployment, and monitoring. Approaches for implementing continuous integration and continuous deployment (CI/CD) pipelines tailored for ML workflows are discussed, enabling efficient and repeatable model updates and deployments. The paper emphasizes the criticality of implementing comprehensive monitoring and observability mechanisms to track model performance, detect drift, and ensure the reliability and trustworthiness of deployed models. The paper also addresses the challenges of managing model versioning and governance at scale, including techniques for maintaining a centralized model registry, enforcing access controls, and ensuring compliance with regulatory requirements. The paper aims to provide a comprehensive guide for organizations seeking to establish scalable and robust MLOps practices, enabling them to unlock the full potential of machine learning while mitigating risks and ensuring responsible AI deployment. Keywords—Machine Learning Operations (MLOps), Scalable AI Deployment, Continuous Integration and Continuous Deployment (CI/CD) for ML, ML Monitoring and Observability, Model Reproducibility, Model Versioning and Governance, Centralized Model Registry, Responsible AI Deployment, Ethical AI Practices, Enterprise MLOps
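
One of the monitoring tasks the paper highlights, detecting drift in a deployed model's input data, can be sketched with a simple two-sample test. The feature values, sample sizes, and the 0.05 threshold below are illustrative assumptions, not taken from the paper.

```python
# Minimal data-drift check of the kind a monitoring pipeline might run:
# compare a production feature sample against the training baseline with a
# two-sample Kolmogorov-Smirnov test. The alpha threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha      # small p-value -> distributions likely differ

rng = np.random.default_rng(42)
baseline = rng.normal(0.0, 1.0, size=5_000)   # training-time distribution
live = rng.normal(0.4, 1.0, size=1_000)       # shifted production traffic
print("drift detected:", feature_drifted(baseline, live))
```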
9

Vinayak, Kalluri, and Rambabu Kodali. "Benchmarking the quality function deployment models." Benchmarking: An International Journal 20, no. 6 (October 21, 2013): 825–54. http://dx.doi.org/10.1108/bij-07-2011-0052.

10

Perakis, Anastassios N., and Nikiforos Papadakis. "Fleet deployment optimization models. Part 1." Maritime Policy & Management 14, no. 2 (January 1987): 127–44. http://dx.doi.org/10.1080/03088838700000015.

11

Martín-Martínez, Francisco, Jaime Boal, Álvaro Sánchez-Miralles, Carlos Becker Robles, and Rubén Rodríguez-Vilches. "Technical deployment of aggregator business models." Heliyon 10, no. 9 (May 2024): e30101. http://dx.doi.org/10.1016/j.heliyon.2024.e30101.

12

Swamy, Prasadarao Velaga. "Continuous Deployment of AI Systems: Strategies for Seamless Updates and Rollbacks." International Journal of Innovative Research in Engineering & Multidisciplinary Physical Sciences 6, no. 6 (December 5, 2018): 1–8. https://doi.org/10.5281/zenodo.12805458.

Annotation:
The deployment of artificial intelligence (AI) systems poses unique challenges compared to traditional software applications, primarily due to the dynamic nature of AI models and their sensitivity to data changes. Continuous deployment (CD) strategies play a crucial role in managing these complexities by enabling organizations to deploy, update, and manage AI models seamlessly and efficiently. This paper reviews key strategies for implementing CD in AI systems, focusing on seamless updates and robust rollback mechanisms. Strategies discussed include incremental deployment, A/B testing, canary releases, and automated rollback procedures, each designed to minimize disruption and optimize performance during model updates. Additionally, the importance of monitoring and feedback loops in ensuring ongoing performance and reliability is highlighted, emphasizing their role in detecting anomalies and integrating user feedback for continuous model improvement. The paper concludes with a discussion on future research directions, including advanced testing methodologies for AI models, scalable deployment strategies across heterogeneous environments, and ethical considerations in AI deployment practices. By addressing these challenges and embracing innovative approaches, organizations can enhance the agility, reliability, and effectiveness of AI deployments, paving the way for broader adoption and impactful application across various domains.
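
A minimal sketch of the canary-release-with-automated-rollback pattern surveyed above is shown below. The route_traffic and error_rate functions are hypothetical stand-ins for a real traffic router and metrics store, and the step sizes and error budget are illustrative choices.

```python
# Sketch of a canary rollout with automated rollback. route_traffic() and
# error_rate() are hypothetical hooks into a serving layer and a monitoring
# backend; the 2% error budget and traffic steps are illustrative.
import time

def route_traffic(model_version: str, percent: int) -> None:
    print(f"routing {percent}% of requests to {model_version}")

def error_rate(model_version: str) -> float:
    return 0.004            # would normally query a monitoring backend

def canary_deploy(new_version: str, stable_version: str,
                  steps=(5, 25, 50, 100), error_budget=0.02,
                  soak_seconds=0) -> bool:
    for percent in steps:
        route_traffic(new_version, percent)
        time.sleep(soak_seconds)                 # let metrics accumulate
        if error_rate(new_version) > error_budget:
            route_traffic(stable_version, 100)   # automated rollback
            return False
    return True

print("promoted:", canary_deploy("model:v2", "model:v1"))
```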
13

Bollineni, Satyadeepak. "Implementing DevOps Strategies for Deploying and Managing Machine Learning Models in Lakehouse Platforms." International Journal of Multidisciplinary Research and Growth Evaluation 5, no. 4 (2024): 1367–71. https://doi.org/10.54660/.ijmrge.2024.5.4.1367-1371.

Annotation:
The paper addresses the introduction of DevOps into lakehouse platforms to ease the deployment and administration of machine learning models. It aims to identify successful practices that achieve faster deployments, better operational efficiency, and strong management of data-driven applications, described without technical jargon. By streamlining processes and increasing collaboration across development and operations teams, DevOps combined with lakehouse platforms gains considerably in adaptability and efficiency. The paper covers pragmatic implementations and the transformational potential of these hybrid data ecosystems through continuous integration, continuous deployment, and automated monitoring. The results emphasize how such integration enables a more dynamic and responsive data management strategy designed for innovation and success.
14

Hungness, Derek, and Raj Bridgelall. "Model Contrast of Autonomous Vehicle Impacts on Traffic." Journal of Advanced Transportation 2020 (August 14, 2020): 1–10. http://dx.doi.org/10.1155/2020/8935692.

Annotation:
The adoption of connected and autonomous vehicles (CAVs) is in its infancy. Therefore, very little is known about their potential impacts on traffic. Meanwhile, researchers and market analysts predict a wide range of possibilities about their potential benefits and the timing of their deployments. Planners traditionally use various types of travel demand models to forecast future traffic conditions. However, such models do not yet integrate any expected impacts from CAV deployments. Consequently, many long-range transportation plans do not yet account for their eventual deployment. To address some of these uncertainties, this work modified an existing model for Madison, Wisconsin. To compare outcomes, the authors used identical parameter changes and simulation scenarios for a model of Gainesville, Florida. Both models show that with increasing levels of CAV deployment, both the vehicle miles traveled and the average congestion speed will increase. However, there are some important exceptions due to differences in the road network layout, geospatial features, sociodemographic factors, land-use, and access to transit.
15

So, C. J., C. A. Alfano, L. A. Riviere, and P. J. Quartana. "0273 Residual Sleep Difficulties During Reset Operations Predict Greater Post-Deployment Mental Health Difficulties in U.S. Soldiers: A Cross-Lagged Analysis." Sleep 43, Supplement_1 (April 2020): A104. http://dx.doi.org/10.1093/sleep/zsaa056.271.

Annotation:
Introduction: Military service is associated with a number of occupational stressors, including non-conducive sleeping environments, shift schedules, and extended deployments overseas. Service members who undergo combat deployments are at increased risk for mental health and sleep difficulties. Bidirectional associations between sleep and mental health difficulties are routinely observed, but the directional association of these difficulties from one deployment to the next has not been addressed. The purpose of this study was to examine whether residual sleep problems or mental health difficulties after a 12-month period of reset operations following an initial deployment were associated with changes in sleep and mental health following a subsequent deployment. Methods: Data from 74 U.S. Soldiers were case-matched across three time points. Participants were assessed 6 months (T1) and 12 months (T2) following an initial deployment. Participants were then assessed 3 months (T3) following a subsequent deployment. Symptoms of PTSD, anxiety, depression, and sleep difficulties were assessed at all three time points. Results: Cross-lagged hierarchical regression models revealed that residual sleep difficulties across the time points uniquely predicted later changes in PTSD and anxiety symptoms, but not depressive symptoms, following a subsequent deployment. Conversely, residual mental health difficulties were not unique predictors of later changes in sleep difficulties. Conclusion: These findings suggest that higher levels of residual sleep difficulties 12 months following a prior deployment are associated with larger increases in mental health problems following a subsequent deployment. Moreover, and importantly, the converse association was not supported. Residual mental health difficulties prior to deployment were not associated with changes in sleep difficulties. These data provide a viable target for intervention during reset operations to mitigate mental health difficulties associated with combat deployments. They might also help inform return-to-duty decisions. Support: N/A.
16

Researcher. "MACHINE LEARNING MODELS IN PRODUCTION: A SYSTEMATIC FRAMEWORK FOR SCALABLE AND ROBUST DEPLOYMENT." International Journal of Research In Computer Applications and Information Technology (IJRCAIT) 15, no. 6 (November 29, 2024): 1608–28. https://doi.org/10.5281/zenodo.14243017.

Annotation:
This article presents a comprehensive framework for deploying and productizing machine learning models in real-world industrial settings, addressing the critical gap between laboratory development and production implementation. Through a systematic analysis of 47 enterprise-scale ML deployments across diverse industries, we identify key challenges and establish best practices for transforming experimental models into robust production systems. The methodology encompasses four primary dimensions: technical integration architecture, operational excellence, continuous monitoring systems, and feedback loop implementation. The article reveals that successful ML productization requires more than model accuracy alone; it demands a holistic approach incorporating automated retraining pipelines, sophisticated monitoring systems, and scalable infrastructure. Results indicate that organizations implementing our proposed framework achieved a 64% reduction in deployment failures, 41% improvement in model maintenance efficiency, and 73% faster time-to-production compared to traditional deployment approaches. Furthermore, we introduce a novel scoring system for assessing production readiness of ML models, validated across multiple use cases. The article contributes to both theoretical understanding and practical implementation of ML systems at scale, offering concrete guidelines for practitioners while identifying areas for future research in automated ML operations and systematic deployment strategies.
17

Ayodele Emmanuel Sonuga, Kingsley David Onyewuchi Ofoegbu, Chidiebere Somadina Ike, and Samuel Olaoluwa Folorunsho. "Deploying large language models on diverse computing architectures: A performance evaluation framework." Global Journal of Research in Engineering and Technology 2, no. 1 (September 30, 2024): 018–36. http://dx.doi.org/10.58175/gjret.2024.2.1.0026.

Annotation:
Deploying large language models (LLMs) across diverse computing architectures is a critical challenge in the field of artificial intelligence, particularly as these models become increasingly complex and resource-intensive. This review presents a performance evaluation framework designed to systematically assess the deployment of LLMs on various computing architectures, including CPUs, GPUs, TPUs, and specialized accelerators. The framework is structured around key performance metrics such as computational efficiency, latency, throughput, energy consumption, and scalability. It considers the trade-offs associated with different hardware configurations, optimizing the deployment to meet specific application requirements. The evaluation framework employs a multi-faceted approach, integrating both theoretical and empirical analyses to offer comprehensive insights into the performance dynamics of LLMs. This includes benchmarking LLMs under varying workloads, data batch sizes, and precision levels, enabling a nuanced understanding of how these factors influence model performance across different hardware environments. Additionally, the framework emphasizes the importance of model parallelism and distribution strategies, which are critical for efficiently scaling LLMs on high-performance computing clusters. A significant contribution of this framework is its ability to guide practitioners in selecting the optimal computing architecture for LLM deployment based on application-specific needs, such as low-latency inference for real-time applications or energy-efficient processing for large-scale deployments. The framework also provides insights into cost-performance trade-offs, offering guidance for balancing the financial implications of different deployment strategies with their performance benefits. Overall, this performance evaluation framework is a valuable tool for researchers and engineers, facilitating the efficient deployment of LLMs on diverse computing architectures. By offering a systematic approach to evaluating and optimizing LLM performance, the framework supports the ongoing development and application of these models across various domains. This paper will evaluate the deployment of large language models (LLMs) on diverse computing architectures, including x86, ARM, and RISC-V platforms. It will discuss strategies for optimizing LLM performance, such as dynamic frequency scaling, core scaling, and memory optimization. The research will contribute to understanding the best practices for deploying AI applications on different architectures, supporting technological innovation and competitiveness.
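
In the spirit of the evaluation framework described above, the toy harness below measures latency percentiles and throughput across batch sizes. The generate function is a placeholder for a real LLM inference call, so the numbers it produces are fabricated; only the measurement logic is of interest.

```python
# Toy benchmarking harness: per-request latency percentiles and throughput
# for several batch sizes. generate() stands in for real model inference.
import time
import statistics

def generate(batch):                       # placeholder for an LLM call
    time.sleep(0.002 * len(batch))         # pretend cost grows with batch size
    return ["output"] * len(batch)

def benchmark(batch_size: int, n_requests: int = 200):
    prompts = ["prompt"] * batch_size
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        generate(prompts)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "batch_size": batch_size,
        "p50_latency_s": statistics.median(latencies),
        "p95_latency_s": statistics.quantiles(latencies, n=20)[18],
        "throughput_req_s": (n_requests * batch_size) / elapsed,
    }

for bs in (1, 4, 16):
    print(benchmark(bs))
```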
18

Eid, Mustafa I. M., Ibrahim M. Al-Jabri, and M. Sadiq Sohail. "Selection of Cloud Delivery and Deployment Models." International Journal of Decision Support System Technology 10, no. 4 (October 2018): 17–32. http://dx.doi.org/10.4018/ijdsst.2018100102.

Annotation:
Research interests on cloud computing adoption and its effectiveness in terms of cost and time has been increasing. However, one of the challenging decisions facing management in adopting cloud services is taking on the right combinations of cloud service delivery and deployment models. A comprehensive review of literature revealed a lack of research addressing this selection decision problem. To fill this research gap, this article proposes an expert system approach for managers to decide on the right combination of service delivery and deployment model selection. The article first proposes a rule-based expert system prototype, which provides advice based on a set of factors that represent the organizational conditions and requirements pertaining to cloud computing adoption. Next, the authors evaluate the system prototype. Lastly, the article concludes with a discussion of the results, its practical implications, limitations, and further research directions.
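
The rule-based selection idea can be illustrated with a heavily simplified sketch that maps a few organizational factors to a delivery and deployment model. The factors, rules, and recommendations below are invented for the example and do not reproduce the paper's actual rule base.

```python
# Heavily simplified rule-based selector for cloud service delivery and
# deployment models. The factors and rules are illustrative assumptions.
def recommend(org: dict) -> tuple[str, str]:
    deployment = "public"
    if org["data_sensitivity"] == "high" or org["regulatory_constraints"]:
        deployment = "private" if org["it_budget"] == "high" else "hybrid"

    if org["in_house_dev_team"]:
        delivery = "IaaS" if org["needs_full_stack_control"] else "PaaS"
    else:
        delivery = "SaaS"
    return delivery, deployment

org = {"data_sensitivity": "high", "regulatory_constraints": True,
       "it_budget": "low", "in_house_dev_team": True,
       "needs_full_stack_control": False}
print(recommend(org))   # -> ('PaaS', 'hybrid')
```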
19

Kim, Kwang-Jae, Herbert Moskowitz, Anoop Dhingra, and Gerald Evans. "Fuzzy multicriteria models for quality function deployment." European Journal of Operational Research 121, no. 3 (March 2000): 504–18. http://dx.doi.org/10.1016/s0377-2217(99)00048-x.

20

BALASUBRAMANIAN, KRISHNAKUMAR, ANIRUDDHA GOKHALE, YUEHUA LIN, JING ZHANG, and JEFF GRAY. "WEAVING DEPLOYMENT ASPECTS INTO DOMAIN-SPECIFIC MODELS." International Journal of Software Engineering and Knowledge Engineering 16, no. 03 (June 2006): 403–24. http://dx.doi.org/10.1142/s021819400600280x.

Annotation:
Domain-specific models increase the level of abstraction used to develop large-scale component-based systems. Model-driven development (MDD) approaches (e.g., Model-Integrated Computing and Model-Driven Architecture) emphasize the use of models at all stages of system development. Decomposing problems using MDD approaches may result in a separation of the artifacts in a way that impedes comprehension. For example, a single concern (such as deployment of a distributed system) may crosscut different orthogonal activities (such as component specification, interaction, packaging and planning). To keep track of all entities associated with a component, and to ensure that the constraints for the system as a whole are not violated, a purely model-driven approach imposes extra effort, thereby negating some of the benefits of MDD. This paper provides three contributions to the study of applying aspect-oriented techniques to address the crosscutting challenges of model-driven component-based distributed systems development. First, we identify the sources of crosscutting concerns that typically arise in model-driven development of component-based systems. Second, we describe how aspect-oriented model weaving helps modularize these crosscutting concerns using model transformations. Third, we describe how we have applied model weaving using a tool called the Constraint-Specification Aspect Weaver (C-SAW) in the context of the Platform-Independent Component Modeling Language (PICML), which is a domain-specific modeling language for developing component-based systems. A case study of a joint-emergency response system is presented to express the challenges in modeling a typical distributed system. Our experience shows that model weaving is an effective and scalable technique for dealing with crosscutting aspects of component-based systems development.
21

Piridi, Sarat, Satyanarayana Asundi, and James C. Hyatt. "Cross-Environment Deployment Strategies for Power Platform Solutions – Investigating best practices for managing multi-environment deployments, from development to production, using managed environments and DevOps." International Journal of Advanced Engineering Research and Science 12, no. 4 (2025): 66–75. https://doi.org/10.22161/ijaers.124.8.

Annotation:
This paper examines cross-environment deployment strategies for Power Platform solutions, covering the movement of solutions from development to production using managed environments and DevOps practices. It draws on ten key academic and industry sources to evaluate frameworks, automation tools, and governance models that streamline deployment and enhance system reliability. The case studies show measurable benefits in reduced deployment time and improved accuracy. With DevOps applied across cloud and hybrid platforms together with agile methodology, scalable and secure deployments become feasible. The findings offer insights for organizations that want to improve performance, maintain consistency, and align development with operational goals in dynamic enterprise contexts.
22

Beltrán, Fernando, Marlies Van der Wee, and Sofie Verbruggen. "A Comparative Analysis of Selected National and Regional Investment Initiatives That Seek to Achieve Broadband Expansion by Deploying NGA Networks." Journal of Information Policy 8, no. 1 (March 1, 2018): 267–95. http://dx.doi.org/10.5325/jinfopoli.8.1.0267.

Annotation:
Expectations about higher economic growth and the ever-increasing demand for higher bandwidth are driving the worldwide deployment of Next-Generation Access (NGA) networks. The paths followed to achieve this goal markedly vary, however, across different countries. This article offers a comparison of a handful of leading NGA deployments that rely on different investment models. We study the broadband national initiatives of New Zealand and Australia and a group of selected regional NGA deployments in Europe. While New Zealand's approach partially relies on a public–private partnership model of investment, Australia's National Broadband Network is a wholly government-funded initiative and the European local initiatives in Sweden, Spain, the Netherlands, and Portugal use a range of mixed models of investment. We use a common technology–policy–market framework that allows for a clear mapping of the incentives, goals, and actions of those involved in network deployment. Our main interest is the identification of the drivers for investment as well as the description of main risk factors in each case. By applying this framework to those selected deployment cases our work draws relevant conclusions about the impact of investment decisions on performance criteria such as coverage and uptake.
23

Wang, Zhigang, Liqin Tian, Lianhai Lin, and Yinghua Tong. "Lattice-Based 3-Dimensional Wireless Sensor Deployment." Journal of Sensors 2021 (August 17, 2021): 1–14. http://dx.doi.org/10.1155/2021/2441122.

Annotation:
With the wide application of wireless sensor networks (WSNs) in real space, there are numerous studies on 3D sensor deployments. In this paper, the k-connectivity theoretical model of fixed and random nodes in regular lattice-based deployment was proposed to study the coverage and connectivity of sensor networks with regular lattice in 3D space. The full connectivity range and cost of the deployment with sensor nodes fixed in the centers of four regular lattices were quantitatively analyzed. The optimal single lattice coverage model and the ratio of the communication range to the sensing range r_c / r_s were investigated when the deployment of random nodes satisfied the k-connectivity requirements for full coverage. In addition, based on the actual sensing model, the coverage, communication link quality, and reliability of different lattice-based deployment models were determined in this study.
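
For one of the lattice types such studies consider, a simple cubic grid, the coverage and connectivity conditions reduce to elementary geometry: full coverage requires the sensing radius to reach the farthest point of a cell (half the body diagonal), and nearest-neighbour connectivity requires the communication radius to reach the next node. The sketch below checks these conditions for illustrative values; it does not reproduce the paper's k-connectivity analysis for the other lattices.

```python
# Back-of-the-envelope checks for a simple cubic lattice with node spacing d:
# coverage needs r_s >= d*sqrt(3)/2, nearest-neighbour connectivity needs
# r_c >= d. Values below are illustrative.
import math

def cubic_lattice_check(d: float, r_s: float, r_c: float) -> dict:
    return {
        "full_coverage": r_s >= d * math.sqrt(3) / 2,
        "neighbour_connectivity": r_c >= d,
        "min_rc_over_rs_at_max_spacing": 2 / math.sqrt(3),  # about 1.155
    }

print(cubic_lattice_check(d=10.0, r_s=9.0, r_c=11.0))
```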
24

Ukani, Neema Amish, and Saurabh S. Chakole. "Empirical analysis of machine learning-based moisture sensing platforms for agricultural applications: A statistical perspective." Journal of Physics: Conference Series 2327, no. 1 (August 1, 2022): 012026. http://dx.doi.org/10.1088/1742-6596/2327/1/012026.

Annotation:
Modelling accurate soil-moisture detection and estimation sensors requires the integration of various signal processing, filtering, segmentation, and pattern analysis methods. Moisture sensing is generally performed with resistive or capacitive materials, which change their parametric characteristics with changes in moisture levels. These sensors are further classified by measurement capability into volumetric sensors, soil water tension sensors, electromagnetic sensors, time domain reflectometry (TDR) sensors, neutron probe sensors, tensiometer-based sensors, etc. Each of these sensors is connected to a series of processing blocks that improve its measurement performance. This performance covers parameters such as measurement accuracy, cost of deployment, measurement delay, and average measurement error. This wide variation in measurement performance increases the ambiguity of sensor selection for a particular soil type. As a result, researchers and soil engineers must test and validate the performance of different moisture sensors for their application scenario, which increases the time and cost needed for model deployment. To overcome this limitation and reduce ambiguity in the selection of optimum moisture sensing interfaces, this text reviews various state-of-the-art models proposed by researchers for this task. The review discusses the nuances, advantages, limitations, and future research scope of existing moisture sensing interfaces and evaluates them in terms of statistical parameters such as detection accuracy, sensing and measurement delay, cost of deployment, deployment complexity, scalability, and type of usage application. The text also compares the reviewed models on these parameters, which will assist researchers and soil engineers in identifying the most suitable models for their deployments. Based on this research, machine learning models are highly recommended for error reduction during moisture analysis. Machine learning prediction models that utilize Neural Networks (NNs) outperform other models in terms of error performance and should be deployed for high-accuracy, low-cost moisture sensing applications. Based on similar observations, the text also recommends fusing different sensing interfaces to improve accuracy while optimizing the cost and complexity of deployment. These recommendations also depend on the context of the application for which the sensing interface is deployed and can be used to further improve overall sensing performance across multiple deployment scenarios.
25

Prapas, Ioannis, Behrouz Derakhshan, Alireza Rezaei Mahdiraji, and Volker Markl. "Continuous Training and Deployment of Deep Learning Models." Datenbank-Spektrum 21, no. 3 (November 2021): 203–12. http://dx.doi.org/10.1007/s13222-021-00386-8.

Annotation:
Deep Learning (DL) has consistently surpassed other Machine Learning methods and achieved state-of-the-art performance in multiple cases. Several modern applications like financial and recommender systems require models that are constantly updated with fresh data. The prominent approach for keeping a DL model fresh is to trigger full retraining from scratch when enough new data are available. However, retraining large and complex DL models is time-consuming and compute-intensive. This makes full retraining costly, wasteful, and slow. In this paper, we present an approach to continuously train and deploy DL models. First, we enable continuous training through proactive training that combines samples of historical data with new streaming data. Second, we enable continuous deployment through gradient sparsification that allows us to send a small percentage of the model updates per training iteration. Our experimental results with LeNet5 on MNIST and modern DL models on CIFAR-10 show that proactive training keeps models fresh with comparable—if not superior—performance to full retraining at a fraction of the time. Combined with gradient sparsification, sparse proactive training enables very fast updates of a deployed model with arbitrarily large sparsity, reducing communication per iteration up to four orders of magnitude, with minimal—if any—losses in model quality. Sparse training, however, comes at a price; it incurs overhead on the training that depends on the size of the model and increases the training time by factors ranging from 1.25 to 3 in our experiments. Arguably, a small price to pay for successfully enabling the continuous training and deployment of large DL models.
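
The gradient-sparsification idea, sending only a small fraction of the update per iteration, can be sketched with a top-k magnitude filter as below. This is one common realisation of the idea rather than necessarily the exact scheme used in the paper; the 1% keep fraction and array sizes are illustrative.

```python
# Minimal top-k gradient sparsification: keep only the largest-magnitude
# entries of an update before shipping it to the deployed model.
import numpy as np

def sparsify_topk(gradient: np.ndarray, keep_fraction: float = 0.01):
    flat = gradient.ravel()
    k = max(1, int(keep_fraction * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of top-k entries
    return idx, flat[idx]                          # roughly 1% of the payload

def apply_sparse_update(weights: np.ndarray, idx, values, lr: float = 0.1):
    flat = weights.ravel()                         # view; updates in place
    flat[idx] -= lr * values
    return weights

grad = np.random.default_rng(0).normal(size=(256, 128))
idx, vals = sparsify_topk(grad, keep_fraction=0.01)
weights = np.zeros((256, 128))
apply_sparse_update(weights, idx, vals)
print(f"sent {idx.size} of {grad.size} gradient entries")
```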
26

Boubrima, Ahmed, Walid Bechkit, and Herve Rivano. "Optimal WSN Deployment Models for Air Pollution Monitoring." IEEE Transactions on Wireless Communications 16, no. 5 (May 2017): 2723–35. http://dx.doi.org/10.1109/twc.2017.2658601.

27

Kaskie, B. "The Widespread Deployment of Integrated Models of Care." Public Policy & Aging Report 23, no. 3 (June 1, 2013): 1–9. http://dx.doi.org/10.1093/ppar/23.3.1a.

28

Esposito, Richard A., Larry S. Monroe, and Julio S. Friedman. "Deployment Models for Commercialized Carbon Capture and Storage." Environmental Science & Technology 45, no. 1 (January 2011): 139–46. http://dx.doi.org/10.1021/es101441a.

29

Ng, ManWo. "Revisiting a class of liner fleet deployment models." European Journal of Operational Research 257, no. 3 (March 2017): 773–76. http://dx.doi.org/10.1016/j.ejor.2016.07.044.

30

Singh, Bisman, Bhumika Tuli, and Rakesh Kumar. "Cloud computing: Virtualization, service models, and deployment options." i-manager’s Journal on Cloud Computing 11, no. 2 (2024): 28. https://doi.org/10.26634/jcc.11.2.21197.

Annotation:
Cloud computing has completely transformed how users access and use applications, services, and data. This study provides a thorough analysis of cloud computing, including its history, key features, the role of virtualization in cloud environments, and various cloud service models. From the introduction of time-sharing in the 1960s to its widespread adoption in the 2000s, cloud computing has evolved significantly. The properties of cloud computing, such as resource pooling, on-demand self-service, measured service, resilience, and rapid flexibility, are examined in this research. Virtualization plays a crucial role in cloud computing by enabling efficient resource utilization, scalability, and workload separation. A detailed discussion of several virtualization techniques, including multitenancy, containerization, and hypervisors, is provided. The advantages and drawbacks of each method are also compared in the paper to help readers select the most suitable approach for specific use cases. The functions, deployment models, customization options, scalability, and service examples of cloud service models—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)—are described. Additionally, the study explores cloud deployment options, such as community, multi-cloud, hybrid, public, and private models, each with its own unique features. This article offers a comprehensive overview of cloud computing, making it an invaluable resource for both beginners and experts. It enables informed decision-making and the successful deployment of cloud technologies to meet various business needs.
31

Luyembi, Tshiniama Honoré, and Mudinga Arsène Banza. "Comparative Study of Different Cloud Computing Deployment Models." International Journal of Innovative Science and Research Technology (IJISRT) 10, no. 2 (February 26, 2025): 579–87. https://doi.org/10.5281/zenodo.14921249.

Annotation:
Cloud Computing, often seen as a technological revolution, makes it possible to dematerialize information systems by making services accessible via a communication network, most often the Internet. Its growth can be explained by its major advantages, such as cost reduction, greater flexibility, and independence from traditional physical infrastructures. We compare the four cloud deployment models by showing how each model works and detail the benefits and risks associated with each. The study focuses on four main deployment models: public cloud, private cloud, hybrid cloud, and community cloud. Although the benefits of cloud computing are numerous, the study also highlights significant challenges, particularly data security and dependence on a stable Internet connection. To help businesses navigate this complex landscape, we provide recommendations concerning specific needs, integration capabilities, and potential risks.
32

Stötzner, Miles, Steffen Becker, Uwe Breitenbücher, Kálmán Képes, and Frank Leymann. "Modeling Different Deployment Variants of a Composite Application in a Single Declarative Deployment Model." Algorithms 15, no. 10 (October 19, 2022): 382. http://dx.doi.org/10.3390/a15100382.

Annotation:
For automating the deployment of composite applications, typically, declarative deployment models are used. Depending on the context, the deployment of an application has to fulfill different requirements, such as costs and elasticity. As a consequence, one and the same application, i.e., its components, and their dependencies, often need to be deployed in different variants. If each different variant of a deployment is described using an individual deployment model, it quickly results in a large number of models, which are error prone to maintain. Deployment technologies, such as Terraform or Ansible, support conditional components and dependencies which allow modeling different deployment variants of a composite application in a single deployment model. However, there are deployment technologies, such as TOSCA and Docker Compose, which do not support such conditional elements. To address this, we extend the Essential Deployment Metamodel (EDMM) by conditional components and dependencies. EDMM is a declarative deployment model which can be mapped to several deployment technologies including Terraform, Ansible, TOSCA, and Docker Compose. Preprocessing such an extended model, i.e., conditional elements are evaluated and either preserved or removed, generates an EDMM conform model. As a result, conditional elements can be integrated on top of existing deployment technologies that are unaware of such concepts. We evaluate this by implementing a preprocessor for TOSCA, called OpenTOSCA Vintner, which employs the open-source TOSCA orchestrators xOpera and Unfurl to execute the generated TOSCA conform models.
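
The preprocessing step described above, evaluating conditions attached to components and dependencies and pruning the elements whose conditions do not hold, can be sketched on plain Python dictionaries as below. This is only an illustration of the idea, not the OpenTOSCA Vintner implementation or the EDMM syntax.

```python
# Sketch of the preprocessing idea: evaluate the conditions attached to
# components and relations of a declarative model and drop elements whose
# conditions are false. Plain dicts stand in for EDMM/TOSCA documents.
def preprocess(model: dict, inputs: dict) -> dict:
    def enabled(element: dict) -> bool:
        condition = element.get("condition")
        return condition is None or condition(inputs)

    components = {name: c for name, c in model["components"].items() if enabled(c)}
    relations = [r for r in model["relations"]
                 if enabled(r) and r["source"] in components and r["target"] in components]
    return {"components": components, "relations": relations}

variant_model = {
    "components": {
        "shop": {"type": "web.app"},
        "gpu_worker": {"type": "vm", "condition": lambda i: i["plan"] == "premium"},
        "db": {"type": "database"},
    },
    "relations": [
        {"source": "shop", "target": "db"},
        {"source": "shop", "target": "gpu_worker",
         "condition": lambda i: i["plan"] == "premium"},
    ],
}

# The "free" variant keeps only the unconditional elements.
print(preprocess(variant_model, {"plan": "free"})["components"].keys())
```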
33

Hieder, Inaam Abbas. "Compared to wireless deployment in areas with different environments." International Journal of Electrical and Computer Engineering (IJECE) 9, no. 2 (April 1, 2019): 934–40. https://doi.org/10.11591/ijece.v9i2.pp934-940.

Annotation:
In mobile phone systems, it is highly desirable to estimate path loss not only to improve performance but also to obtain an accurate estimate of financial feasibility; an inaccurate path-loss estimate leads either to performance degradation or to increased cost. Various models have been introduced to estimate path loss accurately. One of these is the Okumura–Hata model, which is recommended for estimating path loss in cellular systems that use micro cells and is suitable for a variety of environments. This study compares path loss models through statistical analysis of experimental data collected in urban and suburban areas at frequencies of 150-1500 MHz. The measurement results were used to develop path loss models for urban and suburban areas and show that path loss is higher in urban areas.
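
The Okumura–Hata model referenced in this study has a standard closed form; the sketch below implements the small/medium-city urban variant and the usual suburban correction, both valid for roughly 150-1500 MHz. The frequency, antenna heights, and distances in the example are illustrative.

```python
# Standard Okumura-Hata path-loss formulas (small/medium city) with the
# suburban correction. f in MHz (150-1500), antenna heights in metres, d in km.
import math

def hata_urban(f_mhz: float, hb_m: float, hm_m: float, d_km: float) -> float:
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * hm_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

def hata_suburban(f_mhz: float, hb_m: float, hm_m: float, d_km: float) -> float:
    return hata_urban(f_mhz, hb_m, hm_m, d_km) - 2 * (math.log10(f_mhz / 28)) ** 2 - 5.4

for d in (1, 2, 5):
    print(d, "km:",
          round(hata_urban(900, 50, 1.5, d), 1), "dB urban,",
          round(hata_suburban(900, 50, 1.5, d), 1), "dB suburban")
```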
34

Liagkou, Vasiliki, George Fragiadakis, Evangelia Filiopoulou, Vagia Kyriakidou, Christos Michalakelis, and Mara Nikolaidou. "Comparing the Cost of IaaS and CaaS Services." International Journal of Technology Diffusion 13, no. 1 (January 1, 2022): 1–11. http://dx.doi.org/10.4018/ijtd.315632.

Annotation:
Cloud computing environments allow businesses to deploy applications in a fast and scalable way. Infrastructure-as-a-service (IaaS) and container-as-a-service (CaaS) models can be adopted for the deployment of cloud-based applications. The current paper presents a specific near-real-world scenario of a cloud-based application, deployed by the two aforementioned cloud models. The deployment cost differs between the cloud models and relies on the number of utilized resources, which is driven by the user demand. Since the cost is a major importance factor that finally determines the adoption of cloud technology, it is challenging to estimate and examine the cost of each proposed approach. This research can help cloud computing professionals pick a model that meets their goals and budget. It's also useful for cloud cost analysis. The corresponding costs of the two deployments are estimated based on the pricing policies of major providers, Amazon, Google, and Microsoft.
35

Toluwase Peter Gbenle, Abraham Ayodeji Abayomi, Abel Chukwuemeke Uzoka, Oyejide Timothy Odofin, Oluwasanmi Segun Adanigbo, and Jeffrey Chidera Ogeawuchi. "Developing an AI Model Registry and Lifecycle Management System for Cross-Functional Tech Teams." International Journal of Scientific Research in Science, Engineering and Technology 11, no. 4 (August 20, 2024): 442–56. https://doi.org/10.32628/ijsrset25121179.

Annotation:
This paper presents a comprehensive solution for managing AI models across their lifecycle through the development of an AI model registry and lifecycle management system. As AI continues to play a crucial role across industries, the complexity of managing models—from development to deployment—presents significant challenges, especially within cross-functional teams. These challenges include issues such as model versioning, metadata management, deployment inconsistencies, and communication breakdowns among data scientists, engineers, and business stakeholders. The proposed system addresses these challenges by providing a centralized platform that integrates features such as version control, metadata management, and automated deployment, thereby improving transparency and reducing the risk of deployment errors. Furthermore, the system fosters enhanced collaboration by integrating widely-used project management tools like GitHub, Jira, and Slack, ensuring that teams remain aligned throughout the model's lifecycle. By enabling continuous monitoring and incorporating automated model drift detection, the system ensures that AI models remain accurate and efficient post-deployment. This paper also explores the technical implementation strategy for the system, including the use of containerization, cloud-native infrastructure, and microservices architecture to ensure scalability and flexibility. The implications of this work extend beyond technical considerations, as it enhances collaboration, improves model quality, and accelerates deployment cycles. Future research directions include exploring automation in model updates, scalability in large enterprises, and the integration of additional tools and frameworks. This work provides a critical step toward optimizing AI model management, offering a scalable, efficient, and secure approach to managing AI models throughout their lifecycle.
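
The registry-and-lifecycle core described above (versioned entries, metadata, stage promotion) can be sketched in a few lines. The in-memory class below is an illustration only, with invented model names and fields; a real system would add persistence, access control, and the CI/CD and monitoring integrations the paper discusses.

```python
# In-memory sketch of a model registry: versioned entries with metadata and
# stage promotion (e.g. staging -> production). Illustrative, not production code.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelVersion:
    name: str
    version: int
    artifact_uri: str
    metadata: dict
    stage: str = "staging"
    registered_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ModelRegistry:
    def __init__(self):
        self._models: dict[str, list[ModelVersion]] = {}

    def register(self, name: str, artifact_uri: str, **metadata) -> ModelVersion:
        versions = self._models.setdefault(name, [])
        mv = ModelVersion(name, len(versions) + 1, artifact_uri, metadata)
        versions.append(mv)
        return mv

    def promote(self, name: str, version: int, stage: str = "production") -> None:
        for mv in self._models[name]:
            mv.stage = stage if mv.version == version else "archived"

    def latest(self, name: str, stage: str = "production") -> ModelVersion | None:
        candidates = [mv for mv in self._models.get(name, []) if mv.stage == stage]
        return candidates[-1] if candidates else None

registry = ModelRegistry()
registry.register("fraud-detector", "s3://models/fraud/1", auc=0.91)
v2 = registry.register("fraud-detector", "s3://models/fraud/2", auc=0.94)
registry.promote("fraud-detector", v2.version)
print(registry.latest("fraud-detector"))
```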
36

Reingle Gonzalez, Jennifer M., Stephen A. Bishopp, Katelyn K. Jetelina, Ellen Paddock, Kelley Pettee Gabriel, and M. Brad Cannell. "Does military veteran status and deployment history impact officer involved shootings? A case–control study." Journal of Public Health 41, no. 3 (October 3, 2018): e245-e252. http://dx.doi.org/10.1093/pubmed/fdy151.

Annotation:
Background: Despite veterans’ preference hiring policies by law enforcement agencies, no studies have examined the nature or effects of military service or deployments on health outcomes. This study will examine the effect of military veteran status and deployment history on law enforcement officer (LEO)-involved shootings. Methods: Ten years of data were extracted from Dallas Police Department records. LEOs who were involved in a shooting in the past 10 years were frequency matched on sex to LEOs never involved in a shooting. Military discharge records were examined to quantify veteran status and deployment(s). Multivariable logistic regression was used to estimate the effect of veteran status and deployment history on officer-involved shooting involvement. Results: Records were abstracted for 516 officers. In the adjusted models, veteran LEOs who were not deployed were significantly more likely to be involved in a shooting than non-veteran officers. Veterans with a deployment history were 2.9 times more likely to be in a shooting than non-veteran officers. Conclusions: Military veteran status, regardless of deployment history, is associated with increased odds of shootings among LEOs. Future studies should identify mechanisms that explain this relationship, and whether officers who experienced firsthand combat exposure experience greater odds of shooting involvement.
37

Elkhatib, Yehia. "Building Cloud Applications for Challenged Networks." Communications in Computer and Information Science 514 (November 21, 2015): 1–10. https://doi.org/10.1007/978-3-319-25043-4_1.

Annotation:
Cloud computing has seen vast advancements and uptake in many parts of the world. However, many of the design patterns and deployment models are not very suitable for locations with challenged networks such as countries with no nearby datacenters. This paper describes the problem and discusses the options available for such locations, focusing specifically on community clouds as a short-term solution. The paper highlights the impact of recent trends in the development of cloud applications and how changing these could better help deployment in challenged networks. The paper also outlines the consequent challenges in bridging different cloud deployments, also known as cross-cloud computing.
38

Dunbar, Christopher R., Mark S. Riddle, Kristen Clarkson, Ramiro L. Gutierrez, Ashley Alcala, Angelique Byrd, and Chad K. Porter. "1104. Deployment-Associated Infectious Gastroenteritis and Associations With Irritable Bowel Syndrome, Post-Traumatic Stress Disorder, and Combat Stress: A Retrospective Cohort Study Among Deployed United States Military Personnel." Open Forum Infectious Diseases 5, suppl_1 (November 2018): S331. http://dx.doi.org/10.1093/ofid/ofy210.938.

Annotation:
Background: Previous studies have shown an association between post-traumatic stress disorder (PTSD) and the development of irritable bowel syndrome (IBS) in deployed service members. Deployment places soldiers at risk for chemical, physical, psychological, and infectious stressors. Acute stress can alter the gastrointestinal barrier leading to gut barrier dysfunction, which is an independent risk factor for infectious gastroenteritis (IGE). We sought to assess if there was an association between IBS and PTSD in military deployed in support of recent and ongoing military operations. Methods: We conducted a retrospective cohort study of United States service members who participated in a combat deployment to the Middle East from 2001 to 2013 with no prior Axis I disorders or PTSD diagnoses based on data from the Defense Medical Surveillance System. Univariate and multivariate logistic regression models were used to assess the differential risk of PTSD following a combat deployment among those with and without a predeployment diagnosis of IBS. These models were controlled for confounders/covariates of interest (IGE, age, duration of deployment, sex, race, marital status, education level, military rank, branch of service, number of deployments). Results: Among the 3825 subjects, those who developed IGE had a 34% (P = 0.02) increased risk of PTSD compared with those with no IGE during deployment. Additionally, those with IBS predeployment had a 40% (P = 0.001) increased risk of PTSD upon return from deployment compared with those without IBS predeployment. Duration of deployment was significantly (P < 0.0001) associated with PTSD with an increasing risk with increasing duration of deployment. Conclusion: IGE and IBS were significantly associated with PTSD, further supporting previous studies describing their association. Baseline chronic dysbiosis and acute stress-related microbiota perturbations may lead to short- and long-term resilience and performance deficits in our soldiers that may compromise mission capabilities and decrease the quality of life in returning soldiers. Further understanding the potential interactions between the gut–brain–microbiome may have immediate and long-term impacts on improving warfighter health and performance. Disclosures: All authors: No reported disclosures.
39

Nissen, Lars R., Karen-Inge Karstoft, Mia S. Vedtofte, Anni B. S. Nielsen, Merete Osler, Erik L. Mortensen, Gunhild T. Christensen, and Søren B. Andersen. "Cognitive ability and risk of post-traumatic stress disorder after military deployment: an observational cohort study." BJPsych Open 3, no. 6 (November 2017): 274–80. http://dx.doi.org/10.1192/bjpo.bp.117.005736.

Annotation:
Background: Studies of the association between pre-deployment cognitive ability and post-deployment post-traumatic stress disorder (PTSD) have shown mixed results. Aims: To study the influence of pre-deployment cognitive ability on PTSD symptoms 6–8 months post-deployment in a large population while controlling for pre-deployment education and deployment-related variables. Method: Study linking prospective pre-deployment conscription board data with post-deployment self-reported data in 9695 Danish Army personnel deployed to different war zones in 1997–2013. The association between pre-deployment cognitive ability and post-deployment PTSD was investigated using repeated-measure logistic regression models. Two models with cognitive ability score as the main exposure variable were created (model 1 and model 2). Model 1 was only adjusted for pre-deployment variables, while model 2 was adjusted for both pre-deployment and deployment-related variables. Results: When including only variables recorded pre-deployment (cognitive ability score and educational level) and gender (model 1), all variables predicted post-deployment PTSD. When deployment-related variables were added (model 2), this was no longer the case for cognitive ability score. However, when educational level was removed from the model adjusted for deployment-related variables, the association between cognitive ability and post-deployment PTSD became significant. Conclusions: Pre-deployment lower cognitive ability did not predict post-deployment PTSD independently of educational level after adjustment for deployment-related variables.
40

Mokale, Mahesh. "Automated Debugging and Deployment for High-Performance Telecom Applications." International Scientific Journal of Engineering and Management 02, no. 11 (November 28, 2023): 1–8. https://doi.org/10.55041/isjem00206.

Annotation:
Abstract: High-performance telecom applications require efficient debugging and deployment strategies to ensure reliability, scalability, and seamless operations. These applications operate within highly complex and distributed environments where even minor failures or inefficiencies can result in significant service disruptions, financial losses, and customer dissatisfaction. Given the critical role telecom applications play in enabling global communication networks, minimizing downtime, optimizing system performance, and maintaining operational continuity is a top priority for telecom service providers. Automated debugging and deployment frameworks address these challenges by integrating advanced artificial intelligence (AI), machine learning (ML), and DevOps methodologies. Automated debugging solutions analyze logs and system metrics in real time, detecting anomalies, diagnosing root causes, and predicting potential failures before they impact service availability. By leveraging intelligent log analysis, anomaly detection algorithms, and self-healing mechanisms, these solutions enhance the fault tolerance and resilience of telecom applications. In addition to debugging, automated deployment frameworks streamline software releases, infrastructure updates, and configuration changes. Traditional deployment models often require manual interventions that increase the risk of errors, downtime, and inconsistent deployments across different environments. With automation-driven strategies such as Infrastructure as Code (IaC), containerization, and automated rollback mechanisms, telecom companies can achieve consistent, predictable, and secure deployments. Furthermore, modern deployment methodologies such as blue-green and canary deployments minimize disruption by allowing incremental rollouts of new software versions while ensuring service reliability. These approaches enable operators to test new releases in real-time environments with controlled user exposure, reducing the risks associated with large-scale software updates. The implementation of continuous integration and continuous deployment (CI/CD) pipelines further optimizes the development lifecycle, allowing frequent and seamless software updates without impacting ongoing operations. This white paper explores the critical challenges in debugging and deploying high-performance telecom applications and presents state-of-the-art automation strategies that were available up to 2022. By adopting AI-driven debugging techniques, robust deployment automation frameworks, and DevOps best practices, telecom providers can improve operational efficiency, enhance system resilience, reduce downtime, and accelerate time-to-market for new features and updates.
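As a small, self-contained illustration of the log-and-metric anomaly detection idea mentioned above (not code from the paper), the sketch below flags time windows whose error rate deviates sharply from a rolling baseline; the window size and threshold are arbitrary choices.

    from collections import deque
    from statistics import mean, stdev

    def detect_anomalies(error_rates, window=30, z_threshold=3.0):
        """Yield (index, value) pairs whose error rate deviates strongly from a rolling baseline."""
        history = deque(maxlen=window)
        for i, value in enumerate(error_rates):
            if len(history) >= 5:
                mu, sigma = mean(history), stdev(history)
                # flag jumps away from a flat baseline as well as large z-scores
                if (sigma == 0 and value != mu) or (sigma > 0 and abs(value - mu) / sigma > z_threshold):
                    yield i, value
            history.append(value)

    rates = [0.01] * 100 + [0.25] + [0.01] * 20  # a single error-rate spike at index 100
    print(list(detect_anomalies(rates)))         # -> [(100, 0.25)]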
Keywords: Automated Debugging, Deployment Automation, High-Performance Telecom Applications, AI-Driven Debugging, Machine Learning, DevOps, Infrastructure as Code (IaC), Continuous Integration (CI), Continuous Deployment (CD), Kubernetes, Docker, Containerization, Self-Healing Systems, Predictive Maintenance, Fault Detection, Anomaly Detection, Log Analysis, CI/CD Pipelines, Canary Deployment, Blue-Green Deployment, Automated Rollback, Disaster Recovery, Telecom Network Automation, Network Monitoring, AI-Based Root Cause Analysis, Service Orchestration, Cloud-Native Architectures, Microservices, Security Automation, Compliance Monitoring, Zero-Trust Security, Policy-Based Security Enforcement, Performance Testing, Load Balancing, Chaos Engineering, Shift-Left Testing, Regression Testing, Automated Testing, Fault Tolerance, Scalability, Operational Efficiency, Real-Time Analytics.
APA, Harvard, Vancouver, ISO and other citation styles
41

Gonnade, Priyanka, and Sonali Ridhorkar. "Empirical analysis of decision recommendation models for various processes from a pragmatic perspective." Multidisciplinary Reviews 7, no. 8 (April 25, 2024): 2024159. http://dx.doi.org/10.31893/multirev.2024159.

Full text of the source
Annotation:
Decision recommendation models allow researchers and process designers to identify and implement high-efficiency processes in ambiguous situations. These models perform multiparametric analysis on the given process sets to recommend high-quality decisions that assist in improving process-based efficiency levels. A wide variety of models have been proposed by researchers for the implementation of such recommenders, and each of them varies in terms of their functional nuances, applicative advantages, internal operating characteristics, contextual limitations, and deployment-specific future scopes. Thus, it is difficult for researchers and process designers to identify optimal models for functionality-specific use. Therefore, they tend to validate multiple process models, which increases deployment time, cost and complexity levels. To overcome this ambiguity, a detailed survey of different decision process recommendation models is discussed in this text. Fuzzy logic, analytical hierarchical processing (AHP), the technique for order performance by similarity to ideal solution (TOPSIS), and their variants are highly useful for the recommendation of efficient decisions. Based on this survey, readers will be able to identify recently proposed decision recommendation models and functionality-specific models for their deployments. To further assist in the model selection process, this paper compares the reviewed models in terms of their computational complexity, recommendation efficiency, delay needed for recommendation, scalability and contextual accuracy. Based on this comparison, readers will be able to identify performance-specific models for their deployments. This paper also evaluates a novel decision recommendation rank metric (DRRM), which combines these parameters, to identify models that perform optimally with respect to multiple process metrics. Referring to this parameter comparison, readers will be able to identify optimal recommendation models for enhancing the performance of their decision recommendations under real-time scenarios.
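Since TOPSIS is named above as one of the most useful techniques, a compact, self-contained sketch of it may help; the decision matrix, weights, and criterion directions below are invented examples, not data from the survey.

    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives (rows) against criteria (columns); benefit[j] is True
        when larger values of criterion j are better."""
        m = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize each criterion
        v = m * weights                               # apply criterion weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_pos = np.linalg.norm(v - ideal, axis=1)
        d_neg = np.linalg.norm(v - anti, axis=1)
        return d_neg / (d_pos + d_neg)                # closeness coefficient: higher is better

    # three candidate decision models scored on accuracy (benefit) and delay in ms (cost)
    scores = topsis(np.array([[0.92, 120.0], [0.88, 40.0], [0.95, 300.0]]),
                    weights=np.array([0.6, 0.4]),
                    benefit=np.array([True, False]))
    print(scores.argsort()[::-1])  # model indices from best to worst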
APA, Harvard, Vancouver, ISO and other citation styles
42

Gonnade, Priyanka, and Sonali Ridhorkar. "Empirical Analysis of Decision Recommendation Models for Various Processes from A Pragmatic Perspective." British Journal of Multidisciplinary and Advanced Studies 4, no. 6 (November 26, 2023): 20–49. http://dx.doi.org/10.37745/bjmas.2022.0358.

Full text of the source
Annotation:
Decision recommendation models allow researchers and process designers to identify and implement high-efficiency processes under ambiguous situations. These models perform multiparametric analysis on the given process sets in order to recommend high-quality decisions that assist in improving process-based efficiency levels. A wide variety of models are proposed by researchers for the implementation of such recommenders, and each of them varies in terms of their functional nuances, applicative advantages, internal operating characteristics, contextual limitations, and deployment-specific future scopes. Thus, it is difficult for researchers and process designers to identify optimal models for their functionality-specific use cases. As a result, they tend to validate multiple process models, which increases deployment time, cost, and complexity levels. To overcome this ambiguity, a detailed survey of different decision process recommendation models is discussed in this text. It was observed that Fuzzy Logic, Analytical Hierarchical Processing (AHP), the Technique for Order Performance by Similarity to Ideal Solution (TOPSIS), and their variants are highly useful for recommending efficient decisions. Based on this survey, readers will be able to identify recently proposed decision recommendation models and functionality-specific models for their deployments. To further assist the model selection process, this text compares the reviewed models in terms of their computational complexity, recommendation efficiency, delay needed for recommendation, scalability and contextual accuracy levels. Based on this comparison, readers will be able to identify performance-specific models for their deployments. This text also proposes the evaluation of a novel Decision Recommendation Rank Metric (DRRM), which combines these parameters, in order to identify models that perform optimally with respect to multiple process metrics. Referring to this parameter comparison, readers will be able to identify optimal recommendation models for enhancing the performance of their decision recommendations under real-time scenarios.
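The abstract does not spell out the DRRM formula, so the following sketch only illustrates the general idea of a composite rank metric: min-max normalize each performance parameter across models and combine them with assumed weights. All metric names, values, and weights are invented.

    def composite_rank(models, weights):
        """models: name -> {metric: value}; higher must mean better, so invert
        cost-type metrics (e.g. use 1/delay) before calling."""
        metrics = list(weights)
        lo = {m: min(v[m] for v in models.values()) for m in metrics}
        hi = {m: max(v[m] for v in models.values()) for m in metrics}
        def norm(name, m):
            span = hi[m] - lo[m]
            return (models[name][m] - lo[m]) / span if span else 1.0
        return {name: sum(weights[m] * norm(name, m) for m in metrics) for name in models}

    candidates = {
        "fuzzy":  {"accuracy": 0.90, "scalability": 0.7, "speed": 1 / 120},
        "ahp":    {"accuracy": 0.86, "scalability": 0.9, "speed": 1 / 45},
        "topsis": {"accuracy": 0.93, "scalability": 0.8, "speed": 1 / 300},
    }
    print(composite_rank(candidates, weights={"accuracy": 0.5, "scalability": 0.3, "speed": 0.2}))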
APA, Harvard, Vancouver, ISO and other citation styles
43

Baldini, Edoardo, Stefano Chessa, and Antonio Brogi. "Estimating the Environmental Impact of Green IoT Deployments." Sensors 23, no. 3 (January 30, 2023): 1537. http://dx.doi.org/10.3390/s23031537.

Full text of the source
Annotation:
The Internet of Things (IoT) is demonstrating its huge innovation potential, but at the same time, its spread can induce one of the highest environmental impacts caused by the IoT industry. This concern has motivated the rise of a new research area aimed at devising green IoT deployments. Our work falls within this research area by addressing the problem of assessing the environmental impact of IoT deployments. Specifically, we propose a methodology based on an analytical model to assess the environmental impact of an outdoor IoT deployment powered by solar energy harvesting. The model takes as input the specification of the IoT devices that constitute the deployment in terms of the battery, solar panel and electronic components, and it outputs the energy required for the entire life-cycle of the deployment and the waste generated by its disposal. Given an existing IoT deployment, the model also determines a functionally equivalent baseline green solution, which is an ideal configuration with a lower environmental impact than the original solution. We validated the proposed methodology through a case study of an existing IoT deployment developed within the European project RESCATAME. In particular, by means of the model, we evaluate the impact of the RESCATAME system and compare it with that of its baseline. In a scenario with a 30-year lifespan, the model estimates that the system requires more than 3 times the energy of its baseline green solution and generates a waste volume 15 times greater. We also show how the impact of the baseline increases when assuming deployments at increasing latitudes. Finally, the article presents an implementation of the proposed methodology as a publicly available web service.
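To make the accounting concrete, here is a toy life-cycle sketch in the same spirit (per-device embodied energy plus battery replacements over the lifespan, plus disposal volume). All figures and the simplified bookkeeping are invented placeholders; the paper's analytical model is considerably more detailed.

    import math
    from dataclasses import dataclass

    @dataclass
    class Device:
        manufacturing_energy_kwh: float  # embodied energy of electronics, solar panel, enclosure
        battery_energy_kwh: float        # embodied energy per battery
        battery_life_years: float        # replacement interval
        disposal_volume_l: float         # waste volume at end of life

    def deployment_impact(devices, lifetime_years):
        energy = sum(d.manufacturing_energy_kwh
                     + math.ceil(lifetime_years / d.battery_life_years) * d.battery_energy_kwh
                     for d in devices)
        waste = sum(d.disposal_volume_l for d in devices)
        return energy, waste

    nodes = [Device(45.0, 6.0, 5.0, 1.2) for _ in range(120)]
    energy_kwh, waste_l = deployment_impact(nodes, lifetime_years=30)
    print(f"{energy_kwh:.0f} kWh embodied energy, {waste_l:.0f} L of waste over 30 years")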
APA, Harvard, Vancouver, ISO and other citation styles
44

Ahamed, Fnu Imran. "A Comprehensive Guide to Optimizing Machine Learning and Deep Learning Models." European Journal of Computer Science and Information Technology 13, no. 11 (April 15, 2025): 99–113. https://doi.org/10.37745/ejcsit.2013/vol13n1199113.

Full text of the source
Annotation:
Machine learning and deep learning model optimization remains a pivotal aspect of artificial intelligence development, encompassing crucial elements from data preprocessing to deployment monitoring. The optimization process involves multiple interconnected stages, including data quality management, algorithm selection, feature engineering, hyperparameter tuning, transfer learning, and model deployment strategies. Each stage presents unique challenges and opportunities for enhancing model performance, with modern techniques offering solutions for improved accuracy, efficiency, and reliability. From addressing data quality issues through systematic preprocessing to implementing sophisticated deployment monitoring systems, the various aspects of model optimization work together to create robust and effective machine learning solutions that can be successfully deployed in real-world applications.
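As one representative step of such a pipeline, the sketch below runs cross-validated hyperparameter tuning with scikit-learn; the estimator, parameter grid, and synthetic data are illustrative choices rather than recommendations from the article.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 30]},
        cv=5,
        scoring="roc_auc",
    )
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))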
APA, Harvard, Vancouver, ISO and other citation styles
45

Turner, Matthew, Yuan Liao, and Yan Du. "Comprehensive Smart Grid Planning in a Regulated Utility Environment." International Journal of Emerging Electric Power Systems 16, no. 3 (June 1, 2015): 265–79. http://dx.doi.org/10.1515/ijeeps-2014-0099.

Full text of the source
Annotation:
Abstract This paper presents the tools and exercises used during the Kentucky Smart Grid Roadmap Initiative in a collaborative electric grid planning process involving state regulators, public utilities, academic institutions, and private interest groups. The mandate of the initiative was to assess the existing condition of smart grid deployments in Kentucky, to enhance understanding of smart grid concepts by stakeholders, and to develop a roadmap for the deployment of smart grid technologies by the jurisdictional utilities of Kentucky. Through involvement of many important stakeholder groups, the resultant Smart Grid Deployment Roadmap proposes an aggressive yet achievable strategy and timetable designed to promote enhanced availability, security, efficiency, reliability, affordability, sustainability and safety of the electricity supply throughout the state while maintaining Kentucky’s nationally competitive electricity rates. The models and methods developed for this exercise can be utilized as a systematic process for the planning of coordinated smart grid deployments.
APA, Harvard, Vancouver, ISO and other citation styles
46

Hedenus, F., N. Jakobsson, L. Reichenberg, and N. Mattsson. "Historical wind deployment and implications for energy system models." Renewable and Sustainable Energy Reviews 168 (October 2022): 112813. http://dx.doi.org/10.1016/j.rser.2022.112813.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles
47

Fan, Tie Gang, Gui Fa Teng, and Li Min Huo. "Minimum Cost Programming Models of Nodes Deployment for WSNs." Applied Mechanics and Materials 719-720 (January 2015): 696–701. http://dx.doi.org/10.4028/www.scientific.net/amm.719-720.696.

Full text of the source
Annotation:
WSNs cover a wide range of applications. Node deployment is a fundamental factor in determining the connectivity, coverage, lifetime and cost of WSNs. This paper focuses on the cost of a network that satisfies coverage, connectivity and lifetime constraints. To satisfy connectivity and coverage, we use a regular hexagonal cell architecture. We present a new metric, the Cost Per Unit Area and Lifetime, as the objective function. Three programming models are proposed under different scenarios. For reasons of space, we briefly present the method for solving the above models, together with some analysis.
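A rough sketch of that objective can be written directly: with a regular hexagonal layout, each node of sensing radius r covers a hexagon of area (3*sqrt(3)/2)*r^2, which gives a node count and a cost-per-unit-area-and-lifetime figure. The unit price and lifetime below are invented, and the paper's programming models additionally enforce connectivity and lifetime constraints not reproduced here.

    import math

    def cost_per_unit_area_and_lifetime(area_m2, sensing_radius_m, node_cost, lifetime_days):
        hex_area = 1.5 * math.sqrt(3) * sensing_radius_m ** 2  # hexagon inscribed in the sensing disc
        n_nodes = math.ceil(area_m2 / hex_area)
        total_cost = n_nodes * node_cost
        return n_nodes, total_cost / (area_m2 * lifetime_days)

    nodes, cpal = cost_per_unit_area_and_lifetime(area_m2=1_000_000, sensing_radius_m=30,
                                                  node_cost=25.0, lifetime_days=365)
    print(nodes, round(cpal, 8))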
APA, Harvard, Vancouver, ISO and other citation styles
48

Castro, Diogo, Prasanth Kothuri, Piotr Mrowczynski, Danilo Piparo, and Enric Tejedor. "Apache Spark usage and deployment models for scientific computing." EPJ Web of Conferences 214 (2019): 07020. http://dx.doi.org/10.1051/epjconf/201921407020.

Full text of the source
Annotation:
This talk shares our recent experiences in providing a data analytics platform based on Apache Spark for High Energy Physics, the CERN accelerator logging system, and infrastructure monitoring. The Hadoop Service has started to expand its user base to researchers who want to perform analysis with big data technologies. Among many frameworks, Apache Spark is currently getting the most traction from various user communities, and new ways to deploy Spark, such as Apache Mesos or Spark on Kubernetes, have started to evolve rapidly. Meanwhile, notebook web applications such as Jupyter offer the ability to perform interactive data analytics and visualizations without the need to install additional software. CERN already provides a web platform, called SWAN (Service for Web-based ANalysis), where users can write and run their analyses in the form of notebooks, seamlessly accessing the data and software they need. The first part of the presentation covers several recent integrations and optimizations to the Apache Spark computing platform to enable HEP data processing and CERN accelerator logging system analytics. The optimizations and integrations include, but are not limited to, access to kerberized resources, an xrootd connector enabling remote access to EOS storage, and integration with SWAN for interactive data analysis, thus forming a truly Unified Analytics Platform. The second part of the talk touches upon the evolution of the Apache Spark data analytics platform, particularly the recent work done to run Spark on Kubernetes on the virtualized and container-based infrastructure in OpenStack. This deployment model allows for elastic scaling of data analytics workloads, enabling efficient, on-demand utilization of resources in private or public clouds.
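For readers unfamiliar with the Spark-on-Kubernetes deployment model, a minimal PySpark session configuration looks roughly like the sketch below; the API-server URL, container image, and namespace are placeholders, and the session will only start executors against a reachable cluster. CERN's SWAN and OpenStack integration involves additional site-specific configuration not shown here.

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("analytics-on-k8s")
        .master("k8s://https://kubernetes.example.org:443")                         # placeholder API server
        .config("spark.kubernetes.container.image", "registry.example/spark:3.5")  # placeholder image
        .config("spark.kubernetes.namespace", "analytics")
        .config("spark.executor.instances", "8")   # elastic: scaled per workload
        .config("spark.executor.memory", "4g")
        .getOrCreate()
    )

    df = spark.range(1_000_000)
    print(df.selectExpr("sum(id)").collect())
    spark.stop()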
APA, Harvard, Vancouver, ISO and other citation styles
49

Yang, Zhaojing, Min Xu, Xuecheng Tian, Yong Jin, and Shuaian Wang. "Optimal Deployment of Container Weighing Equipment: Models and Properties." Applied Sciences 14, no. 17 (September 3, 2024): 7798. http://dx.doi.org/10.3390/app14177798.

Full text of the source
Annotation:
Container weighing is crucial to the safety of the shipping system and has garnered significant attention in the maritime industry. This research develops a container weighing optimization model and validates several propositions derived from this model. A case study is then conducted on ports along the Yangtze River, and a sensitivity analysis of the model is provided. We report the following findings. First, the model can be solved efficiently for large-scale optimization problems. Second, as the number of weighing machines increases, the container weighing mode changes: from selectively weighing containers at their origin ports, to weighing containers at their transshipment or destination ports, to weighing all containers at their origin ports. Third, to improve the safety benefits of weighing containers, port authorities can increase the weighing capacity of weighing machines. The research provides theoretical guidance for shipping system managers to design container weighing plans that enhance maritime safety.
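To illustrate only the structure of the decision (not the paper's exact formulation), the greedy sketch below assigns each container to a weighing port so as to favour a high notional safety benefit while respecting per-port weighing capacity; container routes, benefits, port names, and capacities are all invented.

    def assign_weighing(containers, capacity):
        """containers: list of (container_id, [(port, benefit), ...]) options;
        capacity: port -> number of weighings available."""
        plan, remaining = {}, dict(capacity)
        options = sorted(((benefit, cid, port)
                          for cid, opts in containers for port, benefit in opts), reverse=True)
        for benefit, cid, port in options:  # most beneficial feasible option first
            if cid not in plan and remaining.get(port, 0) > 0:
                plan[cid] = port
                remaining[port] -= 1
        return plan

    containers = [("C1", [("Chongqing", 5), ("Wuhan", 3), ("Shanghai", 2)]),
                  ("C2", [("Wuhan", 4), ("Shanghai", 2)]),
                  ("C3", [("Chongqing", 5), ("Shanghai", 1)])]
    print(assign_weighing(containers, capacity={"Chongqing": 1, "Wuhan": 1, "Shanghai": 1}))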
APA, Harvard, Vancouver, ISO and other citation styles
50

Sivasamy, K., C. Arumugam, S. R. Devadasan, R. Murugesh, and V. M. M. Thilak. "Advanced models of quality function deployment: a literature review." Quality & Quantity 50, no. 3 (May 5, 2015): 1399–414. http://dx.doi.org/10.1007/s11135-015-0212-2.

Full text of the source
APA, Harvard, Vancouver, ISO and other citation styles