A selection of scholarly literature on the topic "Distributed optimization and learning"
Format a source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the lists of current articles, books, dissertations, theses, and other scholarly sources on the topic "Distributed optimization and learning."
Next to every work in the reference list there is an "Add to bibliography" button. Click it, and we will automatically generate a bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, and others.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these details are available in the metadata.
Journal articles on the topic "Distributed optimization and learning"
Kamalesh, Kamalesh, and Dr Gobi Natesan. "Machine Learning-Driven Analysis of Distributed Computing Systems: Exploring Optimization and Efficiency." International Journal of Research Publication and Reviews 5, no. 3 (March 9, 2024): 3979–83. http://dx.doi.org/10.55248/gengpi.5.0324.0786.
Mertikopoulos, Panayotis, E. Veronica Belmega, Romain Negrel, and Luca Sanguinetti. "Distributed Stochastic Optimization via Matrix Exponential Learning." IEEE Transactions on Signal Processing 65, no. 9 (May 1, 2017): 2277–90. http://dx.doi.org/10.1109/tsp.2017.2656847.
Gratton, Cristiano, Naveen K. D. Venkategowda, Reza Arablouei, and Stefan Werner. "Privacy-Preserved Distributed Learning With Zeroth-Order Optimization." IEEE Transactions on Information Forensics and Security 17 (2022): 265–79. http://dx.doi.org/10.1109/tifs.2021.3139267.
Blot, Michael, David Picard, Nicolas Thome, and Matthieu Cord. "Distributed optimization for deep learning with gossip exchange." Neurocomputing 330 (February 2019): 287–96. http://dx.doi.org/10.1016/j.neucom.2018.11.002.
Young, M. Todd, Jacob D. Hinkle, Ramakrishnan Kannan, and Arvind Ramanathan. "Distributed Bayesian optimization of deep reinforcement learning algorithms." Journal of Parallel and Distributed Computing 139 (May 2020): 43–52. http://dx.doi.org/10.1016/j.jpdc.2019.07.008.
Nedic, Angelia. "Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization." IEEE Signal Processing Magazine 37, no. 3 (May 2020): 92–101. http://dx.doi.org/10.1109/msp.2020.2975210.
Lin, I.-Cheng. "Learning and Optimization over Robust Networked Systems." ACM SIGMETRICS Performance Evaluation Review 52, no. 3 (January 9, 2025): 23–26. https://doi.org/10.1145/3712170.3712179.
Gao, Hongchang. "Distributed Stochastic Nested Optimization for Emerging Machine Learning Models: Algorithm and Theory." Proceedings of the AAAI Conference on Artificial Intelligence 37, no. 13 (June 26, 2023): 15437. http://dx.doi.org/10.1609/aaai.v37i13.26804.
Choi, Dojin, Jiwon Wee, Sangho Song, Hyeonbyeong Lee, Jongtae Lim, Kyoungsoo Bok, and Jaesoo Yoo. "k-NN Query Optimization for High-Dimensional Index Using Machine Learning." Electronics 12, no. 11 (May 24, 2023): 2375. http://dx.doi.org/10.3390/electronics12112375.
Yang, Peng, and Ping Li. "Distributed Primal-Dual Optimization for Online Multi-Task Learning." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 04 (April 3, 2020): 6631–38. http://dx.doi.org/10.1609/aaai.v34i04.6139.
Повний текст джерелаДисертації з теми "Distributed optimization and learning"
Funkquist, Mikaela, and Minghua Lu. "Distributed Optimization Through Deep Reinforcement Learning." Thesis, KTH, Skolan för elektroteknik och datavetenskap (EECS), 2020. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-293878.
Reinforcement learning methods allow self-learning agents to play video and board games autonomously. The project aims to study the effectiveness of the reinforcement learning methods Q-learning and deep Q-learning on dynamic problems. The goal is to train robots so that they can move through a warehouse in the best possible way without colliding. A virtual environment was created in which the algorithms were tested by simulating moving agents. The effectiveness of the algorithms was evaluated by how quickly the agents learned to perform predetermined tasks. The results show that Q-learning works well for simple problems with few agents, where systems with two active agents were solved quickly. Deep Q-learning works better for more complex systems with more agents, although cases of suboptimal movement occurred. Both algorithms showed good potential in their respective areas, but improvements are needed before they can be used in practice. (An illustrative Q-learning sketch follows this entry.)
Bachelor's degree project in electrical engineering 2020, KTH, Stockholm
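The abstract above refers to tabular Q-learning. Purely as an illustrative sketch, and not code from the cited thesis, the core update it alludes to could look as follows; the grid size, action set, and hyperparameters are hypothetical placeholders.

import numpy as np

# Illustrative tabular Q-learning for a grid-world agent (hypothetical setup).
n_states, n_actions = 25, 4              # e.g. a 5x5 warehouse grid with 4 movement actions
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate
Q = np.zeros((n_states, n_actions))      # table of state-action values
rng = np.random.default_rng(0)

def choose_action(state):
    # Epsilon-greedy: explore with probability epsilon, otherwise act greedily.
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def q_update(state, action, reward, next_state):
    # Temporal-difference update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])

The deep Q-learning variant discussed in the thesis replaces the Q-table with a neural-network approximator but is driven by the same update target.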
Konečný, Jakub. "Stochastic, distributed and federated optimization for machine learning." Thesis, University of Edinburgh, 2017. http://hdl.handle.net/1842/31478.
Armond, Kenneth C. Jr. "Distributed Support Vector Machine Learning." ScholarWorks@UNO, 2008. http://scholarworks.uno.edu/td/711.
Patvarczki, Jozsef. "Layout Optimization for Distributed Relational Databases Using Machine Learning." Digital WPI, 2012. https://digitalcommons.wpi.edu/etd-dissertations/291.
Ouyang, Hua. "Optimal stochastic and distributed algorithms for machine learning." Diss., Georgia Institute of Technology, 2013. http://hdl.handle.net/1853/49091.
El, Gamal Mostafa. "Distributed Statistical Learning under Communication Constraints." Digital WPI, 2017. https://digitalcommons.wpi.edu/etd-dissertations/314.
Dai, Wei. "Learning with Staleness." Research Showcase @ CMU, 2018. http://repository.cmu.edu/dissertations/1209.
Lu, Yumao. "Kernel optimization and distributed learning algorithms for support vector machines." Diss., Restricted to subscribing institutions, 2005. http://uclibs.org/PID/11984.
Dinh, The Canh. "Distributed Algorithms for Fast and Personalized Federated Learning." Thesis, The University of Sydney, 2023. https://hdl.handle.net/2123/30019.
Reddi, Sashank Jakkam. "New Optimization Methods for Modern Machine Learning." Research Showcase @ CMU, 2017. http://repository.cmu.edu/dissertations/1116.
Books on the topic "Distributed optimization and learning"
Jiang, Jiawei, Bin Cui, and Ce Zhang. Distributed Machine Learning and Gradient Optimization. Singapore: Springer Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-3420-8.
Wang, Huiwei, Huaqing Li, and Bo Zhou. Distributed Optimization, Game and Learning Algorithms. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4528-7.
Joshi, Gauri. Optimization Algorithms for Distributed Machine Learning. Cham: Springer International Publishing, 2023. http://dx.doi.org/10.1007/978-3-031-19067-4.
Tatarenko, Tatiana. Game-Theoretic Learning and Distributed Optimization in Memoryless Multi-Agent Systems. Cham: Springer International Publishing, 2017. http://dx.doi.org/10.1007/978-3-319-65479-9.
Majhi, Sudhan, Rocío Pérez de Prado, and Chandrappa Dasanapura Nanjundaiah, eds. Distributed Computing and Optimization Techniques. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-19-2281-7.
Giselsson, Pontus, and Anders Rantzer, eds. Large-Scale and Distributed Optimization. Cham: Springer International Publishing, 2018. http://dx.doi.org/10.1007/978-3-319-97478-1.
Lü, Qingguo, Xiaofeng Liao, Huaqing Li, Shaojiang Deng, and Shanfu Gao. Distributed Optimization in Networked Systems. Singapore: Springer Nature Singapore, 2023. http://dx.doi.org/10.1007/978-981-19-8559-1.
Abdulrahman Younis Ali Younis Kalbat. Distributed and Large-Scale Optimization. [New York, N.Y.?]: [publisher not identified], 2016.
Otto, Daniel, Gianna Scharnberg, Michael Kerres, and Olaf Zawacki-Richter, eds. Distributed Learning Ecosystems. Wiesbaden: Springer Fachmedien Wiesbaden, 2023. http://dx.doi.org/10.1007/978-3-658-38703-7.
Book chapters on the topic "Distributed optimization and learning"
Joshi, Gauri, and Shiqiang Wang. "Communication-Efficient Distributed Optimization Algorithms." In Federated Learning, 125–43. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-030-96896-0_6.
Jiang, Jiawei, Bin Cui, and Ce Zhang. "Distributed Gradient Optimization Algorithms." In Distributed Machine Learning and Gradient Optimization, 57–114. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3420-8_3.
Jiang, Jiawei, Bin Cui, and Ce Zhang. "Distributed Machine Learning Systems." In Distributed Machine Learning and Gradient Optimization, 115–66. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3420-8_4.
Joshi, Gauri. "Distributed Optimization in Machine Learning." In Synthesis Lectures on Learning, Networks, and Algorithms, 1–12. Cham: Springer International Publishing, 2022. http://dx.doi.org/10.1007/978-3-031-19067-4_1.
Lin, Zhouchen, Huan Li, and Cong Fang. "ADMM for Distributed Optimization." In Alternating Direction Method of Multipliers for Machine Learning, 207–40. Singapore: Springer Nature Singapore, 2022. http://dx.doi.org/10.1007/978-981-16-9840-8_6.
Jiang, Jiawei, Bin Cui, and Ce Zhang. "Basics of Distributed Machine Learning." In Distributed Machine Learning and Gradient Optimization, 15–55. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-16-3420-8_2.
Scheidegger, Carre, Arpit Shah, and Dan Simon. "Distributed Learning with Biogeography-Based Optimization." In Lecture Notes in Computer Science, 203–15. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011. http://dx.doi.org/10.1007/978-3-642-21827-9_21.
González-Mendoza, Miguel, Neil Hernández-Gress, and André Titli. "Quadratic Optimization Fine Tuning for the Learning Phase of SVM." In Advanced Distributed Systems, 347–57. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005. http://dx.doi.org/10.1007/11533962_31.
Wang, Huiwei, Huaqing Li, and Bo Zhou. "Cooperative Distributed Optimization in Multiagent Networks with Delays." In Distributed Optimization, Game and Learning Algorithms, 1–17. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4528-7_1.
Wang, Huiwei, Huaqing Li, and Bo Zhou. "Constrained Consensus of Multi-agent Systems with Time-Varying Topology." In Distributed Optimization, Game and Learning Algorithms, 19–37. Singapore: Springer Singapore, 2021. http://dx.doi.org/10.1007/978-981-33-4528-7_2.
Conference papers on the topic "Distributed optimization and learning"
Patil, Aditya, Sanket Lodha, Sonal Deshmukh, Rupali S. Joshi, Vaishali Patil, and Sudhir Chitnis. "Battery Optimization Using Machine Learning." In 2024 IEEE International Conference on Blockchain and Distributed Systems Security (ICBDS), 1–5. IEEE, 2024. https://doi.org/10.1109/icbds61829.2024.10837428.
Khan, Malak Abid Ali, Luo Senlin, Hongbin Ma, Abdul Khalique Shaikh, Ahlam Almusharraf, and Imran Khan Mirani. "Optimization of LoRa for Distributed Environments Based on Machine Learning." In 2024 IEEE Asia Pacific Conference on Wireless and Mobile (APWiMob), 137–42. IEEE, 2024. https://doi.org/10.1109/apwimob64015.2024.10792952.
Chao, Liangchen, Bo Zhang, Hengpeng Guo, Fangheng Ji, and Junfeng Li. "UAV Swarm Collaborative Transmission Optimization for Machine Learning Tasks." In 2024 IEEE 30th International Conference on Parallel and Distributed Systems (ICPADS), 504–11. IEEE, 2024. http://dx.doi.org/10.1109/icpads63350.2024.00072.
Shamir, Ohad, and Nathan Srebro. "Distributed stochastic optimization and learning." In 2014 52nd Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE, 2014. http://dx.doi.org/10.1109/allerton.2014.7028543.
Hulse, Daniel, Brandon Gigous, Kagan Tumer, Christopher Hoyle, and Irem Y. Tumer. "Towards a Distributed Multiagent Learning-Based Design Optimization Method." In ASME 2017 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers, 2017. http://dx.doi.org/10.1115/detc2017-68042.
Li, Naihao, Jiaqi Wang, Xu Liu, Lanfeng Wang, and Long Zhang. "Contrastive Learning-based Meta-Learning Sequential Recommendation." In 2024 International Conference on Distributed Computing and Optimization Techniques (ICDCOT). IEEE, 2024. http://dx.doi.org/10.1109/icdcot61034.2024.10515699.
Vaidya, Nitin H. "Security and Privacy for Distributed Optimization & Distributed Machine Learning." In PODC '21: ACM Symposium on Principles of Distributed Computing. New York, NY, USA: ACM, 2021. http://dx.doi.org/10.1145/3465084.3467485.
Liao, Leonardo, and Yongqiang Wu. "Distributed Polytope ARTMAP: A Vigilance-Free ART Network for Distributed Supervised Learning." In 2009 International Joint Conference on Computational Sciences and Optimization, CSO. IEEE, 2009. http://dx.doi.org/10.1109/cso.2009.63.
Wang, Shoujin, Fan Wang, and Yu Zhang. "Learning Rate Decay Algorithm Based on Mutual Information in Deep Learning." In 2024 International Conference on Distributed Computing and Optimization Techniques (ICDCOT). IEEE, 2024. http://dx.doi.org/10.1109/icdcot61034.2024.10515368.
Anand, Aditya, Lakshay Rastogi, Ansh Agarwaal, and Shashank Bhardwaj. "Refraction-Learning Based Whale Optimization Algorithm with Opposition-Learning and Adaptive Parameter Optimization." In 2024 Third International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE). IEEE, 2024. http://dx.doi.org/10.1109/icdcece60827.2024.10548420.
Reports of organizations on the topic "Distributed optimization and learning"
Stuckey, Peter, and Toby Walsh. Learning within Optimization. Fort Belvoir, VA: Defense Technical Information Center, April 2013. http://dx.doi.org/10.21236/ada575367.
Nygard, Kendall E. Distributed Optimization in Aircraft Mission Scheduling. Fort Belvoir, VA: Defense Technical Information Center, May 1995. http://dx.doi.org/10.21236/ada300064.
Meyer, Robert R. Large-Scale Optimization Via Distributed Systems. Fort Belvoir, VA: Defense Technical Information Center, November 1989. http://dx.doi.org/10.21236/ada215136.
Shead, Timothy, Jonathan Berry, Cynthia Phillips, and Jared Saia. Information-Theoretically Secure Distributed Machine Learning. Office of Scientific and Technical Information (OSTI), November 2019. http://dx.doi.org/10.2172/1763277.
Graesser, Arthur C., and Robert A. Wisher. Question Generation as a Learning Multiplier in Distributed Learning Environments. Fort Belvoir, VA: Defense Technical Information Center, October 2001. http://dx.doi.org/10.21236/ada399456.
Voon, B. K., and M. A. Austin. Structural Optimization in a Distributed Computing Environment. Fort Belvoir, VA: Defense Technical Information Center, January 1991. http://dx.doi.org/10.21236/ada454846.
Hays, Robert T. Theoretical Foundation for Advanced Distributed Learning Research. Fort Belvoir, VA: Defense Technical Information Center, May 2001. http://dx.doi.org/10.21236/ada385457.
Chen, J. S. J. Distributed-query optimization in fragmented data-base systems. Office of Scientific and Technical Information (OSTI), August 1987. http://dx.doi.org/10.2172/7183881.
Nocedal, Jorge. Nonlinear Optimization Methods for Large-Scale Learning. Office of Scientific and Technical Information (OSTI), October 2019. http://dx.doi.org/10.2172/1571768.
Lumsdaine, Andrew. Scalable Second Order Optimization for Machine Learning. Office of Scientific and Technical Information (OSTI), May 2022. http://dx.doi.org/10.2172/1984057.