Please use this identifier to cite or link to this item: https://repositorio.ufu.br/handle/123456789/41771
Full metadata record
DC Field: Value [Language]
dc.creator: Pereira, Lucas Vinhal
dc.date.accessioned: 2024-07-22T19:53:16Z
dc.date.available: 2024-07-22T19:53:16Z
dc.date.issued: 2023-12-27
dc.identifier.citation: VINHAL, Lucas Vinhal. RAFE: Resource Auto-Scaling For Multi-access Edge Computing With Machine Learning. 2024. 176 f. Dissertação (Mestrado em Ciência da Computação) - Universidade Federal de Uberlândia, Uberlândia, 2024. DOI https://doi.org/10.14393/ufu.di.2024.11. [pt_BR]
dc.identifier.uri: https://repositorio.ufu.br/handle/123456789/41771
dc.description.abstract: To provide connectivity to multiple devices, including IoT devices, 5G relies on technologies such as Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC). Because network flows vary continuously, managing the resources of these devices is one of the most important tasks and calls for dynamic algorithms that scale finite resources efficiently while satisfying QoS requirements. For this reason, the combination of reactive autoscaling mechanisms with AI-driven resource estimation models is foreseen as a promising enabler. This work proposes RAFE (Resource Auto-scaling For Everything), a framework that auto-scales VNF and MEC applications, reacting to and anticipating changes in resource requirements through Machine Learning (ML), distributed training processes, multiple AI models, and revalidation. To this end, we first conduct an in-depth analysis and comparison of several ML algorithms applied in contexts commonly faced by edge and cloud applications. Using open datasets, we perform a comprehensive performance evaluation of these algorithms in scenarios frequently encountered at the network's edge, assessing their effectiveness in univariate and multivariate settings, in one-step and multi-step forecasting, and in regression and classification tasks. Furthermore, we detail the architecture and mechanisms of the proposed framework and present a Docker-based orchestration testbed used to assess its performance and functionality in a suitable configuration. We then validate and compare the implemented autoscaling mechanisms on the expected network workload and on a different, unseen workload, to measure their behavior under significant deviations from the learned patterns. Additionally, we evaluate RAFE's integrability and the effects of long-term operation through the effectiveness of its revalidation mechanism. Experimental results show that the proposed scheme achieves outstanding performance in predicting and managing resources while requiring little time to train the forecasting models. Moreover, the hybrid and predictive solutions outperform the reactive solution in reaction latency to traffic changes; above all, the hybrid approach is fundamental to achieving cost-effectiveness while ensuring good results under unforeseen patterns. Finally, RAFE shows outstanding overall performance for auto-scaling edge and cloud applications while offering strong integrability. [pt_BR] (An illustrative sketch of the hybrid autoscaling idea appears after the metadata record below.)
dc.language: eng [pt_BR]
dc.publisher: Universidade Federal de Uberlândia [pt_BR]
dc.rights: Acesso Aberto (Open Access) [pt_BR]
dc.rights.uri: http://creativecommons.org/licenses/by-nc-sa/3.0/us/ [*]
dc.subject: Machine Learning [pt_BR]
dc.subject: Deep Neural Networks [pt_BR]
dc.subject: Resource Management [pt_BR]
dc.subject: Virtual Network Functions [pt_BR]
dc.subject: Multi-Access Edge Computing [pt_BR]
dc.subject: Resource Prediction [pt_BR]
dc.subject: Auto-Scaling [pt_BR]
dc.title: RAFE: Resource Auto-Scaling For Multi-access Edge Computing With Machine Learning [pt_BR]
dc.title.alternative: RAFE: Escalonamento Automático de Recursos para Computação de Borda Multiacesso com Aprendizado de Máquina [pt_BR]
dc.type: Dissertação (Master's thesis) [pt_BR]
dc.contributor.advisor1: Silva, Flávio de Oliveira
dc.contributor.advisor1Lattes: http://lattes.cnpq.br/3190608911887258 [pt_BR]
dc.contributor.referee1: Miani, Rodrigo Sanches
dc.contributor.referee1Lattes: http://lattes.cnpq.br/2992074747740327 [pt_BR]
dc.contributor.referee2: Novais, Paulo Jorge Freitas de Oliveira
dc.contributor.referee2Lattes: https://orcid.org/0000-0002-3549-0754 [pt_BR]
dc.creator.Lattes: http://lattes.cnpq.br/7465249898268112 [pt_BR]
dc.description.degreename: Dissertação (Mestrado) [pt_BR]
dc.description.resumo: (identical to the dc.description.abstract text above) [pt_BR]
dc.publisher.country: Brasil [pt_BR]
dc.publisher.program: Programa de Pós-graduação em Ciência da Computação [pt_BR]
dc.sizeorduration: 176 [pt_BR]
dc.subject.cnpq: CNPQ::CIENCIAS EXATAS E DA TERRA [pt_BR]
dc.identifier.doi: https://doi.org/10.14393/ufu.di.2024.11 [pt_BR]
dc.orcid.putcode: 164138436
dc.crossref.doibatchid: c973a458-5999-410c-877e-e97f567782e1
dc.subject.autorizado: Computação (Computing) [pt_BR]
dc.subject.ods: ODS::ODS 9. Indústria, Inovação e Infraestrutura (SDG 9, Industry, Innovation and Infrastructure: build resilient infrastructure, promote inclusive and sustainable industrialization, and foster innovation) [pt_BR]
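
The dc.description.abstract above contrasts reactive, predictive, and hybrid autoscaling driven by ML forecasts. The Python sketch below illustrates the hybrid idea only; it is not RAFE's implementation, and the function names, thresholds, window size, and linear forecaster are assumptions made for this example. It scales a service when either the observed CPU utilization or a one-step forecast built from a sliding window of past samples crosses a threshold.

# Illustrative sketch only (not RAFE's code): a hybrid autoscaling decision combining
# a reactive CPU-utilization threshold with a one-step ML forecast built from a
# sliding window of past samples. All names, thresholds, and the choice of a linear
# forecaster are assumptions for this example.
from typing import Sequence

import numpy as np
from sklearn.linear_model import LinearRegression

WINDOW = 6            # past utilization samples fed to the forecaster (assumed)
SCALE_UP_AT = 0.80    # reactive upper threshold: 80% CPU (assumed)
SCALE_DOWN_AT = 0.30  # reactive lower threshold: 30% CPU (assumed)


def forecast_next(history: Sequence[float], window: int = WINDOW) -> float:
    """One-step univariate forecast from a sliding window of past utilization."""
    series = np.asarray(history, dtype=float)
    if len(series) <= window:
        return float(series[-1])  # not enough data yet: fall back to the last observation
    # Build (window -> next value) training pairs from the observed history.
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    model = LinearRegression().fit(X, y)
    return float(model.predict(series[-window:].reshape(1, -1))[0])


def decide_replicas(current_util: float, history: Sequence[float], replicas: int) -> int:
    """Hybrid rule: act if either the observed or the forecast utilization crosses a threshold."""
    predicted_util = forecast_next(history)
    worst_case = max(current_util, predicted_util)
    if worst_case > SCALE_UP_AT:
        return replicas + 1   # reactive or proactive scale-out
    if worst_case < SCALE_DOWN_AT and replicas > 1:
        return replicas - 1   # scale in only when observation and forecast agree load is low
    return replicas


if __name__ == "__main__":
    # Steadily rising utilization: the last observation (0.78) is still below the
    # reactive threshold, but the forecast anticipates the breach and triggers scale-out.
    history = [0.12, 0.18, 0.24, 0.30, 0.36, 0.42, 0.48, 0.54, 0.60, 0.66, 0.72, 0.78]
    print(decide_replicas(current_util=history[-1], history=history, replicas=2))

In such a scheme the forecast can trigger a scale-out before the reactive threshold is actually crossed, which reflects the reaction-latency advantage of the hybrid and predictive policies over a purely reactive one reported in the abstract.
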
Appears in Collections: DISSERTAÇÃO - Ciência da Computação

Files in This Item:
File: ResourceAutoScaling.pdf | Description: Dissertação | Size: 39.65 MB | Format: Adobe PDF


This item is licensed under a Creative Commons License (CC BY-NC-SA 3.0 US).