Use this identifier to cite or link to this item:
https://repositorio.ufu.br/handle/123456789/41771
ORCID: | http://orcid.org/0000-0002-3777-0952 |
Document type: | Master's thesis (Dissertação) |
Access type: | Open Access |
Título: | RAFE: Resource Auto-Scaling For Multi-access Edge Computing With Machine Learning |
Alternative title(s): | RAFE: Escalonamento Automático de Recursos para Computação de Borda Multiacesso com Aprendizado de Máquina |
Author(s): | Pereira, Lucas Vinhal |
Advisor: | Silva, Flávio de Oliveira |
First committee member: | Miani, Rodrigo Sanches |
Second committee member: | Novais, Paulo Jorge Freitas de Oliveira |
Abstract: | To provide connectivity to a multitude of devices, including IoT, 5G relies on technologies such as Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC). Because network flows vary continuously, managing the resources of these devices is a critical task that requires dynamic algorithms to scale finite resources efficiently and satisfy QoS requirements. For this reason, the combination of reactive auto-scaling mechanisms and AI-driven resource estimation models is foreseen as a promising enabler. This work proposes RAFE (Resource Auto-scaling For Everything), a framework to auto-scale VNF and MEC applications that reacts to and anticipates changes in resource requirements through Machine Learning (ML), distributed training processes, multiple AI models, and revalidation. To this end, we first conduct an in-depth analysis and comparison of several ML algorithms applied in diverse contexts commonly faced by edge and cloud applications. Using open datasets, we perform a comprehensive performance evaluation of these algorithms in scenarios frequently encountered at the network's edge, assessing their effectiveness in univariate and multivariate contexts, in one-step and multistep forecasting, and in regression and classification tasks. Furthermore, we detail the architecture and mechanisms of the proposed framework and present a Docker-based orchestration testbed to assess its performance and functionality in a suitable configuration. We then validate and compare the implemented auto-scaling mechanisms both on the expected network workload and on a different, unseen workload, to measure performance under significant changes in the learned patterns. Finally, we evaluate RAFE's integrability and long-term operation effects through the effectiveness of its revalidation mechanism.
Experimental results show that the proposed scheme achieves strong performance in predicting and managing resources while requiring little time to train the forecasting models. The hybrid and predictive solutions outperform the reactive solution in reaction latency to traffic changes. Most importantly, the hybrid approach is key to achieving cost-effectiveness while maintaining good results on unforeseen patterns. Overall, RAFE performs very well for auto-scaling edge and cloud applications and is straightforward to integrate. |
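The hybrid approach highlighted in the abstract (a reactive threshold combined with an ML forecast) can be illustrated with a minimal sketch. This is hypothetical code, not RAFE's actual implementation; the function name, thresholds, and return convention are illustrative assumptions.

```python
# Illustrative sketch of a hybrid auto-scaling decision: scale out on either
# an observed or a forecast utilization breach; scale in only when both the
# observed and the predicted load are low, to avoid oscillation.

def hybrid_scale_decision(current_util, predicted_util, high=0.8, low=0.3):
    """Return +1 (scale out), -1 (scale in), or 0 (hold).

    current_util:   observed utilization of the running replicas (0..1)
    predicted_util: ML forecast of utilization for the next interval (0..1)
    """
    # Reactive path: act immediately on an observed breach.
    if current_util > high:
        return +1
    # Predictive path: act ahead of a forecast breach.
    if predicted_util > high:
        return +1
    # Scale in only when observed and forecast load agree it is safe.
    if current_util < low and predicted_util < low:
        return -1
    return 0

print(hybrid_scale_decision(0.85, 0.40))  # 1  (reactive scale-out)
print(hybrid_scale_decision(0.50, 0.90))  # 1  (predictive scale-out)
print(hybrid_scale_decision(0.20, 0.25))  # -1 (scale-in)
print(hybrid_scale_decision(0.50, 0.50))  # 0  (hold)
```

The predictive branch is what reduces reaction latency: it triggers before the observed utilization crosses the threshold, while the reactive branch remains as a safety net for patterns the forecaster has not learned.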
Keywords: | Machine Learning; Deep Neural Networks; Resource Management; Virtual Network Functions; Multi-Access Edge Computing; Resource Prediction; Auto-Scaling |
CNPq area(s): | CNPQ::CIENCIAS EXATAS E DA TERRA |
Subject: | Computing |
Language: | eng |
Country: | Brazil |
Publisher: | Universidade Federal de Uberlândia |
Program: | Graduate Program in Computer Science (Programa de Pós-graduação em Ciência da Computação) |
Citation: | PEREIRA, Lucas Vinhal. RAFE: Resource Auto-Scaling For Multi-access Edge Computing With Machine Learning. 2024. 176 f. Master's thesis (Master's in Computer Science) - Universidade Federal de Uberlândia, Uberlândia, 2024. DOI https://doi.org/10.14393/ufu.di.2024.11. |
Document identifier: | https://doi.org/10.14393/ufu.di.2024.11 |
URI: | https://repositorio.ufu.br/handle/123456789/41771 |
Defense date: | 27-Dec-2023 |
Sustainable Development Goals (SDGs): | ODS::ODS 9. Industry, innovation and infrastructure - Build resilient infrastructure, promote inclusive and sustainable industrialization, and foster innovation. |
Appears in collections: | DISSERTAÇÃO - Ciência da Computação |
Files in this item:
File | Description | Size | Format | |
---|---|---|---|---|
ResourceAutoScaling.pdf | Master's thesis | 39.65 MB | Adobe PDF | View/Open |
This item is licensed under a Creative Commons License