Evaluating Cloud Auto-Scaler Resource Allocation Planning Under Multiple High-Performance Computing Scenarios

dc.contributor.advisor: John, Eugene
dc.contributor.author: Leochico, Kester
dc.contributor.committeeMember: Lin, Wei-Ming
dc.contributor.committeeMember: Lee, Wonjun
dc.date.accessioned: 2024-02-12T14:53:30Z
dc.date.available: 2024-02-12T14:53:30Z
dc.date.issued: 2017
dc.description: This item is available only to currently enrolled UTSA students, faculty, or staff.
dc.description.abstract: Cloud computing enables users to elastically acquire computing resources on demand while providers allocate only as much capacity as is needed, reducing the cost of acquiring resources relative to traditional datacenters. The key to enabling this elasticity is the cloud auto-scaler, the subsystem responsible for planning how many resources to provision in response to current and future demand. The state of the art in comparing auto-scaling algorithms is immature, however, because there is no consistent, standard evaluation methodology that analyzes cloud auto-scalers under multiple scenarios, which makes it difficult to compare results across proposed auto-scalers. To address some of these issues, this work analyzes how changing the workload mix and lowering the service rate (i.e., increasing the average job runtime) affect the performance of three cloud auto-scalers from the literature, in order to better model auto-scaler behavior under high-performance computing scenarios involving long-running jobs rather than the short-lived jobs used in previous studies. The simulation and analysis were carried out using the Performance Evaluation framework for Auto-Scaling (PEAS), a cloud simulator framework for modelling the performance of cloud auto-scaling algorithms using scenario theory. (A toy provisioning sketch follows the record fields below.)
dc.description.department: Electrical and Computer Engineering
dc.format.extent: 73 pages
dc.format.mimetype: application/pdf
dc.identifier.isbn: 9780355166897
dc.identifier.uri: https://hdl.handle.net/20.500.12588/4382
dc.language: en
dc.subject: Auto-scaling
dc.subject: Cloud computing
dc.subject.classification: Computer engineering
dc.title: Evaluating Cloud Auto-Scaler Resource Allocation Planning Under Multiple High-Performance Computing Scenarios
dc.type: Thesis
dc.type.dcmi: Text
dcterms.accessRights: pq_closed
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.grantor: University of Texas at San Antonio
thesis.degree.level: Masters
thesis.degree.name: Master of Science
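
Illustrative note (not part of the thesis record): the abstract describes auto-scalers as the subsystems that plan how many resources to provision against current and future demand, with the service rate standing in for the average job runtime. The Python snippet below is a minimal sketch of such a provisioning decision under an assumed utilization-target rule; the names plan_capacity, arrival_rate, service_rate, and target_utilization are hypothetical, and this is neither one of the three auto-scalers evaluated in the thesis nor the PEAS framework's API.

    import math

    def plan_capacity(arrival_rate, service_rate, target_utilization=0.7):
        # arrival_rate: jobs per second entering the system.
        # service_rate: jobs per second a single instance can complete; a low
        # service rate corresponds to long-running (HPC-style) jobs.
        offered_load = arrival_rate / service_rate         # load in instance-equivalents
        planned = math.ceil(offered_load / target_utilization)
        return max(planned, 1)                             # never plan below one instance

    # Halving the service rate (jobs take twice as long) roughly doubles the plan:
    print(plan_capacity(arrival_rate=10.0, service_rate=2.0))  # -> 8
    print(plan_capacity(arrival_rate=10.0, service_rate=1.0))  # -> 15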

Files

Original bundle

Name: Leochico_utsa_1283M_12323.pdf
Size: 1.76 MB
Format: Adobe Portable Document Format