Optimization Model for Low Energy Computing in Cloud Datacenters

dc.contributor.advisor: Jamshidi, Mo
dc.contributor.author: Prevost, John Jeffery
dc.contributor.committeeMember: Kelley, Brian
dc.contributor.committeeMember: Lin, Wei-Ming
dc.contributor.committeeMember: Hansen, Lars
dc.contributor.committeeMember: Huffman, James
dc.date.accessioned: 2024-02-12T19:51:01Z
dc.date.available: 2024-02-12T19:51:01Z
dc.date.issued: 2013
dc.description: This item is available only to currently enrolled UTSA students, faculty, or staff.
dc.description.abstract: This dissertation proposes an energy optimization model for the compute nodes that comprise cloud computing systems. One of the central premises of this research is that the compute nodes of the cloud should be active, and therefore consuming power, only when there is workload to process, and should otherwise remain in a low-power inactive state. One of the novel ideas presented in this dissertation is a stochastic model for physical server state-change time latencies. Understanding how much time it takes servers to change their state allows for the optimal choice of the prediction time horizon and also allows the time horizon to be dynamic, so that time-varying configurations of the cloud do not adversely affect prediction. The problem of determining the future workload will be examined using different approaches such as a fuzzy logic inference engine, a neural network, and linear filters. Single-tenant and multi-tenant models of hosting cloud systems will be used to predict workloads and simulate potential energy savings. This dissertation will then present a minimum-cost optimization model that is responsible for predicting the incoming workload, determining the optimal system configuration of the cloud, and then making the required changes. The optimizer, using the developed cost model and runtime constraints, will create the minimum-energy configuration of cloud compute nodes while ensuring all minimum performance guarantees are kept. The cost functions will cover the three key areas of concern: energy, performance, and reliability. A reduction in the number of active servers was shown through simulation to reduce power consumption by at least 42%. Simulation will also show that the inclusion of the presented algorithms reduces the required number of calculations by over 20% when compared with the traditional static approach. This allows the optimization algorithm to impose minimal overhead on cloud compute resources while still offering substantial energy savings. An overall energy-aware optimization model is finally presented that describes the required system constraints and proposes techniques for determining the best overall solution.
dc.description.department: Electrical and Computer Engineering
dc.format.extent: 152 pages
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/20.500.12588/5098
dc.language: en
dc.subject: cloud
dc.subject: energy
dc.subject: green
dc.subject: optimization
dc.subject.classification: Electrical engineering
dc.subject.classification: Computer engineering
dc.title: Optimization Model for Low Energy Computing in Cloud Datacenters
dc.type: Thesis
dc.type.dcmi: Text
dcterms.accessRights: pq_closed
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.grantor: University of Texas at San Antonio
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
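
The abstract describes an optimizer that predicts incoming workload and then keeps only as many compute nodes active as that load requires, leaving the rest in a low-power state. The sketch below is a minimal illustration of that idea under simplifying assumptions; the function name, capacity figure, and power figures are invented for the example and are not taken from the dissertation's actual cost model.

import math

def min_energy_config(predicted_load, server_capacity, p_active, p_idle, n_servers):
    # Hypothetical illustration: choose the fewest active servers that can
    # serve the predicted load, leaving the remainder in a low-power state.
    needed = math.ceil(predicted_load / server_capacity)   # capacity (performance) constraint
    active = min(max(needed, 1), n_servers)                # keep at least one node active
    power = active * p_active + (n_servers - active) * p_idle
    return active, power

# Example with assumed figures: 10 servers, 100 requests/s capacity each,
# 200 W when active versus 20 W in the low-power state.
active, power = min_energy_config(predicted_load=350, server_capacity=100,
                                  p_active=200.0, p_idle=20.0, n_servers=10)
print(active, power)   # 4 active servers, 920.0 W total

A full scheduler along the lines the abstract sketches would also account for server state-change latencies, since waking a node is not instantaneous.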

Files

Original bundle

Name: Prevost_utsa_1283D_11205.pdf
Size: 10.78 MB
Format: Adobe Portable Document Format