Optimization Model for Low Energy Computing in Cloud Datacenters
dc.contributor.advisor | Jamshidi, Mo | |
dc.contributor.author | Prevost, John Jeffery | |
dc.contributor.committeeMember | Kelley, Brian | |
dc.contributor.committeeMember | Lin, Wei-Ming | |
dc.contributor.committeeMember | Hansen, Lars | |
dc.contributor.committeeMember | Huffman, James | |
dc.date.accessioned | 2024-02-12T19:51:01Z | |
dc.date.available | 2024-02-12T19:51:01Z | |
dc.date.issued | 2013 | |
dc.description | This item is available only to currently enrolled UTSA students, faculty or staff. To download, navigate to Log In in the top right-hand corner of this screen, then select Log in with my UTSA ID. | |
dc.description.abstract | This dissertation proposes an energy optimization model for the computing nodes that comprise cloud computing systems. A central premise of this research is that cloud compute nodes should be active, and therefore consuming power, only when there is workload to process, and should otherwise remain in a low-power inactive state. One novel contribution of this dissertation is a stochastic model of physical server state-change latencies. Knowing how long servers take to change state allows the optimal choice of the prediction time horizon and also allows the horizon to be dynamic, so that time-varying cloud configurations do not adversely affect prediction. The problem of determining future workload is examined using several approaches: a fuzzy logic inference engine, a neural network, and linear filters. Single- and multi-tenant models of hosted cloud systems are used to predict workloads and simulate potential energy savings. The dissertation then presents a minimum-cost optimization model responsible for predicting the incoming workload, determining the optimal system configuration of the cloud, and making the required changes. Using the developed cost model and runtime constraints, the optimizer creates the minimum-energy configuration of cloud compute nodes while ensuring all minimum performance guarantees are met. The cost functions cover the three key areas of concern: energy, performance, and reliability. Simulation showed that reducing the number of active servers cuts power consumption by at least 42%, and that the presented algorithms reduce the required number of calculations by more than 20% compared with the traditional static approach. This allows the optimization algorithm to impose minimal overhead on cloud compute resources while still delivering substantial energy savings. Finally, an overall energy-aware optimization model is presented that describes the required system constraints and proposes techniques for determining the best overall solution. | |
dc.description.department | Electrical and Computer Engineering | |
dc.format.extent | 152 pages | |
dc.format.mimetype | application/pdf | |
dc.identifier.uri | https://hdl.handle.net/20.500.12588/5098 | |
dc.language | en | |
dc.subject | cloud | |
dc.subject | energy | |
dc.subject | green | |
dc.subject | optimization | |
dc.subject.classification | Electrical engineering | |
dc.subject.classification | Computer engineering | |
dc.title | Optimization Model for Low Energy Computing in Cloud Datacenters | |
dc.type | Thesis | |
dc.type.dcmi | Text | |
dcterms.accessRights | pq_closed | |
thesis.degree.department | Electrical and Computer Engineering | |
thesis.degree.grantor | University of Texas at San Antonio | |
thesis.degree.level | Doctoral | |
thesis.degree.name | Doctor of Philosophy |
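The abstract's central premise (activate only as many compute nodes as the predicted workload requires, and keep the rest in a low-power inactive state) can be sketched minimally as follows. This is an illustrative, assumption-laden sketch, not the dissertation's actual cost model: the power figures, per-server capacity, headroom parameter, and function names are all hypothetical, and the real optimizer also accounts for state-change latencies, reliability, and prediction via fuzzy logic, neural networks, or linear filters.

```python
import math

# All constants below are assumed for illustration only.
P_ACTIVE = 200.0   # watts drawn by an active server
P_IDLE = 10.0      # watts drawn by a sleeping (inactive) server
CAPACITY = 100.0   # requests/sec one server can handle within its SLA

def min_active_servers(predicted_load: float, total_servers: int,
                       headroom: float = 0.1) -> int:
    """Smallest number of active servers that covers the predicted
    workload plus a safety headroom, capped at the pool size."""
    needed = math.ceil(predicted_load * (1.0 + headroom) / CAPACITY)
    return max(1, min(needed, total_servers))

def power_draw(active: int, total_servers: int) -> float:
    """Total power with `active` servers on and the rest sleeping."""
    return active * P_ACTIVE + (total_servers - active) * P_IDLE

total = 20
predicted = 450.0  # requests/sec from some workload predictor
active = min_active_servers(predicted, total)

static_power = power_draw(total, total)    # traditional: all servers on
dynamic_power = power_draw(active, total)  # only what the prediction needs
savings = 1.0 - dynamic_power / static_power
```

With these hypothetical numbers, 5 of 20 servers suffice and the rest sleep, illustrating how shrinking the active set yields the kind of large power reductions the simulations report.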
Files
Original bundle
- Name: Prevost_utsa_1283D_11205.pdf
- Size: 10.78 MB
- Format: Adobe Portable Document Format