A study of computation offloading frameworks for mobile cloud computing
Mobile devices have limited resources in terms of power, hardware, and bandwidth. Modern mobile devices are powerful, but they often fail to run high-end applications, such as graphics-intensive games, video encoders, and photo editors, smoothly. Moreover, almost every device struggles to run for more than 24 hours on a single charge. One way to overcome these shortcomings is to offload computation to the cloud, which can significantly reduce both energy consumption and latency. However, offloading must be done strategically to ensure optimum utilization of mobile device resources and cloud resources. This research focuses on how offloading computation from mobile devices to the cloud can be performed strategically using different frameworks. We propose a mathematical model for this problem and design three implementations to solve it: 1) using fuzzy logic, 2) using neuro-fuzzy analysis, and 3) using machine learning. All three implementations share some common characteristics: they adapt dynamically to environmental and device parameters, they decide at run time whether or not to offload to the cloud, and they save significant device energy and reduce latency. We created two sample apps to test the models and compared our results with previous approaches, showing that our implementations perform better. We also present a financial estimation of the cost of offloading to the cloud.
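To make the run-time offload decision concrete, the following is a minimal, hypothetical sketch of a fuzzy-logic decision of the kind described above. The inputs (battery level, network bandwidth, task size), the triangular membership ranges, and the defuzzification weights are all illustrative assumptions, not the paper's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_score(battery_pct, bandwidth_mbps, task_mb):
    """Combine fuzzy memberships into a 0..1 score favoring offloading."""
    low_battery = tri(battery_pct, -1, 0, 50)        # degree battery is low
    good_network = tri(bandwidth_mbps, 5, 50, 200)   # degree network is fast
    heavy_task = tri(task_mb, 10, 100, 1000)         # degree task is heavy
    # Weighted-average defuzzification (weights are illustrative assumptions).
    return 0.4 * low_battery + 0.3 * good_network + 0.3 * heavy_task

def should_offload(battery_pct, bandwidth_mbps, task_mb, threshold=0.5):
    """Run-time decision: offload when the fuzzy score clears the threshold."""
    return offload_score(battery_pct, bandwidth_mbps, task_mb) >= threshold
```

For example, a low battery, a fast network, and a heavy task push the score toward offloading, while a full battery, a slow network, and a light task keep the computation on the device; the neuro-fuzzy and machine-learning variants would instead learn such thresholds from data.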