Managing Cloud Products and Services and Optimizing the Solutions for Complex Problems Using GPU Computing

Dr S. Ramamoorthy, Ms. R. Poorvadevi

Abstract


In cloud computing, all services and resources are pooled from the data centre, and a client can request web services from anywhere in the world. Although cloud vendors provide a massive range of services, managing cloud resources remains a non-trivial task at both the service-provider and the customer end. To solve complex cloud problems and to manage cloud products and services, high-performance computing opens a new scope for feasible solutions. In the cloud access platform, GPUs (Graphics Processing Units) work in conjunction with a cloud server, accelerating application speed and processing performance alongside the CPU through a graphics card adapter. The GPU offloads the compute-intensive portions of an application and processes large blocks of data in parallel, while the remainder of the code continues to execute sequentially on the CPU. The proposed system helps to analyse and rectify the problem of cloud vendor lock-in, to manage cloud service provisioning, and to optimize utility performance using high-performance computing.
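To make the offload pattern described above concrete, the sketch below is a minimal, illustrative CUDA example (not taken from the paper): a compute-intensive step, here an element-wise vector addition, is launched as a kernel on the GPU, while data preparation, transfers and the rest of the program run on the CPU. Names and sizes such as vecAdd, N and the thread/block counts are assumptions chosen only for illustration.

// Minimal CPU-to-GPU offload sketch (illustrative; names are assumed, not from the paper).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Compute-intensive portion offloaded to the GPU: one thread per array element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int N = 1 << 20;                 // 1M elements (illustrative size)
    size_t bytes = N * sizeof(float);

    // Host (CPU) side: allocate and initialise the input data sequentially.
    float *hA = (float *)malloc(bytes), *hB = (float *)malloc(bytes), *hC = (float *)malloc(bytes);
    for (int i = 0; i < N; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Device (GPU) side: allocate memory and copy the large data blocks over.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, bytes); cudaMalloc(&dB, bytes); cudaMalloc(&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Offload the compute-intensive portion to the GPU and wait for completion.
    int threads = 256, blocks = (N + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(dA, dB, dC, N);
    cudaDeviceSynchronize();

    // Copy the result back; the remainder of the program continues on the CPU.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hC[0]);          // expected: 3.000000

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}

The sketch assumes compilation with nvcc and a CUDA-capable GPU.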


References


Ilija Hadžic; Martin D. Carroll et.al, “Low-Level Frame-Buffer Scraping for GPUs in the Cloud”, IEEE International Symposium on Multimedia (ISM), Pages: 529 – 532, 2016.

Leonardo Milhomem Franco Christino, Fernando dos Santos Osório et.al, “GPU-Services: Real-Time Processing of 3D Point Clouds for Robotic Systems Using GPUs”, Latin American Robotics Symposium and 2015 3rd Brazilian Symposium on Robotics (LARS-SBR), Pages: 151 – 156, 2015.

Ming Liu; Tao Li et.al, “Understanding the virtualization "Tax" of scale-out pass-through GPUs in GaaS clouds: An empirical study”, International Symposium on High Performance Computer Architecture (HPCA), Pages: 259 – 270, 2015.

Khaled M. Diab; M. Mustafa Rafique et.al, “Dynamic Sharing of GPUs in Cloud Systems”, International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum, Pages: 947 – 954, 2013.

Minoru Oikawa; Atsushi Kawai et.al, “DS-CUDA: A Middleware to Use Many GPUs in the Cloud Environment”, SC Companion: High Performance Computing, Networking Storage and Analysis, Pages: 1207 – 1214, 2012.

Sébastien Frémal, Michel Bagein et.al, “Optimizing performance of batch of applications on cloud servers exploiting multiple GPUs”, International Conference on Complex Systems (ICCS), Pages: 1 – 6, 2012.

Roberto Di Lauro; Flora Giannone et.al, “Virtualizing General Purpose GPUs for High Performance Cloud Computing: An Application to a Fluid Simulator”, International Symposium on Parallel and Distributed Processing with Applications, Pages: 863 – 864, 2012.

Dazhong He; Zhenhua Wang et.al, “A Survey to Predict the Trend of AI-able Server Evolution in the Cloud”, IEEE Systems Journal, Volume: PP, Issue: 99, Pages: 1 – 1, 2018.

Yassine Maalej; Ahmed Abderrahim et.al, “CUDA-accelerated task scheduling in vehicular clouds with opportunistically available V2I”, International Conference on Communications (ICC), Pages: 1 – 6, 2017.

Elliott Wen, Winston K. G. Seah et.al, “GBooster: Towards Acceleration of GPU-Intensive Mobile Applications”, International Conference on Distributed Computing Systems (ICDCS), Pages: 1408 – 1418, 2017.

