LIBSVM Accelerated with GPU using the CUDA Framework

GPU-accelerated LIBSVM is a modification of the original LIBSVM that exploits the CUDA framework to significantly reduce processing time while producing identical results. The functionality and interface of LIBSVM remain the same; the modifications were made in the kernel computation, which is now performed on the GPU. Watch a short video on the capabilities of the GPU-accelerated LIBSVM package here.

To showcase the performance gain of the GPU-accelerated LIBSVM, we present an example run:

- TRECVID 2007 dataset, for the detection of high-level features in video shots.
- Training vectors with a dimension of 6000.
- 20 different feature models with a variable number of input training vectors, ranging from 36 up to 3772.
- Parameter optimization using the easy.py script provided by LIBSVM.

GPU-accelerated LIBSVM gives a performance gain that depends on the size of the input data set, and this gain increases dramatically as the dataset grows.
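To illustrate which step the GPU port offloads, here is a minimal Python sketch of the RBF kernel-matrix computation that dominates SVM training time on dense, high-dimensional data like the 6000-dimensional vectors above. The function name and toy data are illustrative, not part of the LIBSVM API; in the GPU-accelerated package this computation runs as a CUDA kernel while the solver and the svm-train interface are unchanged.

```python
import math

def rbf_kernel_matrix(X, gamma):
    """Compute K[i][j] = exp(-gamma * ||x_i - x_j||^2) for all pairs.

    This O(n^2 * d) step is the hot spot that the GPU-accelerated
    LIBSVM moves onto the GPU; the optimization solver on top of it
    is untouched, which is why the results are identical.
    """
    n = len(X)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            sq_dist = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            K[i][j] = math.exp(-gamma * sq_dist)
    return K

# Tiny illustrative input: 3 training vectors of dimension 2.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
K = rbf_kernel_matrix(X, gamma=0.5)
print(K[0][0])  # diagonal entries are always 1.0
```

Because each K[i][j] entry is independent, the computation maps naturally onto thousands of GPU threads, which is why the speed-up grows with the number and dimensionality of the training vectors.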