Date of Graduation

Spring 2017

Document Type

Honors Research Project

Degree Name

Bachelor of Science

Major

Computer Science - Systems

Research Sponsor

Dr. Tim O'Neil

First Reader

Dr. Michael L. Collard

Second Reader

Dr. Zhong-Hui Duan

Abstract

Bayesian networks can be used to analyze and find relationships among genetic profiles. Unfortunately, Bayesian network structure learning is NP-hard, so generating a network from data takes a significant amount of time. Research has attempted to make this process faster, for example by using consensus networks. A consensus network aggregates many "cheaper" Bayesian networks to formulate a bigger picture. Each of these cheaper networks has a restricted search space, so more of them are required to extract the relationships among the data points.

To accomplish this, I implemented Bayesian network learning in C++, using reference libraries written in C and MATLAB. The learning code was structured so that CUDA could be used to accelerate its matrix operations, since the datasets are typically large enough to warrant GPGPU acceleration.

However, extensive testing showed that CUDA acceleration does not significantly improve the performance of Bayesian network learning; in some cases, using the CUDA card is actually detrimental. This is mostly because all of the matrix operations performed are linear, O(n), and no matrix multiplication, an O(n^3) operation, is performed. The cost of copying memory to and from the GPU simply outweighs the speed gained by computing on the GPU instead of the CPU.

It is unfortunate that introducing matrix acceleration could not speed up the learning process by an order of magnitude, but this implementation may still be reused for future applications that rely heavily on matrix multiplication. I learned a significant amount from this research experience and will be able to apply that knowledge to my future work.