Questions

  1. What is the difference between SIMD and SPMD?

    In an SPMD (single program, multiple data) system, the parallel processing units execute the same program on different parts of the data, but they need not be executing the same instruction at the same time. In a SIMD (single instruction, multiple data) system, all processing units execute the same instruction at any given instant.

  2. What is CUDA C, and how does it differ from traditional programming languages?

  3. Explain the key components of a CUDA C program.

  4. What is a CUDA kernel, and how is it executed on the GPU?

  5. Explain the terms "host" and "device" in the context of CUDA C.

  6. How does CUDA C handle memory management on the GPU?

  7. Explain the concept of threads, blocks, and grids in CUDA C.

  8. How does CUDA C handle data parallelism?

  9. What is a warp in the context of CUDA C, and why is it important for performance?

  10. Explain how CUDA C handles synchronization among threads within a block.

  11. What is the role of shared memory in CUDA C, and how does it differ from global memory?

  12. Explain the process of launching a CUDA kernel from the host code.

  13. How does CUDA C handle error checking, and what are common error-handling practices?

  14. What is the significance of the warp divergence problem in CUDA C, and how can it be mitigated?

  15. Explain how constant memory is used in CUDA C and its advantages.

  16. How does CUDA C support asynchronous execution, and why is it beneficial?

  17. Explain the concept of streaming multiprocessors (SMs) in CUDA architecture and their role in parallel computation.

  18. What is the purpose of warp-synchronous programming in CUDA C?

  19. How does CUDA C handle dynamic parallelism, and in what scenarios is it useful?