path: root/README.md
author	Nils Martel <[email protected]>	2020-11-27 14:26:27 +0100
committer	GitHub <[email protected]>	2020-11-27 14:26:27 +0100
commit	f452550c4f9232530b4ecf36278f8ef2e8fa3beb (patch)
tree	3717a8b5ff0f58c9273f972b4111b93ea66a0f58 /README.md
parent	103881f70a2d79a6381556af319be3f3f621fe9d (diff)
Fix small typo
Diffstat (limited to 'README.md')
-rw-r--r--README.md2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 83becbe..80fc847 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@ Overall in this suite of benchmarks faster by approximately 4% on ZLUDA.
### Explanation of the results
* Why is ZLUDA faster in Stereo Matching, Gaussian Blur and Depth of Field?\
This has not been precisely pinpointed to one thing or another but it's likely a combination of things:
- * ZLUDA uses Level 0, which in general is a more level, highr performance API
+ * ZLUDA uses Level 0, which in general is a more level, higher performance API
* Tying to the previous point, currently ZLUDA does not support asynchronous execution. This gives us an unfair advantage in a benchmark like GeekBench. GeekBench exclusively uses CUDA synchronous APIs
* There is a set of GPU instructions which are available on both NVIDIA hardware and Intel hardware, but are not exposed through OpenCL. We are comparing NVIDIA GPU optimized code with the more general OpenCL code. It's a lucky coincidence (and a credit to the underlying Intel Graphics Compiler) that this code also works well on an Intel GPU
* Why is OpenCL faster in Canny and Horizon Detection?\