path: root/README.md
author     vosen <[email protected]>      2020-11-29 00:36:05 +0100
committer  GitHub <[email protected]>  2020-11-29 00:36:05 +0100
commita6a9eb347b03b682414df5f3e97fc3021a14408e (patch)
tree2322bc1bfe2958856cf534ca1a1770145bceb3a7 /README.md
parent295a70e1cb209dcd0e3210138be7f88d8c37f466 (diff)
parentf452550c4f9232530b4ecf36278f8ef2e8fa3beb (diff)
Merge pull request #15 from nilsmartel/patch-2
Fix small typo
Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index b50e18d..773da99 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@ Overall in this suite of benchmarks faster by approximately 4% on ZLUDA.
### Explanation of the results
* Why is ZLUDA faster in Stereo Matching, Gaussian Blur and Depth of Field?\
This has not been precisely pinpointed to one thing or another but it's likely a combination of things:
- * ZLUDA uses Level 0, which in general is a more level, highr performance API
+ * ZLUDA uses Level 0, which in general is a more level, higher performance API
* Tying to the previous point, currently ZLUDA does not support asynchronous execution. This gives us an unfair advantage in a benchmark like GeekBench. GeekBench exclusively uses CUDA synchronous APIs
* There is a set of GPU instructions which are available on both NVIDIA hardware and Intel hardware, but are not exposed through OpenCL. We are comparing NVIDIA GPU optimized code with the more general OpenCL code. It's a lucky coincidence (and a credit to the underlying Intel Graphics Compiler) that this code also works well on an Intel GPU
* Why is OpenCL faster in Canny and Horizon Detection?\