How to find the L1 cache line size with IO timing measurements?
As a school assignment, I need to find a way to determine the size of the L1 data cache line, without reading config files or making API calls. The idea is to use memory read/write access timings to analyze and derive this information. How would I go about it?
In an (incomplete) attempt at another part of the assignment, to find the levels and sizes of the caches, I have:
```c
for (i = 0; i < steps; i++) {
    arr[(i * 4) & lengthMod]++;
}
```
I was thinking maybe I just need to vary line 2 (the stride).
UPDATE
Here is an attempt (on GitHub); the main part is below:
```c
// repeatedly access/modify data, varying the STRIDE
for (int s = 4; s <= MAX_STRIDE / sizeof(int); s *= 2) {
    start = wall_clock_time();
    for (unsigned int k = 0; k < REPS; k++) {
        data[(k * s) & lengthMod]++;
    }
    end = wall_clock_time();
    timeTaken = ((float)(end - start)) / 1000000000;
    printf("%d, %1.2f ", s * sizeof(int), timeTaken);
}
```
The problem is that the timings don't seem to differ much. FYI: this is for the L1 cache; my SIZE = 32 K (the array size).
Allocate a BIG array (too big to fit in any cache level) and fill it with data. Start walking over it at some step size, benchmark, and calculate how many bytes/second you can process at each step. When the step is smaller than the cache line size, several consecutive accesses hit the same already-loaded line; once the step reaches the line size, every access touches a new line. If the array is big enough, by the time you reach its end the data at its beginning will have dropped out of the cache again, which is what you want. So after increasing the step past the cache line size, the throughput stops dropping; the point where the curve flattens marks the line size.
Have a look at Calibrator. The work is copyrighted but the source code is freely available. The idea in its documentation for computing the cache line size sounds far better informed than what has already been said here:
> The idea underlying our calibrator tool is to have a micro benchmark whose performance only depends on the frequency of cache misses that occur. Our calibrator is a simple C program, mainly a small loop that executes a million memory reads. By changing the stride (i.e., the offset between two subsequent memory accesses) and the size of the memory area, we force varying cache miss rates.
>
> In principle, the occurrence of cache misses is determined by the array size. Array sizes that fit into the L1 cache do not generate any cache misses once the data is loaded into the cache. Analogously, arrays that exceed the L1 cache size but still fit into L2 will cause L1 misses but no L2 misses. Finally, arrays larger than L2 cause both L1 and L2 misses.
>
> The frequency of cache misses depends on the access stride and the cache line size. With strides equal to or larger than the cache line size, a cache miss occurs with every iteration. With strides smaller than the cache line size, a cache miss occurs only every n iterations (on average), where n is the ratio cache line size / stride.
>
> Thus, we can calculate the latency for a cache miss by comparing the execution time without misses to the execution time with exactly one miss per iteration. This approach only works if memory accesses are executed purely sequentially, i.e., we have to ensure that neither two or more load instructions nor memory access and pure CPU work can overlap. We use a simple pointer chasing mechanism to achieve this: the memory area we access is initialized such that each load returns the address for the subsequent load in the next iteration. Thus, super-scalar CPUs cannot benefit from their ability to hide memory access latency by speculative execution.
>
> To measure the cache characteristics, we run our experiment several times, varying the stride and the array size. We make sure that the stride varies at least between 4 bytes and twice the maximal expected cache line size, and that the array size varies from half the minimal expected cache size to at least ten times the maximal expected cache size.
I had to comment out `#include "math.h"` to get it to compile; after that it found my laptop's cache values correctly. I also wasn't able to view the generated PostScript files.

You can use the `CPUID` function in assembler. It isn't portable, but it gives you what you want:

For Intel microprocessors, the cache line size can be calculated by multiplying bh by 8 after calling cpuid function 0x1.

For AMD microprocessors, the data cache line size is in cl and the instruction cache line size is in dl after calling cpuid function 0x80000005.

I got this from this article.
I think you should write the program so that it traverses the array in random order rather than sequentially, because modern processors do hardware prefetching. For example, create an int array whose values are the indices of the next cells to visit. I wrote a similar program about a year ago: http://pastebin.com/9mfscs9z (sorry for my bad English).
If you are stuck in the mud and can't get out, look here.
There are manuals and code that explain how to do what you're asking, and the code is of quite high quality. Have a look at the "Subroutine library".
The code and manuals are based on x86 processors.
See how memtest86 is implemented. They measure and analyze the data transfer rate in some way; the points where the rate changes correspond to the sizes of the L1, L2, and possibly L3 caches.
Just a note: the cache line size is variable on a few ARM Cortex families and can change during execution, without any notification to the currently running program.
I think it should be enough to time an operation that uses a certain amount of memory, then keep increasing the memory the operation uses (e.g., the number of operands). When the operation's performance drops sharply, you have found the limit.
I would just read a lot of bytes without printing them (printing would hit performance so badly that it would become the bottleneck). While reading, the timing should be directly proportional to the number of bytes read, until the data no longer fits in L1; then you take the performance hit.
You should also allocate the memory once, at the start of the program and before you start timing.