Title of Invention: THREAD CACHE ALLOCATION
Abstract: Systems and techniques are described for thread cache allocation. A described technique includes monitoring input and output accesses for a plurality of threads executing on a computing device that includes a cache comprising a quantity of memory blocks, each of the threads being associated with a respective partition of the memory blocks, determining a respective reuse intensity for each of the threads, determining a respective read ratio for each of the threads, determining a respective quantity of memory blocks for each of the partitions by optimizing a combination of cache utilities, each cache utility being based on the respective reuse intensity, the respective read ratio, and a respective hit ratio for a particular partition, and resizing one or more of the partitions to be equal to the respective quantity of the memory blocks for the partition.
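As a rough illustration of the monitoring step described in the abstract (and detailed in claim 1 below), the following Python sketch tracks per-thread I/O accesses over a sampling window and derives a reuse intensity and a read ratio. The record does not give exact formulas beyond what the claim states, so the normalization by unique blocks and window length is a hypothetical choice, and the names ThreadStats and record_access are illustrative only.

```python
from collections import defaultdict

class ThreadStats:
    """Per-thread I/O counters gathered over one sampling window (hypothetical helper)."""
    def __init__(self):
        self.reads = 0
        self.writes = 0
        self.unique_blocks_read = set()   # unique memory blocks read in the window

    def record_access(self, block_id, is_read):
        """Record one input or output access for this thread."""
        if is_read:
            self.reads += 1
            self.unique_blocks_read.add(block_id)
        else:
            self.writes += 1

    def read_ratio(self):
        """Fraction of the thread's accesses that were reads (the claimed 'read ratio')."""
        total = self.reads + self.writes
        return self.reads / total if total else 0.0

    def reuse_intensity(self, window_seconds):
        """Reads per unique block per unit time: one plausible reading of the claimed
        'reuse intensity' based on unique blocks read during a period of time."""
        unique = len(self.unique_blocks_read)
        if unique == 0 or window_seconds <= 0:
            return 0.0
        return self.reads / (unique * window_seconds)

# Example: per-thread statistics keyed by thread id.
stats = defaultdict(ThreadStats)
stats[1].record_access(block_id=0x10, is_read=True)
stats[1].record_access(block_id=0x10, is_read=True)
stats[1].record_access(block_id=0x20, is_read=False)
print(stats[1].read_ratio(), stats[1].reuse_intensity(window_seconds=30.0))
```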
Publication Number: US2015067262(A1)   Publication Date: 2015.03.05
Application Number: US201314015784   Filing Date: 2013.08.30
Applicant: VMware, Inc.   Inventors: Uttamchandani Sandeep; Zhou Li; Meng Fei; Liu Deng
IPC Classification: G06F12/08   Main Classification: G06F12/08
Agency: (not listed)   Agent: (not listed)
Main Claim: 1. A computer-implemented method comprising:
monitoring input and output accesses for a plurality of threads executing on a computing device that includes a cache comprising a quantity of memory blocks, each of the threads being associated with a respective partition including a first respective quantity of the memory blocks, each memory block being included in only one of the respective partitions;
determining a respective reuse intensity for each of the threads, the reuse intensity being based on, at least, a quantity of unique memory blocks in the respective partition of the thread that have been read during a first period of time;
determining a respective read ratio for each of the threads, the read ratio being based on the input and output accesses for the thread;
determining a second respective quantity of memory blocks for each of the partitions by optimizing a combination of cache utilities, each cache utility being based on the respective reuse intensity, the respective read ratio, and a respective hit ratio for a particular partition, wherein the respective hit ratio is a ratio of a total number of cache hits given a variable quantity of memory blocks and a total number of input and output accesses during a second period of time for the thread associated with the particular partition; and
resizing one or more of the partitions to be equal to the second respective quantity of the memory blocks for the partition.
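Claim 1 leaves the optimization method unspecified; one way to read it is as allocating a fixed pool of memory blocks across the partitions so as to maximize the combined per-partition cache utilities, each utility a function of reuse intensity, read ratio, and the hit ratio the partition would achieve at a candidate size. The greedy marginal-gain allocator below is only a sketch under those assumptions: the product form of cache_utility, the placeholder hit-ratio curve, and the greedy strategy itself are hypothetical choices, not taken from the patent.

```python
def cache_utility(reuse_intensity, read_ratio, hit_ratio_value):
    """Hypothetical utility combining the three quantities named in claim 1."""
    return reuse_intensity * read_ratio * hit_ratio_value

def hit_ratio(thread, blocks):
    """Placeholder hit-ratio curve: estimated hit ratio at a given partition size.
    In practice this would be derived from the monitored accesses."""
    working_set = max(1, len(thread["unique_blocks"]))
    return min(1.0, blocks / working_set)

def allocate_blocks(threads, total_blocks):
    """Greedy allocation: repeatedly give one block to the partition whose
    cache utility increases the most, until the pool of blocks is exhausted."""
    alloc = {tid: 0 for tid in threads}
    for _ in range(total_blocks):
        best_tid, best_gain = None, 0.0
        for tid, t in threads.items():
            cur = cache_utility(t["reuse_intensity"], t["read_ratio"],
                                hit_ratio(t, alloc[tid]))
            nxt = cache_utility(t["reuse_intensity"], t["read_ratio"],
                                hit_ratio(t, alloc[tid] + 1))
            gain = nxt - cur
            if gain > best_gain:
                best_tid, best_gain = tid, gain
        if best_tid is None:          # no partition benefits from more blocks
            break
        alloc[best_tid] += 1
    return alloc

# Example with two threads whose statistics would come from the monitoring step.
threads = {
    "t1": {"reuse_intensity": 0.8, "read_ratio": 0.9, "unique_blocks": range(64)},
    "t2": {"reuse_intensity": 0.2, "read_ratio": 0.5, "unique_blocks": range(256)},
}
print(allocate_blocks(threads, total_blocks=128))
```

Under these assumptions the resulting allocation would then be applied by resizing each partition to its computed quantity of memory blocks, as in the final step of the claim.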
Address: Palo Alto, CA, US