Title: Transportation network micro-simulation with pre-emptive decomposition
Abstract: In a parallel computing method performed by a parallel computing system comprising a plurality of central processing units (CPUs), a main process executes. Tasks are executed in parallel with the main process on CPUs not used in executing the main process. Results of completed tasks are stored in a cache, from which the main process retrieves completed task results when needed. The initiation of task execution is controlled by a priority ranking of tasks based on at least probabilities that task results will be needed by the main process and time limits for executing the tasks. The priority ranking of tasks is from the vantage point of a current execution point in the main process and is updated as the main process executes. An executing task may be pre-empted by a task having higher priority if no idle CPU is available.
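The scheme in the abstract can be sketched in Python. This is a minimal, hypothetical illustration, not the patented implementation: the function names, the worker pool, and the scoring formula (probability divided by time limit) are assumptions; the patent only requires that priority be based on at least the need probability and the time limit.

```python
import concurrent.futures
import heapq

def score(probability, time_limit):
    # Higher probability of being needed and a nearer time limit => higher
    # priority. probability / time_limit is one plausible combination.
    return probability / time_limit

def speculative_run(tasks, workers=3):
    """Speculatively execute tasks on spare workers, caching their results.

    tasks: list of (name, probability, time_limit, callable).
    Returns the completed task results cache as a dict name -> result.
    """
    # heapq is a min-heap, so negate the score to pop highest-priority first.
    heap = [(-score(p, t), name, fn) for name, p, t, fn in tasks]
    heapq.heapify(heap)
    cache = {}  # completed task results cache
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {}
        while heap:
            # Submission order follows the priority ranking.
            _, name, fn = heapq.heappop(heap)
            futures[pool.submit(fn)] = name
        for fut in concurrent.futures.as_completed(futures):
            cache[futures[fut]] = fut.result()
    return cache

results = speculative_run([
    ("route_a", 0.9, 2.0, lambda: 1 + 1),
    ("route_b", 0.4, 5.0, lambda: 2 * 3),
])
print(results)  # {'route_a': 2, 'route_b': 6} (key order may vary)
```

A real main process would consult the cache when it reaches the point where a result is needed, blocking only if the corresponding task has not yet completed; pre-emption of lower-priority running tasks is omitted here for brevity.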
Publication Number: US9400680(B2)  Publication Date: 2016.07.26
Application Number: US201414532127  Filing Date: 2014.11.04
Applicant: XEROX Corporation  Inventors: Bouchard Guillaume; Ulloa Paredes Luis Rafael
Classification: G06F9/46; G06F9/48; G06N7/00  Main Classification: G06F9/46
Agency: Fay Sharpe LLP  Agent: Fay Sharpe LLP
Main Claim: 1. A parallel computing method performed by a parallel computing system comprising a plurality of central processing units (CPUs), the parallel computing method comprising:
executing a main process;
executing a task priority queue update process to maintain a task priority queue that ranks tasks, for a current execution point, whose results will be needed with non-zero probability by the main process, wherein the task priority queue update process comprises:
identifying the tasks whose results will be needed with non-zero probability by the main process based on the current execution point in the main process,
for each identified task, assigning a probability that the task result will be needed by the main process based on the current execution point in the main process, a time limit for the task relative to the current execution point in the main process, and a score for the task that is computed based on the probability and the time limit, and
ranking the identified tasks in the task priority queue in accordance with the assigned scores;
executing tasks in parallel with the executing of the main process on CPUs not used in executing the main process and with the execution order of the tasks being in accordance with the task priority queue; and
storing, in a completed task results cache, results of tasks whose execution is completed on CPUs not used in executing the main process;
wherein the main process is configured to retrieve completed task results from the completed task results cache when needed by the main process.
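The task priority queue update process recited in the claim can be illustrated as follows. This is a hypothetical sketch: the task names, the probability and time-limit functions, and the score formula are all assumptions; the claim only requires that each task's score be computed from its need probability and time limit, and that zero-probability tasks be excluded.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class RankedTask:
    sort_key: float                     # negated score, so sorting ascends by priority
    name: str = field(compare=False)

def update_priority_queue(execution_point, candidate_tasks):
    """Rebuild the task priority queue for the current execution point.

    candidate_tasks: dict name -> (probability_fn, time_limit_fn), where each
    function maps the current execution point to a value.
    Returns task names ordered from highest to lowest priority.
    """
    queue = []
    for name, (prob_fn, limit_fn) in candidate_tasks.items():
        p = prob_fn(execution_point)    # probability the result will be needed
        if p <= 0:
            continue                    # keep only non-zero-probability tasks
        t = limit_fn(execution_point)   # time limit relative to execution point
        s = p / t                       # score combining probability and time limit
        queue.append(RankedTask(-s, name))
    queue.sort()
    return [task.name for task in queue]

order = update_priority_queue(
    execution_point=10,
    candidate_tasks={
        "link_update": (lambda x: 0.8, lambda x: 4.0),   # score 0.2
        "signal_phase": (lambda x: 0.5, lambda x: 1.0),  # score 0.5
        "rare_event": (lambda x: 0.0, lambda x: 2.0),    # dropped: zero probability
    },
)
print(order)  # ['signal_phase', 'link_update']
```

As the claim states, this update runs repeatedly as the main process advances, so the ranking always reflects the current execution point.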
Address: Norwalk, CT, US