Title: Hybrid dataflow processor
Abstract: A network device that processes a stream of packets has an ingress front end. The ingress front end determines whether the packets are handled in a bounded latency path or in a best-effort path. The bounded latency path packets are granted a resource with a higher priority than the best-effort path packets. As the packets are processed through a number of processing stages by processing engines, the bounded latency packets are processed within a period of time corresponding to a guaranteed rate. Resources are granted to the best-effort path packets only when the processing engines determine that the resource grant will not impact the latency bounds of the bounded latency packets.
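The abstract describes a priority scheme in which the bounded latency path always wins resource grants and the best-effort path receives only what is left over. Below is a minimal C++ sketch of that arbitration idea, not the patented implementation; all names (ResourceArbiter, PathClass, grantsPerCycle) are illustrative assumptions.

```cpp
// Minimal sketch (illustrative only): an arbiter that grants a shared
// resource to bounded-latency requests first, and to best-effort requests
// only out of the grant slots that remain in the same cycle.
#include <cstdint>
#include <deque>

enum class PathClass { BoundedLatency, BestEffort };

struct Request {
    uint64_t packetId;
    PathClass cls;
};

class ResourceArbiter {
public:
    explicit ResourceArbiter(unsigned grantsPerCycle)
        : grantsPerCycle_(grantsPerCycle) {}

    void submit(const Request& r) {
        (r.cls == PathClass::BoundedLatency ? bounded_ : bestEffort_).push_back(r);
    }

    // Issue up to grantsPerCycle_ grants; bounded-latency requests always win,
    // best-effort requests consume only the leftover slots.
    std::deque<Request> arbitrate() {
        std::deque<Request> granted;
        unsigned slots = grantsPerCycle_;
        while (slots > 0 && !bounded_.empty()) {
            granted.push_back(bounded_.front());
            bounded_.pop_front();
            --slots;
        }
        // A best-effort grant is made only when no pending bounded-latency
        // request wants the slot, so it cannot delay the bounded path.
        while (slots > 0 && !bestEffort_.empty()) {
            granted.push_back(bestEffort_.front());
            bestEffort_.pop_front();
            --slots;
        }
        return granted;
    }

private:
    unsigned grantsPerCycle_;
    std::deque<Request> bounded_;
    std::deque<Request> bestEffort_;
};
```

Because best-effort requests can only consume slots that no bounded-latency request claims in the same cycle, a best-effort grant never displaces the guaranteed-rate traffic.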
Publication No.: US9294410(B2)    Publication Date: 2016.03.22
Application No.: US201313891707    Filing Date: 2013.05.10
Applicant: MARVELL WORLD TRADE LTD.    Inventor: Boden Kurt Thomas
Classification: H04L12/28; H04L12/851; H04L12/801    Main Classification: H04L12/28
Principal Claim: 1. A network device for processing packets, the network device comprising:
an ingress front end configured to identify (i) a first packet in a stream of packets as a packet to be processed with a guaranteed rate at each processing stage of a plurality of processing stages that are programmable, and (ii) a second packet in the stream of packets as a packet to be processed without a guaranteed rate at each processing stage of the plurality of processing stages;
a plurality of engines configured to provide a respective resource for use in processing the stream of packets at a processing stage, the resource being external to the plurality of processing stages;
the plurality of processing stages arranged in a pipeline and configured to: (i) for the first packet, selectively cause a first packet processing operation to be performed on the first packet using the resource obtained from one of the engines and to pass the first packet to the next processing stage within a period of time corresponding to the guaranteed rate; and (ii) for the second packet, selectively request the resource from one of the engines for processing the second packet, buffer the second packet in a buffer until the resource is available, cause a second packet processing operation to be performed on the second packet using the available resource, and, in response to receiving a backpressure signal from downstream in the pipeline indicating an ability to forward the second packet to the next processing stage in the pipeline, pass the second packet to the next processing stage at a rate that is not guaranteed.
Address: St. Michael BB
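Claim 1 above recites per-stage behavior: a guaranteed-rate packet is processed with its granted resource and passed to the next stage within a fixed period, while a non-guaranteed packet is buffered until a resource is available and is forwarded only when the downstream stage signals that it can be accepted. The sketch below models one such stage in C++ under assumed names (Stage, Engine, requestResource, downstreamReady, forward); it is an illustration of the recited behavior, not the patented implementation.

```cpp
// Minimal per-stage sketch (illustrative only) of the behavior recited in
// claim 1: the guaranteed-rate path is processed and forwarded immediately,
// the best-effort path waits in a buffer for a resource grant and a
// downstream-ready indication.
#include <cstdint>
#include <deque>
#include <functional>
#include <optional>
#include <utility>

struct Packet {
    uint64_t id;
    bool guaranteedRate;  // true: bounded-latency path, false: best-effort path
};

// Hypothetical engine interface: a stage asks an external engine for the
// resource (e.g., a lookup result) needed by its processing operation.
struct Engine {
    // Returns the resource immediately for guaranteed-rate requests; may
    // return nothing for best-effort requests while the engine is busy.
    std::function<std::optional<uint32_t>(const Packet&)> requestResource;
};

class Stage {
public:
    explicit Stage(Engine engine) : engine_(std::move(engine)) {}

    // Called once per cycle. downstreamReady models the signal from the next
    // stage saying it can accept a best-effort packet; forward passes a
    // packet down the pipeline.
    void step(std::optional<Packet> in, bool downstreamReady,
              const std::function<void(const Packet&)>& forward) {
        if (in && in->guaranteedRate) {
            // Bounded-latency path: the resource grant has priority, so the
            // packet is processed and forwarded within the stage's budget.
            auto res = engine_.requestResource(*in);
            process(*in, res.value_or(0));
            forward(*in);
        } else if (in) {
            buffer_.push_back(*in);  // best-effort packets wait their turn
        }

        if (!buffer_.empty() && downstreamReady) {
            auto res = engine_.requestResource(buffer_.front());
            if (res) {  // grant arrives only when the bounded path is unaffected
                process(buffer_.front(), *res);
                forward(buffer_.front());
                buffer_.pop_front();
            }
        }
    }

private:
    void process(Packet&, uint32_t /*resource*/) { /* packet processing op */ }

    Engine engine_;
    std::deque<Packet> buffer_;
};
```

Buffering the best-effort packet inside the stage is what lets the bounded-latency packet keep moving at the guaranteed rate even when the shared engine or the downstream stage is momentarily unavailable.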