DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-10 and 13-22 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Crupnicoff et al. (US 2021/0263744 A1), hereinafter Crupnicoff.
Regarding claim 1, Crupnicoff teaches an apparatus comprising:
(Crupnicoff: Summary, para. [0090] operations for the methods described herein may be implemented using software instructions stored on a computer useable storage medium for execution by a computer. Fig. 10)
a network interface device comprising: (Crupnicoff: Fig. 1 and Fig. 2 and para. [0047] input interfaces and output interfaces)
a programmable packet processing pipeline (Crupnicoff: para. [0054] and FIG. 4A a programmable packet processing pipeline 420)
and one or more offload circuitries, (Crupnicoff: para. [0062 & 0078-0084]; FIGS. 4A and 4B (e.g., a P4 programmable packet processing pipeline); and FIGS. 5A, 5B, 8B-8D, and 9, illustrating the processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing. For example, data corresponding to the packet is diverted from the match-action pipeline by diversion logic 580 (diversion logic 980 in Fig. 9) to implement packet processing operations)
wherein configuration of operation of the programmable packet processing pipeline and the one or more offload circuitries (Crupnicoff: para. [0054] and FIG. 4A a programmable packet processing pipeline 420 that is programmable using a domain-specific language such as P4 and that can be used to implement the ingress and egress programmable packet processing pipelines 312 and 316 shown in FIG. 3 to process packet data. the P4 specification, a programmable packet processing pipeline includes a parser 422, a match-action pipeline 424 having a series of match-action units 426, and a deparser 428. The parser is a programmable element that is configured through the domain-specific language (e.g., P4) to extract information from a packet (e.g., information from the header of the packet). As described in the P4 specification, parsers describe the permitted sequences of headers within received packets, how to identify those header sequences, and the headers and fields to extract from packets. the information extracted from a packet by the parser is referred to as a packet header vector or “PHV.” the parser identifies certain fields of the header and extracts the data corresponding to the identified fields to generate the PHV. the PHV may include other data (often referred to as “metadata”)) is based on a program consistent with a programmable pipeline language (Crupnicoff: Para. [0062 & 0050] and FIG. 5A depicts a programmable packet processing pipeline 520 similar to the programmable packet processing pipeline 420 described with reference to FIGS. 4A and 4B (e.g., a P4 programmable packet processing pipeline) that illustrates the processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing.
For example, data corresponding to the packet is diverted from the match-action pipeline by diversion logic 580 to implement packet processing operations such as L7 applications (e.g., HTTP load balancing, L7 firewalling, and/or L7 telemetry), flow table insertion or table management events, connection setup/management, multicast group join, URL inspection, and storage volume management (e.g., NVMe volume setup and/or management), encryption, decryption, compression, decompression, which may not be readily implementable in the match-action pipeline but can be integrated into the process flow of the match-action pipeline in a manner that enables such packet processing to be implemented using a general purpose processor core to provide fast path performance as is expected of data plane processing and that does not involve sending the packet to the control plane for control plane processing. Once the desired out-of-pipeline processing is completed, data corresponding to the packet (e.g., an updated PHV) is returned to the match-action pipeline for further processing. For example, data corresponding to the packet (e.g., an updated PHV) is returned to a queue that feeds the next match-action unit in the match-action pipeline. As used herein, “out-of-pipeline processing” may refer to processing of data corresponding to a packet (e.g., including a PHV, header data, metadata, and/or payload data corresponding to the packet) that is not implemented by the parser, the deparser, or a match-action unit of a programmable packet processing pipeline, e.g., a programmable packet processing pipeline that was programmed using P4. Para. [0070] diversion logic 680 is described with reference to FIGS. 6A-6D, the diversion logic is programmed into a programmable packet processing pipeline in conjunction with the P4 programming)
wherein: the apparatus comprises multiple offload circuitry instances that comprise the one or more offload circuitries; and
(Crupnicoff: Para. [0076]; processor core 570 of FIG. 5A; and in particular FIGS. 8A-8D. In FIG. 8A, there are four processor cores 870 (identified as processor cores 1-4) available for out-of-pipeline processing, although the number of processor cores available for out-of-pipeline processing is implementation specific; there may be, for example, 2, 4, 8, 16, or 32 processor cores available for out-of-pipeline processing)
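For illustration only (no code appears in Crupnicoff or the claims), the cited mapping of "multiple offload circuitry instances" to a pool of processor cores can be sketched as a dispatcher over N workers. The round-robin policy, `dispatch_to_core`, and `NUM_CORES` are hypothetical; Crupnicoff specifies only that the number of cores (e.g., 2, 4, 8, 16, or 32) is implementation specific:

```python
from itertools import cycle

# Toy sketch: diverted packets are distributed across a pool of processor
# cores available for out-of-pipeline processing; four cores here, matching
# FIG. 8A's processor cores 1-4. Round-robin is an assumed policy.
NUM_CORES = 4
core_ids = cycle(range(1, NUM_CORES + 1))  # cores identified as 1-4

def dispatch_to_core(diverted_packets):
    """Assign each diverted packet to the next core in round-robin order."""
    return [(next(core_ids), pkt) for pkt in diverted_packets]

# Five packets of flow 1 (F1-1..F1-5) wrap around the four-core pool.
assignments = dispatch_to_core(["F1-1", "F1-2", "F1-3", "F1-4", "F1-5"])
```

Any selection policy would satisfy the quoted disclosure; the point is only that several offload instances exist and diverted traffic is spread across them.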
the one or more offload circuitries are (1) to be represented by at least one extern block
(Crupnicoff: Fig. 9 processor core 970 (cf. processor core 570 of FIG. 5A) and Para. [0084]: packets from one flow (e.g., flow 1 (F1) that includes packets F1-1-F1-5) that is being processed through match-action units 926 of a match-action pipeline of a programmable packet processing pipeline are diverted to a processor core 970 (corresponding to the claim limitation “extern block”) and packets from another flow (e.g., flow 2 (F2) that includes packets F2-1-F2-5) that is being processed in the same match-action pipeline of the programmable packet processing pipeline are processed in the match-action pipeline without being diverted to the processor core for out-of-pipeline processing. In FIG. 9, packets F1-3 and F1-4 from flow 1 have been diverted to the processor core for out-of-pipeline processing while packets F2-3 and F2-4 from flow 2 are not diverted to the processor core but continue to be processed in the match-action pipeline without being diverted to the processor core for out-of-pipeline processing)
and (2) configurable to perform packet processing inline with programmable packet processing of the programmable packet processing pipeline.
(Crupnicoff: Para. [0062 & 0050 & 0059] and FIG. 5A depicts a programmable packet processing pipeline 520 similar to the programmable packet processing pipeline 420 described with reference to FIGS. 4A and 4B (e.g., a P4 programmable packet processing pipeline) that illustrates the processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing. ... Once the desired out-of-pipeline processing is completed, data corresponding to the packet (e.g., an updated PHV) is returned to the match-action pipeline for further processing. For example, data corresponding to the packet (e.g., an updated PHV) is returned to a queue that feeds the next match-action unit in the match-action pipeline. Para. [0059] the result of the out-of-pipeline processing is returned back to the match-action pipeline for further processing such that the out-of-pipeline processing is seamlessly integrated into the process flow of the match-action pipeline)
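The diversion-and-return mechanism relied on for the "inline" limitation can be sketched, for illustration only, as a toy two-stage pipeline in which a flagged PHV is sent out of the pipeline and its updated form is placed back on the queue feeding the next stage. All names (`out_of_pipeline`, `divert`, `offload_done`) are hypothetical stand-ins for Crupnicoff's diversion logic 580 and processor core 570:

```python
from collections import deque

def out_of_pipeline(phv):
    """Stand-in for processing on a processor core (e.g., encryption setup)."""
    updated = dict(phv)
    updated["offload_done"] = True  # models the "updated PHV"
    return updated

def run_pipeline(phvs):
    """Match-action stage -> optional diversion -> next match-action stage."""
    next_stage_queue = deque()
    for phv in phvs:
        phv = dict(phv)
        if phv.get("divert"):
            # Diversion logic sends the PHV out of the pipeline...
            phv = out_of_pipeline(phv)
        # ...and the updated PHV is returned to the queue that feeds the
        # next match-action unit, i.e., it is processed inline.
        next_stage_queue.append(phv)
    # The next match-action unit operates on diverted and non-diverted
    # PHVs alike, in pipeline order.
    return [dict(p, stage2="done") for p in next_stage_queue]

res = run_pipeline([{"pkt": "F1-3", "divert": True},
                    {"pkt": "F2-3", "divert": False}])
```

The diverted packet re-enters the same process flow as the undiverted one, which is the sense in which the out-of-pipeline processing is "seamlessly integrated" per para. [0059].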
Regarding claim 2, Crupnicoff teaches the apparatus of claim 1, wherein the programmable packet processing pipeline comprises one or more of: a central processing unit (CPU), application specific integrated circuit (ASIC), field programmable gate array (FPGA), or graphics processing unit (GPU).
(Crupnicoff: para. [0073] Elements of the programmable packet processing pipeline may be programmed into physical circuits of the I/O system using P4. the lookup table of the match unit of each match-action unit may be implemented in memory such as content addressable memory (CAM), including ternary CAM (TCAM), and the action unit of each match-action unit may be implemented with an instruction fetch circuit, register file circuits, and arithmetic logic unit (ALU) circuits of, for example, an ASIC. para. [0049-0054] P4 (also referred to herein as the “P4 specification,” the “P4 language,” and the “P4 program”) is designed to be implementable on a large variety of targets including programmable NICs, software switches, FPGAs, and ASICs)
Regarding claim 3, Crupnicoff teaches the apparatus of claim 1, wherein the programmable packet processing pipeline comprises one or more of: a parser, at least one ingress packet processing pipeline to perform operations based on match-actions, traffic manager, at least one egress packet processing pipeline to perform operations based on match-actions, or de-parser. (Crupnicoff: para. [0054] and FIG. 4A a programmable packet processing pipeline 420 that is programmable using a domain-specific language such as P4 and that can be used to implement the ingress and egress programmable packet processing pipelines 312 and 316 shown in FIG. 3 to process packet data. the P4 specification, a programmable packet processing pipeline includes a parser 422, a match-action pipeline 424 having a series of match-action units 426, and a deparser 428. The parser is a programmable element that is configured through the domain-specific language (e.g., P4) to extract information from a packet (e.g., information from the header of the packet). As described in the P4 specification, parsers describe the permitted sequences of headers within received packets, how to identify those header sequences, and the headers and fields to extract from packets. the information extracted from a packet by the parser is referred to as a packet header vector or “PHV.” the parser identifies certain fields of the header and extracts the data corresponding to the identified fields to generate the PHV. the PHV may include other data (often referred to as “metadata”). Para. [0051] packet buffer/traffic manager 314) is based on a program consistent with a programmable pipeline language (Crupnicoff: Para. [0062 & 0050] and FIG. 5A depicts a programmable packet processing pipeline 520 similar to the programmable packet processing pipeline 420 described with reference to FIGS.
4A and 4B (e.g., a P4 programmable packet processing pipeline) that illustrates the processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing. For example, data corresponding to the packet is diverted from the match-action pipeline by diversion logic 580 to implement packet processing operations such as L7 applications (e.g., HTTP load balancing, L7 firewalling, and/or L7 telemetry), flow table insertion or table management events, connection setup/management, multicast group join, URL inspection, and storage volume management (e.g., NVMe volume setup and/or management), encryption, decryption, compression, decompression, which may not be readily implementable in the match-action pipeline but can be integrated into the process flow of the match-action pipeline in a manner that enables such packet processing to be implemented using a general purpose processor core to provide fast path performance as is expected of data plane processing and that does not involve sending the packet to the control plane for control plane processing. Once the desired out-of-pipeline processing is completed, data corresponding to the packet (e.g., an updated PHV) is returned to the match-action pipeline for further processing. For example, data corresponding to the packet (e.g., an updated PHV) is returned to a queue that feeds the next match-action unit in the match-action pipeline. As used herein, “out-of-pipeline processing” may refer to processing of data corresponding to a packet (e.g., including a PHV, header data, metadata, and/or payload data corresponding to the packet) that is not implemented by the parser, the deparser, or a match-action unit of a programmable packet processing pipeline, e.g., a programmable packet processing pipeline that was programmed using P4. Para. [0070] diversion logic 680 is described with reference to FIGS. 
6A-6D, the diversion logic is programmed into a programmable packet processing pipeline in conjunction with the P4 programming).
Regarding claim 4, Crupnicoff teaches the apparatus of claim 1, wherein the offload circuitry comprises one or more of: a central processing unit (CPU), application specific integrated circuit (ASIC), field programmable gate array (FPGA), or graphics processing unit (GPU). (Crupnicoff: para. [0065] diversion logic is programmed into hardware components of an I/O system such as into circuits of an ASIC. para. [0049-0054] P4 (also referred to herein as the “P4 specification,” the “P4 language,” and the “P4 program”) is designed to be implementable on a large variety of targets including programmable NICs, software switches, FPGAs, and ASICs)
Regarding claim 5, Crupnicoff teaches the apparatus of claim 1, wherein the programmable pipeline language comprises one or more of: Programming Protocol-independent Packet Processors (P4), (Crupnicoff: para. [0049-0054] P4 (also referred to herein as the “P4 specification,” the “P4 language,” and the “P4 program”) is designed to be implementable on a large variety of targets including programmable NICs, software switches, FPGAs, and ASICs) Software for Open Networking in the Cloud (SONiC), C, Python, Broadcom Network Programming Language (NPL), NVIDIA® CUDA®, NVIDIA® DOCA™, Infrastructure Programmer Development Kit (IPDK), or x86. (Crupnicoff: para. [0074] x86 processor cores)
Regarding claim 6, Crupnicoff teaches the apparatus of claim 1, wherein the programmable packet processing pipeline is to generate metadata associated with at least one packet and the metadata is to specify operation of the one or more offload circuitries to process the at least one packet. (Crupnicoff: para. [0054] and FIG. 4A a programmable packet processing pipeline 420 that is programmable using a domain-specific language such as P4 and that can be used to implement the ingress and egress programmable packet processing pipelines 312 and 316 shown in FIG. 3 to process packet data. the P4 specification, a programmable packet processing pipeline includes a parser 422, a match-action pipeline 424 having a series of match-action units 426, and a deparser 428. The parser is a programmable element that is configured through the domain-specific language (e.g., P4) to extract information from a packet (e.g., information from the header of the packet). As described in the P4 specification, parsers describe the permitted sequences of headers within received packets, how to identify those header sequences, and the headers and fields to extract from packets. the information extracted from a packet by the parser is referred to as a packet header vector or “PHV.” the parser identifies certain fields of the header and extracts the data corresponding to the identified fields to generate the PHV. the PHV may include other data (often referred to as “metadata”). Para. [0062] “out-of-pipeline processing” may refer to processing of data corresponding to a packet (e.g., including a PHV, header data, metadata, and/or payload data corresponding to the packet) that is not implemented by the parser, the deparser, or a match-action unit of a programmable packet processing pipeline)
Regarding claim 7, Crupnicoff teaches the apparatus of claim 6, wherein the metadata comprise one or more of: an identifier of an offload circuitry of the one or more offload circuitries, command to perform, or response to performance of the command.
(Crupnicoff: para. [0054] and FIG. 4A a programmable packet processing pipeline 420 that is programmable using a domain-specific language such as P4 and that can be used to implement the ingress and egress programmable packet processing pipelines 312 and 316 shown in FIG. 3 to process packet data. the P4 specification, a programmable packet processing pipeline includes a parser 422, a match-action pipeline 424 having a series of match-action units 426, and a deparser 428. The parser is a programmable element that is configured through the domain-specific language (e.g., P4) to extract information from a packet (e.g., information from the header of the packet). As described in the P4 specification, parsers describe the permitted sequences of headers within received packets, how to identify those header sequences, and the headers and fields to extract from packets. the information extracted from a packet by the parser is referred to as a packet header vector or “PHV.” the parser identifies certain fields of the header and extracts the data corresponding to the identified fields to generate the PHV. the PHV may include other data (often referred to as “metadata”). para. [0062] processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing.
For example, data corresponding to the packet is diverted from the match-action pipeline by diversion logic 580 to implement packet processing operations such as L7 applications (e.g., HTTP load balancing, L7 firewalling, and/or L7 telemetry), flow table insertion or table management events, connection setup/management, multicast group join, URL inspection, and storage volume management (e.g., NVMe volume setup and/or management), encryption, decryption, compression, decompression, which may not be readily implementable in the match-action pipeline but can be integrated into the process flow of the match-action pipeline in a manner that enables such packet processing to be implemented using a general purpose processor core to provide fast path performance as is expected of data plane processing and that does not involve sending the packet to the control plane for control plane processing. Once the desired out-of-pipeline processing is completed, data corresponding to the packet (e.g., an updated PHV) is returned to the match-action pipeline for further processing. For example, data corresponding to the packet (e.g., an updated PHV) is returned to a queue that feeds the next match-action unit in the match-action pipeline. As used herein, “out-of-pipeline processing” may refer to processing of data corresponding to a packet (e.g., including a PHV, header data, metadata, and/or payload data corresponding to the packet))
Regarding claim 8, Crupnicoff teaches the apparatus of claim 1, wherein the programmable packet processing pipeline is to prepend the metadata to at least one packet. (Crupnicoff: para. [0056] match-action unit 426 from the programmable packet processing pipeline 420 shown in FIG. 4A. As shown in FIG. 4B, the match-action unit includes a match unit 430 (also referred to as a “table engine”) that operates on an input PHV 432 and an action unit 434 that produces an output PHV 436, which may be a modified version of the input PHV. The match unit includes key construction logic 440 that is configured to generate a key from at least one field in the PHV, a lookup table 442 that is populated with key-action pairs, where a key-action pair includes a key (e.g., a lookup key) and corresponding action code 450 and/or action data 452, and selector logic 444. a P4 lookup table generalizes traditional switch tables, and can be programmed to implement, for example, routing tables, flow lookup tables, ACLs, and other user-defined table types, including complex multi-variable tables. The key generation and lookup function constitutes the “match” portion of the operation and produces an action that is provided to the action unit via the selector logic. The action unit executes an action over the input data (which may include data 454 from the PHV) and provides an output that forms at least a portion of the output PHV. For example, the action unit executes action code 450 on action data 452 and data 454 to produce an output that is included in the output PHV. Para. [0066-0068 & 0060 & 0065] diversion flag field (DFF) 690 in the PHV that is used by the diversion logic to determine whether the processing of data corresponding to a packet (e.g., the PHV) continues on in the match-action pipeline or is diverted to a processor core for out-of-pipeline processing. 
In an embodiment, the value of the diversion flag field is determined by a previous match-action unit in the match-action pipeline. Thus, whether or not the processing of data corresponding to a packet should be diverted for out-of-pipeline processing may be determined by a value that is generated by a previous match-action unit in the match-action pipeline)
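The prepended-metadata limitation of claim 8 can be illustrated with a toy encoding in which the pipeline prepends a small metadata header (cf. claims 6-7: an offload identifier, a command, and a flag analogous to the diversion flag field) that the offload circuitry later parses back off. The 3-byte format, field choices, and function names are hypothetical, not taken from Crupnicoff or the claims:

```python
import struct

# Hypothetical 3-byte metadata header: offload id, command, diversion flag.
META_FMT = "!BBB"

def prepend_metadata(payload, offload_id, command, diversion_flag):
    """Pipeline side: prepend pipeline-generated metadata to a packet."""
    return struct.pack(META_FMT, offload_id, command, diversion_flag) + payload

def read_metadata(packet):
    """Offload side: strip the prepended metadata back off the packet."""
    offload_id, command, flag = struct.unpack_from(META_FMT, packet)
    return (offload_id, command, flag), packet[struct.calcsize(META_FMT):]

pkt = prepend_metadata(b"\xde\xad\xbe\xef",
                       offload_id=2, command=7, diversion_flag=1)
meta, payload = read_metadata(pkt)
```

This mirrors the cited mechanism only at the level of principle: a value produced by an earlier pipeline stage travels with the packet data and steers later processing.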
Regarding claim 9, Crupnicoff teaches the apparatus of claim 1, wherein the programmable pipeline language is to specify a routing of at least one packet from the programmable packet processing pipeline to an offload circuitry of the one or more offload circuitries or from a first offload circuitry of the one or more offload circuitries to a second offload circuitry of the one or more offload circuitries. (Crupnicoff: para. [0062] processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing. For example, data corresponding to the packet is diverted from the match-action pipeline by diversion logic 580 to implement packet processing operations such as L7 applications (e.g., HTTP load balancing, L7 firewalling, and/or L7 telemetry), flow table insertion or table management events, connection setup/management, multicast group join, URL inspection, and storage volume management (e.g., NVMe volume setup and/or management), encryption, decryption, compression, decompression, which may not be readily implementable in the match-action pipeline but can be integrated into the process flow of the match-action pipeline in a manner that enables such packet processing to be implemented using a general purpose processor core to provide fast path performance as is expected of data plane processing and that does not involve sending the packet to the control plane for control plane processing. Once the desired out-of-pipeline processing is completed, data corresponding to the packet (e.g., an updated PHV) is returned to the match-action pipeline for further processing. For example, data corresponding to the packet (e.g., an updated PHV) is returned to a queue that feeds the next match-action unit in the match-action pipeline. 
As used herein, “out-of-pipeline processing” may refer to processing of data corresponding to a packet (e.g., including a PHV, header data, metadata, and/or payload data corresponding to the packet))
Regarding claim 10, Crupnicoff teaches the apparatus of claim 1, wherein the one or more offload circuitries perform one or more of: packet buffering, cryptographic operations (Crupnicoff: para. [0062] processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing. For example, data corresponding to the packet is diverted from the match-action pipeline by diversion logic 580 to implement packet processing operations such as L7 applications (e.g., HTTP load balancing, L7 firewalling, and/or L7 telemetry), flow table insertion or table management events, connection setup/management, multicast group join, URL inspection, and storage volume management (e.g., NVMe volume setup and/or management), encryption, decryption, etc.), timer (Crupnicoff: para. [0059] out-of-pipeline processing may implement packet processing operations on high volume and/or time-sensitive packets), packet segmentation (Crupnicoff: para. [0088] encryption of data, redundant array of independent disks (RAID) processing, offload services, local storage operations, and/or segmentation operations), packet reassembly, or key-value store.
Regarding claims 13-19, Crupnicoff teaches at least one non-transitory computer-readable medium comprising instructions stored thereon, that if executed by one or more processors, cause the one or more processors to: (Crupnicoff: Summary, para. [0090] operations for the methods described herein may be implemented using software instructions stored on a computer useable storage medium for execution by a computer. Fig. 10) and teaches all the limitations as discussed in the rejection of claims 1, 3, 5-7 and 9-10, and therefore non-transitory CRM claims 13-19 are rejected using the same rationales.
Regarding claims 20-22, Crupnicoff teaches a method comprising: (Crupnicoff: Summary, para. [0090] operations for the methods described herein may be implemented using software instructions stored on a computer useable storage medium for execution by a computer. Fig. 10) and teaches all the limitations as discussed in the rejection of claims 1 and 5-6, and therefore method claims 20-22 are rejected using the same rationales.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Crupnicoff in view of Sood et al. (US 2021/0157935 A1), hereinafter Sood.
Regarding claim 11, Crupnicoff teaches the apparatus of claim 1, wherein the network interface device comprises a system on chip (SoC). (Crupnicoff: para. [0086] FIG. 10, the I/O system includes processing circuits 1002, ROM 1004, RAM 1006, CAM 1008, and at least one interface 1010 (interface(s)). the processor cores described above are implemented in processing circuits and memory that is integrated into the same integrated circuit (IC) device as ASIC circuits and memory that are used to implement the programmable packet processing pipeline. For example, the processor cores and ASIC circuits are fabricated on the same semiconductor substrate to form a System-on-Chip (SoC))
It is noted that Crupnicoff does not explicitly disclose: switch system on chip (SoC).
However, Sood, from the same or a similar field of endeavor, teaches the use of a switch system on chip (SoC) (Sood: para. [0011] NIC 100 can comprise a discrete chip, a local area network (LAN) on motherboard (LOM) design, a chipset, an SoC design, as part of a network switch (e.g., top of rack (TOR), leaf switch), on a peripheral add-in board, etc.). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the teaching of Sood in the apparatus of Crupnicoff. One of ordinary skill in the art would have been motivated to do so because, as Sood teaches, for SoC designs the peripheral bus 270 may be Intel On-Chip System Fabric (IOSF), Advanced Microcontroller Bus Architecture (AMBA), or similar; in chiplet designs, peripheral bus 270 may be a chip-to-chip interconnect such as Advanced Interface Bus (AIB), Kandou Bus interface (KBI), or similar; processor 210 may be a central processing unit (CPU), a microengine, a microcontroller, a GPU, a DPU, or an XPU; there may be one or more processor 210 units in server 200; and processor 210 comprises processing core(s) 240, which can execute instructions of and support virtual environment(s), operating system(s), application(s), and TE(s) (Sood: para. [0011-0012]).
Regarding claim 12, Crupnicoff teaches the apparatus of claim 11, comprising one or more ports and at least one memory coupled to the switch system on chip (SoC).
(Crupnicoff: para. [0086] FIG. 10, the I/O system includes processing circuits 1002, ROM 1004, RAM 1006, CAM 1008, and at least one interface 1010 (interface(s)). In an embodiment, the processor cores described above are implemented in processing circuits and memory that is integrated into the same integrated circuit (IC) device as ASIC circuits and memory that are used to implement the programmable packet processing pipeline. For example, the processor cores and ASIC circuits are fabricated on the same semiconductor substrate to form a System-on-Chip (SoC))
It is noted that Crupnicoff does not explicitly disclose: memory coupled to the switch system on chip (SoC).
However, Sood, from the same or a similar field of endeavor, teaches the use of memory coupled to the switch system on chip (SoC) (Sood: para. [0011] NIC 100 can comprise a discrete chip, a local area network (LAN) on motherboard (LOM) design, a chipset, an SoC design, as part of a network switch (e.g., top of rack (TOR), leaf switch), on a peripheral add-in board, etc.; Fig. 2 teaches memory 280 coupled to processor 210 and to NIC 100; para. [0012 & 0015]). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the teaching of Sood in the apparatus of Crupnicoff. One of ordinary skill in the art would have been motivated to do so because, as Sood teaches, for SoC designs the peripheral bus 270 may be Intel On-Chip System Fabric (IOSF), Advanced Microcontroller Bus Architecture (AMBA), or similar; in chiplet designs, peripheral bus 270 may be a chip-to-chip interconnect such as Advanced Interface Bus (AIB), Kandou Bus interface (KBI), or similar; processor 210 may be a central processing unit (CPU), a microengine, a microcontroller, a GPU, a DPU, or an XPU; there may be one or more processor 210 units in server 200; and processor 210 comprises processing core(s) 240, which can execute instructions of and support virtual environment(s), operating system(s), application(s), and TE(s) (Sood: para. [0011-0012]).
Response to Arguments
Applicant's arguments filed 06/13/2025 have been fully considered but they are not persuasive. With regard to applicant’s remarks on claims 1, 13, and 20 on pages 6-7, applicant submits:
“The other independent claims, as amended, contain similar limitations to the above underlined limitations of independent claim 1, as amended, although in the other independent claims, these similar limitations may be cast somewhat differently depending upon the particular language employed in the specific independent claim in question. Although the claims are not bound to or limited by specific embodiments disclosed in the Specification, in disclosed embodiments, these claimed features permit these embodiments to operate in a manner that achieves advantages that cannot be achieved by the art cited by the Examiner, regardless of whether the art is taken singly or in any combination (see, e.g., paragraphs 1, 15, 21, 27, and 28 of the subject application as published by the USPTO). Accordingly, at least for these reasons, the Examiner's cited art cannot anticipate or render obvious the claimed invention, regardless of whether the cited art is taken alone or in any combination. Therefore, it is respectfully submitted that the Examiner's § 102 and § 103 rejections of the amended claims cannot be maintained, and must be withdrawn.” (page 7)
However, Crupnicoff, in paragraph [0076], FIG. 5A (processor core 570), and FIGS. 8A-8D (in particular FIG. 8A), teaches “four processor cores 870 (identified as processor cores 1-4) available for out-of-pipeline processing although the number of processor cores available for out-of-pipeline processing is implementation specific. there may be, for example, 2, 4, 8, 16, or 32 processor cores available for out-of-pipeline processing”, which corresponds to the claimed limitation – “the apparatus comprises multiple offload circuitry instances that comprise the one or more offload circuitries”.
Furthermore, Crupnicoff, in FIG. 9 (processor core 970), FIG. 5A (processor core 570), and paragraph [0084], teaches “packets from one flow (e.g., flow 1 (F1) that includes packets F1-1-F1-5) that is being processed through match-action units 926 of a match-action pipeline of a programmable packet processing pipeline are diverted to a processor core 970 and packets from another flow (e.g., flow 2 (F2) that includes packets F2-1-F2-5) that is being processed in the same match-action pipeline of the programmable packet processing pipeline are processed in the match-action pipeline without being diverted to the processor core for out-of-pipeline processing. FIG. 9, packets F1-3 and F1-4 from flow 1 have been diverted to the processor core for out-of-pipeline processing while packets F2-3 and F2-4 from flow 2 are not diverted to the processor core but continue to be processed in the match-action pipeline without being diverted to the processor core for out-of-pipeline processing”. The processor core 970 is an out-of-pipeline processor core, which corresponds to the claimed “extern block” because it is external (extern) to the match-action pipeline of match-action units 926, and therefore teaches the claim limitation – “the one or more offload circuitries are (1) to be represented by at least one extern block”.
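For illustration only, the per-flow diversion behavior described in FIG. 9 of Crupnicoff can be sketched as a short Python model. All names here (`run_pipeline`, `match_action_unit`, `process_out_of_pipeline`, `divert_flows`) are hypothetical stand-ins chosen for this sketch and do not appear in the reference; the sketch only mirrors the described behavior in which some flows' packets are diverted to an out-of-pipeline processor core while other flows stay in the match-action pipeline:

```python
# Hypothetical model of FIG. 9 of Crupnicoff: packets from a diverted flow
# (e.g., F1) are handled by an out-of-pipeline processor core, while packets
# from other flows (e.g., F2) continue through the match-action pipeline.
# All identifiers are illustrative, not taken from the reference.

def process_out_of_pipeline(packet):
    # Stands in for the out-of-pipeline processor core (core 970).
    return packet + ":core"

def match_action_unit(packet):
    # Stands in for one match-action unit (926) in the pipeline.
    return packet + ":mau"

def run_pipeline(packets, divert_flows):
    # Stands in for the diversion-logic decision (980): per-flow routing.
    results = []
    for flow, packet in packets:
        if flow in divert_flows:
            results.append(process_out_of_pipeline(packet))
        else:
            results.append(match_action_unit(packet))
    return results

out = run_pipeline([("F1", "F1-3"), ("F2", "F2-3")], divert_flows={"F1"})
```

Here `out` shows packet F1-3 processed out-of-pipeline and F2-3 processed in the match-action pipeline, matching the split described for FIG. 9.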
In addition, Crupnicoff, in paragraphs [0062 & 0050] and FIG. 5A, teaches “a programmable packet processing pipeline 520 similar to the programmable packet processing pipeline 420 described with reference to FIGS. 4A and 4B (e.g., a P4 programmable packet processing pipeline) that illustrates the processing of data corresponding to a packet being diverted from the match-action pipeline to a processor core 570 for out-of-pipeline processing. Once the desired out-of-pipeline processing is completed, data corresponding to the packet (e.g., an updated PHV) is returned to the match-action pipeline for further processing. For example, data corresponding to the packet (e.g., an updated PHV) is returned to a queue that feeds the next match-action unit in the match-action pipeline”, and in para. [0059] teaches “the result of the out-of-pipeline processing is returned back to the match-action pipeline for further processing such that the out-of-pipeline processing is seamlessly integrated into the process flow of the match-action pipeline”. In such out-of-pipeline processing, data corresponding to the packet is returned to a queue that feeds (back into, and integrated inline with) the next match-action unit in the match-action pipeline, and therefore teaches the claim limitation – “(2) configurable to perform packet processing inline with programmable packet processing of the programmable packet processing pipeline”. Thus, Crupnicoff teaches the amended claim limitations, and the rejection is maintained.
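The return path described in paragraphs [0062] and [0059] of Crupnicoff, in which the out-of-pipeline result (an updated PHV) re-enters a queue feeding the next match-action unit, can likewise be sketched for illustration. The names `out_of_pipeline`, `next_mau`, and the dictionary PHV representation are hypothetical constructs of this sketch, not of the reference:

```python
from collections import deque

# Hypothetical sketch of the return path in Crupnicoff paras. [0062]/[0059]:
# a diverted packet's updated PHV is placed on the queue that feeds the next
# match-action unit, so out-of-pipeline work is integrated inline with the
# match-action pipeline. All identifiers are illustrative.

def out_of_pipeline(phv):
    # Processor core updates the PHV outside the match-action pipeline.
    phv["updated"] = True
    return phv

def next_mau(phv):
    # Next match-action unit consumes the PHV from its input queue.
    phv["stage"] = phv.get("stage", 0) + 1
    return phv

queue = deque()                                # queue feeding the next match-action unit
queue.append(out_of_pipeline({"flow": "F1"}))  # diverted result returned to the queue
result = next_mau(queue.popleft())             # pipeline resumes with the updated PHV
```

In this sketch, the match-action pipeline resumes with the updated PHV exactly where the packet left off, which is the sense in which the out-of-pipeline processing is "inline" with the pipeline.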
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please also see PTO-892.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WUTCHUNG CHU whose telephone number is (571)272-4064. The examiner can normally be reached between 10:00 AM and 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Moo R Jeong, can be reached at (571) 272-9617. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/WUTCHUNG CHU/Primary Examiner, Art Unit 2418