DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-22 have been presented and are pending in the application.
Allowable Subject Matter
Claims 2 & 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The further limiting dependent claims 3-4 & 14-15 would become allowable when claims 2 & 13, respectively, become allowable.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5-12 & 16-22 are rejected under 35 U.S.C. 103 as being unpatentable over PONG et al. (US 2012/0030451 A1).
The examiner relies on the entire teachings of the PONG reference for the following art rejection; the examiner kindly advises the applicant to carefully consider the entire teachings of the PONG reference to better understand the examiner’s position and the interpretations applied to the claimed invention.
As for the independent claims 1 & 12, the PONG reference teaches functional equivalents of the claimed invention, under the broadest reasonable interpretation, as follows:
Claims 1 & 12
PONG Ref. Teachings (emphasis underlined)
1. An optimally balanced network system comprising:
Par 22, “packet processing architecture 100”
a fabric adapter communication system communicatively coupled to a plurality of network ports
Par 22, “packet processing chip 104” … Par 23, “ingress ports 116 receive packets from a packet source…internet”
and a plurality of controlling hosts,
Par 28, “assign headers based on the type or traffic class as indicated in fields of the header… may be assigned to packet processor 110a…110b…110c…110d”
the fabric adapter configured to: receive one or more network packets from one or more network ports of the plurality of network ports;
Par 22, “packet processing chip 104” … Par 23, “ingress ports 116 receive packets from a packet source…internet”
separate each network packet into different portions, each portion including a header or a payload;
Par 23, “Separator and scheduler 118 separates the header of each incoming packet from the payload…”
forward one or more headers of the different portions to one or more controlling hosts; and
Par 28, “assign headers based on the type or traffic class as indicated in fields of the header… may be assigned to packet processor 110a…110b…110c…110d”
forward multiple payloads of the different portions in parallel through a bundled interface to multiple memory buffers of a global memory pool based on one or more scatter gather lists (SGLs).
Obvious from the teachings of par 27, “a scatter-gather-list (SGL) 127 is used to keep track of parts of a packet that are stored across multiple buffers…If the packet is to be partitioned across multiple buffer, then SGL 127 tracks which buffers are storing which part of the packet”…par 36, “SGL 127 stores the location of the payload in payload memory 122”…par 45, “metering engine 126d, based on lookup tables in shared global 106, determines the amount of bandwidth that is to be allocated to a packet of a particular traffic class…”; the examiner notes that the “policy engine 126j” (par 41) in combination with SGL 127 teaches the claimed invention without expressly disclosing the claimed “buffers of global memory pool”.
12. A method for optimally balancing a networked system, comprising: receiving, at a fabric adapter communication system communicatively coupled to a plurality of network ports and a plurality of controlling hosts, one or more network packets; separating, by the fabric adapter communication system, the network packet into different portions, each portion including a header or a payload; forwarding, by the fabric adapter communication system, the headers of the different portions to one or more controlling hosts of a plurality of controlling hosts; and forwarding, by the fabric adapter communication system, multiple payloads of the different portions in parallel through a bundled interface to multiple memory buffers of a global memory pool based on one or more scatter gather lists (SGLs).
Teachings of claim 1 are similarly applied
The examiner notes that the PONG reference does not expressly or identically disclose the claimed limitation regarding “buffers of global memory” for the claimed function or operation of receiving payloads (i.e., forwarding payloads); however, this limitation is an obvious functional equivalent of the teachings, in paragraph 34, of “forward the processed packet to egress port 124 for transmission…the egress ports 124 determine the location of the payload in the payload memory 122…One or more egress ports 124 combine the payload from the payload memory…and transmit the packet”. As can be seen from the above discussed teachings, the multiple egress ports 124 teach the equivalent function of the claimed invention (e.g., the claimed function of receiving/forwarding payloads). In other words, because the claimed function of the “buffers of the global memory” is simply to receive data without the data being further utilized by the claimed invention, that function is inherently and/or obviously performed by the operations of the egress ports (i.e., the general buffer operations commonly and typically required for the data receiving and transmitting operations of the egress ports).
Therefore, it would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to arrive at the claimed invention from the teachings of the PONG reference for the detailed teachings and reasons discussed above.
As for the dependent claims 5-11 & 16-22, the PONG reference further teaches functional equivalents of the claimed recitations as follows:
Claims 5-11 & 16-22
PONG Ref. Teachings (emphasis underlined)
CONNER Ref. Well-Known in the ART (not combined)
5. The system of claim 1, wherein the multiple memory buffers cross PCIe interface boundaries.
The examiner notes that the “policy engine 126j” (par 41) in combination with SGL 127 teaches the claimed invention without expressly disclosing the claimed “buffers of global memory pool”.
Utilization of PCIe interfaces is commonly practiced in the packet processing architecture art (See CONNER, par 33, “interface such as…PCIe…CXL”)
6. The system of claim 1, wherein the fabric adapter is further configured to: break down a payload of the packet into multiple payload chunks; and
Par 27, “a scatter-gather-list (SGL) 127 is used to keep track of parts of a packet that are stored across multiple buffers…If the packet is to be partitioned across multiple buffer”
forward the payload chunks across multiple PCIe/CXL interfaces to the multiple memory buffers.
Utilization of PCIe interfaces is commonly practiced in the packet processing architecture art (See CONNER, par 33, “interface such as…PCIe…CXL”)
7. The system of claim 1, wherein a bandwidth of the bundled interface matches or exceeds a network bandwidth of a network the fabric adapter is connected to.
Par 31, “based on a data rate of incoming packets, determines whether packet processor 110 itself or one or more of custom hardware acceleration blocks 126 should process the header”
8. The system of claim 1, wherein separating packet payloads and forwarding the payloads to multiple memory buffers in the global memory pool prevents the payloads from entering a data cache of the controlling host.
Obvious from par 49, “congestion avoidance engine 126h delays transmission of low priority packets by buffering them”
9. The system of claim 1, wherein one or more compute threads of the controlling host only process network protocol headers.
Par 28, “assign headers based on the type or traffic class as indicated in fields of the header… may be assigned to packet processor 110a…110b…110c…110d”… par 52, “pipeline 200 of each packet processor 110”…par 59, “packet processing blocks 300 execute custom instructions that are design to speedup packet processing functions”
10. The system of claim 1, wherein the fabric adapter is further configured to forward application data of the payloads directly to one or more of a main memory of application processors, such as a high bandwidth memory (HBM) of a graphical processing unit (GPU), a static random access memory (SRAM) of an acceleration application specific integrated circuit (ASIC), or a dynamic random access memory (DRAM) of an ASIC.
Obvious from the teachings of par 60, “memory access and second execute sate 208, memory is accesses for either loading data or for storing data…custom hardware acceleration blocks 126a-n may be stored in Shared Data Ram (SDRAM) 336 or Private data RAM (PDRAM)…”; see also par 194, “application specific circuits (ASIC)”
11. The system of claim 1, wherein, to prevent network incast, the fabric adapter is further configured to: forward the one or more headers to additional threads of the one or more controlling hosts or additional controlling hosts; and
Par 35, “Shared memory architecture may be utilized in conjunction with a private memory architecture…speed up processing of packets by packet processing engines 110 and/or custom hardware acceleration logic 126”… par 52, “pipeline 200 of each packet processor 110” … par 59, “packet processing blocks 300 execute custom instructions that are design to speedup packet processing functions”; the examiner notes that the plurality of pipelines for each packet processor 110 and the plurality of acceleration logic blocks 126a-n teach the claimed invention
forward the multiple payloads in parallel to additional memory buffers of the global memory pool.
Obvious from the teachings of par 27, “a scatter-gather-list (SGL) 127 is used to keep track of parts of a packet that are stored across multiple buffers…If the packet is to be partitioned across multiple buffer, then SGL 127 tracks which buffers are storing which part of the packet”…par 36, “SGL 127 stores the location of the payload in payload memory 122”…par 45, “metering engine 126d, based on lookup tables in shared global 106, determines the amount of bandwidth that is to be allocated to a packet of a particular traffic class…”; the examiner notes that the “policy engine 126j” (par 41) in combination with SGL 127 teaches the claimed invention without expressly disclosing the claimed “buffers of global memory pool”.
16. The method of claim 12, wherein the multiple memory buffers cross PCIe interface boundaries.
Teachings of claim 5 are similarly applied
17. The method of claim 12, further comprising: breaking down a payload of the packet into multiple payload chunks; and forwarding the payload chunks across multiple PCIe/CXL interfaces to the multiple memory buffers.
Teachings of claim 6 are similarly applied
18. The method of claim 12, wherein a bandwidth of the bundled interface matches or exceeds a network bandwidth of a network the fabric adapter is connected to.
Teachings of claim 7 are similarly applied
19. The method of claim 12, wherein separating packet payloads and forwarding the payloads to multiple memory buffers in the global memory pool prevents the payloads from entering a data cache of the controlling host.
Teachings of claim 8 are similarly applied
20. The method of claim 12, wherein one or more compute threads of the controlling host only process network protocol headers.
Teachings of claim 9 are similarly applied
21. The method of claim 12, further comprising forwarding application data of the payloads directly to one or more of a main memory of application processors, such as a high bandwidth memory (HBM) of a graphical processing unit (GPU), a static random access memory (SRAM) of an acceleration application specific integrated circuit (ASIC), or a dynamic random access memory (DRAM) of an ASIC.
Teachings of claim 10 are similarly applied
22. The method of claim 12, wherein, to prevent network incast, the method comprises: forwarding the one or more headers to additional threads of the one or more controlling hosts or additional controlling hosts; and forwarding the multiple payloads in parallel to additional memory buffers of the global memory pool.
Teachings of claim 11 are similarly applied.
As for the dependent claims 5-6 & 16-17, these claims further add the limitations of a PCIe/CXL interface environment/application; however, such added limitations reflect a commonly and well-known practiced environment/application/platform in the art of packet processing, to which both the claimed invention and the PONG reference belong. As can be seen from the CONNER reference (whose teachings simply demonstrate the well-known and common practices in the art of packet processing, without the need to combine them with the PONG reference), since the PCIe/CXL environment/application, which is not further utilized in the claims, is a well-known and commonly practiced platform in the art of packet processing, one having ordinary skill in the art could easily apply the teachings of the PONG reference (which openly teaches applying its building blocks in any specific platform – see paragraph 197) in such a commonly and well-known platform/environment/application as PCIe/CXL.
Therefore, as for the dependent claims 5-11 & 16-22, it would have been obvious before the effective filing date of the claimed invention to one having ordinary skill in the art to arrive at the claimed invention from the teachings of the PONG reference in light of the common and well-known knowledge in the packet processing art for the detailed teachings and reasons discussed above.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER B SHIN whose telephone number is (571)272-4159. The examiner can normally be reached between 8:00 AM and 4:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, IDRISS N ALROBAYE can be reached at 571-270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER B SHIN/Primary Examiner, Art Unit 2181