Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office action is in response to the application filed on 08/25/2023.
Claims 1-20 are currently pending.
Claims 1-20 are rejected.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 5, 7-8, 12, 14-15, 18, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Yong Wang et al (US 20210232417 A1) in view of Boon S. Ang et al (US 20220103487 A1) & Jiyuan Tang et al (US 20160308771 A1).
For Claim 1, Wang discloses a method for a computer system to perform receive side scaling (RSS) (Wang teaches, in ¶ 0028, lines 1-4, VM1 121 may be reconfigured (see 240 in FIG. 2) to perform packet handling based on multiprocessor architecture configuration), wherein the method comprises:
in response to receiving a first packet that is associated with the first packet flow and destined for a first virtualized computing instance supported by the computer system, the programmable PNIC matching the first packet with the first flow entry (Wang teaches, in ¶ 0032, lines 1-3, in response to receiving first ingress packets (see “P1” 270 in FIG. 2) that requires processing by first VCPU=VCPU-1 211 running on NUMA1 160) and steering the first packet towards the first queue (Wang teaches, in ¶ 0032, lines 4-5, ingress packets 270 may be steered towards first RX queue=RXQ-1 221, which is allocated with memory from NUMA1 160); and
in response to receiving a second packet that is associated with the second packet flow and destined for a second virtualized computing instance supported by the computer system (Wang teaches, in ¶ 0033, lines 1-4, in response to receiving second ingress packets (see “P2” 280 in FIG. 2) that requires processing by second VCPU=VCPU-5 215 running on NUMA2 170), the programmable PNIC matching the second packet with the second flow entry and steering the second packet towards the second queue (Wang teaches, in ¶ 0047, lines 4-5, ingress packets 280 may be steered towards second RX queue=RXQ-5 225, which is allocated with memory from NUMA2 170).
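For illustration only (this sketch is not part of the record or of Wang's disclosure, and all names and values are hypothetical), the flow-entry matching and queue steering described above can be outlined as follows:

```python
# Hypothetical sketch of flow-to-queue steering as described in Wang:
# a packet matching a configured flow entry is steered to the RX queue
# associated with that flow (e.g., a queue whose memory is allocated
# from the same NUMA node as the destination VCPU).

def steer_packet(packet, flow_table):
    """Match a packet's flow key against configured flow entries and
    return the receive queue it should be steered towards."""
    key = (packet["dst_ip"], packet["dst_port"])
    return flow_table.get(key, "default-queue")

# Two illustrative flow entries: first flow -> RXQ-1, second flow -> RXQ-5
flow_table = {
    ("10.0.0.1", 80): "RXQ-1",
    ("10.0.0.2", 443): "RXQ-5",
}

p1 = {"dst_ip": "10.0.0.1", "dst_port": 80}
p2 = {"dst_ip": "10.0.0.2", "dst_port": 443}
print(steer_packet(p1, flow_table))  # RXQ-1
print(steer_packet(p2, flow_table))  # RXQ-5
```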
Wang fails to expressly disclose generating and sending one or more instructions to a programmable physical network interface controller (PNIC) of the computer system.
However, Ang, in the analogous art, discloses generating and sending one or more instructions (Ang teaches, in ¶ 0056, that The received flow entries are stored (at 735) in a set of flow entries (e.g., a flow entry table) of the FPO hardware. In some embodiments, the set of flow entries is stored in a memory cache (e.g., content-addressable memory (CAM), ternary CAM (TCAM), etc.) that can be used to identify a flow entry that specifies a set of matching criteria associated with a received data message) to a programmable physical network interface controller (PNIC) of the computer system (Ang teaches, in ¶ 0049, lines 11-15, the processing units executing the flow processing and action generator are processing units of a host computer, while in other embodiments, the pNIC is an integrated NIC (e.g., a programmable NIC, smart NIC, etc.)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught in Wang with the storing of the flow entries taught in Ang. The motivation is to optimize the flow processing offloaded to a programmable NIC [Ang: ¶ 0001].
Ang further teaches, in ¶ 0058, lines 1-6, determines (at 810) if the received data message matches a flow entry stored by the FPO hardware. In some embodiments, determining whether the FPO hardware stores a flow entry matching the received data message is based on a lookup in a set of stored flow entries based on characteristics of the received data message.
Wang and Ang fail to expressly disclose (a) associating a first packet flow with a first queue and (b) associating a second packet flow with a second queue; and steering the first/second packet towards the first/second queue for processing by a first/second processing thread from the multiple processing threads.
However, Tang, in the analogous art, discloses (a) associating a first packet flow with a first queue and (b) associating a second packet flow with a second queue (Tang teaches, in ¶ 0021, that acquire, from the data packet, identification information of a data stream … where the identification information of the data stream is used to differentiate the data stream to which the data packet belongs); and steering the first/second packet towards the first/second queue for processing by a first/second processing thread from the multiple processing threads (Tang teaches, in ¶ 0024, send the data packet to a cache queue of the thread corresponding to the data stream, so that the thread corresponding to the data stream acquires the data packet from the cache queue).
Tang also teaches, in ¶ 0040, that The data distribution method is applied to a data distribution system, where the data distribution system includes a splitter, a memory, and multiple threads used for processing data, and each thread corresponds to a cache queue.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught in Wang and Ang with the per-thread queuing taught in Tang. The motivation is to improve a processing capability of a multi-core processor [Tang: ¶ 0006].
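For illustration only (this sketch is not from the Tang reference; the class and field names are hypothetical), the per-thread cache-queue distribution described above can be outlined as follows:

```python
from collections import deque

# Hypothetical sketch of a splitter that uses stream identification
# information to dispatch each data packet to the cache queue of the
# thread that owns the corresponding data stream.

class Splitter:
    def __init__(self, num_threads):
        # one cache queue per processing thread
        self.queues = [deque() for _ in range(num_threads)]

    def dispatch(self, packet):
        # The stream ID carried in the packet selects the owning thread,
        # so all packets of one stream go to the same thread's queue.
        thread_id = packet["stream_id"] % len(self.queues)
        self.queues[thread_id].append(packet)
        return thread_id

splitter = Splitter(num_threads=4)
t = splitter.dispatch({"stream_id": 7, "payload": b"..."})
print(t)  # stream 7 always maps to the same queue (7 % 4 == 3)
```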
For Claim 5, Wang discloses a method, wherein the method further comprises: prior to generating and sending the one or more instructions, receiving an advertisement from a device driver, wherein the advertisement indicates an RSS capability of the programmable PNIC (Wang teaches, in ¶ 0041, that PNIC1 181 may advertise various resource(s) associated with the NUMA-aware uplink capability, such as an uplink object (e.g., vmnicX) resides on NUMA1 160 and NUMA2 170; the number of RX queues supported by PNIC1 180 that reside on NUMA1 160 and NUMA2 170; support for packet filters based on receive-side scaling (RSS), destination MAC address (DMAC), layer-3 information, layer-4 information, application-layer information, or any combination thereof).
For Claim 7, Wang discloses a method, wherein generating and sending the one or more instructions comprises: generating and sending the one or more instructions to the programmable PNIC to configure the first flow entry based on non-uniform memory access (NUMA) affinity information associated with the first virtualized computing instance and the programmable PNIC (Wang teaches, in ¶ 0039, that PNIC1 181 may be attached to both NUMA1 160 and NUMA2 170 via separate peripheral component interconnect express (PCIe) interfaces ... The primary device (e.g., the PCIe device with more chips) may be used to steer RX packets to first queue set 221-224 supported by NUMA1 160, or second queue set 225-258 supported by NUMA2 170).
For Claim 8, please refer to the rejection of Claim 1, above.
For Claims 12, 18, please refer to the rejection of Claim 5, above.
For Claims 14, 20, please refer to the rejection of Claim 7, above.
For Claim 15, please refer to the rejection of Claim 1, above.
Claims 2-3, 9-10, 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yong Wang et al (US 20210232417 A1) in view of Boon S. Ang et al (US 20220103487 A1) & Jiyuan Tang et al (US 20160308771 A1) as applied to claim 1, 8, or 15 above, and further in view of Kang Il Choi et al (US 20140376555 A1).
For Claims 2, 9, 16, Wang, Ang and Tang fail to expressly disclose supporting one or more application programming interface (API) functions for flow entry configuration.
However, Choi, in the analogous art, discloses supporting one or more application programming interface (API) functions for flow entry configuration (Choi teaches, in ¶ 0053, Finally, open application programming interfaces (APIs) (Openflow, OpenStack, OpenNaaS, OGF's NSI, etc.) may provide additional integration between the NFV and the cloud infrastructure).
Choi further teaches, in ¶ 0125, checking a data attribute or service attribute of the flow after the receiving the flow, wherein the switching of the flow switches the flow to the at least one network function virtual machine according to the switching table based on the data attribute or service attribute.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught in Wang, Ang and Tang with the flow controller taught in Choi. The motivation is to dynamically connect the processing modules to the flows or to disconnect the processing modules therefrom [Choi: ¶ 0113].
For Claims 3, 10, Ang discloses a method, wherein generating and sending the one or more instructions comprises: generating and sending, by a queue management layer of the computer system, the one or more instructions to the programmable PNIC via the programmable datapath interface (Ang teaches, in ¶ 0059, lines 1-4, that the flow processing and action generator sends (at 725) the generated flow entry to the FPO hardware. As described above, in some embodiments, the generated flow entry is sent to the FPO hardware using a PF of a PCIe connection between the processing units that execute the flow processing and action generator and the FPO hardware).
Claims 4, 11, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Yong Wang et al (US 20210232417 A1) in view of Boon S. Ang et al (US 20220103487 A1) & Jiyuan Tang et al (US 20160308771 A1) as applied to claim 1, 8, or 15 above, and further in view of David T. Hass et al (US 20090201935 A1).
For Claims 4, 11, 17, Ang discloses a method, wherein generating and sending the one or more instructions comprises: generating and sending the one or more instructions to burn the first flow entry and the second flow entry into the programmable PNIC configuration (Ang teaches, in ¶ 0047, The method includes providing the pNIC with a set of mappings between VPIDs and PPIDs. FIG. 6 conceptually illustrates a process 600 performed in some embodiments to provide VPID to PPID mappings to be stored in a mapping table of the pNIC to perform flow processing). Ang also teaches, in ¶ 0039, lines 8-12, that The flow entries, in some embodiments, specify a set of matching criteria and an action to take for data messages that match the matching criteria. One or both of the set of matching criteria and the action use VPIDs to identify compute-node interfaces.
Tang teaches, in ¶ 0024, send the data packet to a cache queue of the thread corresponding to the data stream, so that the thread corresponding to the data stream acquires the data packet from the cache queue.
Wang, Ang and Tang fail to expressly disclose a flow key specifying a destination information associated with a packet flow.
However, Hass, in the analogous art, discloses a flow key specifying a destination information associated with a packet flow (Hass teaches, in ¶ 0007, a parse operation is performed utilizing the packet information to generate a key, and a hash algorithm is performed on this key to produce a hash. Further, the packets are allocated to different processor threads, utilizing the hash or the key).
Hass further teaches, in ¶ 0151, Once the key has been formed, the packet director 810 may use the key to classify and dispatch the packet using a classifier 808, hash logic 812, and a distribution mask 814. Furthermore, the classification and dispatch may be implemented in combination with a lookup table (not shown).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught in Wang, Ang and Tang with the key taught in Hass. The motivation is to quickly distribute packets to the threads designated by software as processing threads [Hass: ¶ 0128, lines 2-4].
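For illustration only (this sketch is not from the Hass reference; the function names and the choice of hash are hypothetical), the parse-key, hash, and thread-dispatch flow described above can be outlined as follows:

```python
import hashlib

# Hypothetical sketch of the dispatch flow described in Hass: a key is
# parsed from packet fields (here, destination information of the flow),
# a hash is produced from the key, and the hash selects a processing
# thread from the pool.

def parse_key(packet):
    # Key formed from destination information associated with the flow
    return f'{packet["dst_ip"]}:{packet["dst_port"]}'.encode()

def dispatch_thread(packet, num_threads):
    digest = hashlib.sha256(parse_key(packet)).digest()
    h = int.from_bytes(digest[:4], "big")
    return h % num_threads  # distributes packets over the thread pool

t1 = dispatch_thread({"dst_ip": "10.0.0.1", "dst_port": 80}, 8)
t2 = dispatch_thread({"dst_ip": "10.0.0.1", "dst_port": 80}, 8)
print(t1 == t2)  # the same flow always maps to the same thread: True
```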
Claims 6, 13, 19 are rejected under 35 U.S.C. 103 as being unpatentable over Boon S. Ang et al (US 20220103487 A1) in view of Jiyuan Tang et al (US 20160308771 A1) as applied to claim 1, 8 or 15 above, and further in view of Ayyappan Veeraiyan (US 20140059111 A1).
For Claims 6, 13, 19, Ang and Tang fail to expressly disclose configuring a third flow entry that associates a third packet flow with an RSS pool that includes multiple third queues.
However, Veeraiyan, in the analogous art, discloses configuring a third flow entry that associates a third packet flow with an RSS pool that includes multiple third queues (Veeraiyan teaches, in ¶ 0034, lines 1-7, that After PNIC 322 allocates RSS receive queues 326, PNIC 322 can store incoming VXLAN encapsulated packets in different RSS receive queues based on the hash result of each packet's TCP/IP 5 tuple. In one embodiment, load balancer module 328 only needs to issue one RSS receive queue allocation command for PNIC 322 to allocate a predetermined number of RSS receive queues (e.g., 4, 8, or more)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system taught in Ang and Tang with the number of RSS receive queues taught in Veeraiyan. The motivation is to facilitate multi-core processing of the received encapsulated packets [Veeraiyan: ¶ 0007, lines 2-4].
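For illustration only (this sketch is not from the Veeraiyan reference; the pool size, hash choice, and names are hypothetical), hashing a packet's TCP/IP 5-tuple into a pool of RSS receive queues can be outlined as follows:

```python
import hashlib

# Hypothetical sketch of an RSS pool: a flow's TCP/IP 5-tuple is hashed,
# and the hash indexes into a pool of multiple allocated receive queues.

RSS_POOL = ["RSSQ-0", "RSSQ-1", "RSSQ-2", "RSSQ-3"]  # pool of 4 queues

def rss_queue(five_tuple):
    # five_tuple: (src_ip, src_port, dst_ip, dst_port, protocol)
    key = "|".join(map(str, five_tuple)).encode()
    h = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return RSS_POOL[h % len(RSS_POOL)]

q = rss_queue(("10.0.0.1", 1234, "10.0.0.2", 443, "tcp"))
print(q in RSS_POOL)  # True: every flow lands in some queue of the pool
```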
Conclusion
The prior art made of record and not relied upon, which is considered pertinent to applicant's disclosure, includes: Ma (US 7701849 B1), which is pertinent to a method for queuing packets. The method may include receiving a packet; identifying a flow associated with the packet; determining whether a flow queue has been assigned to the identified flow; dynamically assigning the identified flow to an available flow queue when it is determined that a flow queue has not been assigned to the identified flow; and enqueuing the packet into the available flow queue.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMED A KAMARA whose telephone number is (571)270-5629. The examiner can normally be reached M-F 9AM-4PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, CHARLES JIANG, can be reached on 571-270-7191. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MOHAMED A KAMARA/Primary Examiner, Art Unit 2412