Prosecution Insights
Last updated: April 19, 2026
Application No. 18/424,011

Data Transmission Method and Apparatus

Non-Final OA — §103, §DP
Filed: Jan 26, 2024
Examiner: BOURZIK, BRAHIM
Art Unit: 2191
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 65% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (245 granted / 376 resolved; +10.2% vs TC avg), above average
Interview Lift: +45.0% (allow rate with vs. without an interview, among resolved cases with interview data), strong
Typical Timeline: 3y 7m average prosecution; 34 applications currently pending
Career History: 410 total applications across all art units
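The headline examiner numbers are simple ratios over the resolved-case history. A minimal sketch of how they relate (Python; the per-case records and the with/without-interview split are hypothetical, chosen only to roughly reproduce the figures above):

```python
# Illustrative only: hypothetical per-case records. The dashboard's real
# schema and the actual interview split are not shown on this page.
cases = (
    [{"granted": True,  "interviewed": True}]  * 99 +
    [{"granted": False, "interviewed": True}]  * 1 +
    [{"granted": True,  "interviewed": False}] * 146 +
    [{"granted": False, "interviewed": False}] * 130
)  # 376 resolved cases, 245 grants, matching the headline figures

def allow_rate(subset):
    """Fraction of cases in `subset` that ended in a grant."""
    return sum(c["granted"] for c in subset) / len(subset) if subset else 0.0

overall    = allow_rate(cases)                                       # 245/376 ≈ 0.65
with_iv    = allow_rate([c for c in cases if c["interviewed"]])      # 99/100 = 0.99
without_iv = allow_rate([c for c in cases if not c["interviewed"]])  # 146/276 ≈ 0.53
lift = with_iv - without_iv  # ≈ +0.46 with these toy counts; the page reports +45.0 points
```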

Statute-Specific Performance

§101: 13.0% (-27.0% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 4.3% (-35.7% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)
Tech Center average shown for comparison is an estimate • Based on career data from 376 resolved cases
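One plausible derivation of these per-statute figures, assuming each percentage is the share of resolved cases citing that statute (an assumption; the page does not define the metric). Note the printed deltas are all consistent with a single 40.0% Tech Center baseline: 13.0 + 27.0 = 62.8 - 22.8 = 4.3 + 35.7 = 8.1 + 31.9 = 40.0.

```python
from collections import Counter

# Illustrative only: toy data standing in for the 376 resolved cases; each
# entry lists the statutes cited against that case (hypothetical schema).
case_statutes = [
    {"103"}, {"103", "112"}, {"101", "103"}, {"102"}, {"103"},
]

total = len(case_statutes)
counts = Counter(s for statutes in case_statutes for s in statutes)
rates = {s: counts[s] / total for s in ("101", "102", "103", "112")}

# The page's deltas imply one ~40% TC baseline for every statute (see above).
TC_AVG = 0.40
deltas = {s: r - TC_AVG for s, r in rates.items()}
```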

Office Action

Grounds: §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-20 are pending in this Office action.

NB: claim 20 recites in line 1: "A computer program product comprising instructions stored on a non-transitory computer-readable medium". The examiner interprets this limitation as the computer program product comprising a non-transitory computer-readable medium, and for that reason it is directed to a statutory embodiment.

Double Patenting

The non-statutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A non-statutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on non-statutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a non-statutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based e-Terminal Disclaimer may be filled out completely online using web-screens. An e-Terminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about e-Terminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1, 14, and 20 are rejected on the ground of non-statutory double patenting as being unpatentable over claims 1, 3, 9, and 14 of U.S. Patent No. 11,922,202. Although the claims at issue are not identical, they are not patentably distinct from each other. The mapping of independent claims is as follows, where corresponding limitations share the same cue.

Application 18/424,011, claim 1: "A method, comprising: adding, by a virtual machine deployed on a host machine, first information for performing an acceleration operation to a predefined data structure, wherein the first information comprises multiple pieces of second information respectively corresponding to multiple parameters; storing the predefined data structure in a virtual input/output ring (Vring) of a target virtual accelerator; obtaining, from the Vring, the first information; determining, according to the first information, third information that is recognizable by a hardware accelerator; sending, to the hardware accelerator, the third information; obtaining, according to the third information, to-be-accelerated data; and performing, on the to-be-accelerated data, the acceleration operation."

Patent 11,922,202, claim 1: "(Currently Amended) A data transmission method, wherein the data transmission method comprises: adding, by a virtual machine, first information for performing an acceleration operation to a predefined data structure, wherein the first information comprises multiple pieces of information respectively corresponding to multiple parameters; storing, by the virtual machine, the predefined data structure in one entry of a plurality of entries of a virtual input/output ring (Vring) of a target virtual accelerator, …; obtaining, by a daemon process on a host machine, the first information from the virtual input/output ring; determining, by the daemon process, according to the first information, second information that can be recognized by a hardware accelerator; sending, by the daemon process, the second information to the hardware accelerator; obtaining, by the hardware accelerator, to-be-accelerated data according to the second information; and performing, by the hardware accelerator, the acceleration operation on the to-be-accelerated data."

Independent claim 14 maps to independent claim 9; independent claim 20 maps to independent claim 14.

Claims 1, 14, and 20 are also rejected on the ground of non-statutory double patenting as being unpatentable over claims 1, 3, 12, and 14 of U.S. Patent No. 11,182,190. Although the claims at issue are not identical, they are not patentably distinct from each other. The mapping of independent claims is as follows, where corresponding limitations share the same cue.

Application 17/517,862, claim 1: "A method, comprising: adding, by a virtual machine deployed on a host machine, first information for performing an acceleration operation to a predefined data structure, wherein the first information comprises multiple pieces of second information respectively corresponding to multiple parameters; storing the predefined data structure in a virtual input/output ring (Vring) of a target virtual accelerator; obtaining, from the Vring, the first information; determining, according to the first information, third information that is recognizable by a hardware accelerator; sending, to the hardware accelerator, the third information; obtaining, according to the third information, to-be-accelerated data; and performing, on the to-be-accelerated data, the acceleration operation."

Patent 11,182,190, claim 1: "(Currently Amended) A data transmission method applied to a host machine comprising a hardware accelerator, wherein the method comprises: obtaining information required to perform an acceleration operation in a virtual input/output ring (Vring) of a target virtual accelerator, wherein the information required to perform the acceleration operation comprises a virtual machine address of to-be-accelerated data and a length of the to-be-accelerated data, wherein the Vring of the target virtual accelerator includes three tables: a vringdesc table comprising entries that store information required to perform the acceleration operation; and sending the virtual machine address of the to-be-accelerated data and the length of the to-be-accelerated data."

Patent 11,182,190, claim 3: "wherein the virtual machine address of the to-be-accelerated data is a virtual machine physical address of the to-be-accelerated data, and wherein the method further comprises determining a host physical address of the to-be-accelerated data according to the virtual machine physical address of the to-be-accelerated data and a preset mapping relationship between the virtual machine physical address and the host physical address, and wherein sending the virtual machine address of the to-be-accelerated data to the hardware accelerator comprises sending the host physical address of the to-be-accelerated data to the hardware accelerator."

Independent claim 14 maps to independent claim 12; independent claim 20 maps to independent claim 14.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Van et al. (US 2007/0061492 A1) in view of Bolic et al. (US 2016/0210167 A1).
As per claim 1, Van discloses a method, comprising:

adding, by a virtual machine deployed on a host machine, first information for performing an acceleration operation to a predefined data structure: [0017] "method and computer program product that extends hardware device acceleration-assist technology to virtualized environments, whereby the virtualization software implemented at the host O/S emulates network I/O hardware accelerated-assist operations providing zero-copy packet sending and receiving operations for virtual machines. Such hardware accelerated-assist emulations enable virtual machines present on the same computing system to communicate over an external network, communicate with the host system and/or, communicate with each other, without the overhead of excessive data copy operations"; [0059] "To accomplish this, the host O/S 12 receives a sub-set of network state information 30 from the guest process that provides the location of the virtual address of the target guest process that is to receive packets. Such state information 30 that may be maintained at the host O/S,";

wherein the first information comprises multiple pieces of second information respectively corresponding to multiple parameters: [0059] "Such state information 30 that may be maintained at the host O/S, may include, but is not limited to: source IP addresses, source port numbers, destination IP addresses, destination port numbers, expected packet sequence numbers and byte offsets, and, the corresponding physical memory addresses where headers and data should go for the aforementioned (source, destination, byte offset) tuple. Such state information may additionally include a protocol type (TCP, UDP, . . . ) or protocol type (IP, IPv6, . . . ), a TTL (time to live) value, a security label (for labelled ipsec networking), etc. Availability of such state information permits the host O/S to analyze the header portion of an arrived packet 25, apply firewall rules, and, subject to any firewall rules applied by the host O/S, determine a virtual memory address associated with a target guest process 55 that is to receive the network packet data payloads.";

storing the predefined data structure in a virtual input/output ring (Vring): [0061] "In the embodiment depicted in FIG. 2(c), the NIC card 21 is provided with hardware-accelerated TCP (TSO) or like network I/O hardware acceleration-assist technology. Thus, as shown in FIG. 2(c), for zero copy receiving, when the guest O/S is aware of the virtualized network I/O hardware acceleration-assist technology, the host O/S 12 maintains a subset of the network state information 30 associated with the guest O/S. Additionally, the NIC hardware 21 itself may be provided, via the host O/S, with the subset of the network state information 30 associated with the guest O/S";

obtaining, from the Vring, the first information: [0061], quoted above;

determining, according to the first information, third information that is recognizable by a hardware accelerator: [0059], quoted above (state information including source/destination addresses and ports, sequence numbers and byte offsets, corresponding physical memory addresses, protocol type, TTL, and security label);

sending, to the hardware accelerator, the third information; obtaining, according to the third information, to-be-accelerated data: [0061] "Additionally, the NIC hardware 21 itself may be provided, via the host O/S, with the subset of the network state information 30 associated with the guest O/S";

and performing, on the to-be-accelerated data, the acceleration operation: [0064] "The host O/S informs the hardware-assisted NIC 21 to directly retrieve the data (D) and header (H) portions and assemble network packets, each specified by a header and part of the payload in a manner similar to the zero copy send in non-virtualized environments for communication over the network 99. Thus, the NIC hardware 21, without intervention by the host O/S, is enabled to directly copy the header (H) and data (D) portions of a packet 25 to be sent, subject to application of firewall rules. The host O/S will only need to examine the header (and possibly modify it for firewall rules) and perform an address translation for the data without actually needing to copy the data itself".

Van does not explicitly disclose a target virtual accelerator. Bolic discloses a target virtual accelerator: [0083] "The scheduler module may be configured to schedule the first VM access request and the second VM access request based on a first memory access context associated with the first accelerator application and/or a second memory access context associated with the second accelerator application."

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of the cited references. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to incorporate the teaching of Bolic into the teaching of Van to include a priority-based plurality of virtual modules for effecting acceleration of requests from different VMs, based on the priority associated with the request or VM. Such assignment can reconfigure the data structure for each virtual module in order to use the hardware accelerator efficiently, at the highest capacity possible. Bolic [0083].

As per claim 2, the rejection of claim 1 is incorporated, and furthermore Van discloses: wherein the first information comprises a first virtual machine physical address of the to-be-accelerated data, a length of the to-be-accelerated data, and a second virtual machine physical address for storing an acceleration result: [0059] "source IP addresses, source port numbers, destination IP addresses, destination port numbers, expected packet sequence numbers and byte offsets, and, the corresponding physical memory addresses where headers and data should go for the aforementioned (source, destination, byte offset) tuple"; [0060] "enabling the host O/S to retrieve a data (D) payload directly from a guest process 55 hosted by a guest O/S and retrieve a packet header portion (H) directly from a kernel buffer of the associated guest O/S 50 and accordingly assemble one or more packets or packet segments, depending upon the size of the payload."

As per claim 3, the rejection of claim 1 is incorporated, and furthermore Van discloses: wherein determining the third information comprises determining, according to a virtual machine physical address of the to-be-accelerated data in the first information and a preset mapping relationship between the virtual machine physical address and a host physical address of the to-be-accelerated data, the host physical address: [0020] "a) receiving a physical memory address location known from the perspective of a guest O/S and corresponding to a guest process operating under control of the guest O/S which is to receive data from a network packet or provide data for assembly in a network packet"; [0021] "b) performing an address translation to obtain from the physical memory address location known from the perspective of the guest O/S, a corresponding physical memory location accessible by the host O/S";

wherein sending the third information comprises sending, to the hardware accelerator, the host physical address, and wherein the method further comprises obtaining, by the hardware accelerator, according to the host physical address, and from a virtual machine memory corresponding to the virtual machine physical address, the to-be-accelerated data: [0022] "c) enabling host O/S access to the corresponding physical memory location for one of: copying data directly thereto from a received network packet or accessing data located thereat for assembly into a network packet"; [0031] "informing the network interface device to forward the data portion of the network packet directly to the corresponding physical memory location of the guest process".

Examiner interpretation: the physical address used by the host (hardware acceleration enabled) is the result of translating the guest virtual address as in Figs. 4-5, which is mapped to a corresponding physical address of the actual hardware memory/disk ([0069]-[0070]).

As per claim 4, the rejection of claim 3 is incorporated, and furthermore Van discloses: further performing, by the target VF, the acceleration operation on the to-be-accelerated data: [0031] "informing the network interface device to forward the data portion of the network packet directly to the corresponding physical memory location of the guest process"; but not explicitly: supporting, by the hardware accelerator, a plurality of virtual functions (VFs); querying, after determining the host physical address and from a preset binding relationship between a virtual accelerator and one of the VFs, a target VF bound to the target virtual accelerator; further sending, to the target VF, the host physical address; further obtaining, by the target VF, according to the host physical address, and from the virtual machine memory, the to-be-accelerated data.

Bolic discloses: supporting, by the hardware accelerator, a plurality of virtual functions (VFs): [0037] "An accelerator application controller 330 may direct the input streaming data from the DMA controller to first accelerator application (app1) 310 or second accelerator application (app2) 320; multiplex first accelerator application (app1) 310 and second accelerator application (app2) 320 to use the DMA write channel; maintain the accelerator status word; raise an interrupt when needed.";

querying, after determining the host physical address and from a preset binding relationship between a virtual accelerator and one of the VFs, a target VF bound to the target virtual accelerator: [0083] "The first and second shared memory spaces may include a group of physical memory pages used for user-kernel space data transfer in a VM and an inter-VM data transfer, and the first and second shared memory spaces may be accessed by the hardware acceleration module for data fetching and writing back."; [0088] "The first and second access requests for the first and second accelerator applications may be executed in a direct memory access (DMA) read operation stage, an accelerator application computation stage, and a DMA write operation stage, where the DMA read operation stage and the DMA write operation stage may be implemented substantially simultaneously. The scheduler module may be configured to schedule the first VM access request and the second VM access request based on a first memory access context associated with the first accelerator application and/or a second memory access context associated with the second accelerator application.";

further sending, to the target VF, the host physical address: [0088] "The first memory access context and the second memory access context may be based on RCBs associated with the first VM or the second VM accessing the first accelerator application and the second accelerator application, respectively. The scheduler module may be configured to schedule the first VM access request and the second VM access request through insertion of the first access request in a first request queue associated with the first accelerator application and insertion of the second access request in a second request queue associated with the second accelerator application";

further obtaining, by the target VF, according to the host physical address, and from the virtual machine memory, the to-be-accelerated data: [0042] "(1) a process in a guest VM may specify the application number and the data size in the command channel, which may be mapped to the process virtual address space through a system call (e.g., mmap); (2) the process may directly put data in the shared data pool, which may also be mapped to the process virtual address space; (3) the process may notify a frontend driver (207 or 211) in the guest VM's kernel space that data is ready, and transition to a Sleep state; (4) the frontend driver (207 or 211) in the guest VM's kernel space may send a notification to a backend driver 305 in the privileged VM's kernel space; (5) the frontend driver (207 or 211) may pass the request to a device driver 307 in the VM's kernel space, and the device driver 307 may set the DMA transfer data size and the accelerator application number according to a parameter obtained from the command channel; (6) the device driver may initiate the start of the DMA controller in the FPGA accelerator; (7) the DMA controller may transfer the data to the FPGA accelerator in a pipelined way to perform a computation; (8) the DMA controller may transfer the results of the computation back to the data pool".

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].

As per claim 5, the rejection of claim 4 is incorporated. Van does not disclose: further comprising adding, to the Vring, according to a first identifier of the target virtual accelerator in the preset binding relationship, and after the to-be-accelerated data undergoes acceleration processing, a second identifier of an entry to the Vring. Bolic discloses this limitation: [0049] "Request states may include, for example, DMA READ, DMA WRITE, and DMA FIN (DMA finished). The application number may specify which accelerator application the request needs to use on the hardware acceleration module (e.g., 0—app0, 1—app1, etc.). The total buffer number may specify a total number of buffer fragments used by the request. The current buffer number may specify a current buffer fragment that needs to be transferred to the hardware acceleration module. The next request pointer may point to the next request in the queue."

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].

As per claim 6, the rejection of claim 4 is incorporated. Van does not disclose: wherein before obtaining the first information, the method further comprises: selecting, from the plurality of VFs, an unused target VF; and establishing, between the target virtual accelerator and the unused target VF, a binding relationship. Bolic discloses this limitation: [0031] "A configuration controller 214 may be configured to load one or more hardware accelerators (e.g., as one or more configure or configuration files, described in more detail below) onto the hardware acceleration module 218. In some embodiments, each hardware accelerator loaded on the hardware acceleration module 218 may be associated with one or more applications implemented on the virtual machines."

Examiner interpretation: as the hardware accelerators are created/loaded in the hardware accelerator 218 of Figs. 2-3, each is unused and is associated after creation with the VM 204/208. See also [0032], for creating a list of modules/accelerators in hardware accelerator 218.

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].

As per claim 7, the rejection of claim 3 is incorporated. Van does not disclose: wherein before sending the host physical address, the method further comprises recording a first identifier of the target virtual accelerator. Bolic discloses this limitation: [0045] "A VM (VM1 204 or VM2 208) may notify the coprovisor 302 via an event channel 402, so the request inserter 410—responsible for inserting requests from the VMs into the corresponding request queues—may be invoked when an event notification is received at a backend driver (backend driver 305 in FIG. 3)"; [0049] "The application number may specify which accelerator application the request needs to use on the hardware acceleration module (e.g., 0—app0, 1—app1, etc.). The total buffer number may specify a total number of buffer fragments used by the request."

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].

As per claim 8, the rejection of claim 7 is incorporated. Van does not disclose: further comprising adding, according to the first identifier and after the to-be-accelerated data undergoes acceleration processing, a second identifier of an entry to the Vring. Bolic discloses this limitation: [0092] "The first memory access context and the second memory access context may be based on RCBs associated with the first VM or the second VM accessing the first accelerator application and the second accelerator application, respectively. The controller may be configured to schedule the first VM access request and the second VM access request through insertion of the first access request in a first request queue associated with the first accelerator application and insertion of the second access request in a second request queue associated with the second accelerator application".

Examiner interpretation: each access request has an RCB that includes an identifier of the accelerator. An RCB may be a data structure containing the information needed to schedule a request using DMA, and maintained by a coprovisor ([0034]). The application number may specify which accelerator application the request needs to use on the hardware acceleration module (e.g., 0—app0, 1—app1, etc.) ([0049]-[0050]).

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].
As per claim 9, the rejection of claim 1 is incorporated, and furthermore Van discloses: wherein determining the third information comprises: determining, according to a virtual machine physical address of the to-be-accelerated data in the first information: [0020] "a) receiving a physical memory address location known from the perspective of a guest O/S and corresponding to a guest process operating under control of the guest O/S which is to receive data from a network packet or provide data for assembly in a network packet"; [0021] "b) performing an address translation to obtain from the physical memory address location known from the perspective of the guest O/S, a corresponding physical memory location accessible by the host O/S";

according to a first preset mapping relationship between the virtual machine physical address and a host virtual address of the to-be-accelerated data in a virtual machine memory, the host virtual address: [0070] "As shown in FIG. 4, the virtual address 202 of a socket buffer in virtual memory that is associated with the guest process 55, and in which the host O/S accesses when emulating hardware-accelerated TCP or like network I/O hardware acceleration-assist technology for sending and receiving data, needs to be first translated into what the guest O/S thinks is a physical address. This requires determining at step 204 whether, from the perspective of the guest O/S, the virtual address is resident in virtual memory space of the guest O/S. If the virtual address is not resident from the perspective of the guest O/S 50, the address is made resident at step 208 which may be accomplished using virtual memory management techniques implementing address translation tables, as well known to skilled artisans."

Examiner interpretation: "A virtual machine environment may call for application: first at the guest level, to translate a guest virtual address through guest managed translation tables into a guest real address, and then, for a pageable guest, at the host level, to translate the corresponding host virtual address to a host real address" (Gained et al., US 2015/0269004 A1).

copying, to a host memory buffer and according to the host virtual address, the to-be-accelerated data: [0058] "program modules and application interfaces 45 enable the host O/S to perform the necessary virtual memory address translations enabling the host O/S network stack to access the socket buffer (or the user space buffer) of an executing process inside the guest O/S, including delivering data directly to the socket buffer in the case of receiving data or, removing data placed at the socket buffer by the guest process when sending data. In the embodiment depicted in FIG. 2(a), the NIC network interface controller device 20 is not provided with accelerated network I/O hardware assist technology such as accelerated TCP (IOAT) or the like."

Examiner interpretation: in Fig. 2(a), the host 12 receives packet data from the network that needs to be forwarded to the guest. The translation is from the physical address of the host to the virtual address of the guest in order to access the data directly without copying.

and determining, according to the host virtual address and a second preset mapping relationship between the host virtual address and a host physical address of the to-be-accelerated data in the host memory buffer, the host physical address: [0058], quoted above.

Examiner interpretation: any process executing in the host has a corresponding virtual address (first address) that maps to a host physical address. After a process finishes, its resources are reclaimed by the host and the old addresses are remapped to a new process (i.e., mapping: US 2006/0206658 A1).

and further obtaining, by the hardware accelerator, from the host memory buffer, and according to the host physical address, the to-be-accelerated data: Fig. 2(a) and [0059], quoted above (the host O/S 12 receives a sub-set of network state information 30 from the guest process, permitting it to analyze the header portion of an arrived packet 25, apply firewall rules, and determine a virtual memory address associated with a target guest process 55 that is to receive the network packet data payloads);

wherein sending the third information comprises sending, to the hardware accelerator, the host physical address: [0020]-[0021], quoted above.

As per claim 10, the rejection of claim 9 is incorporated. Van does not explicitly disclose: monitoring the Vring to obtain an acceleration result after acceleration processing on the to-be-accelerated data is completed. Bolic discloses this limitation: [0058] "In this case, the state of the request may be updated with DMA READ after initiating the DMA write operation. In another scenario, the request may have completed its data processing (current buffer number=total buffer number). In that case, the request may be marked with DMA FIN state, and the request remover may be invoked to remove this finished request from the related queue"; [0042] "(6) the device driver may initiate the start of the DMA controller in the FPGA accelerator; (7) the DMA controller may transfer the data to the FPGA accelerator in a pipelined way to perform a computation; (8) the DMA controller may transfer the results of the computation back to the data pool; (9) the DMA controller may send an interrupt to the device driver 307 when all the results are transferred to the data pool; (10) the backend driver 305 may send a notification to the frontend driver (207 or 211) that the results are ready; (11) the frontend driver (207 or 211) may wake up the process in sleep state; (12) the process may retrieve the results from the data pool".

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].

As per claim 11, the rejection of claim 10 is incorporated, and furthermore Van discloses: monitoring a vring-used table of the Vring in real time in a polling manner: [0061], quoted above (the host O/S 12 maintains a subset of the network state information 30 associated with the guest O/S, and the NIC hardware 21 itself may be provided, via the host O/S, with that subset);

obtaining, when the vring-used table is updated, an identifier of an entry in the vring-used table: [0066] "As firewall rules are applied before anything is done with the network packet, as shown in FIG. 3(a), this may involve synching of network state information 30 associated with the sending guest process 55a between the guest O/S 50a and the host O/S 12, just as network state information is synched between the host O/S and a network card having I/O hardware assist.";

obtaining, from a vring-desc table of the Vring and according to the identifier, the virtual machine physical address: [0061] "Thus, the NIC hardware 21, without intervention by the host O/S, is enabled to directly deliver the header portion (H) of the arrived packet 25, subject to application of firewall rules, to the kernel buffer in the guest O/S 50. The host O/S may perform an address translation to determine a physical memory address associated with a target guest process 55 which may also directly receive the network packet data payload from the NIC hardware 21."; [0070] "Once the virtual address associated with the guest process is made resident, i.e., is translated to a physical memory address from the perspective of the guest O/S at step 210, a further step 213 is implemented to ensure that this address remains resident from the perspective of the corresponding guest O/S 50. It is understood that steps 202-213 of FIG. 4 are performed by the guest O/S. Continuing to step 215, the physical memory address from the perspective of the guest O/S determined at step 210, in turn, needs to be translated into the actual physical address in hardware accessible by the host O/S.";

and obtaining, from the virtual machine memory corresponding to the virtual machine physical address, the acceleration result: [0077] "At step 504, the host O/S reads the header and performs a check at step 508 if the packet is allowed by firewall rules. If the packet is allowed by firewall rules, then a check is performed at step 510 to determine if the packet destination is on the same computer. If the packet destination is on the same computer, then the local packet delivery is performed as described in greater detail herein with respect to FIG. 6. If the packet destination is not on the same computer, then a network packet is delivered through the network interface card."

As per claim 12, the rejection of claim 10 is incorporated. Van does not explicitly disclose: copying, to the virtual machine memory after the to-be-accelerated data undergoes the acceleration processing, the acceleration result; and adding, to the Vring and according to a first identifier of the target virtual accelerator, a second identifier of an entry to the Vring. Bolic discloses: copying, to the virtual machine memory after the to-be-accelerated data undergoes the acceleration processing, the acceleration result: [0058] "In another scenario, the request may have completed its data processing (current buffer number=total buffer number). In that case, the request may be marked with DMA FIN state, and the request remover may be invoked to remove this finished request from the related queue"; [0042], steps (6)-(12), quoted above;

and adding, to the Vring and according to a first identifier of the target virtual accelerator, a second identifier of an entry to the Vring: [0049] "In some examples, the RCB may be a data stack and include a VM identifier, a port identifier, a request state, an application number, a total buffer number, a current buffer number, and a next request pointer. The VM identifier may denote the identifier of the VM from which the request originates. The port identifier may identify the port number of the event channel and may be used to notify the request's VM through the corresponding event channel. Request states may include, for example, DMA READ, DMA WRITE, and DMA FIN (DMA finished). The application number may specify which accelerator application the request needs to use on the hardware acceleration module (e.g., 0—app0, 1—app1, etc.)."

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].

As per claim 13, the rejection of claim 5 is incorporated. Van does not explicitly disclose: wherein after adding the second identifier, the method further comprises sending, to the virtual machine, an interrupt request carrying the first identifier to trigger the virtual machine to query, according to the first identifier, the second identifier and obtain an acceleration result generated after the to-be-accelerated data undergoes the acceleration processing. Bolic discloses: sending, to the virtual machine, an interrupt request carrying the first identifier to trigger the virtual machine to query, according to the first identifier, the second identifier: [0048] "To support DMA context switching, requests to access the hardware acceleration module 218 may need to become context-aware. Similar to the functionality of a process control block (PCB), each request may have its own request control block (RCB) which may be used by the co-provisor to set up a DMA executing context."; [0049], quoted above (the RCB fields, including the VM identifier, port identifier, request state, application number, buffer numbers, and next request pointer);

and obtaining an acceleration result generated after the to-be-accelerated data undergoes the acceleration processing: [0058] "In another scenario, the request may have completed its data processing (current buffer number=total buffer number). In that case, the request may be marked with DMA FIN state, and the request remover may be invoked to remove this finished request from the related queue"; [0042], steps (6)-(12), quoted above.

The same motivation to combine Bolic with Van, stated for claim 1 above, applies. Bolic [0083].

Claims 14, 15, 16, 17, 18, and 19 are the computer claims corresponding to method claims 1, 2, 3, 4, (5 and 6), and 7, and are rejected under the same rationale set forth in connection with the rejection of claims 1, 2, 3, 4, (5 and 6), and 7 above. Claim 20 is the computer program product claim corresponding to method claim 1 and is rejected under the same rationale set forth in connection with the rejection of claim 1 above.

Pertinent art:

US 2013/0152075 A1: An approach is provided in which a hardware accelerated bridge executing on a network adapter receives an ingress data packet. The data packet includes a destination MAC address that corresponds to a virtual machine, which interfaces to a software bridge executing on a hypervisor. The hardware accelerated bridge identifies a software bridge table entry that includes the destination MAC address and a virtual function identifier, which identifies a virtual function corresponding to the software bridge.

US 2011/0295967 A1: In such an embodiment, a bus and an accelerator are coupled to one another. The accelerator has an application function block. The application function block is to process data to provide processed data to storage. A network interface is coupled to obtain the processed data from the storage for transmission.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRAHIM BOURZIK, whose telephone number is (571) 270-7155. The examiner can normally be reached Monday-Friday, 8:00-4:30.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Wei Y. Mui, can be reached at 571-270-2738. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRAHIM BOURZIK/
Examiner, Art Unit 2191

/WEI Y MUI/
Supervisory Patent Examiner, Art Unit 2191
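For readers tracking the claim language above: the claims and the examiner's mapping turn on a virtio-style ring with a descriptor table ("vring-desc"), a completion table ("vring-used"), and a preset guest-physical to host-physical address mapping. A minimal sketch of that structure (Python, illustrative only; field names follow the Linux virtio convention, and the addresses are hypothetical, not the application's actual data structures):

```python
from dataclasses import dataclass, field

@dataclass
class VringDesc:
    addr: int       # guest-physical ("virtual machine physical") address of the buffer
    length: int     # length of the to-be-accelerated data
    flags: int = 0

@dataclass
class VringUsedElem:
    id: int         # index of the consumed descriptor (the claims' "second identifier")
    length: int     # bytes written back (e.g., the acceleration result)

@dataclass
class Vring:
    desc: list = field(default_factory=list)   # the "vring-desc" table
    used: list = field(default_factory=list)   # the "vring-used" table
    last_seen: int = 0

    def poll_used(self):
        """Claim 11's polling step: watch the used table for new completions."""
        new = self.used[self.last_seen:]
        self.last_seen = len(self.used)
        return new

# Claim 3's translation step, modeled as a plain lookup in a preset mapping
# from guest-physical to host-physical addresses (hypothetical values).
gpa_to_hpa = {0x1000: 0x7f00_0000}

def to_host_physical(desc: VringDesc) -> int:
    return gpa_to_hpa[desc.addr]

ring = Vring()
ring.desc.append(VringDesc(addr=0x1000, length=4096))
ring.used.append(VringUsedElem(id=0, length=4096))  # accelerator reports completion
assert to_host_physical(ring.desc[0]) == 0x7f00_0000
assert [e.id for e in ring.poll_used()] == [0]
```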

Prosecution Timeline

Jan 26, 2024
Application Filed
Jan 12, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585459: UPDATING SYSTEM, ELECTRONIC CONTROL UNIT, UPDATING MANAGEMENT DEVICE, AND UPDATING MANAGEMENT METHOD. Granted Mar 24, 2026 (2y 5m to grant).
Patent 12578931: INTELLIGENT AND EFFICIENT PIPELINE MANAGEMENT. Granted Mar 17, 2026 (2y 5m to grant).
Patent 12566600: LIMITED USE LINKS FOR DATA ITEM DISTRIBUTION. Granted Mar 03, 2026 (2y 5m to grant).
Patent 12561228: Optimal Just-In-Time Trace Sizing for Virtual Machines. Granted Feb 24, 2026 (2y 5m to grant).
Patent 12554625: TESTING CONTINUOUS INTEGRATION AND CONTINUOUS DEPLOYMENT (CI/CD) PIPELINE. Granted Feb 17, 2026 (2y 5m to grant).
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview: 99% (+45.0%)
Median Time to Grant: 3y 7m
PTA Risk: Low
Based on 376 resolved cases by this examiner. Grant probability is derived from the career allow rate.
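How these projections fit together is not spelled out on the page; a minimal sketch of one plausible reading (Python; treating the lift as additive percentage points and capping at the displayed 99% are both assumptions, not the product's documented model):

```python
# Illustrative only: one plausible reading of the headline projections.
BASE = 245 / 376   # career allow rate ≈ 0.65 ("Grant Probability")
LIFT = 0.45        # interview lift, read as percentage points (assumption)

# The page shows 99% with interview; the cap itself is an assumption.
with_interview = min(BASE + LIFT, 0.99)
print(f"{BASE:.0%} base, {with_interview:.0%} with interview")
```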
