Prosecution Insights
Last updated: April 19, 2026

Application No. 18/857,174
DATA PROCESSING METHOD AND APPARATUS
Status: Final Rejection (§103)
Filed: Oct 15, 2024
Examiner: BOWEN, RICHARD L
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING VOLCANO ENGINE TECHNOLOGY CO., LTD.
OA Round: 2 (Final)

Predicted outcome:
Grant Probability: 80% (Favorable)
OA Rounds: 3-4
Time to Grant: 2y 10m
Grant Probability With Interview: 99%
Examiner Intelligence

Career Allow Rate: 80% (above average): 437 granted / 544 resolved, +25.3% vs TC avg
Interview Lift: +27.7% in resolved cases with interview
Typical Timeline: 2y 10m avg prosecution; 14 applications currently pending
Career History: 558 total applications across all art units

Statute-Specific Performance

§101: 14.5% (-25.5% vs TC avg)
§103: 41.1% (+1.1% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 544 resolved cases.

Office Action (Final Rejection under §103)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on January 21, 2026 and January 23, 2026 are being considered by the examiner.

Response to Arguments

Applicant's arguments with respect to claims 1-12, 14, 16 and 18-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 14 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Yan et al. (CN 114285676 A, hereinafter referred to as "Yan"; the examiner cites to a machine translation for the mapping, which has been provided along with this Office Action.
It is noted this reference has been previously cited in Applicant's IDS) in view of Piao (U.S. Publication No. 2025/0184224 A1, hereinafter referred to as "Piao").

Regarding claim 1, Yan discloses a data processing method, comprising: (e.g., abstract and page 1)

receiving, by a network card module in a smart network card, a data operation request sent from a client; ("In a first aspect, the present disclosure provides an intelligent network card, comprising: a receiving module, a request processing module, a storage control module and a reply module; the receiving module is used for receiving the request sent by the client, and sending the request to the request processing module;" - the receiving module/request processing module is considered to be a network card module) (e.g., page 1 of translation)

calling a request analysis module in the smart network card to parse the data operation request to obtain data to be processed and data operation type information; ("a request processing module, for analyzing the request, obtaining the type of the request and the content of the request, and sending the type and content to the storage control module; a storage control module, used for processing the request according to the type and content," "the request processing module is used for analyzing the request, obtaining the type of the request and the content of the request, and sending the type and the content to the storage control module;") (e.g., abstract and page 1 of translation)

inputting the data to be processed and the data operation type information to an execution engine module in the smart network card; ("the intelligent network card comprises: …a storage control module" "the storage control module is used for processing the request according to the type and the content, obtaining the processing result, and sending the processing result to the reply module;") (e.g., abstract and page 1 of translation)

calling the execution engine module in the smart network card to perform, based on the data to be processed, a data operation indicated by the data operation type information to obtain a data operation result; ("the intelligent network card comprises: …a storage control module" "the storage control module is used for processing the request according to the type and the content, obtaining the processing result, and sending the processing result to the reply module;") (e.g., abstract and page 1 of translation)

sending, by the network card module in the smart network card, the response to the data operation request to the client. ("the reply module returns the processing result to the client." - the receiving module/request processing module is considered to be a network card module) (e.g., abstract and page 1 of translation)

However, Yan does not appear to specifically disclose calling the request analysis module in the smart network card to encapsulate the data operation result to obtain a response to the data operation request.

On the other hand, Piao, which relates to a virtual resource processing method and apparatus (title), does disclose calling the request analysis module in the smart network card to encapsulate the data operation result to obtain a response to the data operation request. ("encapsulating the parsing result according to a preset network transmission protocol and sending an encapsulation result to a storage server via the physical network card, where the encapsulation result is parsed by the storage server which returns to-be-read data of the target virtual machine according to the parsing result." "the physical network card encapsulates the parsing result according to a preset network transmission protocol, and sends an encapsulation result to a storage server, where the encapsulation result is parsed by the storage server which returns to-be-read data of the target virtual machine according to the parsing result.") (e.g., paragraphs [0018] and [0100])

Yan discloses an intelligent network card, a network storage method of the intelligent network card, and a medium. Yan discloses the intelligent network card to include a receiving module, a request processing module, a storage control module and a reply module, which improves the speed of the network storage, reduces the delay of the data read-write and avoids the waste of resources in the CPU. However, Yan does not appear to specifically disclose the encapsulation. On the other hand, Piao discloses that the network card can encapsulate the data according to a preset network transmission protocol, which provides an enhanced manner to ensure that the transmission of data is consistent with the transmission capacity of the network, along with providing security of the data (e.g., paragraphs [0003]-[0004]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant's claimed invention to incorporate the encapsulation of data as disclosed in Piao into Yan to further provide the benefit of how the data is transmitted, ensuring data is provided considering bandwidth requirements and providing enhanced security.
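As a technical aside, the claim 1 pipeline mapped above (receive a request at the network card module, parse it in a request analysis module, execute it in an execution engine module, encapsulate the result, and reply) can be sketched in Python. Every class name, method, and wire format below is an illustrative assumption; nothing here is taken from Yan or Piao:

```python
from dataclasses import dataclass

# Hypothetical sketch of the claim 1 flow: receive -> parse -> execute ->
# encapsulate -> reply. The one-byte opcode and length-prefixed response
# framing are invented for illustration only.

@dataclass
class OperationRequest:
    payload: bytes   # data to be processed
    op_type: str     # data operation type information ("read" / "write")

class RequestAnalysisModule:
    def parse(self, raw: bytes) -> OperationRequest:
        # First byte encodes the operation type; the rest is the data field.
        op = "read" if raw[0] == 0 else "write"
        return OperationRequest(payload=raw[1:], op_type=op)

    def encapsulate(self, result: bytes) -> bytes:
        # Wrap the operation result in a length-prefixed response frame,
        # standing in for a "preset network transmission protocol".
        return len(result).to_bytes(4, "big") + result

class ExecutionEngineModule:
    def __init__(self) -> None:
        self.store: dict[bytes, bytes] = {}

    def execute(self, req: OperationRequest) -> bytes:
        if req.op_type == "write":
            key, _, value = req.payload.partition(b"=")
            self.store[key] = value
            return b"OK"
        return self.store.get(req.payload, b"")

class SmartNic:
    """Network card module: receives the raw request, returns the response."""
    def __init__(self) -> None:
        self.parser = RequestAnalysisModule()
        self.engine = ExecutionEngineModule()

    def handle(self, raw: bytes) -> bytes:
        req = self.parser.parse(raw)            # call request analysis module
        result = self.engine.execute(req)       # call execution engine module
        return self.parser.encapsulate(result)  # encapsulate the result

nic = SmartNic()
nic.handle(b"\x01k=v")                          # write k=v
assert nic.handle(b"\x00k") == (1).to_bytes(4, "big") + b"v"
```

The point of the sketch is only the division of labor among the three modules; the actual claim is agnostic to opcode layout and framing.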
Regarding claim 14, Yan discloses an electronic device, comprising: a memory and a processor, wherein the memory is configured to store computer program instructions; and the processor is configured to execute the computer program instructions to cause the electronic device to: ("the third aspect, the present disclosure further provides a computer readable storage medium which is stored with a computer program, the program is executed by a processor to realize any one of the intelligent network card network storage method in the disclosed embodiments.") (e.g., abstract and page 1 of translation)

receive, by a network card module in a smart network card, a data operation request sent from a client; ("In a first aspect, the present disclosure provides an intelligent network card, comprising: a receiving module, a request processing module, a storage control module and a reply module; the receiving module is used for receiving the request sent by the client, and sending the request to the request processing module;" - the receiving module/request processing module is considered to be a network card module) (e.g., page 1 of translation)

call a request analysis module in the smart network card to parse the data operation request to obtain data to be processed and data operation type information; ("a request processing module, for analyzing the request, obtaining the type of the request and the content of the request, and sending the type and content to the storage control module; a storage control module, used for processing the request according to the type and content," "the request processing module is used for analyzing the request, obtaining the type of the request and the content of the request, and sending the type and the content to the storage control module;") (e.g., abstract and page 1 of translation)

input the data to be processed and the data operation type information to an execution engine module in the smart network card; ("the intelligent network card comprises: …a storage control module" "the storage control module is used for processing the request according to the type and the content, obtaining the processing result, and sending the processing result to the reply module;") (e.g., abstract and page 1 of translation)

call the execution engine module in the smart network card to perform, based on the data to be processed, a data operation indicated by the data operation type information to obtain a data operation result; ("the intelligent network card comprises: …a storage control module" "the storage control module is used for processing the request according to the type and the content, obtaining the processing result, and sending the processing result to the reply module;") (e.g., abstract and page 1 of translation)

send, by the network card module in the smart network card, the response to the data operation request to the client. ("the reply module returns the processing result to the client." - the receiving module/request processing module is considered to be a network card module) (e.g., abstract and page 1 of translation)

However, Yan does not appear to specifically disclose call the request analysis module in the smart network card to encapsulate the data operation result to obtain a response to the data operation request.

On the other hand, Piao, which relates to a virtual resource processing method and apparatus (title), does disclose call the request analysis module in the smart network card to encapsulate the data operation result to obtain a response to the data operation request. ("encapsulating the parsing result according to a preset network transmission protocol and sending an encapsulation result to a storage server via the physical network card, where the encapsulation result is parsed by the storage server which returns to-be-read data of the target virtual machine according to the parsing result." "the physical network card encapsulates the parsing result according to a preset network transmission protocol, and sends an encapsulation result to a storage server, where the encapsulation result is parsed by the storage server which returns to-be-read data of the target virtual machine according to the parsing result.") (e.g., paragraphs [0018] and [0100])

It would have been obvious to combine Piao with Yan for the same reasons as provided in claim 1, above.

Regarding claim 16, Yan discloses a non-transitory readable storage medium, comprising: computer program instructions which, when executed by at least one processor of an electronic device, cause the electronic device to: ("the third aspect, the present disclosure further provides a computer readable storage medium which is stored with a computer program, the program is executed by a processor to realize any one of the intelligent network card network storage method in the disclosed embodiments.") (e.g., abstract and page 1 of translation)

receive, by a network card module in a smart network card, a data operation request sent from a client; ("In a first aspect, the present disclosure provides an intelligent network card, comprising: a receiving module, a request processing module, a storage control module and a reply module; the receiving module is used for receiving the request sent by the client, and sending the request to the request processing module;" - the receiving module/request processing module is considered to be a network card module) (e.g., page 1 of translation)

call a request analysis module in the smart network card to parse the data operation request to obtain data to be processed and data operation type information; ("a request processing module, for analyzing the request, obtaining the type of the request and the content of the request, and sending the type and content to the storage control module; a storage control module, used for processing the request according to the type and content," "the request processing module is used for analyzing the request, obtaining the type of the request and the content of the request, and sending the type and the content to the storage control module;") (e.g., abstract and page 1 of translation)

input the data to be processed and the data operation type information to an execution engine module in the smart network card; ("the intelligent network card comprises: …a storage control module" "the storage control module is used for processing the request according to the type and the content, obtaining the processing result, and sending the processing result to the reply module;") (e.g., abstract and page 1 of translation)

call the execution engine module in the smart network card to perform, based on the data to be processed, a data operation indicated by the data operation type information to obtain a data operation result; ("the intelligent network card comprises: …a storage control module" "the storage control module is used for processing the request according to the type and the content, obtaining the processing result, and sending the processing result to the reply module;") (e.g., abstract and page 1 of translation)

send, by the network card module in the smart network card, the response to the data operation request to the client. ("the reply module returns the processing result to the client." - the receiving module/request processing module is considered to be a network card module) (e.g., abstract and page 1 of translation)

However, Yan does not appear to specifically disclose call the request analysis module in the smart network card to encapsulate the data operation result to obtain a response to the data operation request.

On the other hand, Piao, which relates to a virtual resource processing method and apparatus (title), does disclose call the request analysis module in the smart network card to encapsulate the data operation result to obtain a response to the data operation request. ("encapsulating the parsing result according to a preset network transmission protocol and sending an encapsulation result to a storage server via the physical network card, where the encapsulation result is parsed by the storage server which returns to-be-read data of the target virtual machine according to the parsing result." "the physical network card encapsulates the parsing result according to a preset network transmission protocol, and sends an encapsulation result to a storage server, where the encapsulation result is parsed by the storage server which returns to-be-read data of the target virtual machine according to the parsing result.") (e.g., paragraphs [0018] and [0100])

It would have been obvious to combine Piao with Yan for the same reasons as provided in claim 1, above.

Claims 2, 18 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Piao and in further view of Duraisamy et al. (U.S. Publication No. 2023/0344921 A1, hereinafter referred to as "Duraisamy").

Regarding claim 2, Yan in view of Piao discloses the data processing method according to claim 1.
Yan discloses read-write requests; however, neither reference appears to specifically disclose wherein the calling the request analysis module to encapsulate the data operation result to obtain the response to the data operation request comprises: calling the request analysis module to update a target field in a data structure corresponding to the data operation request based on the data operation result to obtain the response to the data operation request.

On the other hand, Duraisamy, which relates to methods for UDP network traffic routing to distributed data centers via cloud VPN (title), does disclose wherein the calling the request analysis module to encapsulate the data operation result to obtain the response to the data operation request comprises: calling the request analysis module to update a target field in a data structure corresponding to the data operation request based on the data operation result to obtain the response to the data operation request. ("Once the backend server-side MUX channel 330 is established with the connector 405, the cloud VPN 175 can update backend server-side MUX channel 330 to the lookup table 410 with data center 350. VPN server 195 can then forward the encapsulated UDP packet 320 over the backend server-side MUX channel." "Once the settings are established, the system may then perform UDP routing, based on the updated configuration 810 corresponding to, or within the, routing table 805.") (e.g., figure 8 and paragraphs [0102] and [0136]-[0138])

It would have been obvious to combine Piao with Yan for the same reasons as provided in claim 1, above. However, neither reference appears to specifically disclose calling the request analysis module to update a target field in a data structure corresponding to the data operation request based on the data operation result to obtain the response to the data operation request.

On the other hand, Duraisamy discloses that it is beneficial to encapsulate the data as an effective way for data to be processed and updated via a cloud and VPN. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the encapsulation of data as provided in Duraisamy into the Yan-Piao combination to provide an effective manner for the data to be processed via VPN and cloud resources, which provides enhanced privacy and safe remote access, while the use of cloud computing provides scalable, flexible and cost-efficient access to resources, data storage and advanced services.

Claims 18 and 21 have substantially similar limitations as stated in claim 2; therefore, they are rejected on the same basis.

Claims 3-10, 19, 20, 22 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Piao and in further view of Wan, Mingxiang (WO 2022/156650 A1, hereinafter referred to as "Wan"; Examiner is relying on corresponding European Application EP 293 530 A1, which is being used as a convenient translation, and for purposes of the mapping, Examiner is citing to EP 293 530 A1).

Regarding claim 3, Yan in view of Piao discloses the data processing method according to claim 1. However, neither reference appears to specifically disclose wherein the receiving, by the network card module in the smart network card, the data operation request sent from the client comprises: receiving, by the network card module, a plurality of data operation requests that are sent from the client and have a same transaction identifier.
On the other hand, Wan, which relates to a data processing method, server and system (title), does disclose wherein the receiving, by the network card module in the smart network card, the data operation request sent from the client comprises: receiving, by the network card module, a plurality of data operation requests that are sent from the client and have a same transaction identifier. ("After receiving the first write request, the network interface card of the server may allocate an identifier to the second key-value pair. The identifier indicates a write sequence of the second key-value pair. In addition, the network interface card of the server may store the second key-value pair in the memory based on the identifier. Specifically, the network interface card of the server may allocate, in the memory, a write address to the second key-value pair, and send the identifier and the allocated write address (or indication information of the write address) to the client. After receiving the identifier and the write address, the client may add the identifier to the second key-value pair, for example, add the identifier to a second value of the second key-value pair, and send a second write request to the server via the network interface card. The second write request includes the write address and the second key-value pair to which the identifier is added. After receiving the second write request via the network interface card, the server may write the second key-value pair to which the identifier is added to a location corresponding to the write address in the second write request. In this way, a sequence of the key-value pair written to the server can be ensured.") (e.g., paragraphs [0074], [0077] and [0083])

It would have been obvious to combine Piao with Yan for the same reasons as provided in claim 1, above. However, neither reference appears to specifically disclose the request including multiple requests having a same transaction identifier.

On the other hand, Wan discloses that using identifiers for transactions is useful in order to ensure the sequence of the requests, to maintain information, and to provide an effective way to keep data together. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of Applicant's claimed invention to combine the use of identifiers as disclosed in Wan with the Yan-Piao combination to provide an effective manner to keep certain requests processed and stored together.

Regarding claim 4, Yan in view of Piao and in further view of Wan discloses the data processing method according to claim 3.

Wan further discloses wherein the calling the request analysis module in the smart network card to parse the data operation request to obtain the data to be processed and the data operation type information comprises: calling the request analysis module to respectively parse the plurality of data operation requests that have the same transaction identifier to obtain a plurality of data fields and a plurality of identical data operation type information items; and ("The identifier indicates a write sequence of the second key-value pair. In addition, the network interface card of the server may store the second key-value pair in the memory based on the identifier. Specifically, the network interface card of the server may allocate, in the memory, a write address to the second key-value pair, and send the identifier and the allocated write address (or indication information of the write address) to the client. After receiving the identifier and the write address, the client may add the identifier to the second key-value pair, for example, add the identifier to a second value of the second key-value pair, and send a second write request to the server via the network interface card. The second write request includes the write address and the second key-value pair to which the identifier is added.") (e.g., paragraphs [0074], [0077] and [0083])

calling the request analysis module to splice the plurality of data fields to be processed based on sequence indication information respectively comprised in the plurality of data operation requests to obtain the data to be processed. ("After receiving the second write request via the network interface card, the server may write the second key-value pair to which the identifier is added to a location corresponding to the write address in the second write request. In this way, a sequence of the key-value pair written to the server can be ensured.") (e.g., paragraphs [0074], [0075], [0077] and [0083])

Regarding claim 5, Yan in view of Piao and in further view of Wan discloses the data processing method according to claim 3.

Wan further discloses wherein the plurality of data operation requests that have the same transaction identifier have a same data structure. ("The client may send the one-sided RDMA read request to the server via the network interface card. The one-sided RDMA read request includes the address of the bucket. The server returns a bucket memory of one bucket to the client via the network interface card. The client may locate one slot of the bucket based on the slot identifier.") (e.g., paragraphs [0084] and [0104])

Regarding claim 6, Yan in view of Piao discloses the data processing method according to claim 1.
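Setting the legal mapping aside for a moment, the splicing mechanism recited in claims 3 and 4 above (collect requests that share a transaction identifier, then join their data fields in the order given by per-request sequence indication information) can be sketched as follows. The tuple layout and all names are illustrative assumptions, not drawn from Yan, Piao, or Wan:

```python
from collections import defaultdict

# Hypothetical sketch of claims 3-4: group operation requests by transaction
# identifier, then splice each group's data fields in sequence order.
def splice_requests(requests):
    """requests: iterable of (txn_id, seq_no, op_type, data_field) tuples."""
    groups = defaultdict(list)
    for txn_id, seq_no, op_type, field in requests:
        groups[txn_id].append((seq_no, op_type, field))
    spliced = {}
    for txn_id, parts in groups.items():
        parts.sort(key=lambda p: p[0])  # sequence indication information
        op_types = {p[1] for p in parts}
        # claim 4 expects identical operation type info within one transaction
        assert len(op_types) == 1, "same txn => identical operation type"
        spliced[txn_id] = (op_types.pop(), b"".join(p[2] for p in parts))
    return spliced

reqs = [("t1", 1, "write", b"lo w"),
        ("t1", 0, "write", b"hel"),
        ("t2", 0, "read", b"k")]
assert splice_requests(reqs)["t1"] == ("write", b"hello w")
```

Note that the requests arrive out of order and are reassembled purely from the sequence numbers, which is the property the claim language is driving at.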
However, neither reference appears to specifically disclose wherein the calling the execution engine module to perform, based on the data to be processed, the data operation indicated by the data operation type information to obtain the data operation result comprises: calling the execution engine module to determine, from an index structure stored in a memory comprised in the smart network card based on the data to be processed, a target index slot corresponding to the data to be processed; and calling the execution engine module to perform, on the target index slot, the data operation indicated by the data operation type information to obtain the data operation result. On the other hand, Wan, does disclose wherein the calling the execution engine module to perform, based on the data to be processed, the data operation indicated by the data operation type information to obtain the data operation result comprises: calling the execution engine module to determine, from an index structure stored in a memory comprised in the smart network card based on the data to be processed, a target index slot corresponding to the data to be processed; and (“After receiving an operation request from a client, a server accesses data in a memory of the server based on first index information and second index information in response to the operation request. The first index information includes a plurality of key-value pairs, and the second index information includes a plurality of key-value pair groups. One key-value pair group includes a keyword group and a value group. The keyword group corresponds to a plurality of keywords in the first index information, and the value group corresponds to a plurality of values in the first index information. The keyword group and the value group included in the key-value pair group are stored in a segment of continuous space of the memory. 
Therefore, when the client requests the server to query a specific keyword, the server can determine, based on the first index information and the second index information, a value group corresponding to a key word group corresponding to the keyword, and return, to the client, data in the continuous storage space storing the keyword group and the value group. In this way, when the server is the key-value database, range query of the key-value database is implemented, in other words, a plurality of key-value pairs are queried based on one keyword, thereby reducing communication overheads and improving communication efficiency.”)(e.g., paragraph [0048], [0084] and [0104]) calling the execution engine module to perform, on the target index slot, the data operation indicated by the data operation type information to obtain the data operation result. (“The client may send the-sided RDMA read request to the server via the network interface card. The-sided RDMA read request includes the address of the bucket. The server returns a bucket memory of one bucket to the client via the network interface card. The client may locate one slot of the bucket based on the slot identifier.”)(e.g., paragraphs [0084] and [0104]). It would have been obvious to combine Wan, Piao and Yan for the same reasons as provided in claim 3, above. Regarding claim 7, Yan in view of Piao and in further view of Wan discloses the data processing method according to claim 6. Wan further discloses wherein: the index structure is implemented by using a hash bucket, wherein the hash bucket comprises a plurality of index slots; and (“A server may configure the hash table on at least one segment of continuous memory in the memory, where each segment of continuous memory corresponds to one hash table. For example, each segment of continuous memory may be divided into a plurality of buckets (buckets), and one bucket is further divided into a plurality of slots (slots). 
One slot is configured to store a keyword group and a value group included in one key-value pair group, or is configured to store a plurality of pieces of pointer information corresponding to a plurality of keyvalue pairs corresponding to the keyword group and the value group.”)(e.g., paragraph [0079]) the calling the execution engine module to determine, from the index structure stored in the memory comprised in the smart network card based on the data to be processed, the target index slot corresponding to the data to be processed comprises: calling the execution engine module to perform hash calculation on the data to be processed to obtain a hash value, and match the hash value and the hash bucket in the index structure based on the hash value to obtain a successfully matched hash bucket; and (“Assuming that the preset algorithm is the hash algorithm, a process in which the network interface card determines the slot of the bucket based on the keyword group and the preset algorithm is as follows: The network interface card calculates a hash value corresponding to the keyword group by using the hash algorithm. The hash value includes an identifier of the directory, an offset value, and a slot identifier. The offset value indicates an offset value of a to-be-determined bucket relative to a first bucket in the bucket array. In this way, the network interface card may first locate the directory based on the identifier of the directory, to locate a start location of a bucket array corresponding to the directory, that is, an address of the first bucket. Then, the network interface card locates an address of one bucket based on the offset value and the start location of the bucket array. 
Finally, the network interface card locates a slot of the bucket based on the slot identifier and the address of the bucket.”)(e.g., paragraphs [0082]-[0084]) calling the execution engine module to perform matching in index slots comprised in the successfully matched hash bucket based on the data to be processed to obtain a matching result, and determine the target index slot based on the matching result. (“The hash value includes an identifier of the directory, an offset value, and a slot identifier. The offset value indicates an offset value of a to-be-determined bucket relative to a first bucket in the bucket array. In this way, the network interface card may first locate the directory based on the identifier of the directory, to locate a start location of a bucket array corresponding to the directory, that is, an address of the first bucket. Then, the network interface card locates an address of one bucket based on the offset value and the start location of the bucket array. Finally, the network interface card locates a slot of the bucket based on the slot identifier and the address of the bucket.”)(e.g., paragraphs [0082]-[0084]) Regarding claim 8, Yan in view of Piao and in further view of Wan discloses the data processing method according to claim 7. Wan further discloses wherein the calling the execution engine module to perform hash calculation on the data to be processed to obtain the hash value, and match in the index structure based on the hash value to obtain the successfully matched hash bucket comprises: calling the execution engine module to perform hash calculation on the data to be processed by using a plurality of preset hash algorithms respectively to obtain a plurality of hash values; and (“determines the slot of the bucket based on the keyword group and the preset algorithm is as follows: The network interface card calculates a hash value corresponding to the keyword group by using the hash algorithm. 
The hash value includes an identifier of the directory, an offset value, and a slot identifier. The offset value indicates an offset value of a to-be-determined bucket relative to a first bucket in the bucket array. In this way, the network interface card may first locate the directory based on the identifier of the directory, to locate a start location of a bucket array corresponding to the directory, that is, an address of the first bucket. Then, the network interface card locates an address of one bucket based on the offset value and the start location of the bucket array. Finally, the network interface card locates a slot of the bucket based on the slot identifier and the address of the bucket.”)(e.g., paragraphs [0082]-[0085]) calling the execution engine module to match the plurality of hash values with identifiers of hash buckets comprised in the index structure to obtain a plurality of successfully matched hash buckets. (“The server returns a bucket memory of one bucket to the client via the network interface card. The client may locate one slot of the bucket based on the slot identifier. If one slot includes the keyword group and the value group, the network interface card of the client may obtain the plurality of key-value pairs from the slot. If one slot includes the plurality of pieces of pointer information, the client may obtain the plurality of key-value pairs from the server by performing a one-sided RDMA read operation.”)(e.g., paragraphs [0083]-[0084]).

Regarding claim 9, Yan in view of Piao and in further view of Wan discloses the data processing method according to claim 7. 
Wan further discloses wherein the calling the execution engine module to perform matching in index slots comprised in the successfully matched hash bucket based on the data to be processed to obtain the matching result comprises: calling the execution engine module to perform matching in the successfully matched hash bucket based on the data to be processed; or calling the execution engine module to calculate fingerprint digest information corresponding to the data to be processed, and perform matching in the successfully matched hash bucket based on the fingerprint digest information. (it is noted that the claim uses “or”. “The server returns a bucket memory of one bucket to the client via the network interface card. The client may locate one slot of the bucket based on the slot identifier. If one slot includes the keyword group and the value group, the network interface card of the client may obtain the plurality of key-value pairs from the slot. If one slot includes the plurality of pieces of pointer information, the client may obtain the plurality of key-value pairs from the server by performing a one-sided RDMA read operation.”)(e.g., paragraphs [0083]-[0084]). Regarding claim 10, Yan in view of Piao and in further view of Wan discloses the data processing method according to claim 6. 
Wan further discloses wherein in response to the data operation request being a data read request, the calling the execution engine module to perform, on the target index slot, the data operation indicated by the data operation type information to obtain the data operation result comprises: in response to determining that both the data to be processed and target data corresponding to the data to be processed are stored in an inline manner, reading, from the target index slot, the target data indicated by the data to be processed; and (“A server may configure the hash table on at least one segment of continuous memory in the memory, where each segment of continuous memory corresponds to one hash table. For example, each segment of continuous memory may be divided into a plurality of buckets (buckets), and one bucket is further divided into a plurality of slots (slots). One slot is configured to store a keyword group and a value group included in one key-value pair group, or is configured to store a plurality of pieces of pointer information corresponding to a plurality of key-value pairs corresponding to the keyword group and the value group… One directory includes a first address of a bucket array and an address of an overflow bucket. Addresses of buckets included in the bucket array are continuous. The bucket includes a plurality of slots, and one slot is configured to store one key-value pair group or pointer information corresponding to the key-value pair group. Storage space in which the key-value pair group is located further stores a header (header), a bitmap, and a plurality of key-value pairs. 
The plurality of key-value pairs may be data, or may be pointer information of each key-value pair.”)(e.g., paragraphs [0079]-[0080]) in response to determining that the data to be processed is stored in the inline manner and the target data corresponding to the data to be processed is stored in a non-inline manner, or in response to determining that the data to be processed is stored in a non-inline manner, obtaining pointer information from the target index slot, and reading, from a memory of a server indicated by the pointer information, the target data corresponding to the data to be processed. (Examiner notes the use of “or” language. “When the hash table is stored in the memory of the server, and a client sends an operation request to the server, the server may access data in the memory of the server based on the hash table in response to the operation request. The operation request may be a query request, a one-sided RDMA read request, or a double-sided RDMA read request.” “If the slot includes a plurality of pieces of pointer information, the plurality of key-value pairs are first obtained based on the plurality of pieces of pointer information, and then returned to the client.”)(e.g., paragraphs [0081]-[0082]).

Claims 19, 20, 22 and 23 have substantially similar limitations as stated in claims 2, 3, 2 and 3, respectively; therefore, they are rejected under the same subject matter.

Claims 11-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yan in view of Piao in further view of Wan and in further view of Kohli et al. (U.S. 2020/0014688 A1, hereinafter referred to as “Kohli”).

Regarding claim 11, Yan in view of Piao and in further view of Wan discloses the data processing method according to claim 6. 
However, none of the references appears to specifically disclose wherein in response to the data operation request being a data delete request, the calling the execution engine module to perform, on the target index slot, the data operation indicated by the data operation type information to obtain the data operation result comprises: in response to determining that both the data to be processed and target data indicated by the data to be processed are stored in an inline manner, deleting the target index slot; and in response to determining that the data to be processed is stored in the inline manner and the target data indicated by the data to be processed is stored in a non-inline manner, or in response to determining that the data to be processed is stored in a non-inline manner, obtaining pointer information from the target index slot, deleting data in a memory of a server indicated by the pointer information, and releasing the target index slot. On the other hand, Kohli, which relates to a data processing unit with key value store (title), does disclose wherein in response to the data operation request being a data delete request, the calling the execution engine module to perform, on the target index slot, the data operation indicated by the data operation type information to obtain the data operation result comprises: in response to determining that both the data to be processed and target data indicated by the data to be processed are stored in an inline manner, deleting the target index slot; and (“in response to a request to perform an operation on data associated with a key: obtaining a lock on the key; determining, based on a hash of the key, a page associated with the key, wherein: the page associated with the key is in a set of pages stored in a volume, and each respective page of the one or more pages stores a respective part of an array of slots; after obtaining the lock on the key, obtaining a lock on the page associated with the key; after 
obtaining the lock on the page associated with the key: determining a slot associated with the key, wherein the part of the array of slots stored by the page associated with the key contains the slot associated with the key or contains keys used to determine the slot associated with the key; using the slot associated with the key to perform the operation on the data associated with the key, wherein the operation is a get operation, a put operation, or a delete operation;”)(e.g., paragraph [0006]) Wan in view of Kohli discloses in response to determining that the data to be processed is stored in the inline manner and the target data indicated by the data to be processed is stored in a non-inline manner, or in response to determining that the data to be processed is stored in a non-inline manner, obtaining pointer information from the target index slot, deleting data in a memory of a server indicated by the pointer information, and releasing the target index slot. (Examiner notes the use of “or” language. “When the hash table is stored in the memory of the server, and a client sends an operation request to the server, the server may access data in the memory of the server based on the hash table in response to the operation request. The operation request may be a query request, a one-sided RDMA read request, or a double-sided RDMA read request.” “If the slot includes a plurality of pieces of pointer information, the plurality of key-value pairs are first obtained based on the plurality of pieces of pointer information, and then returned to the client.”)(Wan: e.g., paragraphs [0081]-[0082])(“Furthermore, in the example of FIG. 11, key-value storage system 704 may read an address stored in the slot associated with the key (1106). Key-value storage system 704 may then use the address to determine a storage location in a second volume (e.g., LVS volume 720) (1108). 
Key-value storage system 704 may then perform the operation with respect to data in the storage location (1110). The operation may be a get operation, a put operation, or a delete operation, as described elsewhere in this disclosure.”)(Kohli: e.g., paragraphs [0006], [0073], [0074] and [0186]). It would have been obvious to combine Yan with Wan and Piao for the same reasons as provided in claim 3, above. Yan and Wan disclose an operation request that may be a query request, a read request, or the like; however, neither reference appears to specifically disclose the operation request being a deletion request. On the other hand, Kohli does disclose that it is known for key-value stores that requests may include deletion requests. This provides an effective way to update data where performing the updates typically requires a considerable amount of computational resources. E.g., paragraphs [0003]-[0004]. Therefore, it would have been obvious to incorporate delete requests as disclosed in Kohli into the Yan-Wan-Piao combination to allow the requests to include delete requests, providing an effective manner to remove data within a key-value store.

Regarding claim 12, Yan in view of Piao in further view of Wan and in further view of Kohli discloses the data processing method according to claim 11. Kohli further discloses wherein the deleting the data in the memory of the server indicated by the pointer information comprises: controlling, by the execution engine module, a memory management module of the smart network card to release the memory of the server indicated by the pointer information, wherein the memory management module is configured to manage the memory of the server. (Examiner notes the use of “or” language. 
“When the hash table is stored in the memory of the server, and a client sends an operation request to the server, the server may access data in the memory of the server based on the hash table in response to the operation request. The operation request may be a query request, a one-sided RDMA read request, or a double-sided RDMA read request.” “If the slot includes a plurality of pieces of pointer information, the plurality of key-value pairs are first obtained based on the plurality of pieces of pointer information, and then returned to the client.”)(Wan: e.g., paragraphs [0081]-[0082])(“Furthermore, in the example of FIG. 11, key-value storage system 704 may read an address stored in the slot associated with the key (1106). Key-value storage system 704 may then use the address to determine a storage location in a second volume (e.g., LVS volume 720) (1108). Key-value storage system 704 may then perform the operation with respect to data in the storage location (1110). The operation may be a get operation, a put operation, or a delete operation, as described elsewhere in this disclosure.”)(Kohli: e.g., paragraphs [0006], [0073], [0074] and [0186]).

Conclusion

The prior art made of record, listed on form PTO-892, and not relied upon is considered pertinent to applicant's disclosure. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD L BOWEN whose telephone number is (571)270-5982. The examiner can normally be reached Monday through Friday 7:30AM - 4:00PM EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at (571)270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RICHARD L BOWEN/Primary Examiner, Art Unit 2165
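The lookup scheme that the rejection maps from the quoted Wan passages (paragraphs [0082]-[0084]) resolves a single hash value into a directory identifier, a bucket offset relative to the first bucket in the directory's bucket array, and a slot identifier; each slot then holds either inline key-value data or pointer information into server memory (the inline versus non-inline split recited in claims 10 through 12). The following is a minimal toy sketch of that scheme, not code from any cited reference: the class, the table sizes, and the way the hash value is split into fields are all illustrative assumptions.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Illustrative sizes; the cited references do not specify these.
BUCKETS_PER_DIR = 256
SLOTS_PER_BUCKET = 8

@dataclass
class Slot:
    inline: bool = False
    data: Optional[dict] = None     # inline key-value data stored in the slot
    pointer: Optional[int] = None   # "pointer information" into server memory

class HashIndex:
    """Toy directory / bucket array / slot lookup, modeled on the procedure
    quoted from Wan: one hash value carries a directory identifier, a bucket
    offset, and a slot identifier."""

    def __init__(self, num_directories: int = 4):
        # Each directory owns a contiguous bucket array; each bucket holds slots.
        self.directories = [
            [[Slot() for _ in range(SLOTS_PER_BUCKET)]
             for _ in range(BUCKETS_PER_DIR)]
            for _ in range(num_directories)
        ]
        self.server_memory: dict = {}   # stands in for remote server memory

    def _hash(self, key: str):
        # Split one hash value into (directory id, bucket offset, slot id).
        h = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
        return (h % len(self.directories),
                (h >> 8) % BUCKETS_PER_DIR,
                (h >> 24) % SLOTS_PER_BUCKET)

    def locate(self, key: str) -> Slot:
        d, offset, s = self._hash(key)
        bucket_array = self.directories[d]  # start location of the bucket array
        bucket = bucket_array[offset]       # offset relative to the first bucket
        return bucket[s]                    # slot within the located bucket

    def read(self, key: str):
        slot = self.locate(key)
        if slot.inline:
            return slot.data                # target data served from the slot itself
        if slot.pointer is not None:
            # Non-inline: follow the pointer (stands in for a one-sided
            # RDMA read of the server's memory).
            return self.server_memory.get(slot.pointer)
        return None

    def delete(self, key: str):
        slot = self.locate(key)
        if not slot.inline and slot.pointer is not None:
            # Release the pointed-to server memory before releasing the slot.
            self.server_memory.pop(slot.pointer, None)
        slot.inline, slot.data, slot.pointer = False, None, None
```

Note the read/delete split: an inline entry resolves entirely within the slot, while a non-inline entry costs an extra dereference into `server_memory`, which parallels the distinction the rejection draws between reading from the target index slot and reading from the server memory indicated by the pointer information.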

Prosecution Timeline

Oct 15, 2024
Application Filed
Sep 09, 2025
Non-Final Rejection — §103
Dec 05, 2025
Response Filed
Mar 02, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602365
Method for Transmitting a Bloom Filter From a Transmitter Unit to a Receiver Unit
2y 5m to grant · Granted Apr 14, 2026
Patent 12597044
TRANSFORMING QUALITATIVE SURVEY INTO QUANTITATIVE SURVEY USING DOMAIN KNOWLEDGE AND NATURAL LANGUAGE PROCESSING
2y 5m to grant · Granted Apr 07, 2026
Patent 12596752
INFORMATION PROCESSING APPARATUS, CONTENT GENERATION SYSTEM, AND CONTROL METHOD
2y 5m to grant · Granted Apr 07, 2026
Patent 12585921
NODE SELECTION APPARATUS AND METHOD FOR MAXIMIZING INFLUENCE USING NODE METADATA IN NETWORK WITH UNKNOWN TOPOLOGY
2y 5m to grant · Granted Mar 24, 2026
Patent 12585699
SYSTEM, METHOD, AND COMPUTER PROGRAM FOR MULTIMODAL VIDEO RETRIEVAL
2y 5m to grant · Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
80%
Grant Probability
99%
With Interview (+27.7%)
2y 10m
Median Time to Grant
Moderate
PTA Risk
Based on 544 resolved cases by this examiner. Grant probability derived from career allow rate.
