Prosecution Insights
Last updated: April 19, 2026
Application No. 18/784,916

DATA PROCESSING SYSTEM METHOD AND APPARATUS, COMPUTER DEVICE AND STORAGE MEDIUM

Non-Final OA: §102, §103, §112
Filed: Jul 26, 2024
Examiner: KWONG, EDMUND H
Art Unit: 2137
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING VOLCANO ENGINE TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Grant Probability: 86% (Favorable)
OA Rounds: 1-2
To Grant: 2y 6m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 86% (above average; 280 granted / 324 resolved; +31.4% vs TC avg)
Interview Lift: +7.3% (moderate; resolved cases with interview vs without)
Avg Prosecution: 2y 6m (typical timeline; 17 currently pending)
Total Applications: 341 (career history, across all art units)

Statute-Specific Performance

§101: 8.2% (-31.8% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 21.5% (-18.5% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)
Based on career data from 324 resolved cases; Tech Center averages are estimates.

Office Action

§102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Application

This action is in response to Applicant’s filing on 22 August 2024. Claims 1-12 are presently pending and under consideration.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 22 August 2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Specification

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. The following title is suggested: SHARED TARGET MEMORY SPACE FOR ZERO COPY DATA SEGMENTING AND TRANSMISSION.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
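The arrangement behind the suggested title, in which several modules share read-write access to one target memory space so data moves between them with zero copying, can be illustrated with a minimal Python sketch. The buffer sizes and the module roles in the comments are hypothetical; Python’s `memoryview` merely stands in for shared hardware memory:

```python
# Hypothetical sketch of a shared "target memory space": each module holds
# a view into one buffer, so data passes between modules without copying.
target_memory = bytearray(64)                    # the shared target memory space
first_space = memoryview(target_memory)[:32]     # "first memory space"
second_space = memoryview(target_memory)[32:]    # "second memory space"

# Network card module: writes received data into the first memory space.
first_space[:11] = b"hello-world"

# Data management module: reads the very same bytes back; no copy is made
# until bytes() materializes them for inspection.
received = bytes(first_space[:11])
assert received == b"hello-world"

# Check information ("third data") lands in the second memory space.
second_space[:4] = len(received).to_bytes(4, "little")
```

Because every view aliases the same underlying buffer, the “data management module” reads exactly the bytes the “network card module” wrote, which is the effect the “zero copy” language refers to.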
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “data management module is configured to read” and “storage acceleration module is configured to control,” as in claim 1; “the storage acceleration module…is configured acquire….control…control,” as in claim 3; “the storage acceleration module…is configured to use,” as in claim 4; “the data management module determines,” as in claim 5; and “the data management module is configured to split...determine,..take…take…store,” as in claim 6. Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

A review of the specification indicates that the following portions appear to be the corresponding structure for the 35 U.S.C. 112(f) limitations: Fig. 1, disclosing data management module 120 and storage acceleration module 130 as part of data processing system 100; Fig. 5, disclosing data processing apparatus 500 including data splitting module 510, information determination module 520, data assembly module 530, and instruction sending module 540, with corresponding paragraph [0128] disclosing that the data management module and the data processing apparatus may be the same device; and Fig. 6, with corresponding paragraphs [0139]-[0141] disclosing “In the embodiment of the present application, the memory 620 is specifically configured to store an application code for executing the scheme of the present application, and the execution is controlled by the processor 610. That is, when the computer device 600 is running, the processor 610 communicates with the memory 620 through the bus 630, so that the processor 610 executes the application code stored in the memory 620, so as to execute the steps of the data processing method described in any of the foregoing embodiments”.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 2 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

It is unclear how the network card module, the data management module and the storage acceleration module “run in a same thread”. The specification describes at [0013] the result of running in the same thread: the modules “may access the same target memory space, so the data can be directly transmitted in the memory without the need for additional memory copying and data replication, facilitating efficient memory universality and zero-copy data transmission. Additionally, it eliminates the need for complex memory sharing settings, making it easy to implement.” However, it is unclear how three hardware devices or virtualized hardware devices may run in the same thread (where threads are generally understood to be lightweight processes, and the emulation of three hardware devices/modules appears to be suggestive of more resources than what can be provided by a single thread). For purposes of examination, “run in the same thread” is being interpreted by the resulting “zero-copy” data transmission.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 5-9 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Xu et al (US 2024/0354021 A1, hereinafter Xu).

Regarding claims 1, 7, and 8, taking claim 1 as exemplary, Xu discloses a data processing system, comprising a network card module, a data management module, a storage acceleration module and a hard disk (See Xu, Fig. 1, disclosing a smart NIC and a storage device; Fig. 11, disclosing processing module 112, calculating module 113, sending module 114 and reading module 115; [0029], disclosing “smart network card and the solid-state storage device”; and [0084], disclosing “a data processing apparatus according to an embodiment of the present application. The data processing apparatus includes: a receiving module 111 configured to receive a data stream; a processing module 112 configured to perform a segmenting process on the data stream to obtain a plurality of segmented data blocks; a calculating module 113 configured to respectively calculate check information corresponding to the plurality of segmented data blocks; a sending module 114 configured to send the segmented data blocks and corresponding check information, as a target logic block, to a storage device”. Examiner notes Applicant’s hard disk, defined at specification [0065], may be a Non-Volatile Memory express (NVMe) disk, and Xu at [0062] discloses an NVMe solid-state hard drive);

wherein the network card module, the data management module and the storage acceleration module have read-write access permission for a target memory space (See Xu, [0033]: the smart NIC receives the data stream and segments the data stream, which necessarily requires some temporary storage to receive and segment the stream, or in other words, read-write permission for the storage space; the smart NIC uses a processor to calculate the check information, such as Cyclic Redundancy Check (CRC) information, corresponding to each segmented block, which is added to the corresponding segmented data block to obtain a target storage logic block conforming to the size of logic blocks of the storage device, which also necessarily requires read-write permission);

the network card module is configured to receive first data transmitted by a network and store the first data in a first memory space of the target memory space (See Xu, [0032]: the data stream is received by the Smart NIC, which performs an overall check on the data stream, or in other words, the data must be at least temporarily stored/buffered in a target memory space);

the data management module is configured to read the first data from the first memory space and split the first data to obtain multiple segments of second data (See Xu, [0033]: the Smart NIC performs a segmenting process on the data stream to obtain a plurality of segmented data blocks; and [0084], disclosing “a processing module 112 configured to perform a segmenting process on the data stream to obtain a plurality of segmented data blocks”);

determine check information corresponding to each segment of the second data, and store the check information of each segment of the second data as third data in a second memory space of the target memory space (See Xu, [0033]: the Smart NIC uses a processor to calculate the check information, such as Cyclic Redundancy Check CRC information (“third data”), corresponding to each segmented data block (“second data”), and the check information obtained from the calculation is added to the corresponding segmented data block to obtain a target logic block, or in other words, is at least temporarily stored in a second space; and [0084], disclosing “a calculating module 113 configured to respectively calculate check information corresponding to the plurality of segmented data blocks”);

and generate a data assembly instruction based on a data assembly order for the second data and the third data (See Xu, [0035], “the Smart NIC combines the segmented data blocks with the check information to obtain target logic blocks”, or in other words, generation of an assembly instruction to construct the logic blocks for storage; and [0045], disclosing use of a generated descriptor for describing each segmented data block, including address information and a configuration process to obtain the target logic blocks);

wherein the data assembly instruction is used for generating fourth data with a target data structure, the target data structure indicates data source of each segment of subdata in the fourth data, and the subdata comprises the second data and the third data (See Xu, [0035], “the Smart NIC combines the segmented data blocks with the check information to obtain target logic blocks” (applicant’s “fourth data”); and [0045], disclosing use of a generated descriptor for describing each segmented data block, including address information and a configuration process to obtain the target logic blocks, or in other words, the target data structure of the segmented data blocks (second data) and check data (third data) is the source of the data for the created target logic blocks (fourth data), as described in applicant’s specification at [0094] and [0097]);

and the storage acceleration module is configured to control the hard disk to read the second data and the third data from the target memory space, and write the fourth data into the hard disk based on read data and the target data structure (See Xu, [0033]: the smart NIC uses a processor to calculate the check information, such as Cyclic Redundancy Check CRC information (“third data”), corresponding to each segmented block (“second data”), which is added to the corresponding segmented data block to obtain a target storage logic block conforming to the size of logic blocks of the storage device, so that the storage device can directly store the segmented data blocks carrying the check information; and [0084], “a sending module 114 configured to send the segmented data blocks and corresponding check information, as a target logic block, to a storage device”, the target logic block corresponding to “fourth data”).

Regarding claim 5, Xu disclosed the system according to claim 1 as described above.
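The claim 1 mapping above walks through splitting the first data into segments of a preset size (“second data”), computing per-segment check information (“third data”), and assembling segments and checks into target blocks (“fourth data”). A simplified Python sketch of that flow follows; it is hypothetical and not Xu’s or Applicant’s actual implementation, with `zlib.crc32` standing in for the CRC calculation and the segment size and padding scheme chosen for illustration:

```python
import zlib

SEGMENT_SIZE = 8  # illustrative "preset data segmentation size"

def split_and_check(first_data: bytes):
    """Split first_data into fixed-size segments, zero-padding the tail
    segment, and compute CRC32 check information for each segment."""
    segments, checks = [], []
    for i in range(0, len(first_data), SEGMENT_SIZE):
        seg = first_data[i:i + SEGMENT_SIZE]
        if len(seg) < SEGMENT_SIZE:                  # pad the tail segment
            seg = seg + b"\x00" * (SEGMENT_SIZE - len(seg))
        segments.append(seg)                         # "second data"
        checks.append(zlib.crc32(seg))               # "third data"
    return segments, checks

def assemble(segments, checks):
    """Append each segment's check information to the segment, forming
    the target blocks ("fourth data") handed to storage."""
    return b"".join(
        seg + crc.to_bytes(4, "little")
        for seg, crc in zip(segments, checks)
    )

segs, crcs = split_and_check(b"abcdefghij")   # 10 bytes -> 2 segments
fourth_data = assemble(segs, crcs)
assert len(segs) == 2 and len(fourth_data) == 2 * (SEGMENT_SIZE + 4)
```

In this sketch the check information travels adjacent to its segment, matching the way the rejection reads Xu’s target logic blocks as segments carrying their CRCs.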
Xu further discloses wherein the data management module determines the data assembly order by the following steps: determining position information of the multiple segments of second data in the first data (See Xu, [0049] and [0050], disclosing that each segmented data block is configured with a corresponding configuration descriptor including its memory address and data length); and determining the data assembly order based on the position information and an association relationship between the second data and the third data (See Xu, [0050] and [0051], disclosing transfer of the data of the data stream according to the configuration descriptors in the SGL list; the cyclic redundancy check information is injected continuously after each logic block, thereby obtaining a complete target logic block, and the target logic block is stored in the storage device at a specified address).

Regarding claim 6, Xu disclosed the system according to claim 1 as described above. Xu further discloses wherein the data management module is configured to: split the first data based on a preset data segmentation size to obtain multiple segments of second data, and perform a padding operation on target second data at a tail end of the first data to obtain padded data (See Xu, [0033]: the smart NIC uses a processor to calculate the check information, such as Cyclic Redundancy Check CRC information, corresponding to each segmented block, which is added to the corresponding segmented data block to obtain a target storage logic block conforming to the size of logic blocks of the storage device, so that the storage device can directly store the segmented data blocks carrying the check information, or in other words, the CRC information is padded to the segmented data); determine cyclic redundancy check information corresponding to other second data than the target second data and cyclic redundancy check information of the padded data; for any of the other second data, take the cyclic redundancy check information of the other second data as check information corresponding to the other second data (See Xu, [0051]: “In actual applications, respectively calculating check information corresponding to the plurality of segmented data blocks includes: adding, if the first segmented sub-data block is obtained by segmenting, padding information to a free data space in the first segmented sub-data block according to the size of the logic block; re-calculating the corresponding check information based on the segmented data block containing the padding information and the first segmented sub-data block”); take the cyclic redundancy check information of the padded data as check information of the target second data; and store padding data added to the target second data in the padding operation into the second memory space as the third data (See Xu, [0051], as quoted above).

Regarding claim 9, Xu discloses a non-transitory computer-readable storage medium, wherein a computer program is stored on the non-transitory computer-readable storage medium, and when the computer program is run on a computer device, the computer device executes the steps of the data processing method according to claim 7 (See Xu, [0105]: the embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, but also by means of hardware; with this understanding in mind, the above-described technical solutions, in essence or the portion thereof making contribution over the prior art, may be embodied in the form of a software product which may be stored on a computer-readable storage medium, such as ROM/RAM, magnetic diskettes, optical disks, etc., that includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the methods stated in various embodiments or portions of the embodiments. See also the rejection of exemplary claim 1 above).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al (US 2024/0354021 A1, hereinafter Xu) in view of Chandrasekaran et al (US 2024/0256126 A1, hereinafter Chandrasekaran).

Regarding claim 2, Xu disclosed the system according to claim 1 as described above. Xu does not disclose wherein the network card module, the data management module and the storage acceleration module run in a same thread. Xu does disclose performing the segmenting of the data, performing the CRC of the segments, and combining the segments and CRC data to create target logic blocks of a size matching the logic blocks of the storage device entirely in the Smart NIC, the Smart NIC comprising the network card module, the data management module and the storage acceleration module (See Xu, [0033] and [0084], disclosing “a data processing apparatus according to an embodiment of the present application. The data processing apparatus includes: a receiving module 111 configured to receive a data stream; a processing module 112 configured to perform a segmenting process on the data stream to obtain a plurality of segmented data blocks; a calculating module 113 configured to respectively calculate check information corresponding to the plurality of segmented data blocks; a sending module 114 configured to send the segmented data blocks and corresponding check information, as a target logic block, to a storage device”).

However, Chandrasekaran discloses wherein the network card module, the data management module and the storage acceleration module run in a same thread (See Chandrasekaran, [0041], disclosing storage agents that implement a management plane for storage resources and utilize RDMA supporting zero-copy networking). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the segmenting CRC storage system of Xu with the zero-copy scheme of Chandrasekaran, as system performance can be increased by avoiding buffering of data in management buffers and reducing/eliminating the use of processors, caches, and/or context switches (See Chandrasekaran, [0041]).

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Xu et al (US 2024/0354021 A1, hereinafter Xu) in view of Markuze et al (US 2023/0385094 A1, hereinafter Markuze).

Regarding claim 3, Xu disclosed the system according to claim 1 as described above. Xu further discloses wherein the storage acceleration module, when controlling the hard disk to read the second data and the third data from the target memory space, is configured to: control the hard disk to read the second data and the third data based on the physical memory addresses (See Xu, [0043]: “Sending the segmented data blocks (‘second data’) and the corresponding check information (‘third data’), as the target logic block, to the storage device includes: configuring the cyclic redundancy check information to the check information addresses in the segmented data blocks according to description types in the segmented data blocks indicated by the configuration descriptors, and generating the target logic block which can be processed by the storage device; sending the target logic block to the storage device.” Examiner notes Applicant’s hard disk, defined at specification [0065], may be a Non-Volatile Memory express (NVMe) disk, and Xu at [0062] discloses an NVMe solid-state hard drive).

Xu does not disclose: acquire a memory mapping table of the network card module, wherein the memory mapping table contains a mapping relationship between a virtual memory address and a physical memory address of the network card module; and control the hard disk to determine physical memory addresses of the second data and the third data based on the memory mapping table.

However, Markuze discloses acquire a memory mapping table of the network card module, wherein the memory mapping table contains a mapping relationship between a virtual memory address and a physical memory address of the network card module (See Markuze, [0034], disclosing “translates (at 310) the logical memory address to a physical or virtual memory address of a particular device. In some embodiments, the NIC uses a set of page tables or other memory address translation tables to determine these values. As shown in FIG. 4, the translation logic 400 of the smart NIC includes a set of translation tables 410 that allow the NIC to translate the logical memory address 405”, in conjunction with paragraph [0003], disclosing the multiple devices across which logical memory accessible by a NIC spans, including the physical memory of the NIC itself); control the hard disk to determine physical memory addresses of the second data and the third data based on the memory mapping table (See Markuze, [0023], disclosing “The NVMe devices 140 connect to the host computer 135 as well as the smart NIC 100 via the PCIe bus 125. The NVMe devices can be used as storage (e.g., disk storage) for the system. In some embodiments, the NIC 100 is configured to access memory spanning its own memory 115, any NVMe devices 140, and the I/O virtual memory 150. These different memories are combined to form a NIC logical memory 155, with the NIC able to translate between logical memory addresses and the physical (or virtual) memory addresses of these different memory components”; and [0036], disclosing “Again returning to FIG. 3, with the correct device identified, the process 300 reads (at 315) the payload data for the data message from the memory of the identified device using the translated memory address. If the translated memory address refers to a location on the physical NIC, the NIC is responsible for reading this data (or using zero-copy techniques to avoid unnecessary read operations). If the translated memory address refers to a location on an NVMe device, the NIC uses the PCIe interface to retrieve data from the NVMe device”).

Xu and Markuze are analogous art directed to improved storage management techniques.
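The translation-table mechanism relied on from Markuze, resolving a module’s virtual (or logical) addresses to physical addresses before the storage device reads the data directly, can be sketched conceptually in Python. The page size and table contents here are hypothetical; real NIC translation tables are hardware page tables, not dictionaries:

```python
PAGE_SIZE = 4096

# Hypothetical memory mapping table: virtual page number -> physical page number.
mapping_table = {0: 7, 1: 3, 2: 12}

def translate(virtual_addr: int) -> int:
    """Resolve a virtual address to a physical address via the mapping table,
    as the storage device must before reading the second/third data directly
    from the network card module's memory."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    return mapping_table[page] * PAGE_SIZE + offset

# A virtual address 16 bytes into virtual page 1 resolves into physical page 3.
assert translate(4096 + 16) == 3 * 4096 + 16
```

The page offset is preserved across translation; only the page number is remapped, which is why the table needs one entry per page rather than per byte.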
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the segmenting CRC storage system of Xu with the address translation of Markuze as the translation capability of the NIC enables more efficient data message processing thus improving system performance (See Markuze, [0002]). Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over (US 2024/0354021 A1, hereinafter Xu) over Yang et al (US 2021/0072927 A1, hereinafter Yang). Regarding claim 4, Xu disclosed the system according to claim 1 as described above. Xu does not disclose wherein the storage acceleration module, when controlling the hard disk to read the second data and the third data from the target memory space, and write the fourth data into the hard disk based on the read data and the target data structure, is configured to: use a storage performance development kit (SPDK) to control the hard disk to read the second data and the third data from the target memory space, and write the fourth data into the hard disk based on the read data and the target data structure. 
Xu does disclose wherein the storage acceleration module, when controlling the hard disk to read the second data and the third data from the target memory space, and write the fourth data into the hard disk based on the read data and the target data structure (See Xu, [0033], the smart NIC using a processor to calculate the check information such as Cyclic Redundancy Check CRC information corresponding to each segmented block and added to the corresponding segmented data block to obtain a target storage logic block conforming to the size of logic blocks of the storage device so that the storage device can directly store the segmented data blocks carrying the check information and [0084], “a sending module 114 configured to send the segmented data blocks and corresponding check information, as a target logic block, to a storage device” or in other words, the target data structure of the segmented data blocks (second data) and check data (third data) is the source of the data for the created target logic blocks (fourth data), as described in applicant’s specification at [0094] and [0097]. Examiner notes Applicant’s hard disk defined at specification [0065] may be a Non Volatile Memory express (NVMEe), and Xu at [0062] discloses an NVMe solid-state hard drive). 
However, Yang discloses wherein the storage acceleration module, when controlling the hard disk to read the second data and the third data from the target memory space, and write the fourth data into the hard disk based on the read data and the target data structure, is configured to: use a storage performance development kit (SPDK) to control the hard disk to read the second data and the third data from the target memory space, and write the fourth data into the hard disk based on the read data and the target data structure (See Yang, [0002], disclosing “The Storage Performance Development Kit (SPDK) provides a user space NVMe driver library of Application Programming Interfaces (APIs) that applications directly call to access the NVMe driver and NVMe storage devices, to allow for direct, zero-copy data transfer to and from NVME storage devices, such as SSDs. The SPDK NVMe drivers execute in the user space, as opposed to the kernel space, to provide improved performance and minimize processor usage”). Xu and Yang are analogous art directed to improved storage management techniques. It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the segmenting CRC storage system of Xu with the SPDK of Yang as system performance can be increased by allowing for zero-copy data transfer to and from NVME storage devices which increases performance and minimizes processor usage (See Yang, [0002]). Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Ben-Ishay et al (US 2023/0010150 A1) disclosing a system to complete first I/O transactions for a host in the remote storage device by (i) translating between the first I/O transactions of the bus storage protocol and second I/O transactions of a network storage protocol, and (ii) executing the second I/O transactions in the remote storage device. 
For receiving and completing the first I/O transactions, the processor is to cause the network interface controller to transfer data of the first and second I/O transactions directly between the remote storage device and a memory of the host using zero-copy transfer.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to EDMUND H KWONG, whose telephone number is (571) 272-8691. The examiner can normally be reached Monday-Friday, 10-6 PT.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Arpan P. Savla, can be reached at 571-272-1077. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/E.H.K/Examiner, Art Unit 2137
/Arpan P. Savla/Supervisory Patent Examiner, Art Unit 2137
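The zero-copy rationale that runs through Yang and Ben-Ishay (operating on data in place rather than copying it between buffers) can be illustrated without SPDK itself, which is a C library tied to NVMe hardware. The Python sketch below is only an analogy for that idea: slicing a `memoryview` yields views into the same underlying buffer, so the CRC of each logic block can be verified without duplicating any payload bytes. The block layout and the helper name `verify_blocks` are assumptions for illustration, not details from the cited references.

```python
import zlib

# Illustrative sizes; not taken from the cited references.
LOGIC_BLOCK_SIZE = 4096
CRC_SIZE = 4

def verify_blocks(buf: bytes) -> bool:
    """Walk a buffer of CRC-carrying logic blocks without copying payloads:
    memoryview slices reference the original memory rather than copy it."""
    view = memoryview(buf)
    for off in range(0, len(view), LOGIC_BLOCK_SIZE):
        block = view[off:off + LOGIC_BLOCK_SIZE]
        payload, stored = block[:-CRC_SIZE], block[-CRC_SIZE:]
        if zlib.crc32(payload) != int.from_bytes(stored, "little"):
            return False
    return True
```

In SPDK proper, the analogous saving comes from user-space NVMe drivers performing DMA directly to and from application buffers, avoiding the kernel's copy between user and kernel space.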

Prosecution Timeline

Jul 26, 2024
Application Filed
Mar 04, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585383
Method and System for Hardware Accelerated Online Capacity Expansion
2y 5m to grant Granted Mar 24, 2026
Patent 12561250
STORAGE DEVICE FOR MANAGING MAP DATA IN A HOST AND OPERATION METHOD THEREOF
2y 5m to grant Granted Feb 24, 2026
Patent 12554591
DYNAMIC ADAPTATION OF BACKUP POLICY SCHEMES BASED ON THREAT CONFIDENCE
2y 5m to grant Granted Feb 17, 2026
Patent 12541314
INFORMATION PROCESSING APPARATUS, AND CONTROL METHOD FOR MANAGING LOG INFORMATION THAT PROVIDES A STORAGE FUNCTION CONNECTED TO A NETWORK
2y 5m to grant Granted Feb 03, 2026
Patent 12536097
PSEUDO MAIN MEMORY SYSTEM
2y 5m to grant Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
86%
Grant Probability
94%
With Interview (+7.3%)
2y 6m
Median Time to Grant
Low
PTA Risk
Based on 324 resolved cases by this examiner. Grant probability derived from career allow rate.
