Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Specification
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-14 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
The term “associated” in claim 1 is a relative term which renders the claim indefinite. The term “associated” is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Similar problems exist in claims 3, 8, and 10.
Appropriate correction is required.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-15 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ben-Yehuda et al. (U.S. Patent No. 11,256,431), hereafter referred to as Ben-Yehuda’431.
Referring to claim 1, Ben-Yehuda’431 discloses, as claimed, a remote computing apparatus that communicates with a local computing apparatus (see Figs. 3, 4, 12, 18), comprising: a communication interface (network interface and/or storage interface, see Col. 9, line 55, Col. 10, lines 25, 67, Col. 41, line 60 to Col. 42, line 18 and Figs. 1-4); a storage device (storages such as SSD units 15, see Figs. 2-4); a memory in which a driver of the storage device is executed (the accelerator device driver runs in the Linux kernel and is the entity responsible for initializing and driving the accelerator, see Col. 23, lines 46-52); and a processor that stores data and metadata (metadata, see Col. 21, lines 4-49, Col. 22, lines 7-36, and Fig. 9) associated with the data in the storage device (processors such as storage processor, user processor, etc. that perform storage management tasks and/or random logic, see Col. 36, lines 26-49), wherein the driver is configured to receive input/output commands for the storage device from the local computing apparatus through the communication interface (the driver acts as a pipe between the Global FTL and the accelerator; it runs in the context of the Global FTL and takes commands from the readers/writers, see Col. 23, lines 46-62), the processor is configured to process the received input/output commands for the storage device using a plurality of queues (submission/completion queues, see Col. 15, line 60 to Col. 16, line 32, Figs. 2 and 3; also note Col. 41, lines 46-51), and the driver is further configured to provide a result of processing the input/output commands to the local computing apparatus through the communication interface (data is transferred by the driver to/from the accelerator and processed by the accelerator in the form of accelerator objects…each accelerator object is processed (e.g., compressed/decompressed, encrypted/decrypted) independently, see Col. 24, lines 5-13; the Global FTL polls the completions from the driver, see Col. 24, line 51 to Col. 25, line 2, Col. 52, lines 41-44).
As to claim 2, Ben-Yehuda’431 also discloses the driver is further configured to: receive a write command for writing data of a specific size to a specific address of the storage device from the local computing apparatus (the front-ends receive customer read/write requests via a block access protocol such as NVMeoF or object get/set requests via a key/value access protocol over the network, see Col. 15, lines 11-20) through the communication interface and add the write command to a submission queue (submit a command on the submission queue to a backend reader core asking to read some data from the SSDs, see Col. 15, line 61 to Col. 16, line 16), and while waiting for an execution response of the processor for the write command, receive at least a portion of the data of the specific size (specific addresses of the blocks that are being written, see Col. 41, lines 17-30, Col. 46, line 65 to Col. 57, and Col. 19, lines 26-44) from the local computing apparatus through the communication interface and store it in a buffer (data is copied into the system once when it arrives from the network and from that point, it remains in system memory (non-volatile) or transferred to the accelerator memory…customer writes are always written to NVRAM first, to maintain persistence in the presence of unexpected software bugs or power loss, see Col. 14, line 61 to Col. 15, line 3).
As to claim 3, Ben-Yehuda’431 also discloses the driver is further configured to sequentially transmit data received in the buffer to the storage device when receiving an execution response of the processor for the write command, and the processor is further configured to store the sequentially transmitted data and associated metadata (metadata, see Col. 21, lines 4-49, Col. 22, lines 7-36, and Fig. 9) at a specific address of the storage device (write data to SSDs sequentially, see Col. 15, lines 35-41, Col. 20, lines 5-14).
As to claim 4, Ben-Yehuda’431 also discloses the processor is further configured to share result information corresponding to the write command with the driver using a completion queue (completion queues, see Fig. 3 and Col. 15, line 61 to Col. 16, line 15), and the driver is further configured to transmit the result information corresponding to the write command to the local computing apparatus through the communication interface (data is transferred by the driver to/from the accelerator and processed by the accelerator in the form of accelerator objects…each accelerator object is processed (e.g., compressed/decompressed, encrypted/decrypted) independently, see Col. 24, lines 5-13; the Global FTL polls the completions from the driver, see Col. 24, line 51 to Col. 25, line 2, Col. 52, lines 41-44).
As to claim 5, Ben-Yehuda’431 also discloses the driver is further configured to: receive a read command for reading data of a specific size at a specific address of the storage device from the local computing apparatus (the front-ends receive customer read/write requests via a block access protocol such as NVMeoF or object get/set requests via a key/value access protocol over the network, see Col. 15, lines 11-20) through the communication interface, add the read command to a submission queue (submit a command on the submission queue to a backend reader core asking to read some data from the SSDs, see Col. 15, line 61 to Col. 16, line 16), and provide metadata for the read command to the processor (metadata, see Col. 25, lines 3-7, 50-57; also note: acknowledging back to the client, see Col. 17, lines 4-20; readers/writers process after receiving read/write request, see Col. 19, line 39 to Col. 20, line 66).
As to claim 6, Ben-Yehuda’431 also discloses the processor is further configured to share result information corresponding to the read command with the driver using a completion queue (completion queues, see Fig. 3 and Col. 15, line 61 to Col. 16, line 15), and the driver is further configured to transmit data corresponding to the read command to the local computing apparatus through the communication interface (data is transferred by the driver to/from the accelerator and processed by the accelerator in the form of accelerator objects…each accelerator object is processed (e.g., compressed/decompressed, encrypted/decrypted) independently, see Col. 24, lines 5-13; the Global FTL polls the completions from the driver, see Col. 24, line 51 to Col. 25, line 2, Col. 52, lines 41-44).
As to claim 7, Ben-Yehuda’431 also discloses the driver is further configured to provide a transmission completion signal to the local computing apparatus through the communication interface when data transmission corresponding to the read command is completed (posts a completion to the driver, which is polled by the Global FTL, see Col. 24, line 67 to Col. 25, line 3, Col. 25, lines 50-54; also note: send completions, see Col. 27, lines 7-63, and as soon as a request has been written, it can be acknowledged back to the client, see Col. 17, lines 5-8).
Note claim 8 recites limitations similar to those of claim 1; therefore, it is rejected for the same reasons.
Note claim 9 recites limitations corresponding to those of claim 2; therefore, it is rejected for the same reasons.
Note claim 10 recites limitations corresponding to those of claim 3; therefore, it is rejected for the same reasons.
Note claim 11 recites limitations corresponding to those of claim 4; therefore, it is rejected for the same reasons.
Note claim 12 recites limitations corresponding to those of claim 5; therefore, it is rejected for the same reasons.
Note claim 13 recites limitations corresponding to those of claim 6; therefore, it is rejected for the same reasons.
Note claim 14 recites limitations corresponding to those of claim 7; therefore, it is rejected for the same reasons.
Referring to claim 15, Ben-Yehuda’431 discloses, as claimed, a data storage system (see Figs. 3, 4, 12, 18), comprising: a first field programmable gate array (FPGA) board (remote clients, see Col. 15, lines 55-63; FPGA, see Col. 4, lines 3-66); and a second FPGA board (FPGA, see Col. 4, lines 3-66) including a communication interface (network interface and/or storage interface, see Col. 9, line 55, Col. 10, lines 25, 67, Col. 41, line 60 to Col. 42, line 18 and Figs. 1-4), a data buffer (the FE write-buffer receives incoming write requests and writes them to non-volatile memory, see Col. 17, lines 3-25 and Col. 46, lines 22-59; also note: data is copied into the system once (or a few times) when it arrives from the network and from that point on, it remains in the system memory (non-volatile memory) or transferred to the accelerator memory such as the accelerator FPGA’s internal DDR; NVRAM, DRAM, see Col. 14, line 59 to Col. 15, line 3), a storage device (storages such as SSD units 15, see Figs. 2-4), and a driver for controlling the storage device (the accelerator device driver runs in the Linux kernel and is the entity responsible for initializing and driving the accelerator, see Col. 23, lines 46-52), wherein the data buffer stores data received through the communication interface, and the driver is further configured to manage input/output commands for the storage device (the driver acts as a pipe between the Global FTL and the accelerator; it runs in the context of the Global FTL and takes commands from the readers/writers, see Col. 23, lines 46-62; also note: storage management tasks and/or random logic, see Col. 36, lines 26-49) using a plurality of queues (submission/completion queues, see Col. 15, line 60 to Col. 16, line 32, Figs. 2 and 3; also note Col. 41, lines 46-51), and provide information on a result of executing the input/output commands to the first FPGA board through the communication interface (data is transferred by the driver to/from the accelerator and processed by the accelerator in the form of accelerator objects…each accelerator object is processed (e.g., compressed/decompressed, encrypted/decrypted) independently, see Col. 24, lines 5-13; the Global FTL polls the completions from the driver, see Col. 24, line 51 to Col. 25, line 2, Col. 52, lines 41-44).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Bleiweiss et al. (U.S. Publication No. 2023/0053289 A1) discloses an apparatus to facilitate acceleration of machine learning operations.
Raman et al. (U.S. Patent No. 11,221,972 B1) discloses a method for increasing fairness between small and large NVMe IO commands.
Lal et al. (U.S. Publication No. 2024/0184639 A1) discloses disaggregated computing for a distributed confidential computing environment.
Gao et al. (U.S. Publication No. 2021/0182190 A1) discloses an intelligent die-aware storage device scheduler.
Malwankar et al. (U.S. Patent No. 10,079,889 B1) discloses a transparent protocol for providing remote access to NVMe drives and other SSDs.
Dreier (U.S. Publication No. 2023/0127976 A1) discloses queue utilization for optimized storage access without routing the NVMe/FC command through a kernel space.
Lal et al. (U.S. Publication No. 2022/0100582 A1) discloses an apparatus including a processor executing a trusted execution environment comprising an FPGA driver to interface with an FPGA device that is remote to the apparatus.
Kagan et al. (U.S. Publication No. 2015/0261434 A1) discloses a data storage system including a storage server and the host runs a driver that is configured to initiate a RDMA operation.
Kagan et al. (U.S. Publication No. 2015/0261720 A1) discloses configuring a driver program on a host for accessing remote storage devices using a local bus protocol.
Beygi et al. (U.S. Publication No. 2021/0247935 A1) discloses a remote direct-attached multiple storage function storage device with a bridging device configured to map the host to the virtual function.
Zhang et al. (U.S. Publication No. 2022/0284075 A1) discloses a method of warp accumulation that receives from a vector processing unit a warp accumulation instruction.
Hsu et al. (U.S. Publication No. 2020/0117378 A1) discloses a method for performing read acceleration associated with data storage device and controller.
Shim et al. (U.S. Publication No. 2014/0149607 A1) discloses a host bus adapter simultaneously performing read DMA on one interface while performing write DMA on a second interface.
Rachlin et al. (U.S. Publication No. 2017/0228329 A1) discloses relay mechanism to facilitate processor communication with inaccessible I/O device.
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application. When responding to this Office action, applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 C.F.R. 1.111(c).
In amending in reply to a rejection of claims in an application or patent under reexamination, the applicant or patent owner must clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. The applicant or patent owner must also show how the amendments avoid such references or objections.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TITUS WONG whose telephone number is (571)270-1627. The examiner can normally be reached Monday-Friday, 10am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Idriss Alrobaye can be reached on (571) 270-1023. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TITUS WONG/Primary Examiner, Art Unit 2181