Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Information Disclosure Statement
As required by M.P.E.P. 609(C), the applicant’s submissions of the Information Disclosure Statements dated 10/25/2024 and 12/27/2024 are acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending. As required by M.P.E.P. 609(C)(2), a copy of the PTOL-1449, initialed and dated by the examiner, is attached to the instant Office action.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over BLANER et al. (US Pub. No. 2019/0332549) in view of Feeley et al. (US Pub. No. 2014/0108678).
Regarding independent claims 1 and 13, BLANER discloses a computational storage system comprising:
storage (Fig.1: system memory 106) configured to store data of a shared file system (Fig.1 and [0036]: memory access requests by processing units 102 and other coherence participants requesting coherent access to various memory blocks within various shared system memories 106 or cached within data processing system 100. Also coupled to system interconnect 110 is a nest memory management unit (NMMU) 112, which provides effective (virtual)-to-real address translation services to requesting device. Thus, BLANER teaches storage configured to store data that is shared among multiple processing units); and
accelerator logic (Fig.1: an accelerator unit 120) configured to perform a computational task using the stored data responsive to a command received from a host (Fig.1: processing unit 102) ([0038]: one or more of processing units 102 may be coupled by an accelerator interface 116 to an accelerator unit 120, as described further below. As utilized herein, the term "accelerator" is defined to refer to a computational device specifically configured to perform one or more computational, data flow, data storage, and/or functional tasks, as compared with a general-purpose CPU. Thus, BLANER teaches accelerator logic configured to perform computational tasks using data stored in shared system memory),
wherein the shared file system is accessible to the host and the computational storage system (Fig.1 & 2 and [0036]: Each of processing units 102 is coupled by a memory bus 104 to a respective one of shared system memories 106, the contents of which may generally be accessed by any of processing units 102, thereby disclosing host access to the stored data; [0045]: Host attach logic 240 may issue memory access requests and participate in coherency messaging on behalf of accelerator unit 120. Thus, BLANER teaches that the same stored data is accessible to both the host and the accelerator logic, corresponding to a shared file system as recited in claim 1).
However, BLANER does not specifically teach wherein the accelerator logic is configured to access the stored data using a pointer received from the host.
Feeley teaches wherein the accelerator logic is configured to access the stored data using a pointer received from the host ([0013]: setting a pointer to the command in a register in a host controller, directing access to the one or more of host system memory and host controller memory with the memory device via the host controller; and executing the command with the memory device. [0035]: a pointer can be included in the DID registers. The pointer can consist of the address in memory 316 where the data space is located. The address included in the pointer can point the DID registers containing device class independent information to the device class dependent information in memory 316).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate a pointer identifying a location of data, as taught by Feeley, into the data processing system of BLANER, in order to enable the computational storage system to efficiently identify the specific data to be processed without requiring address translation.
Regarding claims 2 and 15, Feeley teaches wherein the accelerator logic is further configured to access the stored data using a data size associated with the pointer ([0035]: a pointer can be included in the DID registers. The pointer can consist of the address in memory 316 where the data space is located. The address included in the pointer can point the DID registers containing device class independent information to the device class dependent information in memory 316.).
Regarding claims 3 and 16, BLANER teaches wherein the accelerator logic is configured to perform the computational task based at least in part on configuration information or compiled instructions received from the host ([0053]: To support the configurability of an associated real address (RA)-based directory 600 of the contents of accelerator cache 302 that resides in host attach logic 240 (see, e.g., FIG. 6 described below), accelerator unit 120 preferably allocates internal or external storage for cache configuration parameters defining a desired configuration of RA-based directory 600. Although in some embodiments, these cache configuration parameters can be stored, for example, in software-defined storage locations (e.g., in system memory 106), in the depicted embodiment accelerator unit 120 is equipped with a set of hardware cache configuration registers 330 for storing the cache configuration parameters. In the illustrated example, cache configuration registers 330 include at least a host tag number (HTN) register 332 for specifying a desired number of entries 320 to be utilized in host tag array 308).
Regarding claims 4 and 18, BLANER teaches wherein the shared file system is accessible by at least one other host (Fig.1: two processors).
Regarding claim 5, BLANER teaches wherein the accelerator logic includes an application-specific integrated circuit (ASIC) ([0038]: Accelerator units 120 can be implemented, for example, as an integrated circuit including programmable logic (e.g., programmable logic array (PLA) or field programmable gate array (FPGA)) and/or custom integrated circuitry (e.g., application-specific integrated circuit (ASIC))).
Regarding claim 6, BLANER teaches wherein the accelerator logic includes a field programmable gate array (FPGA) ([0038]: Accelerator units 120 can be implemented, for example, as an integrated circuit including programmable logic (e.g., programmable logic array (PLA) or field programmable gate array (FPGA)) and/or custom integrated circuitry (e.g., application-specific integrated circuit (ASIC))).
Regarding claims 7 and 19, BLANER teaches wherein the command includes at least one application program interface (API) call ([0041]: fixed- and floating-point arithmetic instructions, logical instructions, and memory access instructions that request read and/or write access to a memory block in the coherent address space of data processing system 100).
Regarding claims 8 and 17, BLANER teaches wherein the storage is configured to receive the data from a data collection device separate from the host at a location provided by the host (Fig.1 and [0036]: memory access requests by processing units 102 and other coherence participants requesting coherent access to various memory blocks within various shared system memories 106 or cached within data processing system 100).
Regarding claims 9 and 14, BLANER teaches wherein the storage is configured to store a result of the computational task (Fig.1 and [0038]: one or more of processing units 102 may be coupled by an accelerator interface 116 to an accelerator unit 120, as described further below. As utilized herein, the term “accelerator” is defined to refer to a computational device specifically configured to perform one or more computational, data flow, data storage, and/or functional tasks (as compared with a general-purpose CPU, which is designed to handle a wide variety of different computational tasks)).
Regarding claims 10 and 20, BLANER teaches wherein the accelerator logic is configured to access other data directly from a different computational storage system via a data path that bypasses the host (Fig.1 and Fig.2).
Regarding claim 11, BLANER teaches host interface logic configured to provide direct memory access (DMA) to the host (Fig.1 and Fig.2).
Regarding claim 12, BLANER teaches a storage controller configured to communicate with the storage based on the command (Fig.1 & 2 and [0036]: Each of processing units 102 is coupled by a memory bus 104 to a respective one of shared system memories 106, the contents of which may generally be accessed by any of processing units 102, thereby disclosing host access to the stored data; [0045]: Host attach logic 240 may issue memory access requests and participate in coherency messaging on behalf of accelerator unit 120. Thus, BLANER teaches that the same stored data is accessible to both the host and the accelerator logic).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Hall (US Pub. No. 2008/0104315) “Techniques for improving hard disk drive efficiency”
Considered for teachings related to disk drives, and more particularly, to techniques for improving the efficiency of hard disk drives.
Does not disclose or suggest an accelerator logic configured to perform a computational task using the stored data responsive to a command received from a host, wherein the shared file system is accessible to the host and the computational storage system, and wherein the accelerator logic is configured to access the stored data using a pointer received from the host.
Davis (US Pub. No. 2008/0082811) “System And Method For Boot Loading Of Programs Within A Host Operating Environment Having One Or More Linked Guest Operating Systems”
Considered for teachings related generally to loading programs during system boot from files resident in a host operating system, from a prior operation, into computer memory for use by one or more guest operating systems, for their initialization, which are in communication with the host operating environment.
Does not disclose or suggest an accelerator logic configured to perform a computational task using the stored data responsive to a command received from a host, wherein the shared file system is accessible to the host and the computational storage system, and wherein the accelerator logic is configured to access the stored data using a pointer received from the host.
Any inquiry concerning this communication should be directed to Yong Choe at telephone number 571-270-1053 or by email at yong.choe@uspto.gov. The examiner can normally be reached Monday through Friday, 10:00 am to 6:30 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jared Ian Rutz, can be reached at (571) 272-5535. Any inquiry of a general nature or relating to the status of this application should be directed to TC 2100, whose telephone number is (571) 272-2100.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/YONG J CHOE/Primary Examiner, Art Unit 2135