Prosecution Insights
Last updated: April 19, 2026
Application No. 18/651,933

SINGLE-SIDED DISTRIBUTED STORAGE SYSTEM

Status: Non-Final Office Action (§102, §103, Double Patenting)
Filed: May 01, 2024
Examiner: TSAI, SHENG JEN
Art Unit: 2139
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 1 (Non-Final)

Predictions:
Grant Probability: 70% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 3y 6m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 70% (556 granted / 790 resolved), above average, +15.4% vs Tech Center average
Interview Lift: +13.0% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 6m average prosecution
Career History: 815 total applications across all art units; 25 currently pending
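The career figures above hang together; as a quick sanity check of the dashboard arithmetic (assuming the allow rate is simply granted divided by resolved, and that the with-interview figure is the career rate plus the stated lift):

```python
# Verify the examiner-card arithmetic from the dashboard above
granted, resolved, total = 556, 790, 815

allow_rate = granted / resolved * 100  # percent of resolved cases granted
print(f"Career allow rate: {allow_rate:.1f}%")          # ~70.4%, shown as 70%
print(f"Currently pending: {total - resolved} of {total}")  # 25 pending
print(f"With-interview estimate: {70 + 13}%")           # matches the 83% figure
```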

Statute-Specific Performance

§101: 2.1% (-37.9% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 24.7% (-15.3% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Based on career data from 790 resolved cases; the Tech Center average is an estimate.
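The four deltas in the table are mutually consistent: assuming each "vs TC avg" figure is a simple difference from the same baseline, every row implies a Tech Center average of 40.0%, which is where the original chart's reference line would have sat:

```python
# (rejection rate %, delta vs Tech Center average %) for each statute above
rows = {"101": (2.1, -37.9), "103": (48.7, +8.7),
        "102": (24.7, -15.3), "112": (12.2, -27.8)}

# Implied Tech Center average for each statute: rate minus delta
for statute, (rate, delta) in rows.items():
    print(f"§{statute}: implied TC average = {rate - delta:.1f}%")  # 40.0% in every row
```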

Office Action

Rejections: §102, §103, Double Patenting
DETAILED ACTION

1. This Office Action is taken in response to Applicant’s application 18/651,933 filed on 5/1/2024. Claims 1-20 are pending for consideration.

2. Examiner’s Note

(1) In the case of amending the claimed invention, Applicant is respectfully requested to indicate the portion(s) of the specification which dictate(s) the structure relied on for proper interpretation and also to verify and ascertain the metes and bounds of the claimed invention. This will assist in expediting compact prosecution. MPEP 714.02 recites: “Applicant should also specifically point out the support for any amendments made to the disclosure. See MPEP § 2163.06. An amendment which does not comply with the provisions of 37 CFR 1.121(b), (c), (d), and (h) may be held not fully responsive. See MPEP § 714.” Amendments not pointing to specific support in the disclosure may be deemed as not complying with the provisions of 37 C.F.R. 1.121(b), (c), (d), and (h) and therefore held not fully responsive. Generic statements such as “Applicants believe no new matter has been introduced” may be deemed insufficient.

(2) Examiner has cited particular columns/paragraphs and line numbers in the references applied to the claims for the convenience of Applicant. Although the specified citations are representative of the teachings of the art and are applied to specific limitations within the individual claims, other passages and figures may apply as well. Applicant is respectfully requested, in preparing responses, to fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or disclosed by the Examiner.

Double Patenting

3.
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. See In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent is shown to be commonly owned with this application. See 37 CFR 1.130(b). Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).

4. Claims 1-4, 7, 9, 11-14, 17, and 19 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-31 of US Patent 9,058,122. Although not all of the conflicting claims are exactly identical, they are extremely similar and are not patentably distinct from each other, as shown in the example below:

18/651,933, claim 1:
A computer-implemented method executed by data processing hardware of a distributed storage system that causes the data processing hardware to perform operations comprising: dividing a file into a plurality of data stripes; for each respective data stripe of the plurality of data stripes, allocating storage of the respective data stripe to a respective memory host of a plurality of memory hosts of the distributed storage system; receiving, from a client, a request to perform a read operation to access the file; executing the read operation; receiving a notification indicating that the executed read operation failed to access a particular data stripe of the plurality of data stripes of the file; and based on receiving the notification, replacing the particular data stripe with a new uninitialized data stripe.

9,058,122, claim 1:
A distributed storage system comprising: a curator managing striping of data across non-transitory memory by: dividing a file into data stripes and replicating the data stripes into data chunks; allocating storage of the data stripes and data chunks in the non-transitory memory; and memory hosts in communication with the curator, each memory host comprising: a set of remote direct memory accessible regions of the non-transitory memory allocated by the curator, the memory regions storing data chunks of files, each data chunk associated with one or more clients in an access control list, the access control list provides an access permission for each associated client of each associated data chunk; a network interface controller in communication with the memory and servicing remote direct memory access requests; and a computing processor in communication with the memory and the network interface controller, the computing processor executing a host process that registers the set of remote direct memory accessible regions of the memory with the network interface controller; wherein in response to receiving a connection request from a client process of a client in communication with curator and the memory host to access a data chunk, the host process establishes a remote direct memory access capable connection with the client process when the client is associated with the data chunk in the access control list, the host process associating the established connection with a protection domain having associated memory regions; and wherein in response to a notification from the client indicating that the data chunk stored on one of the memory regions is corrupt during the established connection between the client process and the host process, the curator attempting to reconstruct the corrupt data chunk, or if the curator is unable to reconstruct the corrupt data chunk, the curator replacing the corrupt data chunk with a new uninitialized data chunk.

9,058,122, claim 9:
The distributed storage system of claim 8, wherein in response to a memory access request from a client in communication with the memory hosts and the curator, the curator returns a file descriptor to the client that maps data chunks of a file on the memory hosts for remote direct memory access of the data chunks on the memory hosts.

9,058,122, claim 10:
The distributed storage system of claim 9, wherein the file descriptor comprises a client key for each data chunk of the file, each client key allowing access to the corresponding data chunk on its memory host.

5. Claims 1-7, 11-17, and 19 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-20 of US Patent 12,001,380. Although not all of the conflicting claims are exactly identical, they are extremely similar and are not patentably distinct from each other, as shown in the example below:

18/651,933, claim 1:
A computer-implemented method executed by data processing hardware of a distributed storage system that causes the data processing hardware to perform operations comprising: dividing a file into a plurality of data stripes; for each respective data stripe of the plurality of data stripes, allocating storage of the respective data stripe to a respective memory host of a plurality of memory hosts of the distributed storage system; receiving, from a client, a request to perform a read operation to access the file; executing the read operation; receiving a notification indicating that the executed read operation failed to access a particular data stripe of the plurality of data stripes of the file; and based on receiving the notification, replacing the particular data stripe with a new uninitialized data stripe.

12,001,380, claim 1:
A computer-implemented method when executed by data processing hardware of a distributed storage system causes the data processing hardware to perform operations comprising: dividing a file into a plurality of data stripes; for each respective data stripe of the plurality of data stripes, allocating storage of the respective data stripe to a respective memory host of a plurality of memory hosts of the distributed storage system; receiving, from a client, a memory access request requesting access to the file; in response to the memory access request, retrieving a file descriptor mapping each respective data stripe to the respective memory host of the plurality of memory hosts for access of the file by the client; receiving an external notification indicating that an attempt to access a particular data stripe of the plurality of data stripes stored at a particular memory host of the plurality of memory hosts has failed; and in response to receiving the external notification, reconstructing the particular data stripe.

6.
Claims 1-7, 11-17, and 19 are rejected under the judicially created doctrine of obviousness-type double patenting as being unpatentable over claims 1-20 of US Patent 11,645,223. Although not all of the conflicting claims are exactly identical, they are extremely similar and are not patentably distinct from each other, as shown in the example below:

18/651,933, claim 1:
A computer-implemented method executed by data processing hardware of a distributed storage system that causes the data processing hardware to perform operations comprising: dividing a file into a plurality of data stripes; for each respective data stripe of the plurality of data stripes, allocating storage of the respective data stripe to a respective memory host of a plurality of memory hosts of the distributed storage system; receiving, from a client, a request to perform a read operation to access the file; executing the read operation; receiving a notification indicating that the executed read operation failed to access a particular data stripe of the plurality of data stripes of the file; and based on receiving the notification, replacing the particular data stripe with a new uninitialized data stripe.

11,645,223, claim 1:
A computer-implemented method when executed by data processing hardware of a distributed storage system causes the data processing hardware to perform operations comprising: dividing a file into a plurality of data stripes; for each respective data stripe of the plurality of data stripes: replicating the respective data stripe into respective one or more replica data stripes; and allocating storage of the respective data stripe and the respective one or more replica data stripes to a plurality of memory hosts of the distributed storage system; receiving, from a client, a memory access request requesting access to data stored at one or more of the plurality of memory hosts; in response to the memory access request, retrieving a file descriptor mapping each respective data stripe and each respective one or more replica data stripes to the plurality of memory hosts for access of the file by the client; receiving a notification indicating that an attempt to access the data stored at one or more of the plurality of memory hosts has failed; and in response to the notification, generating an updated file descriptor re-mapping each respective data stripe and each respective one or more replica data stripes to the plurality of memory hosts for access of the file by the client.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

7. Claims 1, 3-7, 9, 11, 13-17, and 19 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Patel et al. (US Patent Application Publication 2003/0135514, hereinafter Patel).

As to claim 1, Patel teaches A computer-implemented method executed by data processing hardware of a distributed storage system [as shown in figures 1 and 2; The intelligent distributed file system enables the storing of file data among a plurality of smart storage units which are accessed as a single file system. The intelligent distributed file system utilizes a metadata data structure to track and manage detailed information about each file, including, for example, the device and block locations of the file's data blocks, to permit different levels of replication and/or redundancy within a single file system, to facilitate the change of redundancy parameters, to provide high-level protection for metadata, to replicate and move data in real-time, and to permit the creation of virtual hot spares among the smart storage units without the need to idle any single smart storage unit in the intelligent distributed file system (abstract)] that causes the data processing hardware to perform operations comprising:

dividing a file into a plurality of data stripes [as shown in figures 1, 2, and 9; The intelligent distributed file system enables the storing of file data among a plurality of smart storage units which are accessed as a single file system.
The intelligent distributed file system utilizes a metadata data structure to track and manage detailed information about each file, including, for example, the device and block locations of the file's data blocks, to permit different levels of replication and/or redundancy within a single file system, to facilitate the change of redundancy parameters, to provide high-level protection for metadata, to replicate and move data in real-time, and to permit the creation of virtual hot spares among the smart storage units without the need to idle any single smart storage unit in the intelligent distributed file system (abstract); In addition, the intelligent distributed file system advantageously includes a "virtual hot spare" which provides extra data storage space not used for data storage by the file system, but does not require any storage device or drive to stand idle. In one embodiment, and as detailed below, this is accomplished by storing data blocks across some or all storage devices in a "data stripe" of stripe width less than the total number of storage devices in the file system, or by leaving at least one empty block in each stripe of data blocks in the file system (¶ 0010); FIG. 9 illustrates a sample data location table 910, parity map 920, and virtual hot spare map 930 and the corresponding devices on which the data is stored. The example of FIG. 9 shows how data may be stored in varying locations on the devices, that the "stripes" of data are stored across different offset addresses on each device, and that the parity data may be stored in various devices, even for data from the same file. 
In other embodiments, the data may be stored at the same offset address on each device … (¶ 0167-0172)];

for each respective data stripe of the plurality of data stripes, allocating storage of the respective data stripe to a respective memory host of a plurality of memory hosts of the distributed storage system [as shown in figures 1, 2, and 9; The forward allocator module determines which device's blocks should be used for a WRITE request based upon factors, such as, for example, redundancy, space, and performance. These parameters may be set by the system administrator, derived from information embedded in the intelligent distributed file system 110, incorporated as logic in the intelligent distributed file system 110 … (¶ 0091-0098)];

receiving, from a client, a request to perform a read operation to access the file [The intelligent distributed file system advantageously provides access to data in situations where there are large numbers of READ requests especially in proportion to the number of WRITE requests … (¶ 0039); The block request translator module receives incoming READ requests, performs name lookups, locates the appropriate devices, and pulls the data from the device to fulfill the request … (¶ 0087)];

executing the read operation [The intelligent distributed file system advantageously provides access to data in situations where there are large numbers of READ requests especially in proportion to the number of WRITE requests … (¶ 0039); The block request translator module receives incoming READ requests, performs name lookups, locates the appropriate devices, and pulls the data from the device to fulfill the request … (¶ 0087)];

receiving a notification indicating that the executed read operation failed to access a particular data stripe of the plurality of data stripes of the file [The block request translator module may also respond to device failure.
For example, if a device is down, the block request translator module may request local and remote data blocks that may be used to reconstruct the data using, for example, parity information … (¶ 0089); The failure recovery module reconfigures the intelligent distributed file system 110, in real-time, to recover data which is no longer available due to a device failure. The failure recovery module may perform the reconfiguration without service interruptions while maintaining performance and may return the data to desired redundancy levels in a short period of time … (¶ 0100-0105); The remote block manager module 337 manages inter-device communication, including, for example, block requests, block responses, and the detection of remote device failures. In one embodiment, the remote block manager module 337 resides at the Local File System layer (¶ 0116)]; and

based on receiving the notification, replacing the particular data stripe with a new uninitialized data stripe [as shown in figure 9, where a spare/blank/uninitialized data stripe is used to replace a failed data stripe; FIG. 9 illustrates a sample data location table 910, parity map 920, and virtual hot spare map 930 and the corresponding devices on which the data is stored … For example, the parity data for the first stripe is stored on device 3 at location 400 and relates to data block 0 stored on device 0 at location 100, data block 1 stored on device 1 at location 200, and data block 2 stored on device 2 at location 300. The virtual hot spare block for the first stripe is reserved on device 4 at location 500. The parity data for the second stripe is stored on device 2 at location 600 and relates to data block 3 stored on device 0 at location 300, data block 4 stored on device 4 at location 800, and data block 5 stored on device 1 at location 700.
The virtual hot spare block for the second stripe is stored on device 3 at location 600 (¶ 0167-0168); The "per stripe" method appends one virtual hot spare ("VHS") block (or a VHS "hole") to some of the data stripes written in the intelligent distributed file system 110. In one embodiment, a predetermined location within or adjacent to each stripe is left blank (e.g., NULL, VOID, or empty, whether allocated or not) for use as a virtual hot spare in the event of failure of one or more smart storage units 114 … (¶ 0210-0212)].

As to claim 3, Patel teaches The computer-implemented method of claim 1, wherein the request to perform the read operation comprises a RDMA read network operation [as shown in figures 1 and 2; remote block manager module, figure 3, 337; Another response has been to allow multiple servers access to shared disks using architectures, such as, Storage Area Network solutions (SANs), but such systems are expensive and require complex technology to set up and to control data integrity. Further, high speed adapters are required to handle large volumes of data requests (¶ 0005); FIG. 1 illustrates one embodiment of an intelligent distributed file system 110 which communicates with a network server 120 to provide remote file access. The intelligent distributed file system 110 may communicate with the network server 120 using a variety of protocols, such as, for example, NFS or CIFS. Users 130 interact with the network server 120 via a communication medium 140, such as the Internet 145, to request files managed by the intelligent distributed file system 110.
The exemplary intelligent distributed file system 110 makes use of a switch component 125 which communicates with a set of smart storage units 114 and the network server 120 … (¶ 0056); The exemplary processing module 330 may be configured to receive requests for data files, retrieve locally and/or remotely stored metadata about the requested data files, and retrieve the locally and/or remotely stored data blocks of the requested data files … (¶ 0081-0082); The block request translator module may also respond to device failure. For example, if a device is down, the block request translator module may request local and remote data blocks that may be used to reconstruct the data using, for example, parity information … (¶ 0089); The remote block manager module 337 manages inter-device communication, including, for example, block requests, block responses, and the detection of remote device failures. In one embodiment, the remote block manager module 337 resides at the Local File System layer … (¶ 0116-0119)].

As to claim 4, Patel teaches The computer-implemented method of claim 1, wherein the operations further comprise, based on receiving the request to perform the read operation, retrieving a file descriptor mapping each respective data stripe to the respective memory host of the plurality of memory hosts for access of the file by the client [as shown in figures 5, 7A, and 7B; FIG. 5 illustrates a sample data structure 510 for storing metadata.
The exemplary data structure 510 stores the following information:

Mode: the mode of the file (e.g., regular file, block special, character special, directory, symbolic link, fifo, socket, whiteout, unknown)
Owner: account on the smart storage unit which has ownership of the file
Timestamp: time stamp of the last modification of the file
Size: size of the metadata file
Parity Count: number of parity devices used
Mirror Count: number of mirrored devices used
VHS Count: number of virtual hot spares used
Version: version of metadata structure
Type: type of data location table (e.g., Type 0, Type 1, Type 2, or Type 3)
Data Location Table: address of the data location table or actual data location table information
Reference Count: number of metadata structures referencing this one
Flags: file permissions (e.g., standard UNIX permissions)
Parity Map Pointer: pointer to parity block information

(¶ 0139); The block request translator module receives incoming READ requests, performs name lookups, locates the appropriate devices, and pulls the data from the device to fulfill the request. If the data is directly available, the block request translator module sends a data request to the local block manager module or to the remote block manager module depending on whether the block of data is stored on the local storage device or on the storage device of another smart storage unit (¶ 0087); The systems and methods of the present invention provide an intelligent distributed file system which enables the storing of data among a set of smart storage units which are accessed as a single file system. The intelligent distributed file system tracks and manages detailed metadata about each file.
Metadata may include any data that relates to and/or describes the file, such as, for example, the location of the file's data blocks, including both device and block location information, the location of redundant copies of the metadata and/or the data blocks (if any), error correction information, access information, the file's name, the file's size, the file's type, and so forth … (¶ 0038)].

As to claim 5, Patel teaches The computer-implemented method of claim 4, wherein the file descriptor comprises an array of stripe protocol buffers, each stripe protocol buffer describing a corresponding data stripe [as shown in figures 5, 7A, and 7B; FIG. 5 illustrates a sample data structure 510 for storing metadata. The exemplary data structure 510 stores the following information:

Mode: the mode of the file (e.g., regular file, block special, character special, directory, symbolic link, fifo, socket, whiteout, unknown)
Owner: account on the smart storage unit which has ownership of the file
Timestamp: time stamp of the last modification of the file
Size: size of the metadata file
Parity Count: number of parity devices used
Mirror Count: number of mirrored devices used
VHS Count: number of virtual hot spares used
Version: version of metadata structure
Type: type of data location table (e.g., Type 0, Type 1, Type 2, or Type 3)
Data Location Table: address of the data location table or actual data location table information
Reference Count: number of metadata structures referencing this one
Flags: file permissions (e.g., standard UNIX permissions)
Parity Map Pointer: pointer to parity block information

(¶ 0139)].
As to claim 6, Patel teaches The computer-implemented method of claim 4, wherein the file descriptor comprises one or more of: a file state attribute indicating a state of the file; a data chunks attribute indicating a number of replica data stripes per respective data stripe; a stripe length attribute indicating a number of bytes per respective data stripe; or a sub-stripe length attribute indicating a number of bytes per sub-stripe in the file descriptor [as shown in figures 5, 7A, and 7B; FIG. 5 illustrates a sample data structure 510 for storing metadata. The exemplary data structure 510 stores the following information:

Mode: the mode of the file (e.g., regular file, block special, character special, directory, symbolic link, fifo, socket, whiteout, unknown)
Owner: account on the smart storage unit which has ownership of the file
Timestamp: time stamp of the last modification of the file
Size: size of the metadata file
Parity Count: number of parity devices used
Mirror Count: number of mirrored devices used
VHS Count: number of virtual hot spares used
Version: version of metadata structure
Type: type of data location table (e.g., Type 0, Type 1, Type 2, or Type 3)
Data Location Table: address of the data location table or actual data location table information
Reference Count: number of metadata structures referencing this one
Flags: file permissions (e.g., standard UNIX permissions)
Parity Map Pointer: pointer to parity block information

(¶ 0139); The systems and methods of the present invention provide an intelligent distributed file system which enables the storing of data among a set of smart storage units which are accessed as a single file system. The intelligent distributed file system tracks and manages detailed metadata about each file.
Metadata may include any data that relates to and/or describes the file, such as, for example, the location of the file's data blocks, including both device and block location information, the location of redundant copies of the metadata and/or the data blocks (if any), error correction information, access information, the file's name, the file's size, the file's type, and so forth … (¶ 0038)].

As to claim 7, Patel teaches The computer-implemented method of claim 1, wherein each respective memory host of the plurality of memory hosts comprises a network interface controller in communication with a memory of the respective memory host, the network interface controller servicing remote direct memory access requests [as shown in figures 1 and 2; remote block manager module, figure 3, 337; Another response has been to allow multiple servers access to shared disks using architectures, such as, Storage Area Network solutions (SANs), but such systems are expensive and require complex technology to set up and to control data integrity. Further, high speed adapters are required to handle large volumes of data requests (¶ 0005); FIG. 1 illustrates one embodiment of an intelligent distributed file system 110 which communicates with a network server 120 to provide remote file access. The intelligent distributed file system 110 may communicate with the network server 120 using a variety of protocols, such as, for example, NFS or CIFS. Users 130 interact with the network server 120 via a communication medium 140, such as the Internet 145, to request files managed by the intelligent distributed file system 110. The exemplary intelligent distributed file system 110 makes use of a switch component 125 which communicates with a set of smart storage units 114 and the network server 120 … (¶ 0056)].
As to claim 9, Patel teaches The computer-implemented method of claim 1, wherein the operations further comprise, before replacing the particular data stripe with the new uninitialized data stripe, determining that the executed read operation failed due to a permanent error [Yet another benefit of some embodiments is that the systems and methods may include one or more virtual hot spare(s) among some or all of the smart storage units, by providing virtual hot spare blocks among the data blocks distributed in stripes among the smart storage units. These virtual hot spare blocks permit the recovery of data lost due to the failure of one or more smart storage units to the remaining smart storage units without the need to leave any smart storage units idle, while providing the function of a traditional idle hot spare storage device (¶ 0045); A method for using a virtual hot spare in an intelligent distributed file system, the method comprising: detecting the failure of at least one smart storage unit in the intelligent distributed file system; recovering data blocks stored on the at least one failed smart storage unit in each stripe in the file system; determining the location of virtual hot spare blocks on at least one functional smart storage unit in the intelligent distributed file system; and, storing the recovered data blocks in stripes on the virtual hot spare blocks on the at least one functional smart storage unit (claim 16)].

As to claim 10, Patel teaches The computer-implemented method of claim 9, wherein determining that the executed read operation failed due to the permanent error comprises: re-executing the read operation; and receiving another notification indicating that the re-executed read operation failed to access the particular data stripe of the plurality of data stripes of the file.

As to claim 11, it recites substantially the same limitations as in claim 1, and is rejected for the same reasons set forth in the analysis of claim 1.
Refer to “As to claim 1” presented earlier in this Office Action for details.

As to claim 13, it recites substantially the same limitations as in claim 3, and is rejected for the same reasons set forth in the analysis of claim 3. Refer to “As to claim 3” presented earlier in this Office Action for details.

As to claim 14, it recites substantially the same limitations as in claim 4, and is rejected for the same reasons set forth in the analysis of claim 4. Refer to “As to claim 4” presented earlier in this Office Action for details.

As to claim 15, it recites substantially the same limitations as in claim 5, and is rejected for the same reasons set forth in the analysis of claim 5. Refer to “As to claim 5” presented earlier in this Office Action for details.

As to claim 16, it recites substantially the same limitations as in claim 6, and is rejected for the same reasons set forth in the analysis of claim 6. Refer to “As to claim 6” presented earlier in this Office Action for details.

As to claim 17, it recites substantially the same limitations as in claim 7, and is rejected for the same reasons set forth in the analysis of claim 7. Refer to “As to claim 7” presented earlier in this Office Action for details.

As to claim 18, it recites substantially the same limitations as in claim 8, and is rejected for the same reasons set forth in the analysis of claim 8. Refer to “As to claim 8” presented earlier in this Office Action for details.

As to claim 19, it recites substantially the same limitations as in claim 9, and is rejected for the same reasons set forth in the analysis of claim 9. Refer to “As to claim 9” presented earlier in this Office Action for details.

As to claim 20, it recites substantially the same limitations as in claim 10, and is rejected for the same reasons set forth in the analysis of claim 10. Refer to “As to claim 10” presented earlier in this Office Action for details.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

8. Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (US Patent Application Publication 2003/0135514, hereinafter Patel) in view of Grawrock (US Patent 6,360,322).

Regarding claim 2, Patel does not teach returning a key allowing the client access to data on the plurality of memory hosts. However, returning a key allowing the client access to data is well known and a common practice in the art. For example, Grawrock specifically teaches returning a key allowing the client access to data [A method of granting a user access to encrypted data stored on a user's computer, said user and said user's computer remote from an authenticating entity, comprising the steps of: automatically authenticating said user by an authenticating computer at said authenticating entity; upon authentication, automatically providing an access key to said authenticated user, enabling said user to access said encrypted data stored on said user's computer (claim 1)].

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to return a key allowing the client access to data, as specifically demonstrated by Grawrock, and to incorporate it into the existing scheme disclosed by Patel, in order to ensure that only authorized users are able to access the data.
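For orientation, the authenticate-then-issue-a-key pattern Grawrock describes can be sketched as follows. This is only an illustrative sketch; the function names, key-derivation scheme, and data structures are hypothetical and appear in neither reference:

```python
import hashlib
import hmac
import secrets

# Illustrative sketch of a Grawrock-style flow: the authenticating entity
# verifies the user first, then returns an access key; reads against the
# stored data are refused unless a valid key is presented.

SERVER_SECRET = secrets.token_bytes(32)  # held only by the authenticating entity

def authenticate_and_issue_key(user, password, credentials):
    """Return an access key only if the user's credentials check out."""
    if credentials.get(user) != password:
        return None  # authentication failed: no key, hence no access
    # Derive a per-user key from the server secret (illustrative only).
    return hmac.new(SERVER_SECRET, user.encode(), hashlib.sha256).digest()

def read_data(user, key, store):
    """Serve a read only when the presented key matches the user's key."""
    expected = hmac.new(SERVER_SECRET, user.encode(), hashlib.sha256).digest()
    if key is None or not hmac.compare_digest(key, expected):
        raise PermissionError("invalid access key")
    return store[user]

credentials = {"alice": "s3cret"}
store = {"alice": b"striped file data"}
key = authenticate_and_issue_key("alice", "s3cret", credentials)
print(read_data("alice", key, store))  # prints b'striped file data'
```

The point of the pattern, as the rejection frames it, is simply that possession of the returned key gates every subsequent data access.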
As to claim 12, it recites substantially the same limitations as in claim 2, and is rejected for the same reasons set forth in the analysis of claim 2. Refer to “As to claim 2” presented earlier in this Office Action for details.

9. Claims 10 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (US Patent Application Publication 2003/0135514, hereinafter Patel) in view of Arakawa (US Patent Application Publication 2003/0163759).

Regarding claim 10, Patel does not teach re-executing the read operation; and receiving another notification indicating that the re-executed read operation failed to access the particular data. However, Arakawa specifically teaches re-executing the read operation; and receiving another notification indicating that the re-executed read operation failed to access the particular data [as shown in figure 2, steps S2-S14; … Error retry is executed when an error has occurred during execution of a read/write command supplied from the host 2. The error log includes information in fields 201-203. In the field 201, the command in which an error (sector error) has occurred is stored … (¶ 0025); On the other hand, if a target sector could not be normally read, i.e., if retry has failed (step S11), the CPU 17 returns to the step S9. At the step S9, the CPU 17 again determines whether or not a retry timeout occurs. If no timeout occurs, the CPU 17 re-executes error retry (step S10) … (¶ 0038); Upon fetching one error log, the CPU 17 starts the timer TM1 as in the standard case where a read/write operation is executed in units of sectors (step S23). Subsequently, the CPU 17 controls the reading of the sector whose retry has failed, on the basis of the read command and address of the sector contained in the error log (step S24) … (¶ 0045)].
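The retry loop Arakawa describes (re-execute the failed read until it succeeds or a retry timeout expires, then record the failure in an error log) can be sketched roughly as below. The names are illustrative and do not come from Arakawa:

```python
import time

def read_with_retry(read_sector, sector, timeout_s=1.0):
    """Re-execute a failed read until success or retry timeout (compare
    Arakawa steps S9-S11): each failure loops back to the timeout check,
    and only when the timeout fires is the sector recorded in the error
    log and the failure reported as permanent."""
    error_log = []
    deadline = time.monotonic() + timeout_s
    while True:
        try:
            return read_sector(sector), error_log  # read succeeded
        except IOError as err:
            if time.monotonic() >= deadline:       # retry timeout occurred
                error_log.append({"sector": sector, "error": str(err)})
                return None, error_log             # report permanent failure
            # no timeout yet: re-execute error retry

# Usage: a flaky device that fails twice before the read succeeds.
attempts = {"n": 0}
def flaky_read(sector):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("sector error")
    return b"data"

data, log = read_with_retry(flaky_read, sector=7)  # data == b"data", log == []
```

In this framing, a read that still fails when the timeout fires is what the claim language calls a permanent error.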
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to re-execute the read operation, as specifically demonstrated by Arakawa, and to incorporate it into the existing scheme disclosed by Patel, because Arakawa teaches doing this allows verification of a defective sector/area of a memory/storage device [Some recent HDDs automatically allocate a normal sector in place of a defective sector, i.e., automatically execute an alternate process. For the verification of the defective sector, a lot of time is used, since, as stated above, HDDs have been used as external storages for computers. Specifically, in conventional HDDs, retry (error retry) is executed not more than a predetermined number of times on a sector that may be defective. In error retry, data is read or written from or to a sector that may be defective. Only if the error sector is not restored even after error retry is executed a predetermined number of times is the sector considered a defective one and subjected to an alternate process. Thus, in the process executed upon detection of a sector error in conventional HDDs, much more time is required for verification of a defective sector, i.e., for error retry, than for an alternate process (¶ 0010)].

As to claim 20, it recites substantially the same limitations as in claim 10, and is rejected for the same reasons set forth in the analysis of claim 10. Refer to “As to claim 10” presented earlier in this Office Action for details.

10. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (US Patent Application Publication 2003/0135514, hereinafter Patel) in view of Donaldson et al. (US Patent 5,297,269, hereinafter Donaldson).
Regarding claim 8, Patel does not teach determining that at least one of the plurality of data stripes is migrating to a respective destination storage location of the plurality of memory hosts; and denying the request to perform the read operation. However, Donaldson specifically teaches the cited limitation [… The transitional states tell the main memory module whether an outstanding data transfer operation regarding a particular data block is being executed so that the memory module can block or inhibit a subsequent read request for that data block until the already commenced operation has been completed. The transitional states therefor provide an automatic conflict check mechanism during accelerated cache operations (c4 L46-62)].

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine that at least one of the plurality of data stripes is migrating to a respective destination storage location of the plurality of memory hosts and to deny the request to perform the read operation, as specifically demonstrated by Donaldson, and to incorporate it into the existing scheme disclosed by Patel, because Donaldson teaches doing this prevents conflicts from occurring [… The transitional states tell the main memory module whether an outstanding data transfer operation regarding a particular data block is being executed so that the memory module can block or inhibit a subsequent read request for that data block until the already commenced operation has been completed. The transitional states therefor provide an automatic conflict check mechanism during accelerated cache operations (c4 L46-62)].

As to claim 18, it recites substantially the same limitations as in claim 8, and is rejected for the same reasons set forth in the analysis of claim 8. Refer to “As to claim 8” presented earlier in this Office Action for details.

Conclusion

11. Claims 1-20 are rejected as explained above.

12.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHENG JEN TSAI, whose telephone number is 571-272-4244. The examiner can normally be reached Monday-Friday, 9-6.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kenneth Lo, can be reached at 571-272-9774. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/SHENG JEN TSAI/
Primary Examiner, Art Unit 2136
October 5, 2025

Prosecution Timeline

May 01, 2024: Application Filed
Oct 05, 2025: Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596490: MEMORY MANAGEMENT USING A REGISTER (2y 5m to grant; granted Apr 07, 2026)
Patent 12585387: Clock Domain Phase Adjustment for Memory Operations (2y 5m to grant; granted Mar 24, 2026)
Patent 12579075: USING RETIRED PAGES HISTORY FOR INSTRUCTION TRANSLATION LOOKASIDE BUFFER (TLB) PREFETCHING IN PROCESSOR-BASED DEVICES (2y 5m to grant; granted Mar 17, 2026)
Patent 12572474: SPARSITY COMPRESSION FOR INCREASED CACHE CAPACITY (2y 5m to grant; granted Mar 10, 2026)
Patent 12561070: AUTONOMOUS BATTERY RECHARGE CONTROLLER (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 70%
With Interview: 83% (+13.0%)
Median Time to Grant: 3y 6m
PTA Risk: Low
Based on 790 resolved cases by this examiner. Grant probability derived from career allow rate.
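The headline projections above are straightforward arithmetic on the examiner's career figures; a quick check (the variable names are ours, not the tool's):

```python
# Career allow rate: 556 granted out of 790 resolved applications.
granted, resolved = 556, 790
allow_rate = round(granted / resolved * 100)   # -> 70 (%)

# The interview lift is additive, in percentage points.
interview_lift = 13.0
with_interview = allow_rate + interview_lift   # -> 83.0 (%)
```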
