Prosecution Insights
Last updated: April 19, 2026
Application No. 18/477,523

ORPHAN BUCKET SCANNER

Status: Final Rejection (§103)
Filed: Sep 28, 2023
Examiner: KRIEGER, JONAH C
Art Unit: 2133
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 4 (Final)

Grant Probability: 86% (Favorable)
OA Rounds: 5-6
To Grant: 2y 7m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 86% (above average) — 127 granted / 147 resolved, +31.4% vs TC avg
Interview Lift: +8.2% (moderate) — allow rate with vs. without interview, across resolved cases with interview
Avg Prosecution: 2y 7m (typical timeline) — 31 applications currently pending
Total Applications: 178 (career history, across all art units)

Statute-Specific Performance

§101: 3.4% (-36.6% vs TC avg)
§103: 69.8% (+29.8% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)

Tech Center averages are estimates; based on career data from 147 resolved cases.
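The headline figures above follow from simple arithmetic on the raw counts. A quick sketch, assuming (as the labels suggest) that the allow rate is granted-over-resolved and the "vs TC avg" delta is an absolute percentage-point difference:

```python
# Sketch only — not the dashboard's actual pipeline.
granted, resolved = 127, 147

allow_rate = granted / resolved                # career allow rate
print(f"Career allow rate: {allow_rate:.1%}")  # 86.4%, displayed as 86%

# "+31.4% vs TC avg" then implies an estimated Tech Center average of:
tc_avg = allow_rate - 0.314
print(f"Implied TC average: {tc_avg:.1%}")     # 55.0%
```

Under these assumptions, the examiner's career rate sits roughly 31 percentage points above the implied Tech Center baseline.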

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1 and 11 have been amended. Claims 4 and 14 remain cancelled. No new claims have been added. Claims 1-3, 5-13 and 15-20 remain pending and are ready for examination.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3, 5, 11, 13 and 15 is/are rejected under 35 U.S.C.
103 as being unpatentable over Alkalay et al. (US Publication No. 2022/0398034 – "Alkalay") in view of Yang et al. (US Publication No. 2024/0126446 – "Yang") in further view of Taneja et al. (US Publication No. 2020/0394078 – "Taneja") in further view of Venetsanopoulos et al. (US Publication No. 2018/0239559 – "Venetsanopoulos").

Regarding claim 11, Alkalay teaches A system comprising: data processing hardware; and memory hardware in communication with the data processing hardware, the memory hardware storing instructions that when executed by the data processing hardware cause the data processing hardware to: (Alkalay paragraph [0002], Each such storage node of a distributed storage system typically processes input-output (IO) operations from one or more host devices. During the processing of those IO operations, the storage node runs various storage application processes. The storage application processes in some cases handle the persistent storage of metadata pages on storage devices of the storage system. Memory hardware may be communicating with data processing hardware for various operations, also see Alkalay paragraph [0004], an apparatus comprises a storage system comprising a plurality of storage devices and at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to persistently store a plurality of metadata pages of the storage system on the plurality of storage devices) Obtain a directory that includes information for one or more respective pluralities of storage buckets deployed in a container-based environment, (Alkalay paragraph [0045], An address-to-hash (A2H) table provides mapping between logical block addresses (LBAs) and hashes of respective user data pages. The A2H table is illustratively backed up in persistent storage as a set of hash-based backup (HBBKUP) pages, each 16 KB in size, although other page types and page sizes can be used in other embodiments.
A given HBBKUP page contains a plurality of LBA-hash pairs, each such pair providing a mapping between a particular LBA and a hash of the corresponding data page. The HBBKUP pages are examples of what are more generally referred to herein as “metadata pages.” The HBBKUP pages are illustratively organized in buckets, where each such bucket contains a plurality of HBBKUP pages. A given HBBKUP page can therefore be uniquely identified by a pair comprising a bucket identifier (ID) and a page index of that HBBKUP page within the bucket having the bucket ID. An address mapping directory can be used to identify pluralities of storage buckets, which can be deployed in a container-based environment, as seen in Alkalay paragraph [0204], A given such processing device in some embodiments may correspond to one or more virtual machines or other types of virtualization infrastructure such as Docker containers or Linux containers (LXCs). Host devices, distributed storage controllers and other system components may be implemented at least in part using processing devices of such processing platforms. For example, respective distributed modules of a distributed storage controller can be implemented in respective containers running on respective ones of the processing devices of a processing platform) Wherein the container-based environment includes a plurality of containers, (Alkalay paragraph [0019-0020], The storage nodes 102 illustratively comprise respective processing devices of one or more processing platforms. For example, the storage nodes 102 can each comprise one or more processing devices each having a processor and a memory, possibly implementing virtual machines and/or containers, although numerous other configurations are possible. 
The storage node can be implemented as a container based environment, also see Alkalay paragraph [0204], A given such processing device in some embodiments may correspond to one or more virtual machines or other types of virtualization infrastructure such as Docker containers or Linux containers (LXCs). Host devices, distributed storage controllers and other system components may be implemented at least in part using processing devices of such processing platforms. For example, respective distributed modules of a distributed storage controller can be implemented in respective containers running on respective ones of the processing devices of a processing platform) wherein each container from the plurality of containers is allocated one respective plurality of storage buckets from the one or more respective pluralities of storage buckets, (Alkalay paragraph [0168], Metadata storage logic 110 is configured to reserve the possibility of creating a predetermined number of logical storage volumes where, for example, the maximum number of possible logical storage volumes may be set initially, e.g., during setup of the storage system. The metadata storage logic 110 implements a two-hierarchy configuration for storing the metadata on the storage devices 106. For example, metadata storage logic 110 maintains a data structure of bucket ranges, each corresponding to a plurality of buckets. While the data structure of bucket ranges may be implemented statically, the memory corresponding to buckets for at least some of the bucket ranges may remain unallocated until needed for storing metadata pages. 
The storage devices may contain a plurality of storage buckets from the overall selection of storage buckets) and wherein each at least one resource maps each container to one or more storage buckets from the one respective plurality of storage buckets allocated to the container; (Alkalay paragraph [0092], The content-based signature in the present example comprises a content-based digest of the corresponding data page. Such a content-based digest is more particularly referred to as a “hash digest” of the corresponding data page, as the content-based signature is illustratively generated by applying a hash function such as the SHA1 secure hashing algorithm to the content of that data page. The full hash digest of a given data page is given by the above-noted 20-byte signature. The hash digest may be represented by a corresponding “hash handle,” which in some cases may comprise a particular portion of the hash digest. The hash handle illustratively maps on a one-to-one basis to the corresponding full hash digest within a designated cluster boundary or other specified storage resource boundary of a given storage system. In arrangements of this type, the hash handle provides a lightweight mechanism for uniquely identifying the corresponding full hash digest and its associated data page within the specified storage resource boundary. The hash digest and hash handle are both considered examples of “content-based signatures” as that term is broadly used herein. 
The container (i.e., grouping of storage buckets) may be associated with a storage range indicating a storage resource boundary) for each respective storage bucket from the one respective plurality of storage buckets allocated to a first container from the plurality of containers: identify a prefix associated with the respective storage bucket from the one respective plurality of storage buckets allocated to the first container; (Alkalay paragraph [0157], As an example, in a system where there are 64 million buckets, a 16-bit volume identifier (also referred to herein as a lun_id or LUN ID) with a 10-bit slice offset may be utilized to access a given bucket comprising a given metadata page stored on the storage devices 106, where each bucket is represented by a structure of 6 bytes. In such an example system, 384 megabytes (MB) (64 million × 6 bytes) would be utilized for persistently storing the corresponding metadata pages. As the number of logical storage volumes increases, the number of metadata pages that need to be stored will also increase. To handle the additional metadata pages, a number of approaches may be utilized. Each of the plurality of storage buckets may have a unique identifier assigned to it, as previously described; this can be implemented in a container-based environment, such as in Alkalay paragraph [0204]) Identify, based on the prefix associated with the respective storage bucket, a resource from the at least one resource that is associated with the first container; (Alkalay paragraph [0004], In one embodiment, an apparatus comprises a storage system comprising a plurality of storage devices and at least one processing device comprising a processor coupled to a memory. The at least one processing device is configured to persistently store a plurality of metadata pages of the storage system on the plurality of storage devices. The metadata pages are organized into a plurality of buckets.
The at least one processing device is further configured to access a given metadata page of the plurality of metadata pages based at least in part on a bucket identifier. The given metadata page corresponds to a given logical volume of a plurality of logical volumes of the storage system. The bucket identifier comprises a first portion comprising an indication of a given bucket range of a plurality of bucket ranges that corresponds to the given logical volume. Each bucket may have a storage identifier indicating an address and logical volume corresponding to the given storage bucket assigned to a given container). Alkalay does not teach wherein each at least one resource maps each container to one or more storage buckets from the one respective plurality of storage buckets allocated to the container; for at least one storage bucket from the one respective plurality of storage buckets: determine whether the resource associated with the at least one storage bucket from the one respective plurality of storage buckets has been deleted from the container-based environment; responsive to determining that the resource associated with the at least one storage bucket from the one respective plurality of storage buckets has been deleted from the container-based environment, add the at least one storage bucket from the one respective plurality of storage buckets to a subset of storage buckets from the one respective plurality of storage buckets; generate an alert indicating the subset of storage buckets. 
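The disputed claim 11 limitations enumerated above amount to a small scan loop. The following is a minimal sketch of that loop for orientation only — it is not code from the application or any cited reference, and the directory layout and prefix convention (the owning resource encoded before the first hyphen in a bucket name) are illustrative assumptions:

```python
# Hypothetical sketch of the claimed scan: directory maps each container
# to its allocated buckets; live_resources holds the resource IDs still
# present in the container-based environment.
def scan_for_orphans(directory, live_resources):
    orphaned = []
    for container, buckets in directory.items():
        for bucket in buckets:
            prefix = bucket.split("-", 1)[0]  # identify the bucket's prefix
            if prefix not in live_resources:  # associated resource deleted?
                orphaned.append(bucket)       # add bucket to the subset
    if orphaned:
        # generate an alert indicating the subset of storage buckets
        print(f"ALERT: orphaned buckets detected: {orphaned}")
    return orphaned

# Example: resource "res2" has been deleted from the environment.
directory = {"container-a": ["res1-data", "res2-logs"],
             "container-b": ["res3-cache"]}
scan_for_orphans(directory, {"res1", "res3"})  # flags "res2-logs"
```

Framed this way, the dispute is over which reference supplies which step: Alkalay the directory and prefix, Taneja the prefix-to-resource lookup, Yang the deletion check plus subset-and-alert, and Venetsanopoulos the resource-to-bucket mapping.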
However, Yang teaches responsive to determining that the resource associated with the at least one storage bucket from the one respective plurality of storage buckets has been deleted from the container-based environment, add the at least one storage bucket from the one respective plurality of storage buckets to a subset of storage buckets from the one respective plurality of storage buckets; (Yang paragraph [0017], Additionally, management service 130 can determine how a failure of at least one local data store will affect objects in hyperconverged data store 140. The objects can include one or more disk files, configuration files, and the like that are indicated for each virtual machine on the host. For example, if a data store fails on host 111, management service 130 can identify orphan or unclaimed objects in association with virtual machines that include at least one object on the local data store, wherein the orphan object can be located on other local data stores or hyperconverged data store 140. Once the orphan objects are identified, management service 130 can provide a summary to an administrator of computing environment 100. In some examples, a user generates a request in association with a data store to determine how a failure will affect other objects located on other data stores. The user may generate the request in anticipation of a failure of the data store either intentionally (through an update or configuration change), or unintentionally (via a failure to one or more storage devices or the storage controller). In response to the request, management service 130 can identify virtual machines with objects on the data store and identify orphan objects for the virtual machines on one or more other data stores. A summary can then be provided to an administrator by management service 130, wherein the summary can indicate the orphan disks, virtual machine identifiers for virtual machines affected by the failure, or some other information. 
A summary can also be provided by management service 130 when a failure or potential failure is identified through the aggregated health and performance information supplied by host 111. When a particular resource associated with a storage object is deleted, a subset (i.e., a list) of the orphaned storage objects may be compiled and utilized) generate an alert indicating the subset of storage buckets (Yang paragraph [0026], Method 300 includes, for at least one host in computing environment 100, identifying (301) a data store location for each disk object associated with each virtual machine on the at least one host. The data store locations can be identified via configuration files for the virtual machines, wherein the configuration files can indicate a data store, a file path location, or some other location information associated with the objects. Method 300 further includes determining (302) orphan disk objects (e.g., virtual disk files) in the hyperconverged data store based on a failure to a local data store and the identified data store locations for the disk objects associated with the virtual machines. Once the orphan disk objects are identified, method 300 further provides for generating (303) a summary of the orphan objects for display to an administrator of computing environment 100. The summary may indicate names and file path locations to the orphan objects, a virtual machine name associated with the hyperconverged storage objects, or some other information associated with the orphaned hyperconverged storage objects. An alert comprising a notification may be generated and sent corresponding to the orphaned storage object list). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay with those of Yang.
Yang teaches identifying resources associated with particular storage units (i.e., buckets) being deleted, which are then grouped together into a subset and an alert generated. Notifying the system of these orphaned storage units is an improvement as the orphaned storage objects cannot be properly referenced or targeted and can result in inefficiency in the system (Yang paragraph [0004], However, while the virtual machines can use multiple data stores, difficulties exist for administrators in determining the effects of a failure associated with a local data store on other reliant data stores. For example, when a virtual machine that uses both local and hyperconverged data stores encounters a failure with the local data store, orphan objects, or unclaimed objects on the hyperconverged storage can be created. This can create inefficiencies as no virtual machine can be assigned to use the resources of the orphan objects during the downtime of the local data store). Alkalay in view of Yang does not teach wherein each at least one resource maps each container to one or more storage buckets from the one respective plurality of storage buckets allocated to the container; for at least one storage bucket from the one respective plurality of storage buckets: determine whether the resource associated with the at least one storage bucket from the one respective plurality of storage buckets has been deleted from the container-based environment. However, Taneja teaches for at least one storage bucket from the one respective plurality of storage buckets: determine whether the resource associated with the at least one storage bucket from the one respective plurality of storage buckets has been deleted from the container-based environment (Taneja paragraph [0016], In some embodiments, the system identifies a bucket identifier with a designated prefix. 
The system can determine that a bucket associated with the bucket identifier having the designated prefix is a snapbucket and that objects for temporary and intermediate operations can go into that bucket. The system can assign an expiry duration to the snapbucket. Following the expiry period, the system can delete the bucket identifier. Deleting the bucket identifier means that the reference to the objects is lost. A prefix associated with a bucket can be used to identify reference resources within said bucket). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay and Yang with those of Taneja. Taneja teaches using a prefix associated with a bucket, and based on the prefix, identifying a resource associated with said prefix, which can be used to provide more efficient access and operations related to the storage bucket (Taneja paragraph [0040], The OVM 210 can create a snapbucket 225. The OVM 210 can receive, from the client 205, the bucket ID associated with the snapbucket 225 and add the bucket ID to the data structure 240. The OVM 210 can include, in the bucket ID, the prefix that will later be used to identify the snapbucket 225 as a bucket of the snapbucket type). Alkalay in view of Yang in further view of Taneja does not teach wherein each at least one resource maps each container to one or more storage buckets. However, Venetsanopoulos teaches wherein each at least one resource maps each container to one or more storage buckets (Venetsanopoulos paragraph [0090], Data Service 904 is a service that connects to Data Service 902 (the composition), and organizes existing VSRs, as known by Data Service 902, into arbitrary groups, called buckets. The service is exposed to end users. Users can login and create empty buckets, which they can then fill with Snapshots, by associating a user-provided filename with an underlying VSR of Data Service 902. 
One can call this process a registration. The user can register an underlying storage resource, a VSR, with Data Service 904, by giving it a human-friendly name and putting it inside one of their buckets. No data gets moved or copied during the registration process, since Data Service 904 has access to the same VSRs as the compute platform, via its direct connection to Data Service 902. Registration needs to take place when the compute platform produces new storage resources (snapshots) on Data Service 902. Storage resources can be used to virtually map containers to various storage groupings such as storage buckets, also see paragraph [0095], This network does not exchange traditional media or file content in our case, as happens with typical peer-to-peer networks based on torrent technology, but rather snapshots which are essentially VM, container or even bare metal machine disks as well as claims 4 and 17). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay and Yang and Taneja with those of Venetsanopoulos. Venetsanopoulos teaches using a storage resource to virtual map containers and storage buckets, which can allow for more efficient data distribution and data replication (i.e., see Venetsanopoulos paragraph [0084], This Data Service is the core of the Snapshot Delivery Network. It is the Data Service that implements the thin clone and snapshot functionality of the underlying data. Data Service 902 can compose virtual storage resources, using virtual data blocks. A virtual storage resource (VSR) represents a linearly-addressable set of fixed-length blocks, and a virtual data block (VDB) represents a named blob. A VSR is composed from a number of VDBs put together, one after the other. The composition service provides VSRs for consumption from the upper layers and stores VDBs on the lower levels. 
It also allows the upper layers to access the VDBs themselves, if needed. A VSR can be thinly copied to a new immutable VSR (a snapshot), or a new mutable VSR (a clone). This happens without having to copy the actual data, but rather with re-mapping on internal structures that the data service uses to construct the VSRs. Only when data is changed, for example when upper services write to a VSR, new VDBs get allocated and the relative re-mappings happen, following a copy-on-write methodology. Accordingly, garbage collection can take place when data gets deleted and no more references to VDBs exist). Claim 1 is the corresponding method claim to system claim 11. It is rejected with the same references and rationale. Regarding claim 13, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos teaches The system of claim 11, wherein the alert comprises a user-interface listing each storage bucket of the subset of storage buckets (Yang paragraph [0026], Method 300 includes, for at least one host in computing environment 100, identifying (301) a data store location for each disk object associated each virtual machine on the at least one host. The data store locations can be identified via configuration files for the virtual machines, wherein the configuration files can indicate a data store, a file path location, or some other location information associated with the objects. Method 300 further includes determining (302) orphan disk objects (e.g., virtual disk files) in the hyperconverged data store based on a failure to a local data store and the identified data store locations for the disk objects associated with the virtual machines. Once the orphan disk objects are identified, method 300 further provides for generating (303) a summary of the orphan objects for display to an administrator of computing environment 100. 
The summary may indicate names and file path locations to the orphan objects, a virtual machine name associated with the hyperconverged storage objects, or some other information associated with the orphaned hyperconverged storage objects. An alert comprising a notification may be generated and sent corresponding to the orphaned storage object list). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay with those of Yang and Taneja and Venetsanopoulos. Yang teaches identifying resources associated with particular storage units (i.e., buckets) being deleted, which are then grouped together into a subset and an alert generated. Notifying the system of these orphaned storage units is an improvement as the orphaned storage objects cannot be properly referenced or targeted and can result in inefficiency in the system (Yang paragraph [0004], However, while the virtual machines can use multiple data stores, difficulties exist for administrators in determining the effects of a failure associated with a local data store on other reliant data stores. For example, when a virtual machine that uses both local and hyperconverged data stores encounters a failure with the local data store, orphan objects, or unclaimed objects on the hyperconverged storage can be created. This can create inefficiencies as no virtual machine can be assigned to use the resources of the orphan objects during the downtime of the local data store). Claim 3 is the corresponding method claim to system claim 13. It is rejected with the same references and rationale. Regarding claim 15, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos teaches The system of claim 11, wherein the prefix is a unique identification number comprising a fixed number of alphanumeric characters (Taneja paragraph [0038], The OVM 210 can manage an object (e.g. 
object data and metadata) stored in the volatile store 215 and the persistent store 220. In some embodiments, the OVM 210 includes programmed instructions to read a data structure 240 (e.g. a registry) including one or more bucket identifiers (IDs) corresponding to one or more buckets. A bucket identifier can be a name, an alphanumeric string, a binary number, and a hexadecimal number, among others. As shown in FIG. 2, the data structure includes bucket IDs 1-N. The data structure 240 can be stored in the config database 235. The OVM 210 can include programmed instructions to determine whether a bucket is assigned to the volatile store 215 or the persistent store 220 based on identifying a prefix of a bucket ID corresponding to the bucket. For example, the prefix can be “snap.” The prefix identifier may be an alphanumeric string of characters). Claim 5 is the corresponding method claim to system claim 15. It is rejected with the same references and rationale. Claim(s) 2, 10, 12 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos as applied to claims 1 and 11 above, and further in view of Ding (US Publication No. 2023/0035929 -- "Ding"). Regarding claim 12, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos in further view of Ding teaches The system of claim 11, wherein the operations further comprise deleting each storage bucket of the subset of storage buckets from the plurality of storage buckets from the container-based environment (Ding paragraph [0016], In various embodiments, among the set of blocks identified based on the CBT data, the data management system may identify one or more data blocks that are allocated (e.g., associated with block allocation status indicating “allocated”) and only ingest/read both changed and allocated data blocks, such as blocks 606 as illustrated in FIG. 6, to the storage appliance. 
Under this approach, the data management system may avoid ingesting unchanged data blocks, or changed data blocks that have already been deleted (e.g., unallocated) to the storage appliance for downstream data backup and recovery operations. Therefore, this approach may help significantly reduce inbound (e.g., data ingestion) and outbound (e.g., data export) network traffic, save storage space, reduce data backup time, and improve system performance for data export and recovery. The storage units that have been unallocated may be deleted). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay, Yang, Taneja and Venetsanopoulos with those of Ding. Ding teaches storage resources/containers can be deleted based on being orphaned/unallocated, which can improve memory space management and improve system performance (Ding paragraph [0016], In various embodiments, among the set of blocks identified based on the CBT data, the data management system may identify one or more data blocks that are allocated (e.g., associated with block allocation status indicating “allocated”) and only ingest/read both changed and allocated data blocks, such as blocks 606 as illustrated in FIG. 6, to the storage appliance. Under this approach, the data management system may avoid ingesting unchanged data blocks, or changed data blocks that have already been deleted (e.g., unallocated) to the storage appliance for downstream data backup and recovery operations. Therefore, this approach may help significantly reduce inbound (e.g., data ingestion) and outbound (e.g., data export) network traffic, save storage space, reduce data backup time, and improve system performance for data export and recovery). Claim 2 is the corresponding method claim to system claim 12. It is rejected with the same references and rationale. 
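The claim 12 limitation (deleting each storage bucket of the orphaned subset from the container-based environment, as mapped to Ding above) can be sketched in the same illustrative style; the function and variable names below are hypothetical, not drawn from Ding or the application:

```python
# Hedged sketch: remove every bucket in the orphaned subset from each
# container's allocation, mirroring Ding's point that deleted/unallocated
# storage units need not be retained for downstream operations.
def delete_orphaned(directory, orphaned_subset):
    for container, buckets in directory.items():
        directory[container] = [b for b in buckets
                                if b not in orphaned_subset]
    return directory

directory = {"container-a": ["res1-data", "res2-logs"]}
delete_orphaned(directory, {"res2-logs"})
# directory is now {"container-a": ["res1-data"]}
```

This is the cleanup step that follows the scan-and-alert loop of claim 11: once the subset is identified, its members are purged from every container's allocation.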
Regarding claim 20, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos in further view of Ding teaches The system of claim 11, wherein each storage bucket of the subset of storage buckets cannot be reconnected to a new resource of the container-based environment (Ding paragraph [0053], In various embodiments, the data management system 302 may be responsible for accessing (or receiving) changed block tracking (CBT) data and block allocation status data from a virtual machine 220, identifying one or more allocated data blocks from the changed blocks based on the CBT data and block allocation status data, and ingesting only those identified one or more data blocks for downstream data backup and recovery operations. This way, the data management system may avoid ingesting unchanged data blocks or changed data blocks that have already been deleted (e.g., unallocated) back to the storage appliance for backup. This approach may significantly reduce inbound (e.g., data ingestion) and outbound (e.g., data export) network traffic, and improve system performance and the efficiency of data export and recovery. Storage resources that have been deleted may not be reconnected or "reingested" to the system). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay, Yang, Taneja and Venetsanopoulos with those of Ding. 
Ding teaches storage resources/containers that have been deleted not being able to be reconnected or readded to the system, which can improve system performance by minimizing inbound and outbound network traffic (Ding paragraph [0053], In various embodiments, the data management system 302 may be responsible for accessing (or receiving) changed block tracking (CBT) data and block allocation status data from a virtual machine 220, identifying one or more allocated data blocks from the changed blocks based on the CBT data and block allocation status data, and ingesting only those identified one or more data blocks for downstream data backup and recovery operations. This way, the data management system may avoid ingesting unchanged data blocks or changed data blocks that have already been deleted (e.g., unallocated) back to the storage appliance for backup. This approach may significantly reduce inbound (e.g., data ingestion) and outbound (e.g., data export) network traffic, and improve system performance and the efficiency of data export and recovery). Claim 10 is the corresponding method claim to system claim 20. It is rejected with the same references and rationale. Claim(s) 6, 9, 16 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos as applied to claims 1 and 11 above, and further in view of Bass et al. (US Publication No. 2024/0168664 -- "Bass"). Regarding claim 16, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos in further view of Bass teaches The system of claim 11, wherein the instructions further cause the data processing hardware to: periodically obtain a new directory comprising a new plurality of storage buckets deployed in the container-based environment; (Bass paragraph [0524], In temporal period 3102.2 after temporal period 3102.1, the storage system 3105 transitions from storage scheme 3101.A to storage scheme 3101.B. 
Rather than going offline to complete this transition, the record storage module 2502 can write newly received records to storage system 3105 in accordance with transitioning to storage scheme 3101.B, for example, based on a predetermination and/or instruction to write all new records in accordance with storage scheme 3101.B and/or in accordance with transitioning to the storage scheme 3101.B in accordance with performing the transition. New storage units (i.e., buckets) can be added to the storage environment/scheme) and identify a new subset of storage buckets from the new plurality of storage buckets that correspond to respective resources that have been deleted from the container-based environment (Bass paragraph [0626], In various examples, at a first time during the second temporal period prior to the at least one first expansion and prior to the at least one second expansion, a first portion of the full storage resources of the each storage device is consumed by the single storage structure and the remaining ones of the corresponding plurality of data storage structures, and wherein a remaining portion of the full storage resources is unallocated. In various examples, the first expanded storage size is based on expanding the single storage structure to include all of the remaining portion of the full storage resources. The resource for a particular storage may be unallocated). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay, Yang, Taneja and Venetsanopoulos with those of Bass. 
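The claim 16 limitation (periodically obtaining a directory of deployed buckets and identifying the subset whose resources have been deleted) amounts to a membership scan against the set of live resources. A minimal sketch, with hypothetical names and data shapes not taken from any cited reference:

```python
def find_orphan_buckets(directory, live_resources):
    """Identify buckets whose owning resource no longer exists.

    directory: dict mapping bucket name -> owning resource name, as
        obtained from a periodic listing of deployed buckets.
    live_resources: set of resource names currently present in the
        container-based environment. Both shapes are assumptions
        made for this sketch.
    """
    return sorted(
        bucket for bucket, resource in directory.items()
        if resource not in live_resources
    )

# "deploy-b" has been deleted from the environment, so "b2" is orphaned.
directory = {"b1": "deploy-a", "b2": "deploy-b", "b3": "deploy-c"}
live = {"deploy-a", "deploy-c"}
print(find_orphan_buckets(directory, live))  # ['b2']
```

Running this scan on a schedule, against a freshly fetched directory each time, is the periodic behavior the claim recites.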
Bass teaches using a directory environment to select new or additional storage resources for storage units (i.e., storage buckets) when necessary, which can allow for more optimal use of storage size and remaining portions of storage space (i.e., see Bass paragraph [0626], In various examples, at a first time during the second temporal period prior to the at least one first expansion and prior to the at least one second expansion, a first portion of the full storage resources of the each storage device is consumed by the single storage structure and the remaining ones of the corresponding plurality of data storage structures, and wherein a remaining portion of the full storage resources is unallocated. In various examples, the first expanded storage size is based on expanding the single storage structure to include all of the remaining portion of the full storage resources). Claim 6 is the corresponding method claim to system claim 16. It is rejected with the same references and rationale. Regarding claim 19, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos in further view of Bass teaches The system of claim 11, wherein the one or more respective pluralities of storage buckets are stored at a data store of the container-based environment (Bass Figure 1A; Bass paragraph [0068], FIG. 1A is a schematic block diagram of an embodiment of a database system 10 that includes a parallelized data input sub-system 11, a parallelized data store, retrieve, and/or process sub-system 12, a parallelized query and response sub-system 13, system communication resources 14, an administrative sub-system 15, and a configuration sub-system 16. The system communication resources 14 include one or more of wide area network (WAN) connections, local area network (LAN) connections, wireless connections, wireline connections, etc. to couple the sub-systems 11, 12, 13, 15, and 16 together. 
The storage buckets/resources may be stored at a data store, also see Bass paragraph [0072], As another example, the segmenting factor indicates a number of segments to include in a segment group. As another example, the segmenting factor identifies how to segment a data partition based on storage capabilities of the data store and processing sub-system. As a further example, the segmenting factor indicates how many segments for a data partition based on a redundancy storage encoding scheme and Bass paragraph [0076], A designated computing device of the parallelized data store, retrieve, and/or process sub-system 12 receives the restructured data segments and the storage instructions). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay, Yang, Taneja and Venetsanopoulos with those of Bass. Bass teaches using a directory environment to select new or additional storage resources for storage units (i.e., storage buckets) when necessary, which can allow for more optimal use of storage size and remaining portions of storage space (i.e., see Bass paragraph [0626], In various examples, at a first time during the second temporal period prior to the at least one first expansion and prior to the at least one second expansion, a first portion of the full storage resources of the each storage device is consumed by the single storage structure and the remaining ones of the corresponding plurality of data storage structures, and wherein a remaining portion of the full storage resources is unallocated. In various examples, the first expanded storage size is based on expanding the single storage structure to include all of the remaining portion of the full storage resources). Claim 9 is the corresponding method claim to system claim 19. It is rejected with the same references and rationale. Claim(s) 7-8 and 17-18 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos as applied to claims 1 and 11 above, and further in view of Sitaram et al. (US Publication No. 2024/0220301 -- "Sitaram"). Regarding claim 17, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos in further view of Sitaram teaches The system of claim 11, wherein the container-based environment comprises an air-gapped environment that is not connected to the Internet (Sitaram paragraph [0005], An air-gapped environment is a network security measure employed to ensure a computing machine or network is secure by isolating (e.g., using a firewall) it from unsecured networks, such as the public Internet or an unsecured local area network. As such, a computing machine having containerized services running thereon may be disconnected from all other systems. The storage may utilize an air-gapped environment not connected to a public internet, and may include devices communicatively coupled through a network, see Sitaram paragraph [0006], Because the network is isolated, air-gapped environments help to keep critical systems and sensitive information safe from potential data theft or security breaches. As another layer of protection, organizations can vet the container images that are allowed to run on these clusters to reduce the risk of a malicious attack. In addition, air-gapped environments can operate in low bandwidth or with a poor internet connection, ensuring the continuous availability of their mission-critical applications. While air-gapped environments offer many security and workflow advantages, they also introduce new challenges. These challenges are particularly present when deploying cloud native applications having an arbitrary number of constituent services in such restrictive environments. 
In particular, in air-gap deployments the installation and maintenance of microservices in the container-based cluster becomes increasingly complex. Isolated deployments require additional planning and implementation details to implement successfully). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay, Yang, Taneja and Venetsanopoulos with those of Sitaram. Sitaram teaches using an air-gapped environment for storage, which can improve the security and prevent unauthorized services being run on it (Sitaram paragraphs [0005-0006], An air-gapped environment is a network security measure employed to ensure a computing machine or network is secure by isolating (e.g., using a firewall) it from unsecured networks, such as the public Internet or an unsecured local area network. As such, a computing machine having containerized services running thereon may be disconnected from all other systems. Because the network is isolated, air-gapped environments help to keep critical systems and sensitive information safe from potential data theft or security breaches. As another layer of protection, organizations can vet the container images that are allowed to run on these clusters to reduce the risk of a malicious attack. In addition, air-gapped environments can operate in low bandwidth or with a poor internet connection, ensuring the continuous availability of their mission-critical applications. While air-gapped environments offer many security and workflow advantages, they also introduce new challenges. These challenges are particularly present when deploying cloud native applications having an arbitrary number of constituent services in such restrictive environments. In particular, in air-gap deployments the installation and maintenance of microservices in the container-based cluster becomes increasingly complex. 
Isolated deployments require additional planning and implementation details to implement successfully). Claim 7 is the corresponding method claim to system claim 17. It is rejected with the same references and rationale. Regarding claim 18, Alkalay in view of Yang in further view of Taneja in further view of Venetsapoulos in further view of Sitaram teaches The system of claim 17, wherein the air-gapped environment comprises a plurality of edge devices communicatively coupled through a network of the air-gapped environment (Sitaram paragraph [0005], An air-gapped environment is a network security measure employed to ensure a computing machine or network is secure by isolating (e.g., using a firewall) it from unsecured networks, such as the public Internet or an unsecured local area network. As such, a computing machine having containerized services running thereon may be disconnected from all other systems. The storage may utilize an air-gapped environment not connected to a public internet, and may include devices communicatively coupled through a network, see Sitaram paragraph [0006], Because the network is isolated, air-gapped environments help to keep critical systems and sensitive information safe from potential data theft or security breaches. As another layer of protection, organizations can vet the container images that are allowed to run on these clusters to reduce the risk of a malicious attack. In addition, air-gapped environments can operate in low bandwidth or with a poor internet connection, ensuring the continuous availability of their mission-critical applications. While air-gapped environments offer many security and workflow advantages, they also introduce new challenges. These challenges are particularly present when deploying cloud native applications having an arbitrary number of constituent services in such restrictive environments. 
In particular, in air-gap deployments the installation and maintenance of microservices in the container-based cluster becomes increasingly complex. Isolated deployments require additional planning and implementation details to implement successfully). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the invention to combine the teachings of Alkalay, Yang, Taneja and Venetsanopoulos with those of Sitaram. Sitaram teaches using an air-gapped environment for storage, which can improve the security and prevent unauthorized services being run on it (Sitaram paragraphs [0005-0006], An air-gapped environment is a network security measure employed to ensure a computing machine or network is secure by isolating (e.g., using a firewall) it from unsecured networks, such as the public Internet or an unsecured local area network. As such, a computing machine having containerized services running thereon may be disconnected from all other systems. Because the network is isolated, air-gapped environments help to keep critical systems and sensitive information safe from potential data theft or security breaches. As another layer of protection, organizations can vet the container images that are allowed to run on these clusters to reduce the risk of a malicious attack. In addition, air-gapped environments can operate in low bandwidth or with a poor internet connection, ensuring the continuous availability of their mission-critical applications. While air-gapped environments offer many security and workflow advantages, they also introduce new challenges. These challenges are particularly present when deploying cloud native applications having an arbitrary number of constituent services in such restrictive environments. In particular, in air-gap deployments the installation and maintenance of microservices in the container-based cluster becomes increasingly complex. 
Isolated deployments require additional planning and implementation details to implement successfully). Claim 8 is the corresponding method claim to system claim 18. It is rejected with the same references and rationale.

Response to Arguments

Applicant’s arguments, see pages 1-2 (numbered pages 7-8), filed September 24, 2025, with respect to the rejection(s) of claim(s) 1 and 11 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Alkalay et al. (US Publication No. 2022/0398034 – “Alkalay”) in view of Yang et al. (US Publication No. 2024/0126446 – “Yang”) in further view of Taneja et al. (US Publication No. 2020/0394078 – “Taneja”) in further view of Venetsapoulos et al. (US Publication No. 2018/0239559 – “Venetsapoulos”). The 35 U.S.C. 103 rejection has been amended to recite the Venetsapoulos reference in response to the claim amendments to independent claims 1 and 11. The Venetsapoulos reference has been added to disclose the now explicitly claimed limitation regarding using a storage resource as a virtual mapping between a storage container and a storage bucket, as described in further detail in the rejection above. In light of the newly added reference, the 35 U.S.C. 103 rejection is maintained.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sorornejad (US Patent No. 12,099,473) teaches using a lambda function in a storage bucket environment to identify locations for storage containers and the use of eviction policies for sorting and organizing the stored data (i.e., see Sorornejad column 5, lines 15-26, The systems and methods also utilize a customized sorting assistance tool referred to as a “lambda” function.
The lambda function reads data streamed into the object storage container (or “bucket”) and parses (or extracts) information including log group and account identifiers. The parsed data is used to create locations (or entries) in object storage containers (or buckets) specific to both accounts and subscribed log groups. In order to protect the originally streamed data for archival and forensics, the sorted data is placed in a separate object storage location with strict data eviction policies as to not store large amounts of duplicate data). Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONAH C KRIEGER whose telephone number is (571) 272-3627. The examiner can normally be reached Monday - Friday 8 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Prosecution Timeline

Sep 28, 2023
Application Filed
Sep 26, 2024
Non-Final Rejection — §103
Nov 26, 2024
Response Filed
Feb 20, 2025
Final Rejection — §103
May 06, 2025
Interview Requested
May 13, 2025
Applicant Interview (Telephonic)
May 16, 2025
Examiner Interview Summary
May 29, 2025
Request for Continued Examination
Jun 03, 2025
Response after Non-Final Action
Aug 19, 2025
Non-Final Rejection — §103
Sep 08, 2025
Interview Requested
Sep 15, 2025
Applicant Interview (Telephonic)
Sep 19, 2025
Examiner Interview Summary
Sep 24, 2025
Response Filed
Dec 12, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572298
ADAPTIVE SCANS OF MEMORY DEVICES OF A MEMORY SUB-SYSTEM
2y 5m to grant • Granted Mar 10, 2026
Patent 12566705
SYSTEM ON CHIP, A COMPUTING SYSTEM, AND A STASHING METHOD
2y 5m to grant • Granted Mar 03, 2026
Patent 12566556
DATA SECURITY PROTECTION METHOD, DEVICE, SYSTEM, SERVER-SIDE, AND STORAGE MEDIUM
2y 5m to grant • Granted Mar 03, 2026
Patent 12554441
TRANSFERRING COMPRESSED DATA BETWEEN LOCATIONS
2y 5m to grant • Granted Feb 17, 2026
Patent 12547582
Cloning a Managed Directory of a File System
2y 5m to grant • Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
86%
Grant Probability
95%
With Interview (+8.2%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 147 resolved cases by this examiner. Grant probability derived from career allow rate.
