Prosecution Insights
Last updated: April 19, 2026
Application No. 18/345,045

USING CHUNKS OF DATA TO STORE STREAMING DATA AT A CLOUD SERVICE PROVIDER

Final Rejection — §103

Filed: Jun 30, 2023
Examiner: LE, JESSICA N
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 2 (Final)

Grant Probability: 73% (Favorable)
Predicted OA Rounds: 3-4
Time to Grant: 3y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 73% (366 granted / 504 resolved) — above average, +17.6% vs TC avg
Interview Lift: +28.6% among resolved cases with interview — strong
Typical Timeline: 3y 11m avg prosecution; 21 currently pending
Career History: 525 total applications across all art units

Statute-Specific Performance

§101: 18.0% (-22.0% vs TC avg)
§103: 48.8% (+8.8% vs TC avg)
§102: 12.8% (-27.2% vs TC avg)
§112: 12.2% (-27.8% vs TC avg)

Baseline: Tech Center average estimate • Based on career data from 504 resolved cases

Office Action

§103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

This communication is responsive to the amendment filed on 11/26/2025. Claims 1, 10, and 17 are independent claims, and are amended. Claims 1-20 are pending in this application. This Action has been made FINAL.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-11, 14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Verbitski et al., US Patent No. 12,292,881 (hereinafter “Verbitski”) in view of Shilane et al., US Pub. No. 2020/0019629 A1 (hereinafter “Shilane”), further in view of Colenbrander, US Pub. No. 2022/0001279 A1 (hereinafter “Colenbrander”), and Devadas et al., US Pub. No. 2024/0354144 A1 (hereinafter “Devadas”).

Regarding independent claim 1, Verbitski teaches: a method, comprising: based on first application data (col. 15, lines 47-51, e.g., “the API calls” and “APIs 521-529”, which are interpreted as the application data), initiating, by a system comprising at least one processor, a multipart cloud storage transaction (col. 21, lines 58-62: e.g., “Query engine 1312 may receive start transaction 1342 indication and then begin receiving transaction statements 1342. Query engine 1312 may obtain a transaction start time 1344 from time sync service agent 1314”, wherein the “start transaction” is interpreted as the initiating transaction; and col. 4, line 29, e.g., “provide one or more services (such as various types of cloud-based storage)”) with a data lake implemented on a cloud storage server that enables services associated with a cloud service provider (col. 4, lines 41-43, e.g., “provider network can be formed as a number of regions, where a region is a separate geographical area in which the cloud provider clusters data centers…”, and col. 24, lines 16-19, e.g., “an object-based storage service, data-lake storage service, or other type of storage that may be implemented on one of other service(s) 230”), wherein the multipart cloud storage transaction corresponds to a chunk of data (col. 26, lines 36-43: e.g., “Transactions are a feature of databases that allow for multiple different instructions to read and/or write to a database, such the instructions succeed or fail together. Because a database may have both system-managed tables across multiple shards and client-managed tables at a single location, implementing transactions for such databases may make use of dynamic protocol selection in order to determine how to correctly and efficiently handle transactions…”, wherein the “multiple different instructions to read and/or write” teaches the multipart transaction, and the multiple “shards” are interpreted as the partitions/chunks of data; and col. 11, lines 1-19 and lines 60-67 and col. 12, lines 1-16 teach the segment of data, which is interpreted as the chunk of data).
Verbitski teaches loading the table slice (chunk/segment/block) to the table slice group, then to the volume slice group, and the shard into the database storage (see Fig. 10; and col. 18, lines 49-67 to col. 19, lines 1-67), and the commit transaction based on the initiated transaction (see Fig. 13).

However, Verbitski does not explicitly teach: “based on the first application data, facilitating, by the system, communicating a first data part of the chunk to the cloud storage server; and based on second application data, facilitating, by the system, communicating a second data part of the chunk to the cloud storage server”; “wherein the first data part and the second data part are, as generated, stored in a commit buffer, and wherein a first chunk offset of the first data part and a second chunk offset of the second data part are stored in metadata;” and “based on a size of the chunk, facilitating, by the system, communicating a commit signal to commit the cloud storage transaction.”

In the same field of endeavor (i.e., data processing in cloud storage), Shilane teaches: based on the first application data (Fig. 1, element 202 via Apps), facilitating, by the system, communicating a first data part of the chunk to the cloud storage server (Fig. 1, and pars. [0070-72], such that the slice identifiers are interpreted as the data parts of the chunks, which specify one or more slice recipes stored in the persistent storage in the cloud server; and pars. [0050] “the frontend micro-services 316 may be micro-services executing on a cloud platform. The frontend micro-services 316 may also obtain requests for data stored in the persistent storage 350” and [0119] “… non-transitory storage media also embraces cloud-based storage systems and structures, although the scope of the invention is not limited to these examples of non-transitory storage media”); and based on second application data (Fig. 1, element 202 via Apps), facilitating, by the system, communicating a second data part of the chunk to the cloud storage server (Fig. 1, and Figs. 2C and 3A-3C, which show a plurality of data parts of slices/chunks; and also pars. [0070-72]).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references because the teachings of Shilane would have provided Verbitski with the above-indicated limitations, providing a skilled artisan with motivation to perform loading/communicating/transmitting the data parts of chunks to the storage at the cloud server/system (Shilane: Figs. 1-2, and pars. [0042] and [0049-50]).

Verbitski and Shilane do not explicitly teach: “wherein the first data part and the second data part are, as generated, stored in a commit buffer, and wherein a first chunk offset of the first data part and a second chunk offset of the second data part are stored in metadata;” and “based on a size of the chunk, facilitating, by the system, communicating a commit signal to commit the cloud storage transaction.”

In the same field of endeavor (i.e., data processing and archiving), Colenbrander teaches: wherein the first data part and the second data part are, as generated, stored in a commit buffer (par. [0041], e.g., “when the cloud gaming server 103-1 to 103-N calls the commit API, the cloud gaming server 103-1 to 103-N allows the management server 105-1 to 105-X to also commit its buffer changes back to the cloud storage server 109”, and par. [0047] “split data into multiple data chunks for placement into respective locations in a computer memory 337 of the cloud gaming server 103-1”), and based on a size of the chunk (par. [0040], e.g., “a transaction data buffer can be provided by either the video game or the cloud gaming system 103-1 to 103-N. In some embodiments, the transaction data buffer is in RAM. In some embodiments, the cloud gaming system 103-1 to 103-N has the video game provide the transaction buffer because the transaction buffer is small in size”, and par. [0047], e.g., multiple data chunks), facilitating, by the system, communicating a commit signal to commit the multipart cloud storage transaction (par. [0042] “the cloud storage server 109 is configured to support transactions. In these embodiments, upon mount by the management server 105-1 to 105-X, the cloud server 109 tracks data changes.”).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references because the teachings of Colenbrander would have provided Verbitski and Shilane with the above-indicated limitations, providing a skilled artisan with motivation to store the data parts of a chunk, based on the chunk size, in the commit buffer in local storage (Colenbrander: Figs. 1-2, and pars. [0039-42]).

However, Verbitski, Shilane, and Colenbrander do not explicitly teach: “wherein a first chunk offset of the first data part and a second chunk offset of the second data part are stored in metadata.”

In the same field of endeavor (i.e., data processing and archiving), Devadas teaches: “wherein a first chunk offset of the first data part and a second chunk offset of the second data part are stored in metadata” (par. [0028], e.g., “A data object can be divided into a collection of data chunks (a single data chunk or multiple data chunks). Each data chunk has a specified size (a static size or a size that can dynamically change). The storage locations of the data chunks are storage locations in the shared storage system 104. The storage location metadata maintained by the chunkstore module can include any or some combination of the following: an offset, a storage address, a block number, and so forth.”).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references because the teachings of Devadas would have provided Verbitski, Shilane, and Colenbrander with the above-indicated limitation, providing a skilled artisan with motivation to store the chunk offsets in storage location metadata, which is efficient for data access purposes (Devadas: Figs. 1-2, and pars. [0026-31]).

Regarding claim 2, Colenbrander teaches: “wherein the commit buffer is comprised in local, volatile storage” (par. [0033], e.g., “RAM”, known as local, volatile memory/storage; par. [0039] “write to RAM only”; and par. [0041], e.g., “commit of buffer”).

Regarding claim 3, Devadas teaches: wherein the commit signal is communicated to result in: the first data part and the second data part being combined, resulting in combined data parts (par. [0028] teaches that the data chunks are collected in the data object, which implies combined data parts), and the combined data parts being stored as a data object in the cloud storage server, resulting in a stored data object (par. [0033] “Note that a write of the subject data object is complete if the subject data object has been written to either a write buffer (discussed further below) or the shared storage system 104…”; and par. [0044] “a bucket can refer to any type of container that includes a collection of data objects. Each bucket is identified by a Bucket ID. A specific example of a bucket is an S3 bucket in an Amazon cloud storage.”).
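The mechanism the rejection maps across the references for claims 1-3 can be pictured concretely: data parts are buffered in local, volatile storage as they are generated, each part's chunk offset is recorded in metadata, and a size-driven commit combines the parts into a single stored object. The following is a minimal Python sketch of that flow as the Office Action characterizes the claim language; all class, method, and variable names are hypothetical illustrations, not code from any cited reference or from the application.

```python
from dataclasses import dataclass

@dataclass
class PartMeta:
    chunk_offset: int  # offset of the data part within the chunk
    length: int

class FakeCloud:
    """Hypothetical stand-in for the cloud storage server."""
    def __init__(self):
        self.objects = {}

    def put_object(self, data: bytes) -> str:
        oid = f"obj-{len(self.objects)}"
        self.objects[oid] = data
        return oid

class MultipartChunkWriter:
    """Buffers data parts for one multipart transaction and commits
    once the accumulated size reaches the chunk size."""
    def __init__(self, chunk_size: int, cloud: FakeCloud):
        self.chunk_size = chunk_size
        self.cloud = cloud
        self.commit_buffer: list[bytes] = []  # local, volatile (cf. claim 2)
        self.metadata: list[PartMeta] = []    # per-part chunk offsets (cf. claim 1)
        self.size = 0
        self.object_id = None

    def write_part(self, data: bytes) -> None:
        # Record the part's chunk offset in metadata as it is generated.
        self.metadata.append(PartMeta(self.size, len(data)))
        self.commit_buffer.append(data)
        self.size += len(data)
        # The commit signal is driven by the size of the chunk.
        if self.size >= self.chunk_size:
            self.commit()

    def commit(self) -> None:
        # Combine the buffered parts and store them as one data object (cf. claim 3).
        self.object_id = self.cloud.put_object(b"".join(self.commit_buffer))
```

For example, writing two 4-byte parts to a writer with `chunk_size=8` triggers the commit automatically, leaving metadata entries with chunk offsets 0 and 4.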
Regarding claim 4, Shilane, Colenbrander, and Devadas, in combination, teach: after the commit signal is communicated, facilitating, by the system, receiving a request to retrieve the second application data (Colenbrander: par. [0032], such that the “mount” API call is interpreted as the request-to-retrieve technique; and Abstract); based on the metadata (Colenbrander: par. [0045] discloses “metadata”; and Devadas: par. [0025]): identifying, by the system, that the stored data object contains the second application data (Shilane: Fig. 1, element 202, Figs. 2A-2D, par. [0064], e.g., “identifies a group of slices that each include similar but unique data or include identical data”), and based on the second chunk offset, identifying, by the system, an object offset corresponding to the second application data stored in the stored data object (Devadas: par. [0028], e.g., “The storage location metadata maintained by the chunkstore module can include any or some combination of the following: an offset, a storage address, a block number, and so forth.”); and retrieving the second application data from the stored data object, wherein the second application data was identified for retrieval based on the object offset of the second application data (Shilane: Fig. 1, element 202, Figs. 2A-2D, par. [0064], e.g., “identifies a group of slices that each include similar but unique data or include identical data”, and pars. [0070-72]; and Devadas: par. [0028], e.g., “The storage location metadata maintained by the chunkstore module can include any or some combination of the following: an offset, a storage address, a block number, and so forth.”).

Regarding claim 5, Shilane teaches: “wherein the stored data object is immutable” (Fig. 1, element 350, Persistent Storage, which is used to store a data object that is immutable; see par. [0034] “Once the first version is stored in a persistent storage, the versions of the large word document subsequently stored will be deduplicated before being stored in the persistent storage resulting in much less storage space of the persistent storage being required to store the subsequently stored versions when compared to the amount of storage space of the persistent storage required to store the first stored version”, and par. [0043]).

Regarding claim 6, Shilane teaches: “wherein the size of the chunk was selected based on a data retrieval constraint of an application that generated the first application data and the second application data” (par. [0035], e.g., “a size of about 20 bytes”, “segments may be about 8KB in size”).

Regarding claim 7, Shilane teaches: “wherein the first data part and the second data part are communicated as generated without implicating local non-volatile storage” (Fig. 1, and pars. [0042-44]).

Regarding claim 8, Colenbrander teaches: before the commit signal is communicated, facilitating, by the system, receiving a request to retrieve the second application data (pars. [0039] and [0041]; and par. [0052] teaches “requested data being retrieved from a data storage device and …”); and in response to receiving the request, retrieving, by the system, the second application data from the second data part stored in the commit buffer (par. [0039], such that data stored in the commit buffer of RAM is retrieved; par. [0033], e.g., “RAM”, known as local, volatile memory/storage; and par. [0041], e.g., “commit of buffer”).

Regarding claim 9, Shilane teaches: wherein the first data part and the second data part comprise a stream of application data (Fig. 1, element 202; and par. [0032] “data stream segmentation processes, data chunks, data blocks, atomic data, emails, objects of any type, files, contacts, directories, sub-directories, volumes, and any group of one or more of the foregoing”).
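Claims 4 and 8, as mapped above, turn on two retrieval paths: before the commit signal, a read is served from the part still sitting in the commit buffer; after commit, the chunk offset recorded in metadata locates the part inside the immutable stored object. Below is a self-contained Python sketch of the post-commit path, under the simplifying assumption that the parts were concatenated in order at commit time (so the chunk offset maps directly to the object offset). The names and sample values are hypothetical illustrations, not the application's implementation.

```python
# Metadata recorded at write time: one entry per data part, keyed by a
# hypothetical part name, with its chunk offset and length (cf. claim 1).
metadata = {
    "first":  {"chunk_offset": 0, "length": 4},
    "second": {"chunk_offset": 4, "length": 4},
}

# The committed, immutable data object: the parts combined in order
# (cf. claims 3 and 5).
stored_object = b"abcdefgh"

def retrieve(part_name: str) -> bytes:
    """Locate a data part inside the stored object via its chunk offset.

    Because the parts were concatenated in order at commit time, the
    chunk offset recorded in metadata serves as the object offset for
    the read (cf. claim 4)."""
    meta = metadata[part_name]
    start = meta["chunk_offset"]  # object offset == chunk offset here
    return stored_object[start:start + meta["length"]]
```

Under these assumptions, `retrieve("second")` returns the second data part directly from its offset, without scanning the stored object.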
Regarding independent claim 10, in the same field of endeavor (i.e., data processing), Verbitski, Shilane, Colenbrander, and Devadas, in combination, teach: cloud storage equipment (Verbitski: col. 4, lines 22-67, via the cloud provider clusters data centers and different types of cloud-based storage, i.e., cloud storage equipment; and Shilane: see pars. [0027] and [0033], disclosing at least one item of cloud storage equipment), comprising: at least one processing unit (Verbitski: see Fig. 22; and Shilane: see Fig. 4); and at least one memory coupled to the at least one processing unit (Verbitski: see again Fig. 22, elements 3010a-n and 3020; and Shilane: again Fig. 4) and storing instructions configured to be executed by the at least one processing unit, wherein the instructions, when executed by the at least one processing unit (Shilane: par. [0048] “The non-transitory storage may include instructions which, when executed by the one or more processors, enable the physical device to perform the functions”), cause the cloud storage equipment to perform actions comprising: based on a request from stream storage equipment implementing a data lake (Verbitski: col. 24, lines 15-18: “Backup storage 1610 may be a separate storage service, in some embodiments, such as an object-based storage service, data-lake storage service, or other type of storage that may be implemented on one of other service(s) 230”), generating a multipart transaction (Verbitski: see Fig. 7, element 344, and Fig. 18, element 1810; and Shilane: par. [0083] “the object name when generating an object recipe name”, and pars. [0093-94], e.g., “transaction ID” in object storage), receiving, from the stream storage equipment, a first block of data and a second block of data allocated to the multipart transaction (Verbitski: Figs. 7 and 10, such that the slices = chunks or shards = blocks; and Shilane: see Figs. 3A-3C, par. [0032] “data segments such as may be produced by data stream segmentation processes, data chunks, data blocks, atomic data, emails, objects of any type, files, contacts, directories, sub-directories, volumes, and any group of one or more of the foregoing”, and pars. [0093-94] via “transaction ID”), wherein the stream storage equipment has retained a copy of the first block of data and the second block of data in volatile storage (Shilane: see pars. [0032-33] via “data stream segmentation processes” and “clones, snapshots, any other type of copies of data”; Fig. 4 discloses the volatile storage); based on an instruction from the stream storage equipment to commit the multipart transaction, aggregating the first block of data and the second block of data into a data object (Verbitski: see Fig. 10 via slice groups, volume groups, shards, and cloud database storages, which teaches the technique of “aggregating”; and Colenbrander: pars. [0065] “the particular grouping of data is a data object, a data file, or a data block.”, [0066] via the “commit API call” functionality, and [0069-70]), wherein the stream storage equipment stored, as metadata, offset information describing placement of the first block of data and the second block of data within the data object (Verbitski: Fig. 11, element 345, as the placement management, and col. 29, lines 1-35, which teaches placement of the table slices = blocks within the database objects, see Fig. 10; and Devadas: pars. [0028] “…The storage location metadata maintained by the chunkstore module can include any or some combination of the following: an offset, a storage address, a block number, and so forth”, [0049], e.g., “the given data object is subsequently modified (e.g., overwritten, replaced, etc.), subsequent version(s) of the given data object is (are) generated…”, and [0050-51]).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of Verbitski, Shilane, Colenbrander, and Devadas, as explained for the above-indicated limitations, providing a skilled artisan with motivation to stream the data chunks/slices for storage in cloud storage(s) for future access.

Regarding claim 11, Verbitski, Shilane, Colenbrander, and Devadas, in combination, teach: “storing the data object, resulting in a stored data object” (Verbitski: see Figs. 10-11; Shilane: pars. [0091-92]; Colenbrander: par. [0065] “the particular grouping of data is a data object, a data file, or a data block. In some embodiments, the particular grouping of data is a save data disk image for the video game that includes save data for the user's play of the video game”; and Devadas: see par. [0028]).

Regarding claim 14, Verbitski, Shilane, Colenbrander, and Devadas, in combination, teach: wherein the first block of data and the second block of data were grouped into a storage group that corresponds to the multipart transaction (Verbitski: see again Fig. 10, the groups of slices and volumes; Shilane: see Figs. 2A-D, and pars. [0021-22] via “a similarity group” stored in object storage including a “transaction ID”, and [0090-92]; and Colenbrander: par. [0065] “the particular grouping of data is a data object, a data file, or a data block. In some embodiments, the particular grouping of data is a save data disk image for the video game that includes save data for the user's play of the video game”), wherein the metadata comprises group offset information that corresponds to group offset data for the first block of data and the second block of data allocated to the multipart transaction (Shilane: par. [0068], e.g., “an offset” from the “region, a bit sequence, a name or other types of data”; Colenbrander: par. [0065] “the particular grouping of data is a data object, a data file, or a data block. In some embodiments, the particular grouping of data is a save data disk image for the video game that includes save data for the user's play of the video game”; and Devadas: see pars. [0028] “…The storage location metadata maintained by the chunkstore module can include any or some combination of the following: an offset, a storage address, a block number, and so forth”, and [0049]).

Regarding claim 16, Shilane teaches: “wherein the first block of data and the second block of data comprise a stream of application data” (Fig. 1, element 202; par. [0028] “data stream segmentation processes, data chunks, data blocks, atomic data, emails, objects of any type, files, contacts, directories, sub-directories, volumes, and any group of one or more of the foregoing”).

Regarding independent claim 17, the claim is rejected for the same reasons set forth above for claim 1. Furthermore, Verbitski, Shilane, Colenbrander, and Devadas, in combination, teach: a computer program product, tangibly stored on a non-transitory computer-readable medium and comprising machine-executable instructions, wherein the machine-executable instructions, when executed, cause a stream storage device to perform operations (Verbitski: see col. 35, lines 1-25; and Shilane: see par. [0119]), comprising: receiving a part of a stream linked to a multipart cloud storage transaction with a data lake implemented on a cloud storage server corresponding to a cloud service provider (Verbitski: Fig. 10, Fig. 13, and col. 4, lines 41-67, via the cloud provider clusters in data centers/data lake(s), and col. 24, lines 17-19; and Shilane: see Abstract: “receiving a write request that includes a data structure version to be written, wherein the data structure version is associated with a unique identifier, storing the data structure version in association with the unique identifier, receiving a read request for a most recent version of the data structure…”, pars. [0020-22], and [0028], which discloses at least “a cloud provider”); storing, in a part buffer, the part of the stream received (Colenbrander: pars. [0004] “The data access request identifies requested data stored in the data storage device within the cloud storage server.”, [0039] “the cloud storage server 109 are collectively configured to handle use of transactions for storage access”, and [0040] via a transaction data buffer); identifying that the multipart cloud storage transaction is not committed, and comprises another part of the stream, other than the part, that was previously included in the multipart cloud storage transaction (Colenbrander: par. [0039] teaches the “transactions” and “The unmount API is an ‘implicit commit’”, which is interpreted as the “not committed” state; par. [0040] via “a transaction data buffer”, interpreted as another part of the stream; and par. [0060] “With use of transactions, the commit API will flush any changes in data back to the cloud storage system 390 from the management server 105-1 to 105-X”); appending the part of the stream to an end of the other part of the stream (Verbitski: see Figs. 10 and 13; and Shilane: see Figs. 3A-3C and pars. [0054-55] via “queues” including “queue slices of data”, e.g., “… first in first out queues. The queues of the request queues 320 may be other types of queues without departing from the invention. For example, the queues may be configured to prioritize certain slices for processing by the backend micro-services 314 over other slices.
For example, certain slices may be moved to the front of the queue based on a type, quality, or meta-data associated with the slices”, which teaches the appending of a part to the request queue/stream); and storing metadata comprising an offset value corresponding to an offset of the part of the stream, from a beginning of the cloud storage transaction (Devadas: par. [0028], e.g., “A data object can be divided into a collection of data chunks (a single data chunk or multiple data chunks). Each data chunk has a specified size (a static size or a size that can dynamically change). The storage locations of the data chunks are storage locations in the shared storage system 104. The storage location metadata maintained by the chunkstore module can include any or some combination of the following: an offset, a storage address, a block number, and so forth.”).

Accordingly, in the same field of endeavor (i.e., data processing), it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of Verbitski, Shilane, Colenbrander, and Devadas for the above-indicated limitations, providing a skilled artisan with motivation to store the chunk offsets in the location metadata of the cloud storage, which is efficient for future data access.

Regarding claim 18, Shilane, Colenbrander, and Devadas, in combination, teach: “based on a total size of the part of stream and the other part of the stream, committing the cloud storage transaction to be stored via storage equipment corresponding to the cloud service provider” (Shilane: pars. [0028], disclosing “a cloud provider”, [0035] “The slices, in turn, are subdivided into segments. In at least one implementation, these segments are approximately 8 KB, with the segment boundary selected in a content-defined manner that tends to produce consistent segments”, and [0055]; Colenbrander: see pars. [0016-17] via stream data, [0030] via “a few megabytes (MB) in size”, [0040] “the cloud gaming system 103-1 to 103-N has the video game provide the transaction buffer because the transaction buffer is small in size”, and [0066-67] via the complete/commit transaction to the cloud storage server (as shown in Fig. 1, wherein the cloud storage server is interpreted as the cloud service provider)).

Regarding claim 19, Colenbrander and Devadas, in combination, teach: wherein the instructions further comprise: receiving, by the system, a request to retrieve the other part of the stream (Devadas: par. [0090], e.g., “a data read to retrieve the subject data object. If the response information from the data virtual processor 114-N includes the list of storage locations, then the chunkstore module 118-1 of the source virtual processor 114-1 can read the data chunks from the storage locations of the shared storage system 104”, wherein the information of the data chunks is shown in Figs. 2A-2B. Since the claim does not require any particular “other part of the stream”, the information of the data chunks, e.g., key/value, storage location information, chunk ID, etc., is matched under the broadest reasonable interpretation. See MPEP 2111); responsive to the request, retrieving, by the system, the other part of the stream from the other part stored in the part buffer (Colenbrander: pars. [0040-41] teach the buffer; Devadas: par. [0018] “a read request (e.g., a get request to retrieve a data object from the shared storage system)”, and par. [0090] “a data read to retrieve the subject data object. If the response information from the data virtual processor 114-N includes the list of storage locations, then the chunkstore module 118-1 of the source virtual processor 114-1 can read the data chunks from the storage locations of the shared storage system 104”); and after the retrieving, sending a commit signal to commit the cloud storage transaction (Colenbrander: pars. [0039-41] teach that complete/commit data indicates, for a transaction, flushing data into the cloud storage; and Devadas: par. [0042], e.g., “... The subject data object is ‘committed’ if the write operation initiated by the write request has persistently stored the subject data object such that the subject data object is available for later retrieval. For example, the subject data object may be persistently stored in a write buffer (e.g., 130-i) or persistently stored in the shared storage system 104”).

Regarding claim 20, Colenbrander teaches: wherein the part buffer is comprised in volatile storage that is local to the stream storage device (see par. [0040], e.g., “the transaction data buffer is in RAM”).

Claims 12-13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Verbitski, Shilane, Colenbrander, and Devadas, and further in view of Paduroiu et al., US Pub. No. 2022/0113871 A1 (hereinafter “Paduroiu”).

Regarding claim 12, the claim is rejected for the same reasons set forth above for claim 10. However, Verbitski, Shilane, Colenbrander, and Devadas do not explicitly teach: “identifying a data object offset value” after the multipart transaction is committed. In the same field of endeavor (i.e., data processing), Paduroiu teaches: after the multipart transaction is committed, identifying a data object offset value that corresponds to the second block of data stored in the data object (see par. [0006], which teaches the distributed transactions applied to the segments, where the transaction only becomes available for reading/accessing once the transaction is “committed”; and par. [0060], e.g., “obtaining a stream cut comprising identifiers for respective segments of the data stream and offset values representing lengths of the respective segments”); and based on the data object offset value, determining a storage location of the second block of data within the data object (pars. [0038] via “the target offset location”, and [0060] via “offset value” and “in response to determining that the current lengths of the replicated segments are greater than or equal to the offset values in the stream cut, updating target offset data of the replicated segments to match the offset values in the stream cut, resulting in updated target offset data (operation 1110), and allowing reading of the streamed data from the replicated segments up to locations in the replicated segments represented by the updated target offset data”).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references because the teachings of Paduroiu would have provided Verbitski, Shilane, Colenbrander, and Devadas with the above-indicated limitation, providing a skilled artisan with motivation to store the data stream based on offset values to allocate the chunk/segment storage efficiently (Paduroiu: Figs. 6-10, and pars. [0006], [0024-26], and [0060]).

Regarding claim 13, the claim is rejected for the same reasons set forth above for claims 10 and 12. Furthermore, Paduroiu teaches: wherein identifying the data object offset value comprises: receiving the metadata stored by the stream storage equipment that implicates the data object offset value (pars. [0007] and [0056] “a read request based on the corresponding segment length metadata in the target segment data store”); and based on the metadata, identifying the data object offset value (see pars. [0038], [0056], and [0060]).

Regarding claim 15, the claim is rejected for the same reasons set forth above for claims 10-14.
However, Verbitski, Shilane, Colenbrander, and Devadas do not explicitly teach: “wherein identifying the data object offset value based on the metadata comprises mapping the group offset data corresponding to the second block of data to the data object offset value.” In the same field of endeavor (i.e., data processing), Paduroiu teaches: wherein identifying the data object offset value based on the metadata comprises mapping the group offset data corresponding to the second block of data to the data object offset value (see pars. [0050-52], which teach the groups of segments, and [0060], which teaches matching offset values in segments, e.g., “…, updating target offset data of the replicated segments to match the offset values in the stream cut, resulting in updated target offset data (operation 1110), and allowing reading of the streamed data from the replicated segments up to locations in the replicated segments represented by the updated target offset data (operation 1112)”).

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the instant application to combine the teachings of the cited references, because the teachings of Paduroiu would have provided Verbitski, Shilane, Colenbrander, and Devadas with the above-indicated limitation, motivating a skilled artisan to update the offset lengths of the segments to correspond to the target offset data in the obtained stream (Paduroiu: Figs. 6-10, and pars. [0006, 24-26, and 60]).

Response to Arguments

Referring to the claim rejections under 35 U.S.C. §103, Applicant’s arguments directed to the newly amended limitations/features (e.g., “a multipart cloud storage transaction” and “a data lake implemented on a cloud storage server…” in claim 1; see Remarks, page 8) have been considered but are moot in view of the new grounds of rejection necessitated by applicant's amendment to the claims.
Applicant's newly amended features are taught expressly or implicitly by the prior art of record.

Prior Art

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant should fully consider these references when preparing a reply under 37 C.F.R. § 1.111. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. See In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)); Merck & Co. v. Biocraft Laboratories, 874 F.2d 804, 10 USPQ2d 1843 (Fed. Cir.), cert. denied, 493 U.S. 975 (1989).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jessica N. Le, whose telephone number is (571) 270-1009. The examiner can normally be reached M-F, 9:30 am - 5:30 pm (EST).

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi, can be reached at (571) 272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jessica N Le/
Examiner, Art Unit 2169

/MD I UDDIN/
Primary Examiner, Art Unit 2169
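For readers less familiar with the storage mechanics at issue, the behavior the rejection maps onto the references (parts of a multipart transaction held in a volatile part buffer, flushed into a data object on commit, with each block locatable afterward via a data object offset value) can be sketched as follows. All class and variable names here are hypothetical illustrations, not code from any cited reference or from the application:

```python
class CloudObjectStore:
    """Toy stand-in for a cloud data object store (names hypothetical)."""

    def __init__(self):
        self.objects = {}   # object_key -> bytes (persistent store stand-in)
        self.index = {}     # (object_key, block_no) -> (offset, length)

    def commit(self, object_key, part_buffer):
        """Flush the volatile part buffer into one data object, recording
        each block's offset within that object."""
        blob, offset = b"", 0
        for i, part in enumerate(part_buffer):
            self.index[(object_key, i)] = (offset, len(part))
            blob += part
            offset += len(part)
        # Only now is the data "committed" and available for retrieval.
        self.objects[object_key] = blob

    def read_block(self, object_key, block_no):
        """Locate a block inside the data object via its offset value."""
        off, length = self.index[(object_key, block_no)]
        return self.objects[object_key][off:off + length]


store = CloudObjectStore()
# Part buffer: volatile, in-memory (e.g., RAM) until commit.
parts = [b"stream-chunk-1|", b"stream-chunk-2|", b"stream-chunk-3"]
store.commit("datalake/object-0001", parts)
print(store.read_block("datalake/object-0001", 1))  # b'stream-chunk-2|'
```

Before `commit` runs, `read_block` has nothing to return, which mirrors the point the rejection draws from Paduroiu: data becomes readable only once the transaction is committed, and the recorded offsets are what make a specific block locatable within the stored object.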

Prosecution Timeline

Jun 30, 2023
Application Filed
Sep 13, 2025
Non-Final Rejection — §103
Nov 20, 2025
Applicant Interview (Telephonic)
Nov 20, 2025
Examiner Interview Summary
Nov 26, 2025
Response Filed
Mar 19, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585711
SYSTEMS AND METHODS FOR WEB SCRAPING
2y 5m to grant Granted Mar 24, 2026
Patent 12554704
STALE DATA RECOGNITION
2y 5m to grant Granted Feb 17, 2026
Patent 12475100
USING AD-HOC STORED PROCEDURES FOR ONLINE TRANSACTION PROCESSING
2y 5m to grant Granted Nov 18, 2025
Patent 12450225
DYNAMICALLY LIMITING THE SCOPE OF SPREADSHEET RECALCULATIONS
2y 5m to grant Granted Oct 21, 2025
Patent 12393604
SYSTEMS AND METHODS FOR PREVENTING DATABASE DEADLOCKS DURING SYNCHRONIZATION
2y 5m to grant Granted Aug 19, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
73%
Grant Probability
99%
With Interview (+28.6%)
3y 11m
Median Time to Grant
Moderate
PTA Risk
Based on 504 resolved cases by this examiner. Grant probability derived from career allow rate.
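The headline figures above appear to follow directly from the raw counts shown earlier on the page. A quick check, assuming (as the page states) that Grant Probability is simply the career allow rate, granted divided by resolved, and that each statute delta is the examiner's rate minus the Tech Center average:

```python
# Sanity check of the dashboard's headline numbers from its raw counts.
# Assumption: Grant Probability = career allow rate = granted / resolved.
granted, resolved = 366, 504
print(round(100 * granted / resolved))       # 73

# Assumption: statute deltas are examiner rate minus TC average, so the
# implied TC average for §103 (48.8% shown, +8.8% vs TC avg) is:
sec103_rate, sec103_delta = 48.8, 8.8
print(round(sec103_rate - sec103_delta, 1))  # 40.0
```

The 99% "With Interview" figure cannot be reproduced from the counts shown here, so it presumably comes from the provider's separate interview-outcome data rather than this arithmetic.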
