Prosecution Insights
Last updated: April 19, 2026
Application No. 18/436,889

DEFRAGMENTATION METHOD, APPARATUS, ELECTRONIC APPARATUS AND COMPUTER-READABLE STORAGE MEDIUM

Status: Non-Final OA (§103)
Filed: Feb 08, 2024
Examiner: SHAH, VAISHALI
Art Unit: 2156
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)

Forecast
Grant probability: 57% (Moderate)
Expected OA rounds: 3-4
Estimated time to grant: 3y 8m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 57% of resolved cases (128 granted / 224 resolved; +2.1% vs TC avg)
Interview lift: +57.0% (strong), comparing resolved cases with vs. without an interview
Typical timeline: 3y 8m average prosecution; 27 applications currently pending
Career history: 251 total applications across all art units

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 55.0% (+15.0% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 16.0% (-24.0% vs TC avg)
Tech Center averages are estimates • Based on career data from 224 resolved cases

Office Action

§103
DETAILED ACTION

In response to the communication filed on 23 January 2026, claims 1, 11 and 22 are amended. Claims 20 and 21 are canceled. Claims 1-19 and 22 are pending. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 23 January 2026 has been entered.

Response to Arguments

Applicant's arguments regarding "Claim Interpretation", filed 23 January 2026, have been carefully considered and are persuasive. Applicant's arguments regarding "Claim Rejections – 35 USC § 103", filed 23 January 2026, have been carefully considered but are not persuasive.

APPLICANT'S ARGUMENT: Applicant respectfully submits that the cited references do not disclose at least the claimed "wherein the chunk index structure of the each stream is a tree structure, each of the plurality of chunks corresponding to a node, and a location of the node being determined according to the fragmentation degree of the at least one chunk."

EXAMINER'S RESPONSE: Examiner has carefully considered the argument but respectfully disagrees. The arguments are directed to newly added limitations, which are addressed in the rejection below.

APPLICANT'S ARGUMENT: Applicant argues that the Office has not clearly demonstrated how the "blocks and pages" disclosure is relied upon to allegedly teach the claimed "updating the fragmentation degree of the at least one chunk, based on the number of the pages of the at least one chunk occupied by the data."
Yang appears to describe that the "chunks [are] without [being] required to align with ... blocks and pages." Id. Danilov does not cure the deficiencies of the Office's application of Yang, as the cited portions of Danilov merely describe replacing coding fragments. Danilov, at paragraph [0041]. Therefore, Applicant respectfully submits that the cited references, either individually or in combination, do not teach or suggest at least the claimed "updating the fragmentation degree of the at least one chunk, based on the number of the pages of the at least one chunk occupied by the data and the fragmentation degree of the at least one chunk recorded in the chunk index structure."

EXAMINER'S RESPONSE: Examiner has carefully considered the argument. In view of the claim amendments in the independent claims and the arguments, the Youngworth reference has been incorporated to teach the argued limitations.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 11 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Yang et al. (US 11,392,297 B2, hereinafter "Yang") in view of Danilov et al. (US 2020/0004447 A1, hereinafter "Danilov") further in view of Hashimoto (US 10,649,677 B2, hereinafter "Hashimoto") and Youngworth (US 2015/0006846 A1, hereinafter "Youngworth").
Regarding claim 1, Yang teaches acquiring at least one chunk information of data in response to a writing request for the data, (see Yang, [col 6 lines 28-31] “As write commands are received, the chunks (or rather, identifiers (IDs) of these chunks) associated with those write commands may be placed in submission queue 335”) wherein the at least one chunk information comprises an identification of at least one chunk and a stream identification assigned to data of the at least one chunk, and a logical address assigned to the data corresponds to a plurality of chunks including the at least one chunk; (see Yang, [col 20 lines 12-22] “including: a receiver to receive a write command including a logical block address (LBA); an LBA mapper to map the LBA to a chunk identifier (ID); stream selection logic to select a stream ID based on the chunk ID using the chunk-to-stream mapper; a stream ID adder to add the stream ID to the write command; a queuer to place the chunk ID in the submission queue”; Fig. 5; [col 8 lines 11-13] “FIG. 5 shows the logical block addresses (LBAs) of various commands being mapped to chunks identifiers (IDs) and then to stream IDs for use”; [col 7 lines 1-3] “Stream ID adder 420 may then add the selected stream ID to the write command, using logic to write data into the write command”). 
… of each of the at least one chunk and a stream to which the each of the at least one chunk (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”) belongs in a chunk index structure to obtain an updated chunk index structure, (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”; [col 9 lines 34-38] “chunk-to-stream mapper 340 of FIG. 3 may include a Sequential, Frequency, Recency (SFR) table. “Sequential, Frequency, Recency” refers to the manner in which the stream ID to be assigned to a chunk may be determined and updated”) based on the at least one chunk information, (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”) wherein the chunk index structure comprises (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. 
Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”) identifications of the plurality of chunks, stream identifications corresponding to the plurality of chunks, and… (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”; [col 22 lines 66-67] “SFR table including the chunk ID and the stream ID for the chunk ID”; [col 23 lines 49-52] “wherein the chunk-to-stream mapper includes a node entry, the node entry including the chunk ID and the stream ID for the chunk ID”). … in the updated chunk index structure; and (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”; [col 9 lines 34-38] “chunk-to-stream mapper 340 of FIG. 3 may include a Sequential, Frequency, Recency (SFR) table. “Sequential, Frequency, Recency” refers to the manner in which the stream ID to be assigned to a chunk may be determined and updated”). 
transmitting a first logical address and a first stream identification corresponding to the first chunk to a storage device,… (see Yang [col 19 lines 32-33] “Associated data may be delivered over transmission environments, including the physical and/or logical network”; [col 22 lines 37-39] “receive a write command for a Solid State Drive (SSD), the write command including a logical block address (LBA)”; [col 6 line 60 – col 7 line 6] “to accomplish this stream selection, and may include logic to search chunk-to-stream mapper 340 of FIG. 3 to find an entry corresponding to the selected chunk… by calculating the stream ID from an access count for the chunk… Once the stream ID has been attached to the write command, transmitter 425 may transmit the write command (with the attached stream ID) toward SSD 120”). wherein the chunk index structure of the each stream is a queue… (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”) each of the plurality of chunks corresponds to a node, and (see Yang, [col 13 lines 39-40] “There may be one node for each chunk in SSD 120”). 
Yang does not explicitly teach a defragmentation method, comprising:; updating a fragmentation degree of each of the at least one chunk; fragmentation degree of the plurality of chunks; determining a first chunk to be defragmented, based on the fragmentation degrees of the plurality of chunks in the updated chunk index structure; to enable the storage device to defragment data of the first chunk, wherein the fragmentation degree of each of the at least one chunk corresponds to a number of fragmentations in each of the at least one chunk, and a tree structure, a location of the node being determined according to the fragmentation degree of the at least one chunk. However, Danilov discloses data chunks divided into k data fragments and teaches updating a fragmentation degree in a chunk… (see Danilov, [0041] “Moreover, n*m coding fragments for source data portions are replaced with just m coding fragments of the standard size for a united data portion”; [0044] “Performing data protection at the meta chunk level (instead of source chunk level) allows to reduce the capacity overheads by n times, where n is a number of source portions united in one meta chunk. 
Moreover, n*m previously generated coding fragments for the source portions are replaced with just m coding fragments of the standard size for a meta chunk”) fragmentation degrees of the plurality of chunks; (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). … the fragmentation degrees of the plurality of chunks (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). 
wherein the fragmentation degree of each of the at least one chunk corresponds to a number of fragmentations in each of the at least one chunk, and (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). according to the fragmentation degree of the at least one chunk (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). 
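For readers following the technical dispute, the argued limitation can be sketched in code. This is an illustrative sketch only, not drawn from the application, Yang, or Danilov: all names (ChunkIndex, update_on_write, pages_per_chunk) are hypothetical, and a plain dictionary stands in for the claimed chunk index structure.

```python
# Hypothetical sketch of a host-side table tracking, per chunk, a stream
# assignment and a "fragmentation degree" understood as a count of
# fragmentations, bumped when a write occupies only part of the chunk's pages.

class ChunkIndexEntry:
    def __init__(self, chunk_id, stream_id):
        self.chunk_id = chunk_id
        self.stream_id = stream_id
        self.frag_degree = 0  # number of fragmentations recorded for this chunk


class ChunkIndex:
    def __init__(self, pages_per_chunk):
        self.pages_per_chunk = pages_per_chunk
        self.entries = {}  # chunk_id -> ChunkIndexEntry

    def update_on_write(self, chunk_id, stream_id, pages_occupied):
        """Update the stream assignment and fragmentation degree of one chunk."""
        entry = self.entries.setdefault(chunk_id, ChunkIndexEntry(chunk_id, stream_id))
        entry.stream_id = stream_id
        # A write that fills only part of the chunk leaves a fragment behind.
        if 0 < pages_occupied < self.pages_per_chunk:
            entry.frag_degree += 1
        return entry.frag_degree
```

Under this reading, a full-chunk write leaves the degree unchanged while a partial write increments it, which is one plausible way the recorded degree could depend on "the number of the pages of the at least one chunk occupied by the data".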
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of fragmentation degree, updating fragmentation degree and first chunk as disclosed and taught by Danilov, in the system taught by Yang, to yield the predictable results of efficiently recovering source portions at the meta chunk level (see Danilov, [0045] “system 500 that facilitates efficient data recovery by employing meta chunks. In one aspect, a recovery component 502 can be utilized to recover one or more source portions that have been protected at a meta chunk level”).

The proposed combination of Yang and Danilov does not explicitly teach a defragmentation method, comprising: determining a first chunk to be defragmented, based on the fragmentation degrees; to enable the storage device to defragment data of the first chunk; a tree structure, a location of the node being determined. However, Hashimoto discloses defragmentation and teaches A defragmentation method, comprising: (see Hashimoto, [col 10 lines 29-30] “a flowchart of a defragmentation operation carried out”; [col 13 line 43] “A method performed by a host device”). determining a first chunk to be defragmented, based on number of fragmented blocks… (see Hashimoto, [col 10 lines 7-49] “determines the number of physical blocks (Number of Fragmented Blocks=NFB) that include the specified physical addresses… a flowchart of a defragmentation operation carried out… selects one or more files (target files) to undergo the defragmentation operation by referring to the index 19. For example, files that have undergone defragmentation in the LBA space are selected as the target files. 
Alternatively, files that appear be fragmented based on the NFB or PFR received in response to the GPFI command may be selected”; [col 11 lines 57-63] “After selecting the LBA region, the OS 7 operates to read data of the physically fragmented file corresponding to the LBA region and physically write the read data as one or more chunks of data larger than fragments of the file… according to the defragmentation operation of the above embodiment, free blocks for storing the data that undergo the defragmentation are prepared in advance”). … to enable the storage device to defragment data of the first chunk (see Hashimoto, [col 10 lines 7-49] “determines the number of physical blocks (Number of Fragmented Blocks=NFB) that include the specified physical addresses… a flowchart of a defragmentation operation carried out… selects one or more files (target files) to undergo the defragmentation operation by referring to the index 19. For example, files that have undergone defragmentation in the LBA space are selected as the target files. Alternatively, files that appear be fragmented based on the NFB or PFR received in response to the GPFI command may be selected”; [col 11 lines 57-63] “After selecting the LBA region, the OS 7 operates to read data of the physically fragmented file corresponding to the LBA region and physically write the read data as one or more chunks of data larger than fragments of the file… according to the defragmentation operation of the above embodiment, free blocks for storing the data that undergo the defragmentation are prepared in advance”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of defragmentation, stream identification information is the same and different, receiving address update information and page identification as taught and disclosed by Hashimoto, in the system taught by the proposed combination of Yang and Danilov, to yield the predictable results of carrying out write operations and accessing files more efficiently (see Hashimoto, [col 7 lines 57-66] “According to the above-described architecture of the stream-based data writing, data stored in each of the stream blocks 45 of the stream block pools 450 can be sorted out based on the types or attributes of the data… As a result, the write operation and the garbage collection operation can be carried out more efficiently”; [col 1 lines 48-49] “Since the LBA regions of the file are sequential, the file can be accessed more quickly and more efficiently”).

The proposed combination of Yang, Danilov and Hashimoto does not explicitly teach a tree structure, a location of the node being determined. However, Youngworth discloses a unique chunk identifier for a memory chunk and teaches a tree structure,… a location of the node being determined (see Youngworth, [0130] “the red/black tree element identifies a chunk”; [0141] “The node structure will contain the chunk ID, the VLUN ID, the NA_LUN ID, the offset, and the size, The Red/Black tree object will contain the node structure”; [0226] “Operations that relate the chunk ID to the memory location (e.g., block size and offset) at the object storage disk 1104… the number of nodes in a red/black tree, they might also be used to allow hashed lookups on ranges of chunk IDs”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of tree structure, location of the node and red-black tree as taught and disclosed by Youngworth, in the system taught by the proposed combination of Yang, Danilov and Hashimoto, to yield the predictable results of efficiently avoiding excess fragmentation (see Youngworth, [0135] “With respect to excess fragmentation, the space on the free list is managed by elements that track ranges of free space. In this way a large range of free space may be represented by a single element. This system is very efficient unless there is a great deal of fragmentation of free space. To avoid excess fragmentation the free list is monitored for length”).

Claim 22 incorporates substantively all the limitations of claim 1 in an apparatus form (see Hashimoto, [col 2 lines 19-29] “A storage system according to an embodiment is directed to carrying out a physical defragmentation of data stored in physical blocks of a storage device through a defragmentation operation performed cooperatively by a file system and a storage device… a storage system includes a host including a processor, and a storage device including a controller and a flash memory unit”) and is rejected under the same rationale.

Regarding claim 11, Yang teaches … apparatus, comprising: at least one memory storing instructions; and at least one processor configured to execute the instructions to: (see Yang, [col 5 lines 65–66] “typically, machine 105 includes one or more processors 110, which may include memory controller”; [col 19 lines 41-43] “instructions executable by one or more processors, the instructions comprising instructions to perform the elements of the inventive concepts as described herein”). 
determine at least one chunk information of data in response to a writing request for the data, (see Yang, [col 6 lines 28-31] “As write commands are received, the chunks (or rather, identifiers (IDs) of these chunks) associated with those write commands may be placed in submission queue 335”) wherein a logical address which is assigned to the data corresponds to at least one chunk, each chunk corresponds to one chunk information, each chunk information comprises an identification of at least one chunk and a stream identification which is assigned to the data of the at least one chunk; (see Yang, [col 20 lines 12-22] “including: a receiver to receive a write command including a logical block address (LBA); an LBA mapper to map the LBA to a chunk identifier (ID); stream selection logic to select a stream ID based on the chunk ID using the chunk-to-stream mapper; a stream ID adder to add the stream ID to the write command; a queuer to place the chunk ID in the submission queue”; Fig. 5; [col 8 lines 11-13] “FIG. 5 shows the logical block addresses (LBAs) of various commands being mapped to chunks identifiers (IDs) and then to stream IDs for use”; [col 7 lines 1-3] “Stream ID adder 420 may then add the selected stream ID to the write command, using logic to write data into the write command”). … of the at least one chunk and a stream to which the at least one chunk (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”) belongs in a chunk index structure to obtain an updated chunk index structure, (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. 
Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”; [col 9 lines 34-38] “chunk-to-stream mapper 340 of FIG. 3 may include a Sequential, Frequency, Recency (SFR) table. “Sequential, Frequency, Recency” refers to the manner in which the stream ID to be assigned to a chunk may be determined and updated”) based on the at least one chunk information, (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”) wherein the chunk index structure comprises (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”) identifications of a plurality of chunks, stream identifications corresponding to the plurality of chunks, and… (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”; [col 22 lines 66-67] “SFR table including the chunk ID and the stream ID for the chunk ID”; [col 23 lines 49-52] “wherein the chunk-to-stream mapper includes a node entry, the node entry including the chunk ID and the stream ID for the chunk ID”). 
… in the updated chunk index structure; and (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”; [col 9 lines 34-38] “chunk-to-stream mapper 340 of FIG. 3 may include a Sequential, Frequency, Recency (SFR) table. “Sequential, Frequency, Recency” refers to the manner in which the stream ID to be assigned to a chunk may be determined and updated”). transmit a first logical address and a first stream identification corresponding to the first chunk to a storage device,… (see Yang [col 19 lines 32-33] “Associated data may be delivered over transmission environments, including the physical and/or logical network”; [col 22 lines 37-39] “receive a write command for a Solid State Drive (SSD), the write command including a logical block address (LBA)”; [col 6 line 60 – col 7 line 6] “to accomplish this stream selection, and may include logic to search chunk-to-stream mapper 340 of FIG. 3 to find an entry corresponding to the selected chunk… by calculating the stream ID from an access count for the chunk… Once the stream ID has been attached to the write command, transmitter 425 may transmit the write command (with the attached stream ID) toward SSD 120”). wherein the chunk index structure of the each stream is a queue… (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”) each of the plurality of chunks corresponds to a node, (see Yang, [col 13 lines 39-40] “There may be one node for each chunk in SSD 120”). 
Yang does not explicitly teach A defragmentation apparatus; update a fragmentation degree of the at least one chunk; fragmentation degree of the plurality of chunks; determine a first chunk to be defragmented, based on the fragmentation degrees of the plurality of chunks in the updated chunk index structure; to enable the storage device to defragment data of the first chunk, wherein the fragmentation degree of each of the at least one chunk corresponds to a number of fragmentations in each of the at least one chunk, and a tree structure, and a location of the node being determined according to the fragmentation degree of the at least one chunk. However, Danilov discloses data chunks divided into k data fragments and teaches update a fragmentation degree in a chunk… (see Danilov, [0041] “Moreover, n*m coding fragments for source data portions are replaced with just m coding fragments of the standard size for a united data portion”; [0044] “Performing data protection at the meta chunk level (instead of source chunk level) allows to reduce the capacity overheads by n times, where n is a number of source portions united in one meta chunk. 
Moreover, n*m previously generated coding fragments for the source portions are replaced with just m coding fragments of the standard size for a meta chunk”) fragmentation degrees of the plurality of chunks; (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). … the fragmentation degrees of the plurality of chunks (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). 
wherein the fragmentation degree of each of the at least one chunk corresponds to a number of fragmentations in each of the at least one chunk, and (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). … according to the fragmentation degree of the at least one chunk (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of fragmentation degree, updating fragmentation degree and first chunk as being disclosed and taught by Danilov, in the system taught by Yang to yield the predictable results of efficiently recovering source portions at meta chunk level (see Danilov, [0045] “system 500 that facilitates efficient data recovery by employing meta chunks. In one aspect, a recovery component 502 can be utilized to recover one or more source portions that have been protected at a meta chunk level”).

The proposed combination of Yang and Danilov does not explicitly teach a defragmentation method, comprising: determining a first chunk to be defragmented, based on the fragmentation degrees; to enable the storage device to defragment data of the first chunk; a tree structure, a location of the node being determined.

However, Hashimoto discloses defragmentation and teaches A defragmentation (see Hashimoto, [col 2 lines 19-29] “A storage system according to an embodiment is directed to carrying out a physical defragmentation of data stored in physical blocks of a storage device through a defragmentation operation performed cooperatively by a file system and a storage device… a storage system includes a host including a processor, and a storage device including a controller and a flash memory unit”). determine a first chunk to be defragmented, based on number of fragmented blocks (see Hashimoto, [col 10 lines 7-49] “determines the number of physical blocks (Number of Fragmented Blocks=NFB) that include the specified physical addresses… a flowchart of a defragmentation operation carried out… selects one or more files (target files) to undergo the defragmentation operation by referring to the index 19. For example, files that have undergone defragmentation in the LBA space are selected as the target files. 
Alternatively, files that appear be fragmented based on the NFB or PFR received in response to the GPFI command may be selected”; [col 11 lines 57-63] “After selecting the LBA region, the OS 7 operates to read data of the physically fragmented file corresponding to the LBA region and physically write the read data as one or more chunks of data larger than fragments of the file… according to the defragmentation operation of the above embodiment, free blocks for storing the data that undergo the defragmentation are prepared in advance”). to enable the storage device to defragment data of the first chunk, (see Hashimoto, [col 10 lines 7-49] “determines the number of physical blocks (Number of Fragmented Blocks=NFB) that include the specified physical addresses… a flowchart of a defragmentation operation carried out… selects one or more files (target files) to undergo the defragmentation operation by referring to the index 19. For example, files that have undergone defragmentation in the LBA space are selected as the target files. Alternatively, files that appear be fragmented based on the NFB or PFR received in response to the GPFI command may be selected”; [col 11 lines 57-63] “After selecting the LBA region, the OS 7 operates to read data of the physically fragmented file corresponding to the LBA region and physically write the read data as one or more chunks of data larger than fragments of the file… according to the defragmentation operation of the above embodiment, free blocks for storing the data that undergo the defragmentation are prepared in advance”). 
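Hashimoto's NFB-style selection of defragmentation targets can be paraphrased, purely as a hypothetical sketch, as a threshold test over per-chunk fragmentation degrees; the function name and dictionary layout below are invented, not taken from the reference:

```python
def select_chunks_to_defragment(degrees, threshold):
    """Return chunk IDs whose fragmentation degree exceeds the threshold,
    worst first (loosely analogous to Hashimoto's NFB-based file selection)."""
    over = [(deg, cid) for cid, deg in degrees.items() if deg > threshold]
    return [cid for deg, cid in sorted(over, reverse=True)]

degrees = {"A": 12, "B": 2, "C": 7}
print(select_chunks_to_defragment(degrees, 5))  # ['A', 'C']
```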
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of defragmentation, stream identification information is the same and different, receiving address update information and page identification as being taught and disclosed by Hashimoto, in the system taught by the proposed combination of Yang and Danilov to yield the predictable results of carrying out write operations and accessing files more efficiently (see Hashimoto, [col 7 lines 57-66] “According to the above-described architecture of the stream-based data writing, data stored in each of the stream blocks 45 of the stream block pools 450 can be sorted out based on the types or attributes of the data… As a result, the write operation and the garbage collection operation can be carried out more efficiently”; [col 1 lines 48-49] “Since the LBA regions of the file are sequential, the file can be accessed more quickly and more efficiently”).

The proposed combination of Yang, Danilov and Hashimoto does not explicitly teach a tree structure, a location of the node being determined.

However, Youngworth discloses unique chunk identifier for a memory chunk and teaches a tree structure,… and a location of the node being determined (see Youngworth, [0130] “the red/black tree element identifies a chunk”; [0141] “The node structure will contain the chunk ID, the VLUN ID, the NA_LUN ID, the offset, and the size, The Red/Black tree object will contain the node structure”; [0226] “Operations that relate the chunk ID to the memory location (e.g., block size and offset) at the object storage disk 1104… the number of nodes in a red/black tree, they might also be used to allow hashed lookups on ranges of chunk IDs”). 
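For intuition only, a chunk index in which each chunk is a node whose location is determined by its fragmentation degree can be approximated with a binary max-heap; this is a stand-in for the red/black tree of the cited art, not its actual structure, and every identifier below is invented:

```python
import heapq

class ChunkIndex:
    """Per-stream chunk index ordered by fragmentation degree.
    A max-heap stands in for a red/black tree here: each chunk is a
    node, and its position in the tree follows its degree."""
    def __init__(self):
        self._heap = []  # entries are (-degree, chunk_id)

    def insert(self, chunk_id, degree):
        heapq.heappush(self._heap, (-degree, chunk_id))

    def root(self):
        """Most fragmented chunk, i.e. the node at the root."""
        neg_degree, chunk_id = self._heap[0]
        return chunk_id, -neg_degree

    def pop_root(self):
        """Remove and return the most fragmented chunk."""
        neg_degree, chunk_id = heapq.heappop(self._heap)
        return chunk_id, -neg_degree

idx = ChunkIndex()
for cid, deg in [("A", 3), ("B", 9), ("C", 5)]:
    idx.insert(cid, deg)
print(idx.root())  # ('B', 9) — highest fragmentation degree sits at the root
```

Keeping the most fragmented chunk at the root is the property claim 7 later relies on (the third threshold being the fragmentation degree of the chunk at a root node).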
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of tree structure, location of the node, red-black tree and number of pages as being taught and disclosed by Youngworth, in the system taught by the proposed combination of Yang, Danilov and Hashimoto to yield the predictable results of efficiently avoiding excess fragmentation (see Youngworth, [0135] “With respect to excess fragmentation, the space on the free list is managed by elements that track ranges of free space. In this way a large range of free space may be represented by a single element. This system is very efficient unless there is a great deal of fragmentation of free space. To avoid excess fragmentation the free list is monitored for length”). Claims 2-7, 9, 12-17 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yang, Danilov, Hashimoto and Youngworth further in view of Varadarajan et al. (US 2020/0142878 A1, hereinafter “Varadarajan”). Regarding claim 2, the proposed combination of Yang, Danilov, Hashimoto and Youngworth teaches wherein the at least one chunk information further comprises: (see Yang, [col 6 lines 28-31] “As write commands are received, the chunks (or rather, identifiers (IDs) of these chunks) associated with those write commands may be placed in submission queue 335”) a number of pages of the at least one chunk occupied by the data, and… (see Youngworth, [0142] “With respect to the implementation of the physical storage layout, the allocation of space for Metadata structures occurs in 32 k chunks (e.g., 8 pages)”) in the at least one chunk (see Yang, [col 25 line 15] “a number of sectors in the chunk”). The proposed combination of Yang, Danilov, Hashimoto and Youngworth does not explicitly teach a starting page number of the data. 
However, Varadarajan discloses data structure including a root tree and teaches a starting page number of the data (see Varadarajan, [0068] “The page range rows may contain a version number, page range start”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of starting page number, threshold and removing a node as being taught and disclosed by Varadarajan, in the system taught by the proposed combination of Yang, Danilov, Hashimoto and Youngworth to yield the predictable results of providing improved concurrency and performance in page range index management (see Varadarajan, [0191] “Therefore, while this disclosure includes particular examples, the true scope of the disclosure should not be so limited since other modifications will become apparent upon a study of the drawings, the specification, and the following claims. It should be understood that one or more steps within a method may be executed in different order (or concurrently) without altering the principles of the present disclosure”). Claim 12 incorporates substantively all the limitations of claim 2 in an apparatus form and is rejected under the same rationale. Regarding claim 3, the proposed combination of Yang, Danilov, Hashimoto, Youngworth and Varadarajan teaches wherein the updating the fragmentation degree (see Danilov, [0041] “Moreover, n*m coding fragments for source data portions are replaced with just m coding fragments of the standard size for a united data portion”; [0044] “Performing data protection at the meta chunk level (instead of source chunk level) allows to reduce the capacity overheads by n times, where n is a number of source portions united in one meta chunk. 
Moreover, n*m previously generated coding fragments for the source portions are replaced with just m coding fragments of the standard size for a meta chunk”) of the at least one chunk in the chunk index structure comprises: (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”). updating the fragmentation degree of the at least one chunk, (see Danilov, [0041] “Moreover, n*m coding fragments for source data portions are replaced with just m coding fragments of the standard size for a united data portion”; [0044] “Performing data protection at the meta chunk level (instead of source chunk level) allows to reduce the capacity overheads by n times, where n is a number of source portions united in one meta chunk. Moreover, n*m previously generated coding fragments for the source portions are replaced with just m coding fragments of the standard size for a meta chunk”) based on the number of the pages (see Youngworth, [0142] “With respect to the implementation of the physical storage layout, the allocation of space for Metadata structures occurs in 32 k chunks (e.g., 8 pages)”) of the at least one chunk occupied by the data and (see Yang, [col 25 line 15] “a number of sectors in the chunk”) the fragmentation degree of chunks (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number 
of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”) the at least one chunk recorded (see Yang, [col 25 line 15] “a number of sectors in the chunk”) in the chunk index structure (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”). The motivation for the proposed combination is maintained. Claim 13 incorporates substantively all the limitations of claim 3 in an apparatus form and is rejected under the same rationale. Regarding claim 4, the proposed combination of Yang, Danilov, Hashimoto, Youngworth and Varadarajan teaches wherein the updating the stream to which the at least one chunk belongs in the chunk index structure comprises: (see Yang, [col 6 lines 33-41] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”) determining whether the stream identification which is assigned to the data is the same as stream ID of the input blocks (see Hashimoto, [col 6 lines 64-67] “When write data are associated with a stream ID, then the write data are input in one of the input blocks 45 that is associated with the same stream ID. 
Thus, in order to write the write data associated with the stream ID, an input block associated with the same stream ID has to be mapped”) the stream identification corresponding to the at least one chunk (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”; [col 22 lines 66-67] “SFR table including the chunk ID and the stream ID for the chunk ID”; [col 23 lines 49-52] “wherein the chunk-to-stream mapper includes a node entry, the node entry including the chunk ID and the stream ID for the chunk ID”) in the chunk index structure; and (see Yang, [col 6 lines 35-42] “Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”). updating the stream identification corresponding to the at least one chunk in the chunk index structure (see Yang, [col 6 lines 33-41] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”; [col 16 lines 18-27] “But other processing may still be performed, such as to update stream ID 530-1 of FIG. 5 assigned to chunk ID 515-1 of FIG. 
5… may update stream ID 530-1 of FIG. 5 for chunk ID 515-1”) if different (see Hashimoto, [col 6 lines 60-62] “each of the input blocks 45 is associated with a different stream identification code (stream ID)”). The motivation for the proposed combination is maintained. Claim 14 incorporates substantively all the limitations of claim 4 in an apparatus form and is rejected under the same rationale. Regarding claim 5, the proposed combination of Yang, Danilov, Hashimoto, Youngworth and Varadarajan teaches wherein the chunk index structure comprises chunk index structures of each of a plurality of streams, and (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”). wherein the updating the stream identification corresponding to the at least one chunk in the chunk index structure comprises: (see Yang, [col 6 lines 33-41] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335 (or the lack of chunk IDs in submission queue 335—chunks that are not being used may be assigned to lower priority streams as a result of non-use)”). removing the at least one chunk in the chunk index structure of the stream of the at least one chunk; and (see Yang, [col 20 lines 23-25] “to remove the chunk ID from the submission queue and update the chunk-to-stream mapper”). 
adding the at least one chunk to the chunk index structure of the stream to which the data is assigned (see Yang, [col 20 lines 21-25] “a queuer to place the chunk ID in the submission queue… and update the chunk-to-stream mapper”). Claim 15 incorporates substantively all the limitations of claim 5 in an apparatus form and is rejected under the same rationale. Regarding claim 6, the proposed combination of Yang, Danilov, Hashimoto, Youngworth and Varadarajan teaches wherein the method further comprises: recording a file writing frequency (see Yang, [col 10 lines 26-28] “access counts 820-1 through 820-4 may represent the total number of accesses (both reads and writes) of the chunks, or just the write accesses of the chunk”; [col 4 lines 9-27] “Since write commands may be issued by different sources (applications, file systems, virtual machines, etc.).. Frequency may be measured as access counts. Whenever a chunk is accessed (written), the access count for that chunk is incremented by 1. Higher access counts indicate a shorter life time for that chunk. Frequency thus reflects the temperature of a data chunk”). wherein the determining the first chunk to be defragmented based on number of fragmented blocks (see Hashimoto, [col 10 lines 7-49] “determines the number of physical blocks (Number of Fragmented Blocks=NFB) that include the specified physical addresses… a flowchart of a defragmentation operation carried out… selects one or more files (target files) to undergo the defragmentation operation by referring to the index 19. For example, files that have undergone defragmentation in the LBA space are selected as the target files. 
Alternatively, files that appear be fragmented based on the NFB or PFR received in response to the GPFI command may be selected”; [col 11 lines 57-63] “After selecting the LBA region, the OS 7 operates to read data of the physically fragmented file corresponding to the LBA region and physically write the read data as one or more chunks of data larger than fragments of the file… according to the defragmentation operation of the above embodiment, free blocks for storing the data that undergo the defragmentation are prepared in advance”) the fragmentation degrees of the plurality of chunks (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”) in the updated chunk index structure comprises: (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”; [col 9 lines 34-38] “chunk-to-stream mapper 340 of FIG. 3 may include a Sequential, Frequency, Recency (SFR) table. “Sequential, Frequency, Recency” refers to the manner in which the stream ID to be assigned to a chunk may be determined and updated”). 
determining the file writing frequency is greater than a second threshold; (see Yang, [col 12 lines 52-53] “if adjusted access count 1205 is greater than the threshold”). determining a fragmentation degree of plurality of chunks (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”) a second chunk (see Yang, [col 27 line 16] “selecting a second chunk ID”) corresponding to data of the first file, (see Hashimoto, [col 2 lines 29-31] “The host is configured to read physically fragmented data of a file stored in one or more physical storage regions”) from the chunk index structure of the each of the plurality of streams; (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”). 
based on the fragmentation degree of plurality of chunks (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”) the second chunk (see Yang, [col 27 line 16] “selecting a second chunk ID”) being greater than a third threshold, (see Varadarajan, [0178] “if the size of the last anchor tree is greater than the threshold”) determining that the second chunk is the first chunk; and (see Yang, [col 31 lines 48-51] “identifying a first identifier for a chunk on a storage device including the logical address; accessing a second identifier associated with the first identifier”). removing a node corresponding to (see Varadarajan, [0072] “the stream removes the extents from the corresponding extent node servers”) the first chunk (see Danilov, [claim 5] “the source chunks are first source chunks”; [0031] “consolidating two or more erasure-coded data portions (e.g., normal/source chunks) that have a reduced sets of data fragments”) from the chunk index structure (see Yang, [col 6 lines 33-38] “chunks may be removed from submission queue 335 and the stream assignments for these chunks may be updated. 
Chunk-to-stream mapper 340 may store information about what streams are currently assigned to various chunks: this information may be updated as a result of chunk IDs in submission queue 335”) of a first stream corresponding to chunk ID (see Yang, [col 21 lines 42-43] “a plurality of stream IDs, responsive to the stream ID for the chunk ID”) the first chunk (see Danilov, [claim 5] “the source chunks are first source chunks”; [0031] “consolidating two or more erasure-coded data portions (e.g., normal/source chunks) that have a reduced sets of data fragments”). The motivation for the proposed combination is maintained. Claim 16 incorporates substantively all the limitations of claim 6 in an apparatus form and is rejected under the same rationale. Regarding claim 7, the proposed combination of Yang, Danilov, Hashimoto, Youngworth and Varadarajan teaches wherein the third threshold is (see Varadarajan, [0178] “if the size of the last anchor tree is greater than the threshold”) a fragmentation degree of a chunk (see Danilov, [0029] “the chunk manager 120 can partition a piece of data (e.g., chunk) into k data fragments of equal size”; [0037] “a data store 304 (e.g., chunk table) can store information about portions/chunks, for example, the number of data fragments stored in each portion/chunk and their indices”; [0008] “the operations comprise combining a group of the chunks to generate a meta chunk, wherein the group of the chunks are determined not to have more than a defined number of data fragments”; [0059] “wherein a data block (e.g., data chunk) is divided into k data fragments and m coding fragments are created (e.g., by encoding the k data fragments)”) at a root node of a red-black tree (see Youngworth, [0133]-[0134] “With respect to Garbage collection in the SSBLC LUN space, all elements in the LUN space management implementation are union objects of a common root… when the amount of storage on a free list rises above a threshold, or the level of fragmentation 
of the free list rises above a threshold… All allocations are done in 32 k chunks… There is a back pointer in the red/black tree element. This is used to find the parent of an active element… The parent elements pointer to the targeted object is updated and the old element's space is placed on the free list”; [0141] “The Red/Black tree object will contain the node structure, right and left pointers, and the color”) of a stream corresponding to the second chunk (see Yang, [col 27 lines 16-17] “a second chunk ID in the queue corresponding to the stream ID”). The motivation for the proposed combination is maintained. Claim 17 incorporates substantively all the limitations of claim 7 in an apparatus form and is rejected under the same rationale. Regarding claim 9, the proposed combination of Yang, Danilov, Hashimoto, Youngworth and Varadarajan teaches wherein the method further comprises: receiving address update information transmitted by the storage device, the address update information comprising an update logical address and a physical address of data; (see Hashimoto, [col 3 line 66 – col 4 line ] “The RAM 12 includes storage regions for storing a look-up table (LUT) 13, which is used to manage mapping between LBAs and physical addresses”; [col 8 lines 33-34] “the controller 10 updates the LUT 13 so as to reflect changes in the correspondence between LBAs and physical addresses of blocks”). determining an identification of a corresponding chunk and (see Yang, [col 16 line 40] “to determine chunk ID 515-1”) an identification of a page based on the update logical address; and (see Hashimoto, [col 9 lines 6-7] “LBAs of the copied valid data are mapped to the pages of the input”; [col 8 lines 33-34] “the controller 10 updates the LUT 13 so as to reflect changes in the correspondence between LBAs and physical addresses of blocks”). 
setting the page corresponding to the identification of the page to 1 (see Youngworth, [0056] “if a chunk is allocated, but only the first page is written to, only a single page of actual storage is allocated within the SSD”) based on the identification of the corresponding chunk and (see Yang, [col 16 line 40] “to determine chunk ID 515-1”) the identification of the page (see Hashimoto, [col 9 lines 6-7] “LBAs of the copied valid data are mapped to the pages of the input”). The motivation for the proposed combination is maintained. Claim 19 incorporates substantively all the limitations of claim 9 in an apparatus form and is rejected under the same rationale.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Yang, Danilov, Hashimoto and Youngworth in view of Hamo et al. (US 2024/0411684 A1, hereinafter “Hamo”). Regarding claim 10, the proposed combination of Yang, Danilov and Hashimoto teaches wherein the storage device defragments the data of the first chunk by: (see Hashimoto, [col 10 lines 29-31] “a defragmentation operation carried out by the OS 7 and the storage device 2 of the storage system 1 cooperatively”; [col 10 lines 44-46] “selects one or more files (target files) to undergo the defragmentation operation by referring to the index 19… files that have undergone defragmentation in the LBA space are selected as the target files”; [col 11 lines 57-60] “operates to read data of the physically fragmented file corresponding to the LBA region and physically write the read data as one or more chunks of data larger than fragments of the file”). 
defragmenting the data of the first chunk using… (see Hashimoto, [col 10 lines 29-31] “a defragmentation operation carried out by the OS 7 and the storage device 2 of the storage system 1 cooperatively”; [col 10 lines 44-46] “selects one or more files (target files) to undergo the defragmentation operation by referring to the index 19… files that have undergone defragmentation in the LBA space are selected as the target files”; [col 11 lines 57-60] “operates to read data of the physically fragmented file corresponding to the LBA region and physically write the read data as one or more chunks of data larger than fragments of the file”). The proposed combination of Yang, Danilov and Hashimoto does not explicitly teach a Host Initiated Defrag (HID). However, Hamo discloses data storage devices and teaches a Host Initiated Defrag (HID) (see Hamo, [0029] “the action comprises a host-initiated defragmentation operation”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include the functionality of HID as being taught and disclosed by Hamo, in the system taught by the proposed combination of Yang, Danilov and Hashimoto to yield the predictable results of efficiently taking suitable actions (see Hamo, [0051] “If the area of the memory 104 does not satisfy the target operating condition, the controller 103 can perform an action on the area of the memory 104 to attempt to cause the area of the memory 104 to satisfy the target operating condition. The action take any suitable form, such as, but not limited to, a host-initiated defragmentation (HID) operation”). Allowable Subject Matter Claims 8 and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. 
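To make claim 10's host-initiated defragmentation (HID) concrete, here is a toy, non-record model of a device that compacts a chunk's scattered valid pages into one contiguous run when the host issues the command; the class and method names are invented for illustration and do not come from Hamo or the other references:

```python
class StorageDevice:
    """Toy device model: relocates a chunk's scattered valid pages
    into one contiguous run when the host initiates defragmentation."""
    def __init__(self, pages_per_chunk=8):
        self.pages_per_chunk = pages_per_chunk
        self.layout = {}  # chunk_id -> list of page slots (None = invalid/free)

    def host_initiated_defrag(self, chunk_id):
        """Compact valid data to the front of the chunk; return pages moved."""
        valid = [p for p in self.layout[chunk_id] if p is not None]
        free = self.pages_per_chunk - len(valid)
        self.layout[chunk_id] = valid + [None] * free
        return len(valid)

dev = StorageDevice()
dev.layout["first"] = ["d0", None, "d1", None, None, "d2", None, None]
moved = dev.host_initiated_defrag("first")
print(dev.layout["first"])
# ['d0', 'd1', 'd2', None, None, None, None, None]
```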
Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISHALI SHAH whose telephone number is (571)272-8532. The examiner can normally be reached Monday - Friday (7:30 AM to 4:00 PM). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AJAY BHATIA can be reached at (571)272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /VAISHALI SHAH/Primary Examiner, Art Unit 2156
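The Hashimoto passage quoted in the rejection describes reading the data of a physically fragmented file and rewriting it as one or more chunks larger than the original fragments. As a minimal, hypothetical sketch of that general idea (the names `Fragment`, `defragment`, and the data are illustrative and do not come from the cited references):

```python
# Illustrative sketch (not the cited references' implementation): coalesce a
# file's scattered fragments into fewer, larger contiguous chunks, as in the
# defragmentation concept the Office Action attributes to Hashimoto.
from dataclasses import dataclass

@dataclass
class Fragment:
    lba: int      # starting logical block address
    data: bytes   # payload stored at that address

def defragment(fragments: list[Fragment], chunk_size: int) -> list[Fragment]:
    """Read fragments in LBA order, then rewrite the combined payload as
    chunks of up to `chunk_size` bytes laid out contiguously from LBA 0."""
    payload = b"".join(f.data for f in sorted(fragments, key=lambda f: f.lba))
    chunks, lba = [], 0
    for off in range(0, len(payload), chunk_size):
        piece = payload[off:off + chunk_size]
        chunks.append(Fragment(lba=lba, data=piece))
        lba += len(piece)
    return chunks

# Three 2-byte fragments become two chunks, the first of the full chunk size:
frags = [Fragment(40, b"cd"), Fragment(10, b"ab"), Fragment(90, b"ef")]
out = defragment(frags, chunk_size=4)
# out -> [Fragment(lba=0, data=b'abcd'), Fragment(lba=4, data=b'ef')]
```

The sketch only models the logical reshaping (fewer, larger, contiguous chunks); it says nothing about how a real device schedules the reads and writes, which is where the claimed host-initiated variant differs.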

Prosecution Timeline

Feb 08, 2024
Application Filed
Jun 07, 2025
Non-Final Rejection — §103
Jul 11, 2025
Interview Requested
Jul 17, 2025
Examiner Interview Summary
Jul 17, 2025
Applicant Interview (Telephonic)
Sep 09, 2025
Response Filed
Oct 21, 2025
Final Rejection — §103
Dec 24, 2025
Response after Non-Final Action
Jan 23, 2026
Request for Continued Examination
Jan 31, 2026
Response after Non-Final Action
Feb 17, 2026
Non-Final Rejection — §103
Mar 20, 2026
Interview Requested
Apr 06, 2026
Applicant Interview (Telephonic)
Apr 06, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596730
SYSTEM TO ASSIST USERS OF A SOFTWARE APPLICATION
2y 5m to grant · Granted Apr 07, 2026
Patent 12585682
METHOD AND SYSTEM FOR GENERATING LONGFORM TECHNICAL QUESTION AND ANSWER DATASET
2y 5m to grant · Granted Mar 24, 2026
Patent 12579193
SELF-DISCOVERY AND CONSTRUCTION OF TYPE-SENSITIVE COLUMNAR FORMATS ON TYPE-AGNOSTIC STORAGE SERVERS TO ACCELERATE OFFLOADED QUERIES
2y 5m to grant · Granted Mar 17, 2026
Patent 12579199
SYSTEMS AND METHODS FOR TRACKING DOCUMENT REUSE AND AUTOMATICALLY UPDATING DOCUMENT FRAGMENTS ACROSS ONE OR MORE PLATFORMS
2y 5m to grant · Granted Mar 17, 2026
Patent 12572604
VEHICLE DATA COLLECTION SYSTEM AND METHOD INCLUDING RELIABILITY INFORMATION FOR A STORAGE UNIT FOR STORING PARTIAL LOG DATA
2y 5m to grant · Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+57.0%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 224 resolved cases by this examiner. Grant probability derived from career allow rate.
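The projections above report a base grant probability alongside an interview lift stated in percentage points. A small sketch of how such a lift figure could be computed from resolved-case counts (the counts below are made up for illustration and are not this examiner's actual data):

```python
# Hypothetical sketch: interview "lift" as the percentage-point difference in
# allow rate between resolved cases with and without an examiner interview.
# All inputs here are illustrative, not real examiner statistics.
def interview_lift(granted_with: int, resolved_with: int,
                   granted_without: int, resolved_without: int) -> float:
    """Allow-rate difference (with interview minus without), in points."""
    rate_with = granted_with / resolved_with
    rate_without = granted_without / resolved_without
    return round((rate_with - rate_without) * 100, 1)

# e.g. 99 of 100 interviewed cases granted vs. 42 of 100 without:
interview_lift(99, 100, 42, 100)  # -> 57.0
```

Note the lift is a difference in points between the two subgroups, not an amount added to the overall career allow rate, so a 57% base rate and a +57-point lift are not contradictory.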
