Prosecution Insights
Last updated: April 19, 2026
Application No. 17/740,041

DATA COMPRESSION API

Non-Final OA (§101, §103)

Filed: May 09, 2022
Examiner: TRUONG, LECHI
Art Unit: 2194
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Nvidia Corporation
OA Round: 3 (Non-Final)

Grant Probability: 87% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87%, above average (766 granted / 879 resolved; +32.1% vs TC avg)
Interview Lift: +37.1% in resolved cases with interview
Typical Timeline: 3y 1m average prosecution; 32 applications currently pending
Career History: 911 total applications across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 55.8% (+15.8% vs TC avg)
§102: 3.1% (-36.9% vs TC avg)
§112: 7.6% (-32.4% vs TC avg)

Tech Center average values are estimates. Based on career data from 879 resolved cases.

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

1. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered.

DETAILED ACTION

Claims 1-28 are presented for examination.

Claim Rejections - 35 USC § 101

2. 35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1, 2, 4, 5, 9, 10, 11, 14, 15, 16, 19, 22, 25, 26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Under Step 2A, Prong 1, the limitations "indicate based on an input parameter of the API call the one or more blocks of allocated storage to store information to be compressed", "is to indicate that the one or more blocks of storage", "indicate that data stored in the storage", "indicating that the information can be compressed", "indicating that an allocated block of memory", and "indicate a type of compression" recite a mental process, since "indicate" is a function that can reasonably be performed in the human mind, with the aid of pen and paper, through observation, evaluation, judgment, and opinion.
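For orientation, the limitation the Office characterizes as a mental process is an allocation call whose input parameter marks the returned blocks as holding data to be compressed. The following is a purely illustrative sketch of that kind of API; all names (`StorageAPI`, `allocate`, `compressible`) are invented and do not come from the application or the Office Action.

```python
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    size: int
    compressible: bool  # recorded from the API call's input parameter

class StorageAPI:
    def __init__(self):
        self._blocks = []

    def allocate(self, num_blocks, block_size, compressible=False):
        # `compressible` plays the role of the claimed input parameter that
        # indicates the allocated blocks are to store information to be
        # compressed.
        new = [Block(len(self._blocks) + i, block_size, compressible)
               for i in range(num_blocks)]
        self._blocks.extend(new)
        return new

blocks = StorageAPI().allocate(num_blocks=2, block_size=4096, compressible=True)
assert all(b.compressible for b in blocks)
```

The dispute at Step 2A is precisely whether such a call, performed by circuitry rather than mentally, integrates the "indicating" into a practical application.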
Under Step 2A, Prong 2, the additional element "perform an application programming interface ('API') to allocate one or more blocks of storage" is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(f).

4. Under Step 2B, the additional element "perform an application programming interface ('API') to allocate one or more blocks of storage" is merely information processed within the identified mental process and amounts to mere instructions to apply the mental process under MPEP 2106.05(f): it merely generally links the use of the judicial exception to a particular technological environment or field of use, and merely applies the judicial exception. It therefore does not amount to significantly more and cannot provide an inventive concept.

5. The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, as discussed above with respect to integration of the abstract idea into a practical application. See MPEP 2106.05(d). Thus, the claim is not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

6. Claims 1, 9, 15, 22 are rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1) and further in view of Lin (US 9635132 B1).

As to claim 1, Kataoka teaches circuitry to, in response to an application programming interface ("API") call, allocate one or more blocks of storage and indicate the one or more blocks of allocated storage to store information to be compressed (circuit configured to control communication, para[0113], ln 13-16 / First, the compression function is called by operations of an operating system and application program included in the computer 1 (in S101). When the compression function is called, the controller 111 executes a pre-process such as securing of, for example, the storage regions A1, A2, A3, and A4 (the storage regions A1, A2, and A3 are illustrated in FIG. 1) and setting of the positional information (for example, the positional information illustrated in FIG. 9) within the storage regions (in S102). When the process of S102 is terminated, the controller 111 loads the content part of the file F1 to be compressed into the storage region A1 (in S103), para[0074] to para[0075], ln 1-5 / FIG. 1 illustrates the flow of the compression process using LZ77. First, a storage region A1, a storage region A2, and a storage region A3 are secured in a memory, for example.
Data of a content part included in a file F1 illustrated in FIG. 1 is loaded into the storage region A, para[0038], ln 1-8).

Azizi teaches, in response to an application programming interface ("API") call, allocating one or more blocks of storage and indicating the one or more blocks of allocated storage to store information (In response to receiving a reserve( ) function call, the memory overcommitment circuitry 302 performs the method 600 illustrated in FIG. 6. FIG. 6 is a flowchart illustrating a method 600 for remapping data in a data region 308, according to an embodiment. At 602, the memory overcommitment circuitry 302 attempts to allocate a new block of data out of the data region 308 that matches the size specified in the reserve( ) call. If the memory overcommitment circuitry 302 is not able to allocate the memory, it returns a failure notification to the calling software. At 604, the memory overcommitment circuitry 302 copies the contents of the current block from the data region corresponding to the address as provided in the reserve( ) call, to the newly allocated data block in the data region 308…. the memory reduction circuitry 314 may implement one or more compression techniques to compress data in the data region 308, para[0058], para[0059], ln 1-5 / para[0060], ln 16-20 / The memory device 306 stores a data region 308. The data region 308 stores the data contents that are written into the memory system. The data stored may be compressed, deduplicated, or both, para[0046], ln 3-9).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka with Azizi to incorporate the above feature, because this reduces the amount of memory used by applying data compression techniques to the memory contents and eliminates duplicate copies of data in memory.
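The reserve()-style flow quoted from Azizi (attempt a new allocation of the requested size, return a failure notification if it cannot be satisfied, otherwise copy the current block's contents into the new block) can be sketched loosely as follows; the class and names here are invented simplifications, not Azizi's actual circuitry.

```python
class MemoryOvercommit:
    """Toy model of the quoted reserve() remapping flow."""
    def __init__(self, free_blocks):
        self.free_blocks = free_blocks
        self.data_region = {}          # address -> block contents

    def reserve(self, addr, size_blocks):
        # Step 602: try to allocate a new block matching the requested size.
        if size_blocks > self.free_blocks:
            return None                # failure notification to the caller
        self.free_blocks -= size_blocks
        new_addr = len(self.data_region) + 1000
        # Step 604: copy the current block's contents to the new block.
        self.data_region[new_addr] = self.data_region.get(addr, b"")
        return new_addr

m = MemoryOvercommit(free_blocks=4)
m.data_region[0] = b"payload"
new_addr = m.reserve(addr=0, size_blocks=2)
assert new_addr is not None and m.data_region[new_addr] == b"payload"
```

In Azizi, a separate memory reduction stage may then compress the data region's contents; that stage is omitted here.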
Lin teaches indicating, based on an input parameter of the API call, the one or more blocks of allocated storage to store information to be compressed (upload blocks of data from a client network 250 for storing in the remote data store 214 via the APIs 242, col 12, ln 14-16 / In at least some embodiments, input parameters to the upload block API may include, but are not limited to, one or more authentication parameters, a volume identifier (volume ID), a snapshot identifier (snapshot ID), and, for each of one or more data blocks to be downloaded, a data offset, a data length, and a compressed parameter, col 17, ln 57-64 / The volume ID parameter may be used to specify a volume on remote data store 212 from which data is to be downloaded, col 20, ln 2-5 / The snapshot ID parameter may be used to specify a snapshot on remote data store 212 from which data is to be downloaded. In at least some embodiments, a snapshot version may also be included as an input parameter, col 20, ln 5-10 / The compressed parameter may be used to indicate if the data being downloaded is or is not compressed according to a compression scheme, col 20, ln 15-17).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka and Azizi with Lin to incorporate the above feature, because this reduces bandwidth usage when uploading data from a local application to the block storage service, and thus more of the connection's bandwidth may be available for other applications.

As to claims 9, 15, 22, they are rejected for the same reasons as claim 1 above.

7. Claim 2 is rejected under 35 U.S.C.
103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Yamamoto (US 20050102491 A1), in view of Lin (US 9635132 B1), and further in view of Yoo (US 20170308226 A1).

As to claim 2, Yamamoto teaches the circuitry, in response to the API call, is to indicate that the one or more blocks of storage are to comprise information that is compressible for transmission to circuitry in a processing device (the conversion unit may include: a compression information addition subunit operable to add, to a call instruction for calling [API] the predetermined function, information indicating to the processor that the data retained in the register should be compressed and then saved to the stack memory when the predetermined function is called, para[0039], ln 1-8 / In the description area 11a relating to the function "main", instructions respectively calling "func_a", "func_b", and "func_e" are described, para[0082], ln 1-3 / The description area 11e relating to "func_e" includes a description area 11f for a pragma #STACK_COMPRESS. In the present embodiment, the pragma #STACK_COMPRESS is a kind of pragma that indicates, to the program conversion apparatus 100, that the data to be saved from the guaranteed register to the stack memory, in response to a call of a function, should be compressed, para[0083], ln 21-31 / para[0082], ln 1-3).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Yamamoto to incorporate the feature of information compressible for transmission to circuitry in a processing device, because this avoids the need to mount a large-capacity stack memory to the processor in order to avoid overflow of the stack memory.
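Yamamoto's #STACK_COMPRESS pragma attaches a compression indication to a function call so that state saved on the function's behalf is compressed. As a rough analogy only (Yamamoto operates on compiled code, not Python; the decorator and attribute names here are invented), the "indication attached to a call" idea can be modeled like this:

```python
def stack_compress(func):
    # Marker analogous in spirit to the #STACK_COMPRESS pragma: it flags
    # that state saved when `func` is called should be compressed.
    func.compress_saved_state = True
    return func

@stack_compress
def func_e():
    return "compute"

assert getattr(func_e, "compress_saved_state", False) is True
assert func_e() == "compute"
```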
Yoo teaches compressible for transmission to circuitry in a processing device (the first processor 110 may compress screen frame data by using a specified algorithm and may transmit the compressed screen frame data to the first display driving integrated circuit 130, para[0051], ln 3-10). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Yamamoto with Yoo to incorporate the feature of information compressible for transmission to circuitry in a processing device, because this improves the quality of data transferred to the circuit at high speed.

8. Claims 3, 10, 16, 18, 23 are rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Bowman (US 20210019284 A1).

As to claim 3, Bowman teaches that performance of the application programming interface designates a region of storage to be allocated (the protocols for communications between the multiple node devices and the set of storage devices may be at a relatively high level (e.g., through use of a storage substrate API) that may be entirely independent of the selection of the file system used across the set of storage devices. Thus, in such embodiments, the multiple node devices may play little or no role in dividing the data set parts into data blocks of a type and/or size that may be required for storage in accordance with requirements associated with the file system, and/or may play little or no role in the selection and/or specification of storage locations within the set of storage devices at which such data blocks may be stored.
Also, in such embodiments, it may be that the further compression and/or the encryption of portions of data set parts and/or of whole data set parts may be performed directly within the data aggregation threads by the processor(s) of the node devices (e.g., using one or more thread-safe callable libraries of routines), and/or may be performed by the processor(s) of the set of storage devices, para[0059], ln 24-35).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Bowman to incorporate the feature that performance of the application programming interface designates a region of the storage to be allocated, because this enables efficient data searches and efficient retrieval of some or all of such large data sets for use by a grid of node devices.

As to claim 10, it is rejected for the same reasons as claim 2 above. In addition, Bowman teaches compressible for transmission between components of a processing device (para[0051] to para[0052]) for the same reasons as claim 2 above.

As to claim 16, it is rejected for the same reasons as claim 10 above.

As to claim 18, Bowman teaches the API comprises a function to allocate a block of storage to store compressible information (para[0051] to para[0052]) for the same reasons as claim 2 above.

As to claim 23, Bowman teaches providing a function in the API to indicate that the information can be compressed prior to transmission between components of the processing device (para[0051] to para[0052]) for the same reasons as claim 2 above.

9. Claims 4, 11, 17, 24 are rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Surti (US 9912957 B1).
As to claim 4, Surti teaches the information is compressed by a processing device, based at least in part on the indicating, for transmission to an L2 cache (In one embodiment a compression operation can be performed by the compression/decompression unit 628 to compress the data that is evicted from the render cache 624 before the data is written to the L3 cache 630, col 23, ln 59-67). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Surti to incorporate the feature of information compressible for transmission to circuitry in a processing device, because this may improve the quality of data transferred to the circuit at high speed.

As to claim 11, it is rejected for the same reasons as claim 4 above.

As to claim 17, Surti teaches a processing device compresses information stored in the storage and transmits the compressed information to an L2 cache (col 23, ln 59-67) for the same reasons as claim 4 above.

As to claim 24, Surti teaches compressing the information in response to the indication; and transmitting the compressed information to an L2 cache (col 23, ln 59-67) for the same reasons as claim 4 above.

10. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Kato (US 5897251 A).

As to claim 5, Kato teaches the circuitry, in response to the API call, is to cause data to be stored in a page table to indicate that the one or more blocks of storage comprise compressible data (FIG. 3 is a block diagram showing a circuit structure of a control portion of a copying machine, col 3, ln 15-20 / Code memory 306 is managed by management table MT1 stored in RAM 126. FIGS. 5A and 5B show the relation between management table MT1 and code memory 306. Code memory 306 is divided into memory areas of 32K byte unit.
In order to enable simultaneous control of writing (at the time of reading of image data) and reading (at the time of printing), code data is stored in each area. Management table MT1 of each page stores the number indicating the code memory area, a page number, the number of concatenated areas, data indicating divisional scan data, and various information necessary for compression/decompression such as the manner of compression, data length and the like. Based on the information, code memory 306 is dynamically managed, col 6, ln 34-52). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Kato to incorporate the feature of circuitry to cause data to be stored in a page table to indicate that the storage comprises compressible data, because this accommodates reduction of data size or change in data direction and allows the data to be presented as continuous information.

11. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Wisor (US 7219219 B1).

As to claim 6, Wisor teaches the compressed information is uncompressed by post-cache compression circuitry (FIG. 3 illustrates the middle POST phase 12. The remainder of the BIOS code may be stored in a nonvolatile memory in compressed form. The middle POST phase 12 decompresses the BIOS (block 42). For example, the decompressed BIOS may be stored in the memory system, since the memory system was enabled in the early POST phase 10. The middle POST phase 12 enables any caches in the system (e.g. caches in the processors, external caches, etc.) and establishes the stack in memory (block 44), col 3, ln 8-17).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Wisor to incorporate the feature of compressed information being uncompressed by post-cache compression circuitry, because this accommodates the different software architectures that various BIOS vendors implement for the POST code.

12. Claims 7, 14, 19, 21, 26 are rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Wegener (US 20140101485 A1).

As to claim 7, Wegener teaches information to be compressed is to be uncompressed by post-cache compression circuitry (A set of different compression and decompression algorithms can be included in the operations of the API, and compression parameters of the API can identify a selected one of the different algorithms to be applied for compression and decompression operations in a particular data move operation. The set of different algorithms can include algorithms specialized for data types identified in the parameters of the API, including for example algorithms for compression of floating-point numbers, algorithms for compression of integers, algorithms for compression of image data, and so on, para[0141], ln 28-41). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Wegener to incorporate the feature of a function of the API comprising a parameter to indicate a type of data compression to be used to compress the information, because this utilizes the models for the plurality of compression algorithms to select a best fit for the provided criteria.
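Wegener's teaching of an API parameter that selects among a set of compression algorithms can be sketched as follows. This is an illustrative analogy only: it uses Python's standard `zlib` and `lzma` codecs as stand-ins for Wegener's per-data-type algorithms, and the function and parameter names are invented.

```python
import lzma
import zlib

# A parameter of the (hypothetical) data-move API names the algorithm,
# in the spirit of Wegener's compression-parameter selection.
ALGORITHMS = {"deflate": zlib.compress, "lzma": lzma.compress}

def move_data(payload: bytes, compression: str = "deflate") -> bytes:
    compress = ALGORITHMS[compression]  # the parameter picks the algorithm
    return compress(payload)

out = move_data(b"x" * 1000, compression="deflate")
assert len(out) < 1000 and zlib.decompress(out) == b"x" * 1000
```

A real implementation would also key the choice on the data type (floating point, integer, image), per the quoted paragraph.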
As to claim 14, Wegener teaches the API comprises at least one of a function or parameter to indicate a type of compression to use to transmit information stored in the storage (para[0141], ln 28-41 / para[0041]) for the same reasons as claim 7 above.

As to claim 19, Wegener teaches a function of the API comprises a parameter to indicate that data stored in the storage can be compressed for transmission between components of a processing device (para[0141], ln 28-41 / para[0041]) for the same reasons as claim 7 above.

As to claim 21, Wegener teaches a function or parameter to indicate a type of compression to use to transmit information stored in the storage (para[0141], ln 28-41 / para[0041]) for the same reasons as claim 7 above.

As to claim 26, Wegener teaches a function of the API comprises a parameter to indicate a type of compression (para[0141], ln 28-41 / para[0041]) for the same reasons as claim 7 above.

13. Claims 8, 20, 25, 28 are rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Mammou (US 20140139513 A1).

As to claim 8, Mammou teaches circuitry, in response to the API call, to store the information to be compressed in a cache and decompress the information to transmit the information to client circuitry of the cache (the API 600 is operative to interface with, for example, Application 114, to receive one or more requests to process 3D graphics data, for example, the request 200 as shown in FIG. 2. The API 600, in this example, is also operative to return information indicating memory addresses where the processed 3D graphics data is made available to the requester, e.g., the Application 114, para[0048], ln 1-9 / the cache control module 604 is operatively connected to the API 600, the decompress control module 602, and compress control module 606.
In an event that the Application Request such as the request 200 indicates that the requested 3D graphics data is to be cached in the GPU memory (e.g., the request 200 includes a cache request), the cache control module 604 is operative to determine how to cache the requested 3D graphics data. In cases when the requested-to-be-cached 3D graphics data is already compressed with an encoding format supported by the video acceleration hardware provided by the GPU, e.g., the requested 3D graphics data is compressed as intra-frames of one or more H.264 formatted videos, the cache control module 604 is operative to generate a control command instructing the GPU to store the requested 3D graphics data in the GPU's memory. In those cases, the cache table 900, for example, stored in the system memory 126 as shown, para[0050], ln 1-21 / The request 200 may also indicate that processed 3D graphics data, in a decompressed state, is to be cached on the GPU memory for further processing, para[0050], ln 14-18).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Mammou to incorporate the feature that the application programming interface causes a processing unit to store the compressed information in a cache and decompress the information to transmit the information to client circuitry of the cache, because this achieves efficient usage of cache space for the acceleration hardware provided by the GPU.

As to claim 20, it is rejected for the same reasons as claim 8 above.

As to claim 25, Mammou teaches the indication comprises data indicating that an allocated block of memory is to comprise data to be compressed for transmission between components of the processing device (para[000151], ln 26-33) for the same reasons as claim 8 above.
As to claim 28, Mammou teaches providing, by the API, a memory allocation function to allocate memory whose contents are to be compressed in response to initiation of a transmission between components of the processing device (para[000151], ln 26-33) for the same reasons as claim 8 above.

14. Claims 12, 27 are rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Guilford (US 20160283504 A1).

As to claim 12, Guilford teaches the one or more processors, in response to the API call, are to further indicate that the one or more blocks of memory comprise data to be compressed for transmission between components (Such jobs may be initiated by processor 1802 writing control and status registers (CSRs) to indicate parameters of compression, what type of compression is to be performed, and where such data to be compressed is located. Some parameters and data to be compressed may be located in, for example, memory hierarchy 1832. Memory hierarchy 1832 may include any suitable number of combinations of physical memory, caches, or other storage, para[0151], ln 5-15 / The resulting data from data compression may be sent by compression module 1818 to memory hierarchy 1832. The results may be read, sent to recipients, decompressed, or otherwise utilized or processed by cores 1814. While cores 1814 could themselves perform the lossless data compression, para[0151], ln 26-33). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Guilford to incorporate the feature that the indication indicates that an allocated block of memory comprises data to be compressed for transmission between components, because this produces the output data sequence and sends the output data sequence to the memory hierarchy.
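The cache behavior attributed to Mammou and Guilford (hold entries in compressed form, decompress on the way out to a client) can be illustrated with a small sketch. The class below is an invented simplification using Python's `zlib`, not an implementation from either reference.

```python
import zlib

class CompressedCache:
    """Toy cache: entries are stored compressed and decompressed on read."""
    def __init__(self):
        self._store = {}

    def put(self, key, data: bytes):
        self._store[key] = zlib.compress(data)   # store the compressed form

    def get(self, key) -> bytes:
        # Decompress before "transmitting" the entry to client circuitry.
        return zlib.decompress(self._store[key])

cache = CompressedCache()
cache.put("frame0", b"pixels" * 100)
assert cache.get("frame0") == b"pixels" * 100
```

For repetitive payloads like the one above, the cached (compressed) entry is much smaller than the original, which is the cache-space efficiency the rejection's motivation statement relies on.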
As to claim 27, Guilford teaches storing compressed information in a cache; and decompressing the compressed information prior to transmitting the decompressed information to a component of the processing device (para[0050], ln 14-18) for the same reasons as claim 8 above.

15. Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Kataoka (US 20150248432 A1) in view of Azizi (US 20190213120 A1), in view of Lin (US 9635132 B1), and further in view of Henkel (US 6691305 B1).

As to claim 13, Henkel teaches information to be compressed is to be decompressed by circuitry of a processing device (the invention further provides a circuit for decompressing compressed object code instructions that have been compressed to reduce power consumption, col 6, ln 60-56). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teaching of Kataoka, Azizi and Lin with Henkel to incorporate the feature that the compressed information is decompressed by circuitry of a processing device, because this reduces the power consumption of a complete system comprising a CPU, instruction cache, data cache, main memory, data buses and address bus.

Response to Arguments

A. Applicant's amendment filed on 11/17/2025 has been considered, but the arguments are not persuasive. Applicant argued in substance that: (1) "The element that the Office characterizes as a mental process is performed as part of an API operation, which cannot be carried out in the human mind"; and (2) the cited references "do not teach or suggest 'circuitry to, in response to an application programming interface ("API") call ... indicate, based on an input parameter of the API call, the one or more blocks of allocated storage to store information to be compressed.'"

B. Examiner respectfully disagrees with Applicant's remarks.

As to point (1), Claims 1-28 are rejected under 35 U.S.C.
101 because the claimed invention is directed to an abstract idea without significantly more. Claims 1, 8, 15 are rejected under 35 USC 101 for an abstract idea without significantly more. Under Step 2A, Prong 1, the limitation "indicate based on an input parameter of the API call the one or more blocks of allocated storage to store information to be compressed" recites a mental process, since "indicate" is a function that can reasonably be performed in the human mind, with the aid of pen and paper, through observation, evaluation, judgment, and opinion. Under Prong 2, the additional element "perform an application programming interface ('API') to allocate one or more blocks of storage" is recited at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using a generic computer component. Accordingly, the additional elements do not integrate the recited judicial exception into a practical application, and the claim is therefore directed to the judicial exception. See MPEP 2106.05(f). Under Step 2B, the additional element "perform an application programming interface ('API') to allocate one or more blocks of storage" is merely information processed within the identified mental process and amounts to mere instructions to apply the mental process under MPEP 2106.05(f): it merely generally links the use of the judicial exception to a particular technological environment or field of use, and merely applies the judicial exception. It therefore does not amount to significantly more and cannot provide an inventive concept.
As to point (2), Kataoka teaches circuit configured to control communication, para[0113], ln 13-16 / First, the compression function is called by operations of an operating system and application program included in the computer 1 (in S101). When the compression function is called, the controller 111 executes a pre-process such as securing of, for example, the storage regions A1, A2, A3, and A4 (the storage regions A1, A2, and A3 are illustrated in FIG. 1) and setting of the positional information (for example, the positional information illustrated in FIG. 9) within the storage regions (in S102). When the process of S102 is terminated, the controller 111 loads the content part of the file F1 to be compressed into the storage region A1 (in S103), para[0074] to para[0075], ln 1-5 / FIG. 1 illustrates the flow of the compression process using LZ77. First, a storage region A1, a storage region A2, and a storage region A3 are secured in a memory, for example. Data of a content part included in a file F1 illustrated in FIG. 1 is loaded into the storage region A, para[0038], ln 1-8. Azizi teaches (In response to receiving a reserve( ) function call, the memory overcommitment circuitry 302 performs the method 600 illustrated in FIG. 6. FIG. 6 is a flowchart illustrating a method 600 for remapping data in a data region 308, according to an embodiment. At 602, the memory overcommitment circuitry 302 attempts to allocate a new block of data out of the data region 308 that matches the size specified in the reserve( ) call. If the memory overcommitment circuitry 302 is not able to allocate the memory, it returns a failure notification to the calling software. At 604, the memory overcommitment circuitry 302 copies the contents of the current block from the data region corresponding to the address as provided in the reserve( ) call, to the newly allocated data block in the data region 308….
the memory reduction circuitry 314 may implement one or more compression techniques to compress data in the data region 308 (para. [0058]; para. [0059], ln. 1-5; para. [0060], ln. 16-20). The memory device 306 stores a data region 308. The data region 308 stores the data contents that are written into the memory system. The data stored may be compressed, deduplicated, or both (para. [0046], ln. 3-9).

Lin teaches uploading blocks of data from a client network 250 for storing in the remote data store 214 via the APIs 242 (col. 12, ln. 14-16). In at least some embodiments, input parameters to the upload block API may include, but are not limited to, one or more authentication parameters, a volume identifier (volume ID), a snapshot identifier (snapshot ID), and, for each of one or more data blocks to be downloaded, a data offset, a data length, and a compressed parameter (col. 17, ln. 57-64). The volume ID parameter may be used to specify a volume on remote data store 212 from which data is to be downloaded (col. 20, ln. 2-5). The snapshot ID parameter may be used to specify a snapshot on remote data store 212 from which data is to be downloaded; in at least some embodiments, a snapshot version may also be included as an input parameter (col. 20, ln. 5-10). The compressed parameter may be used to indicate if the data being downloaded is or is not compressed according to a compression scheme (col. 20, ln. 15-17).

Conclusion

US 20120271868 A1 teaches that when a continuous region on the disk cannot be secured from the original storage location for the compressed extent, the file system program 3022 assigns a new region to another region. US 8341133 B2 teaches that if preferred write log memory 18A becomes full, additional memory 18B is allocated for write log entries 52, and a subsequent transactional lock is stored as an uncompressed transactional lock 72 in an auxiliary memory block 70, as shown in FIG. 2B.
US 20220253236 A1 teaches that memory, such as random access memory (RAM), operates at a high speed, as a distinction from storage that provides slow-to-access information but offers higher capacities. US 20210112256 A1 teaches that, in each instance, the program instructions determine and save a map of quality values of corresponding blocks of pixels indicating the level of compression applied to the corresponding pixel or block of pixels; as will be understood, each of the compressors 250, 260 acts upon the relevant pixel or block of pixels of each frame in place, so that the compressed form of each frame is the byproduct of the compression process.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LECHI TRUONG, whose telephone number is (571) 272-3767. The examiner can normally be reached 10 AM to 8 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Young Kevin, can be reached at (571) 270-3180. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/LECHI TRUONG/
Primary Examiner, Art Unit 2194
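As background to Kataoka's cited teaching, which walks through an LZ77 flow (secure storage regions in memory, load the file content, compress), a minimal LZ77-style sketch follows. This is an illustrative greedy sliding-window implementation for readers unfamiliar with the algorithm, not Kataoka's implementation; it emits (back-offset, match-length, next-byte) triples:

```python
def lz77_compress(data: bytes, window: int = 255) -> list:
    """Greedy LZ77: emit (back-offset, match-length, next-byte) triples."""
    out = []
    i = 0
    while i < len(data):
        best_len, best_off = 0, 0
        # Search the sliding window of recent history for the longest match.
        for off in range(max(0, i - window), i):
            length = 0
            while (i + length < len(data)
                   and off + length < i
                   and data[off + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - off
        # Literal byte following the match (None if input is exhausted).
        nxt = data[i + best_len] if i + best_len < len(data) else None
        out.append((best_off, best_len, nxt))
        i += best_len + 1
    return out

def lz77_decompress(triples) -> bytes:
    """Reverse the triples: copy from history, then append the literal."""
    buf = bytearray()
    for off, length, nxt in triples:
        for _ in range(length):
            buf.append(buf[-off])
        if nxt is not None:
            buf.append(nxt)
    return bytes(buf)

roundtrip = lz77_decompress(lz77_compress(b"abcabcabcabc"))
```

Production encoders (e.g., DEFLATE-family codecs) add entropy coding and hash-chain match search on top of this basic structure; the staging of input and output buffers corresponds loosely to the storage regions A1-A3 described in the reference.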

Prosecution Timeline

May 09, 2022
Application Filed
Mar 08, 2025
Non-Final Rejection — §101, §103
Apr 03, 2025
Interview Requested
Apr 24, 2025
Examiner Interview Summary
Apr 24, 2025
Applicant Interview (Telephonic)
Jun 13, 2025
Response Filed
Sep 12, 2025
Final Rejection — §101, §103
Oct 21, 2025
Interview Requested
Nov 07, 2025
Applicant Interview (Telephonic)
Nov 13, 2025
Examiner Interview Summary
Nov 17, 2025
Response after Non-Final Action
Dec 02, 2025
Request for Continued Examination
Dec 10, 2025
Response after Non-Final Action
Feb 23, 2026
Non-Final Rejection — §101, §103
Apr 01, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602245
QUANTUM ISOLATION ZONES
2y 5m to grant Granted Apr 14, 2026
Patent 12602255
Transaction Method and Apparatus with Fixed Execution Order
2y 5m to grant Granted Apr 14, 2026
Patent 12596580
METHOD AND SYSTEM FOR OPTIMIZING GPU UTILIZATION
2y 5m to grant Granted Apr 07, 2026
Patent 12596952
QUANTUM RESOURCE ACCESS CONTROL THROUGH CONSENSUS
2y 5m to grant Granted Apr 07, 2026
Patent 12583106
AUTOMATION WINDOWS FOR ROBOTIC PROCESS AUTOMATION
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
87%
Grant Probability
99%
With Interview (+37.1%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 879 resolved cases by this examiner. Grant probability derived from career allow rate.
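Under one assumed reading, the headline numbers are mutually consistent: 766 grants over 879 resolved cases gives the 87% career allow rate, and if the +37.1% interview lift is taken as a percentage-point gap between cases resolved with and without an interview, the 99% with-interview figure implies roughly 61.9% without one. A quick check under that assumption (the point-gap interpretation is ours, not stated by the tool):

```python
granted, resolved = 766, 879
career_allow = granted / resolved        # ≈ 0.871, displayed as 87%

with_interview = 0.99                    # grant rate for cases resolved with interview
lift_points = 0.371                      # +37.1, assumed to be a percentage-point gap
without_interview = with_interview - lift_points  # ≈ 0.619 implied
```

If the lift were instead defined relative to the without-interview rate, the implied figures would differ; the tool's methodology note does not say which definition it uses.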
