Prosecution Insights
Last updated: April 19, 2026
Application No. 19/018,818

APPARATUS AND METHOD WITH CHECKPOINT DATA PROCESSING

Status: Non-Final OA (§102)
Filed: Jan 13, 2025
Examiner: YU, JAE UN
Art Unit: 2138
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 6m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average; 666 granted / 741 resolved; +34.9% vs TC avg)
Interview Lift: +10.3% (moderate lift, measured on resolved cases with interview)
Typical Timeline: 2y 6m avg prosecution; 9 currently pending
Career History: 750 total applications across all art units

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 28.2% (-11.8% vs TC avg)
§112: 8.4% (-31.6% vs TC avg)
Tech Center average shown as estimated baseline. Based on career data from 741 resolved cases.
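The per-statute deltas above are internally consistent: adding each reported delta back to the examiner's rate yields the same Tech Center baseline of roughly 40% in every row. A minimal check, assuming the deltas are simple percentage-point differences (an assumption about the dashboard's arithmetic, not its documented formula):

```python
# Examiner per-statute rates and reported deltas vs the Tech Center average,
# taken from the table above (both in percentage points).
rates = {
    "101": (4.9, -35.1),
    "103": (49.2, 9.2),
    "102": (28.2, -11.8),
    "112": (8.4, -31.6),
}

# Implied TC baseline per statute: examiner rate minus the reported delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
print(implied_tc_avg)  # every statute implies the same 40.0 baseline
```

That all four rows back out to an identical baseline suggests the deltas are computed against a single aggregate Tech Center figure rather than per-statute averages.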

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

1. Claims 1, 4, 8, 9, 12, and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Zhao et al. (US 2019/0324856), “Zhao”.

2. As per claim 1, Zhao discloses one or more processors [figure 6] configured to: in response to receiving a checkpoint signal, determine a location of related data of an application in a memory region [in order to perform a checkpoint operation, locating an intermediate DL model residing in a server node, abstract]; based on the location of the related data in the memory region [an intermediate DL model residing in a server node, abstract], perform a checkpointing operation on the related data using either one or both of a host processor and an accelerator processor [performing a checkpoint operation on the intermediate DL model using a CPU, paragraph 45]; and compress the related data using the either one or both of the host processor and the accelerator processor that has performed the checkpointing operation [compressing the intermediate DL model using the CPU, paragraph 45], wherein the location of the related data in the memory region comprises a location in either one or both of a host memory region and an accelerator memory region [an intermediate DL model residing with accelerator devices in a server node, abstract].

3. As per claim 4, Zhao discloses wherein, for the compressing, the one or more processors are configured to: in response to performing the checkpointing operation using the accelerator processor on data located in the accelerator memory region of the related data [the checkpoint optimization module function executed in a host processor or an accelerator device, paragraph 35], determine a size and a distribution characteristic of the data located in the accelerator memory region [checkpointing in a “distributed environment” for the data size of 500 MB or greater residing in a GPU device memory, paragraph 5]; and compress the data located in the accelerator memory region based on the determined size and the determined distribution characteristic [compressing the intermediate DL model residing with accelerator devices in the server node, paragraph 45, abstract].

4. As per claim 8, Zhao discloses wherein, for the performing of the checkpointing operation, the one or more processors are configured to perform the checkpointing operation [the checkpoint optimization module function executed in a host processor or an accelerator device, paragraph 35] in parallel [parallel GPU devices 762, figure 7], and respective files generated by the checkpointing operation performed in parallel are independent of each other [such parallel GPU devices operating independent of each other, figure 7].

5. As per claim 9, Zhao discloses receiving a checkpoint signal [step 502, figure 5]; in response to receiving the checkpoint signal, determining a location of related data of an application in a memory region [in order to perform a checkpoint operation, locating an intermediate DL model residing in a server node, abstract]; based on the location of the related data in the memory region, performing a checkpointing operation on the related data using either one or both of a host processor and an accelerator processor [performing a checkpoint operation on the intermediate DL model using a CPU, paragraph 45]; and compressing the related data using the either one or both of the host processor and the accelerator processor that has performed the checkpointing operation [compressing the intermediate DL model using the CPU, paragraph 45], wherein the location of the related data in the memory region comprises a location in either one or both of a host memory region and an accelerator memory region [an intermediate DL model residing with accelerator devices in a server node, abstract].

6. As per claim 12, Zhao discloses wherein the compressing comprises: in response to performing the checkpointing operation using the accelerator processor on data located in the accelerator memory region of the related data [the checkpoint optimization module function executed in a host processor or an accelerator device, paragraph 35], determining a size and a distribution characteristic of the data located in the accelerator memory region [checkpointing in a “distributed environment” for the data size of 500 MB or greater residing in a GPU device memory, paragraph 5]; and compressing the data located in the accelerator memory region based on the determined size and the determined distribution characteristic [compressing the intermediate DL model residing with accelerator devices in the server node, paragraph 45, abstract].

7. As per claim 16, Zhao discloses wherein the performing of the checkpointing operation comprises performing the checkpointing operation [the checkpoint optimization module function executed in a host processor or an accelerator device, paragraph 35] in parallel [parallel GPU devices 762, figure 7], and respective files generated by the checkpointing operation performed in parallel are independent of each other [such parallel GPU devices operating independent of each other, figure 7].

8. As per claim 17, Zhao discloses a non-transitory computer-readable storage medium storing instructions [an article of manufacture embodiment, claim 10] that, when executed by one or more processors, configure the one or more processors to perform the method of claim 9.

9. As per claim 18, Zhao discloses one or more processors [figure 6] configured to: based on a location of related data of an application in a memory region [an intermediate DL model residing in a server node, abstract], determine whether to perform a checkpointing operation on the related data using either one of a host processor and an accelerator processor [performing a checkpoint operation on the intermediate DL model using a CPU, paragraph 45]; and in response to determining to perform the checkpointing operation using the accelerator processor on data located in an accelerator memory region of the related data [the checkpoint optimization module function executed in a host processor or an accelerator device, paragraph 35], determine a size and a distribution characteristic of the data located in the accelerator memory region [checkpointing in a “distributed environment” for the data size of 500 MB or greater residing in a GPU device memory, paragraph 5]; and compress the data located in the accelerator memory region based on the determined size and the determined distribution characteristic [compressing the intermediate DL model residing with accelerator devices in the server node, paragraph 45, abstract].
Conclusion

A. Allowable Subject Matter

Claims 2, 3, 5-7, 10, 11, 13-15, and 19 are objected to. The closest prior art of record, “Zhao”, discloses data compression using processing units.

The primary reason for allowance of claim 2 in the instant application is the inclusion in these claims of the combination “wherein, for the performing of the checkpointing operation, the one or more processors are configured to: in response to first partial data of the related data being in the host memory region, determine a first overhead for performing the checkpointing operation on the first partial data using the host processor; determine a second overhead for performing the checkpointing operation on the first partial data using the accelerator processor; perform the checkpointing operation on the first partial data using a processor corresponding to a smaller one between the first overhead and the second overhead; and in response to second partial data of the related data being in the accelerator memory region, perform the checkpointing operation on the second partial data using the accelerator processor”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 5 in the instant application is the inclusion in these claims of the combination “wherein, for the compressing, the one or more processors are configured to: in response to the determined size of the data located in the accelerator memory region being greater than a first threshold value, and in response to the determined distribution characteristic indicating that the data located in the accelerator memory region is stored successively, partition the data located in the accelerator memory region into a plurality of data blocks based on a second threshold value greater than the first threshold value; and compress each of the plurality of data blocks using the accelerator processor”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 6 in the instant application is the inclusion in these claims of the combination “wherein the data located in the accelerator memory region comprises a plurality of data blocks, for the compressing, the one or more processors are configured to: in response to the distribution characteristic indicating that storage locations of the plurality of data blocks are proximate to each other, merge the plurality of data blocks; and compress the plurality of data blocks using the accelerator processor, and the plurality of data blocks has a size less than a third threshold value”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 7 in the instant application is the inclusion in these claims of the combination “wherein the data located in the accelerator memory region comprises a plurality of data blocks, and for the compressing, the one or more processors are configured to: in response to the distribution characteristic indicating that the plurality of data blocks is sparsely distributed, partition the plurality of data blocks into a plurality of groups; assign a plurality of streams to each of the plurality of groups; and compress the data located in the accelerator memory region based on the accelerator processor and the plurality of streams”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 10 in the instant application is the inclusion in these claims of the combination “wherein the performing of the checkpointing operation comprises: in response to first partial data of the related data being in the host memory region, determining a first overhead for performing the checkpointing operation on the first partial data using the host processor; determining a second overhead for performing the checkpointing operation on the first partial data using the accelerator processor; performing the checkpointing operation on the first partial data using a processor corresponding to a smaller one between the first overhead and the second overhead; and in response to second partial data of the related data being in the accelerator memory region, performing the checkpointing operation on the second partial data using the accelerator processor”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 13 in the instant application is the inclusion in these claims of the combination “wherein the compressing comprises: in response to the determined size of the data located in the accelerator memory region being greater than a first threshold value, and in response to the determined distribution characteristic indicating that the data located in the accelerator memory region is stored successively, partitioning the data located in the accelerator memory region into a plurality of data blocks based on a second threshold value greater than the first threshold value; and compressing each of the plurality of data blocks using the accelerator processor”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 14 in the instant application is the inclusion in these claims of the combination “wherein the compressing comprises: in response to the data comprising a plurality of data blocks, and in response to the distribution characteristic indicating that storage locations of the plurality of data blocks are proximate to each other, merging the plurality of data blocks; and compressing the plurality of data blocks using the accelerator processor, and the plurality of data blocks has a size less than a third threshold value”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 15 in the instant application is the inclusion in these claims of the combination “wherein the data located in the accelerator memory region comprises a plurality of data blocks, and the compressing comprises: in response to the distribution characteristic indicating that the plurality of data blocks is sparsely distributed, partitioning the plurality of data blocks into a plurality of groups; assigning a plurality of streams to each of the plurality of groups; and compressing the data located in the accelerator memory region based on the accelerator processor and the plurality of streams”. The prior art of record neither anticipates nor renders obvious the above recited combination.

The primary reason for allowance of claim 19 in the instant application is the inclusion in these claims of the combination “wherein, for the compressing, the one or more processors are configured to based on the determined size and the determined distribution characteristic, perform any one of: partitioning the data located in the accelerator memory region into a plurality of data blocks; merging the plurality of data blocks; and assigning a plurality of streams to each of a plurality of groups into which the plurality of data blocks are partitioned; and compress the data located in the accelerator memory region based on a result of the any one of the partitioning of the data, the merging of the plurality of data blocks, and the assigning of the plurality of streams”. The prior art of record neither anticipates nor renders obvious the above recited combination.

As allowable subject matter has been indicated, applicant's response must either comply with all formal requirements or specifically traverse each requirement not complied with. See 37 C.F.R. § 1.111(b) and MPEP § 707.07(a).

B. Claims Rejected

Claims 1, 4, 8, 9, 12, and 16-18 are rejected.

C. Direction of Future Remarks

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE UN YU, whose telephone number is (571) 272-1133. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tim Vo, can be reached at (571) 272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAE U YU/
Primary Examiner, Art Unit 2138
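For readers tracing the technical substance of the allowed claims, the claim 2 / claim 10 limitation (checkpoint host-resident data with whichever processor has the smaller overhead; checkpoint accelerator-resident data with the accelerator) can be sketched as plain Python. This is a hypothetical illustration of the claim language only; the function names, cost model, and data layout below are invented for exposition and appear nowhere in the application or in Zhao:

```python
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    region: str   # "host" or "accelerator" memory region
    size_mb: int

def host_overhead(p: Partition) -> float:
    # Hypothetical cost model: host checkpointing cost scales linearly with size.
    return p.size_mb * 1.0

def accel_overhead(p: Partition) -> float:
    # Hypothetical cost model: accelerator is cheaper per MB but pays a fixed setup cost.
    return 50.0 + p.size_mb * 0.5

def choose_checkpoint_processor(p: Partition) -> str:
    """Mirrors the claim-2 limitation: data already in the accelerator region is
    checkpointed by the accelerator processor; data in the host region goes to
    whichever processor has the smaller estimated overhead."""
    if p.region == "accelerator":
        return "accelerator"
    return "host" if host_overhead(p) <= accel_overhead(p) else "accelerator"

# Small host-resident partition: host wins (60 vs 80 under this toy model).
print(choose_checkpoint_processor(Partition("weights", "host", 60)))          # host
# Large host-resident partition: accelerator wins (500 vs 300).
print(choose_checkpoint_processor(Partition("optimizer", "host", 500)))       # accelerator
# Accelerator-resident partition: accelerator, per the claim.
print(choose_checkpoint_processor(Partition("activations", "accelerator", 10)))  # accelerator
```

The crossover behavior in the last example pair is the substance the examiner found neither anticipated nor obvious over Zhao, which checkpoints on a single processor without an overhead comparison.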

Prosecution Timeline

Jan 13, 2025: Application Filed
Feb 17, 2026: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602323: SYSTEMS AND METHODS OF CACHE DATA PLACEMENT (granted Apr 14, 2026; 2y 5m to grant)
Patent 12578878: STORAGE SYSTEM PROCESSING WITHOUT GLOBAL LOCKS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12578897: GARBAGE COLLECTION FOR OBJECT-BASED STORAGE SYSTEMS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572472: Promoting Prefetched Data from a Cache Memory to Registers in a Processor (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566709: TILE LEVEL INTERCONNECT DESIGN FOR CENTRAL PROCESSING UNIT IMAGE ACCESS PATTERNS (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview (+10.3%): 99%
Median Time to Grant: 2y 6m
PTA Risk: Low
Based on 741 resolved cases by this examiner. Grant probability derived from career allow rate.
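The headline figures are arithmetically consistent with the examiner's career data shown earlier. A minimal sketch of one plausible derivation, assuming the grant probability is the raw career allow rate and the interview figure adds the lift in percentage points under a 99% cap (assumed formulas for illustration, not the vendor's documented model):

```python
# Career data from the Examiner Intelligence panel above.
granted, resolved = 666, 741
interview_lift = 10.3  # percentage points, from the interview-lift stat

# Assumed: grant probability is simply the rounded career allow rate.
grant_probability = round(granted / resolved * 100)

# Assumed: the with-interview figure adds the lift, capped at 99%.
with_interview = min(round(grant_probability + interview_lift), 99)

print(grant_probability, with_interview)  # 90 99
```

Both outputs match the dashboard's 90% and 99% figures, so the panel numbers at least reconcile under these simple assumptions.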
