Prosecution Insights
Last updated: April 19, 2026
Application No. 18/893,143

METHOD AND APPARATUS WITH NEURAL NETWORK CHECKPOINT SAVING

Status: Final Rejection — §103
Filed: Sep 23, 2024
Examiner: PHAN, TUANKHANH D
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 2 (Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
Grant Probability With Interview: 92%

Examiner Intelligence

Career Allow Rate: 79% (448 granted / 569 resolved; +23.7% vs TC avg) — above average
Interview Lift: +12.9% among resolved cases with interview (moderate lift)
Avg Prosecution: 3y 6m typical timeline; 30 applications currently pending
Total Applications: 599 across all art units

Statute-Specific Performance

§101: 15.8% (-24.2% vs TC avg)
§103: 50.1% (+10.1% vs TC avg)
§102: 19.3% (-20.7% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 569 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

The Amendment, filed on 12/10/2025, has been entered and acknowledged by the Examiner. Claims 1-20 are pending. The rejection of claims under 35 U.S.C. 101 has been withdrawn in light of the amendment and the arguments.

Response to Arguments

Applicant's arguments with respect to claims 1-20 have been considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hall (US Pub. 2023/0162049) in view of Reisteter (US Pub. 2022/0414015), and further in view of Dhuse et al. (US Pub. 2025/0036622, hereinafter "Dhuse").
Regarding claim 1, Hall discloses a processor-implemented method comprising: generating, by a working node performing an operation corresponding to a checkpoint file of a neural network (¶ [0167], the checkpoint file may be a file generated by the machine learning code/library with a defined format which can be exported); determining, based on an available resource quantity of a group, a number of splits of the checkpoint file (p. 13, Algorithm 1, k is the number of folds used to split the dataset); and storing the determined number of splits of the checkpoint file in the nodes in the group, respectively (p. 13, Algorithm 1, then save in the checkpoint file or a model file).

Hall does not explicitly disclose "to which the working node belongs and a number of nodes in the group" or "available resource quantity of a group nodes… distributivity across available nodes from among the nodes in the group, respectively"; however, Reisteter discloses these limitations (¶¶ [0027]-[0030]; available resources). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Reisteter into Hall to provide a constraint on the available resources based on level of services.

Dhuse further discloses wherein the nodes in the group comprise the working nodes and the nodes are connected with each other for communication through a first-layer switch, and wherein the working node is excluded from the available nodes (¶ [0179]; ¶ [0200]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dhuse into Reisteter and Hall to indicate ordering in transmission and processing requests from the particular node at the one level to the next based on priority of the data block of the set of data blocks being tagged.

Regarding claim 2, Hall in view of Reisteter in view of Dhuse disclose the method of claim 1, wherein the determining of the number of splits of the checkpoint file comprises: identifying available nodes among the nodes in the group, based on an available resource quantity of a storage device of each of the nodes in the group (¶ [0012], splitting the data into training dataset, validation dataset and/or test dataset); and determining the number of the identified available nodes as the number of splits of the checkpoint file (¶ [0013], configurations in order to optimize the performance of the model, e.g., to increase an accuracy metric).

Regarding claim 3, Hall in view of Reisteter in view of Dhuse disclose the method of claim 2, wherein the storing in the nodes in the group comprises storing the splits of the checkpoint file in respective storage devices of the identified available nodes, respectively (Hall, see Algorithm 1).

Regarding claim 4, Hall in view of Reisteter in view of Dhuse disclose the method of claim 1, wherein the determining of the number of splits of the checkpoint file comprises determining the number of splits of the checkpoint file based on the number of nodes performing the operation corresponding to the checkpoint file and the number of nodes comprised in the group (Reisteter, ¶ [003], one or more compute nodes may process queries, which may include engaging in one or more transactions in response to each query).
Regarding claim 5, Hall in view of Reisteter in view of Dhuse disclose the method of claim 1, wherein the number of splits of the checkpoint file is determined to be less than or equal to the number of nodes comprised in the group (Hall, ¶ [0160], split such that the size of each subset is less than 10% or 20% of the size of the training set).

Regarding claim 6, Hall in view of Reisteter in view of Dhuse disclose the method of claim 1, wherein the splits of the checkpoint file are stored in storage devices of the nodes in the group (Hall, ¶ [0168]).

Regarding claim 7, Hall in view of Reisteter in view of Dhuse disclose the method of claim 1, further comprising: generating meta information and parity information corresponding to the checkpoint file (Hall, ¶ [0063], provide data (including data items and/or images) and metadata 14 to a data management platform which includes a data repository); and storing the meta information and the parity information in a remote storage of a server system (Hall, ¶ [0063]), wherein the meta information and the parity information are transmitted from the working node to the remote storage via the first-layer switch and a second-layer switch (Dhuse, ¶ [0200]).

Regarding claim 8, Hall in view of Reisteter in view of Dhuse disclose the method of claim 7, wherein the checkpoint file corresponds to a first checkpoint, and further comprising: determining whether to flush a second checkpoint file based on meta information of a second checkpoint immediately preceding the first checkpoint (Reisteter, ¶ [0113]); and flushing the second checkpoint file into the remote storage based on a result of the determining (¶ [0113], the checkpoint cycle can pick up the pages and flush them to the remote storage file).
Regarding claim 9, Hall in view of Reisteter in view of Dhuse disclose a non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, configure the one or more processors to perform the method of claim 1 (see claim 1).

Regarding claim 10, Hall discloses a processor-implemented method comprising: splitting a first checkpoint file corresponding to a first checkpoint of a neural network and storing the first checkpoint file in nodes in a group (Hall, p. 13, Algorithm 1, k is the number of folds used to split the dataset); and storing meta information of the first checkpoint in a remote storage of a server system (Hall, ¶ [0063], provide data (including data items and/or images) and metadata 14 to a data management platform which includes a data repository).

Reisteter further discloses determining whether to flush a second checkpoint file based on meta information of a second checkpoint immediately preceding the first checkpoint (¶ [0113]), wherein the nodes in the group are connected with each other for communication through a first-layer switch; and flushing the second checkpoint file into the remote storage based on a result of the determining (¶ [0113], the checkpoint cycle can pick up the pages and flush them to the remote storage file). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Reisteter into Hall to provide a constraint on the available resources based on level of services.

Dhuse further discloses wherein the remote storage and the nodes in the group are connected with each other for communication through the first-layer switch and a second-layer switch (¶ [0179]; ¶ [0200]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Dhuse into Reisteter and Hall to indicate ordering in transmission and processing requests from the particular node at the one level to the next based on priority of the data block of the set of data blocks being tagged.

Regarding claim 11, Hall in view of Reisteter in view of Dhuse disclose the method of claim 10, wherein the flushing of the second checkpoint file comprises: in response to the second checkpoint file being determined to be a flushing target to be flushed, flushing the second checkpoint file into the remote storage; and in response to the second checkpoint file being determined not to be the flushing target, deleting the second checkpoint file stored in one or more nodes of the server system. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Reisteter into Hall to make the resource available to next service operations.

Regarding claim 12, Hall in view of Reisteter in view of Dhuse disclose the method of claim 11, further comprising, in response to the second checkpoint file being determined not to be the flushing target, deleting meta information and parity information of the second checkpoint stored in the remote storage (Reisteter, ¶ [0118], enabling or disabling WriteBehind may be implemented by dropping and recreating the data cache (e.g., SSD); this may incorporate adding or removing Local/Remote CAI to the cache metadata while recreating the data cache).
Regarding claim 13, Hall in view of Reisteter in view of Dhuse disclose the method of claim 10, wherein the determining of whether to flush the second checkpoint file comprises determining whether to flush the second checkpoint file based on a tag value comprised in the meta information of the second checkpoint, the tag value indicating whether the second checkpoint is a flushing target to be flushed (Reisteter, ¶ [0115]).

Regarding claim 14, Hall in view of Reisteter in view of Dhuse disclose the method of claim 10, wherein the splitting and storing of the first checkpoint file in the nodes in the group comprises: determining, based on an available resource quantity of the group comprising nodes performing an operation corresponding to the first checkpoint file, the number of splits of the first checkpoint file (Algorithm 1); and storing the determined number of splits of the first checkpoint file in the nodes in the group, respectively (Algorithm 1).

Regarding claims 15-18, see the discussion of claims 1-4 above for the same reasons of rejection. Regarding claims 19-20, see the discussion of claims 10-11 above for the same reasons of rejection.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUANKHANH D PHAN, whose telephone number is (571) 270-3047. The examiner can normally be reached Mon-Fri, 10:00am-6:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 or 571-272-1000.

/TUANKHANH D PHAN/
Examiner, Art Unit 2154
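For readers mapping the claim language above, the claimed checkpoint scheme can be sketched in minimal Python. This is a hypothetical illustration of the claim limitations only (claims 1-5 and 11-13), not the applicant's actual implementation; all names, data shapes, and the byte-chunking strategy are invented for the sketch:

```python
def identify_available_nodes(group, working_node, min_free_bytes):
    """Claims 1-2: available nodes are chosen by the free capacity of each
    node's storage device; the working node itself is excluded."""
    return [n for n in group
            if n is not working_node and n["free_bytes"] >= min_free_bytes]


def store_checkpoint(data, group, working_node, min_free_bytes=1):
    """Claims 1-3 and 5: the number of splits equals the number of
    identified available nodes (so it never exceeds the group size), and
    one split is stored on each available node."""
    available = identify_available_nodes(group, working_node, min_free_bytes)
    num_splits = len(available)
    chunk = -(-len(data) // num_splits)  # ceiling division
    splits = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    for node, piece in zip(available, splits):
        node["storage"].append(piece)
    return num_splits


def maybe_flush_previous(prev_meta, prev_file, remote_storage, group):
    """Claims 11-13: a tag value in the meta information of the checkpoint
    immediately preceding the current one marks it as a flushing target.
    A tagged checkpoint is flushed to remote storage; otherwise the local
    copies are deleted instead."""
    if prev_meta.get("flush_target"):
        remote_storage[prev_meta["checkpoint_id"]] = prev_file
    else:
        for node in group:
            node["storage"].clear()
```

The sketch deliberately omits the parity information and the first-layer/second-layer switch topology recited in claims 7 and 10, which concern where the data travels rather than how the split count is chosen.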

Prosecution Timeline

Sep 23, 2024
Application Filed
Sep 06, 2025
Non-Final Rejection — §103
Dec 10, 2025
Response Filed
Mar 20, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12536215 — AUTOMATED GENERATION OF GOVERNING LABEL RECOMMENDATIONS
Granted Jan 27, 2026 • 2y 5m to grant
Patent 12517738 — LOOP DETECTION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Granted Jan 06, 2026 • 2y 5m to grant
Patent 12511297 — TECHNIQUES FOR DETECTING SIMILAR INCIDENTS
Granted Dec 30, 2025 • 2y 5m to grant
Patent 12511701 — SYSTEM AND METHOD FOR DETECTING RELEVANT POTENTIAL PARTICIPATING ENTITIES
Granted Dec 30, 2025 • 2y 5m to grant
Patent 12505164 — METHOD OF ENCODING TERRAIN DATABASE USING A NEURAL NETWORK
Granted Dec 23, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 92% (+12.9%)
Median Time to Grant: 3y 6m
PTA Risk: Moderate
Based on 569 resolved cases by this examiner. Grant probability derived from career allow rate.
