Prosecution Insights
Last updated: April 19, 2026
Application No. 18/461,555

COMPUTER-READABLE RECORDING MEDIUM STORING MACHINE LEARNING PROGRAM, MACHINE LEARNING METHOD, AND MACHINE LEARNING DEVICE

Status: Non-Final OA, §103
Filed: Sep 06, 2023
Examiner: SHALU, ZELALEM W
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Fujitsu Limited
OA Round: 1 (Non-Final)
Grant Probability: 29% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 2m
Grant Probability With Interview: 48%

Examiner Intelligence

Grants only 29% of cases: Career Allow Rate 29% (31 granted / 108 resolved; -26.3% vs TC avg)
Strong interview lift: +19.0% Interview Lift (based on resolved cases with interview)
Typical timeline: 3y 2m Avg Prosecution; 34 currently pending
Career history: 142 Total Applications across all art units

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 63.4% (+23.4% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center averages are estimates; based on career data from 108 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on 09/06/2023. Claims 1-7 are pending in the case.

Priority

The present application claims priority under 35 U.S.C. §119 to JP Patent Application No. 2022-196652, filed on 12/08/2022. Applicant's claim for the benefit of a prior-filed application under 35 U.S.C. 119 and/or 35 U.S.C. 120 is acknowledged.

Information Disclosure Statement

As required by MPEP 609(c), the Applicant's submission of the Information Disclosure Statement(s) filed on 09/06/2023 is acknowledged by the examiner, and the cited references have been considered in the examination of the claims now pending.

Specification

The disclosure is objected to because of the following informalities: the title of the invention is long and not descriptive. An improved title is required that is clearly indicative of the invention to which the claims are directed.

Examiner Comments

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-7 are rejected under 35 U.S.C. 103 as being unpatentable over AHMAD (Pub. No. US 20210398020 A1, Pub. Date: 2021-12-23) in view of Vucinic (Pub. No. US 20240004795 A1, Pub. Date: 2024-01-04).

Regarding independent Claim 1, AHMAD teaches a non-transitory computer-readable recording medium storing a machine learning program for causing a computer to execute processing (see AHMAD: Fig. 2, [0024], "server 110 includes processor 210 and memory 220. Examples of processor 210 and memory 220 are provided below in connection with FIG. 4. Memory 220 may contain training module 230, machine learning model 240, training data 250, checkpoint module 260, and checkpoint information repository 270."), comprising:

causing each […] machine learning processes to perform individual machine learning by the same machine learning model (see AHMAD: Fig. 3, [0030], "a training operation may be initialized by selecting a machine learning model for training and selecting training data for use during the training operation (block 300). The training operation may include a number of training iterations that are executed (block 310). Each training iteration may include a forward pass of the training data through the machine learning model, a determination of a loss function, and a backwards pass through the machine learning model to update the weights and parameters of the machine learning model based on the loss function.");

storing data after execution of first processing by each of the machine learning processes in a [non-volatile] memory accessible by each of the machine learning processes (see AHMAD: Fig. 3, [0032], "If a checkpoint has been reached (block 320), checkpoint information for that checkpoint is generated and stored (block 330). The checkpoint information may be stored in non-volatile storage media or memory, such as in the examples described below, to protect the training of the machine learning model that has been completed up to the checkpoint in case of power loss or system failure, or in case of a user pause or cancellation of the training operation. The checkpoint information for each checkpoint may be stored, for example, in a log format containing the checkpoint information for all of the checkpoints as a record of the training process."); and

causing a second machine learning process other than the first machine learning process among the plurality of machine learning processes to execute second processing regarding the first machine learning process, based on first data after the execution of the first processing regarding the first machine learning process (see AHMAD: Fig. 3, [0034], "after the training operation has been paused (or otherwise interrupted) at block 322, the training process can be resumed (block 324) (i.e. a second machine learning process). As shown, when the training process is resumed, the stored checkpoint information from the last (or any prior) checkpoint may be obtained (block 326) (e.g., from database 270). Obtaining the stored checkpoint information may include loading the configuration of the model in the partially trained state corresponding to the checkpoint information into a model architecture for further training iterations and/or operations. Obtaining the stored checkpoint information for resuming the training operations may include verification of checkpoint validity with persisted training parameters, and state loading using the stored checkpoint information, prior to executing a new training iteration at block 310."), stored in the [non-volatile] memory in a case where an abnormality occurs at the time of execution of the second processing executed after the first processing by the first machine learning process among the machine learning processes (see AHMAD: Fig. 3, [0032], "If a checkpoint has been reached (block 320), checkpoint information for that checkpoint is generated and stored (block 330). The checkpoint information may be stored in non-volatile storage media or memory, such as in the examples described below, to protect the training of the machine learning model that has been completed up to the checkpoint in case of power loss or system failure, or in case of a user pause or cancellation of the training operation. The checkpoint information for each checkpoint may be stored, for example, in a log format containing the checkpoint information for all of the checkpoints as a record of the training process.").

As shown above, AHMAD discloses a machine learning training system that is trained through iterative processing, including forward propagation, back propagation, and parameter updates, by generating and storing checkpoint information representing the training failure or success state and training-related data to enable the system to resume training after interruption or failure.
However, AHMAD fails to teach a plurality of machine learning processes accessing a shared memory, detection of an abnormality, and causing a second machine learning process to execute processing corresponding to a first machine learning process based on the data stored in the shared memory. AHMAD does not teach the system wherein: a plurality of machine learning processes store data in a shared memory accessible by each of the processes, and a second process is caused to execute processing based on the data stored in the shared memory in a case where an abnormality occurs at the time of execution of the second processing.

However, Vucinic teaches a non-transitory computer-readable recording medium comprising: a plurality of machine learning processes (see Vucinic: Fig. 1, [0017], "Processors 102.sub.1 to 102.sub.N use respective caches 104.sub.1 to 104.sub.N as a Last Level Cache (LLC) (e.g., an L2, L3, or L4 cache depending on the levels of cache included in the processor 102) that caches data blocks or cache lines that are requested by the processor 102 or expected to be accessed by the processor 102."); storing data in a shared memory accessible by each of the processes (see Vucinic: Fig. 1, [0019], "Cache controllers 106 can follow a coherence protocol that is managed by memory controller 114 of shared memory 112. In addition, cache controllers 106 can perform certain fault tolerance operations disclosed herein, such as erasure encoding cache lines for storage in shared memory 112 and erasure decoding cache lines retrieved from shared memory 112."); and causing a second process to execute processing based on the data stored in the shared memory in a case where an abnormality occurs (see Vucinic: Fig. 8, [0085], "the memory controller determines that data stored in one or more nodes in a blast zone needs to be reconstructed and stored in one or more nodes from a rebuild pool in at least one memory. The determination can be based on, for example, at least one of a useable life expectancy for the one or more nodes and a failure indication from attempting to access data stored in the one or more nodes.").

Because both AHMAD and Vucinic address the same or similar technical issue of a failure recovery mechanism using stored data across multiple processes, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the machine learning system of AHMAD to incorporate the fault-tolerant shared memory architecture of Vucinic in order to improve the efficiency and reliability of machine learning training. After modification of AHMAD, the checkpoint and training resumption logic can be implemented in the fault-tolerant shared memory environment of Vucinic. One would have been motivated to make such a combination in order to reduce downtime, avoid recalculation, and improve system efficiency and scalability.

Regarding Claim 2: As shown above, AHMAD and Vucinic teach all the limitations of claim 1. Vucinic further teaches the system wherein: in the processing of storing in the shared memory, the first data is stored in a first shared memory associated with the first machine learning process from among one or more shared memories associated with each of the plurality of machine learning processes (see Vucinic: Fig. 2, [0024], "is a block diagram of an example distributed system for implementing memory fault tolerance and memory coherence according to one or more embodiments. As shown in FIG. 2, system 200 includes processing units 201.sub.1 to 201.sub.3 and memory units 212.sub.1 to 212.sub.3. Unlike system 100 in FIG. 1, distributed system 200 in FIG. 2 includes separate memory units 212 that each include a memory 216 that is shared among the processing units 201 via network 210 as a main memory of system 200."). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the machine learning system of AHMAD to incorporate the storing of data in the shared memory of Vucinic in order to improve the efficiency and reliability of machine learning training. After modification of AHMAD, the checkpoint and training resumption logic can be implemented in the fault-tolerant shared memory environment of Vucinic. One would have been motivated to make such a combination in order to reduce downtime, avoid recalculation, and improve system efficiency and scalability.

Regarding Claim 3: As shown above, AHMAD and Vucinic teach all the limitations of claim 2. AHMAD further teaches the system wherein: in the execution of the second processing based on the data after the first processing has been executed, in a case where the abnormality has occurred (see AHMAD: Fig. 3, [0032], "If a checkpoint has been reached (block 320), checkpoint information for that checkpoint is generated and stored (block 330). The checkpoint information may be stored in non-volatile storage media or memory, such as in the examples described below, to protect the training of the machine learning model that has been completed up to the checkpoint in case of power loss or system failure, or in case of a user pause or cancellation of the training operation."); acquiring the data after the first processing has been executed stored in the first [non-volatile] memory that corresponds to the first machine learning process in which the abnormality has occurred (see AHMAD: Fig. 3, [0038], "in one or more operational scenarios, training a machine learning model may include generating and storing (block 330) checkpoint information at a plurality predetermined checkpoints during a first training operation, completing (block 340) the first training operation (e.g., by completing a predetermined number of training iterations or achieving a predetermined accuracy), performing (block 352) an evaluation of a trained machine learning model generated by the completion of the first training operation,"); generating a snapshot of the machine learning model, based on the acquired data after the first processing has been executed, and storing the snapshot in a storage device (see AHMAD: Fig. 3, [0039], "the checkpoint information also may include performance numbers representing the performance of the machine learning model in the partially trained state. The performance numbers may include loss values determined for the training data and validation data used in the training operation. The loss values represent the performance of the machine learning model on the training data selected for the training operation. The checkpoint information for each checkpoint may include the loss and optimizer state for the checkpoint and/or metadata such as the original training parameters, and/or any other data to allow a user to resume training from a checkpoint at a later time. In this way, the checkpoint information can be used for feature extraction in one or more implementations."); and causing the second machine learning process to execute the second processing by using the snapshot stored in the storage device (see AHMAD: Fig. 3, [0039], "the checkpoint (snapshot) information also may be used to generate an operational machine learning model in the partially trained state. New data, which may be labeled or unlabeled, may be run through the partially trained machine learning model to evaluate the performance based on the output of the model. Alternatively, the training data may be run through the partially trained machine learning model to allow a user to see the output to evaluate the performance of the model or just to demonstrate the utility of the model even though the training process has not been completed. In one or more implementations, a partially trained model represented by the checkpoint information may be deployed and/or operated in other contexts and/or at other devices.").

As shown above, AHMAD teaches generating and storing loss values, gradients, and model parameters as part of checkpoint information during ML training. AHMAD does not teach storing data in the shared memory. Vucinic teaches a shared memory architecture in which data portions are stored, updated, reconstructed, and written back for fault-tolerant operation. It would have been obvious to store AHMAD's checkpoint (snapshot) and parameter data in Vucinic's shared memory to support distributed or fault-tolerant ML processing.

Regarding Claim 4: As shown above, AHMAD and Vucinic teach all the limitations of claim 1.
AHMAD further teaches the system wherein: for causing the computer to execute any one of forward propagation training, backpropagation training, or parameter update processing of the machine learning model, as the first processing (see AHMAD: Fig. 3, [0030], "a training operation may be initialized by selecting a machine learning model for training and selecting training data for use during the training operation (block 300). The training operation may include a number of training iterations that are executed (block 310). Each training iteration may include a forward pass of the training data through the machine learning model, a determination of a loss function, and a backwards pass through the machine learning model to update the weights and parameters of the machine learning model based on the loss function.").

Regarding Claim 5: As shown above, AHMAD and Vucinic teach all the limitations of claim 4. AHMAD further teaches the system wherein: storing a loss of forward propagation in the [non-volatile] memory as the data after the first processing has been executed, in a case where the forward propagation training is performed as the first processing (see AHMAD: Fig. 3, [0030], "The training operation may include a number of training iterations that are executed (block 310). Each training iteration may include a forward pass of the training data through the machine learning model, a determination of a loss function, and a backwards pass through the machine learning model to update the weights and parameters of the machine learning model based on the loss function."); storing a gradient of backpropagation in the [non-volatile] memory as the data after the first processing has been executed, in a case where the backpropagation training is performed as the first processing (see AHMAD: Fig. 3, [0030], "The training operation may include a number of training iterations that are executed (block 310). Each training iteration may include a forward pass of the training data through the machine learning model, a determination of a loss function, and a backwards pass through the machine learning model to update the weights and parameters of the machine learning model based on the loss function."); and storing an optimizer state and a model parameter in the [non-volatile] memory as the data after the first processing has been executed, in a case where the parameter update processing is executed as the first processing (see AHMAD: Fig. 3, [0034], "Obtaining the stored checkpoint information may include loading the configuration of the model in the partially trained state corresponding to the checkpoint information into a model architecture for further training iterations and/or operations. Obtaining the stored checkpoint information for resuming the training operations may include verification of checkpoint validity with persisted training parameters, and state loading using the stored checkpoint information, prior to executing a new training iteration at block 310.").

As shown above, AHMAD teaches generating and storing loss values, gradients, and model parameters as part of checkpoint information during ML training. AHMAD does not teach storing data in the shared memory. Vucinic teaches a shared memory architecture in which data portions are stored, updated, reconstructed, and written back for fault-tolerant operation. It would have been obvious to store AHMAD's ML loss, gradient, and parameter data in Vucinic's shared memory to support distributed or fault-tolerant ML processing.

Regarding Claim 6: Claim 6 is directed to a method claim with the same or similar claim limitations as claim 1 and is rejected under the same rationale.

Regarding independent Claim 7: Claim 7 is directed to a machine learning device with the same or similar claim limitations as claim 1 and is rejected under the same rationale.
As shown above, AHMAD teaches ML training and checkpoint-based recovery, and Vucinic teaches a multi-processor shared memory, failure detection, and reconstructing data and continuing operation after failure using stored shared memory data. It would have been obvious to store AHMAD's ML loss, gradient, and parameter data in Vucinic's shared memory to support distributed or fault-tolerant ML processing.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20180293144 A1 (Johnson, Charles), FAILURE INDICATION IN SHARED MEMORY: The disclosure herein may provide for failure indication storage in a shared memory. The failure indication may be stored by a node in a computing system that has detected a failure condition expected to cause system functions provided by the node to fail, for example through overheating, kernel panic, memory failure, or other conditions.

US 20240428082 A1 (Wang, Zhuang), EFFICIENT RECOVERY FROM FAILURES DURING DISTRIBUTED TRAINING OF MACHINE LEARNING MODELS: A placement plan for training state checkpoints of a machine learning model is generated based at least in part on a number of training servers of a distributed training environment. The plan indicates, with respect to an individual server, one or more other servers at which replicas of training state checkpoints of the individual server are to be stored.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZELALEM W SHALU, whose telephone number is (571) 272-3003. The examiner can normally be reached M-F, 8:00 am to 5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Zelalem Shalu/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145
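For readers less familiar with the mechanism at the heart of the §103 rejection, the following is a minimal, hypothetical sketch of the kind of checkpoint-and-resume training loop that both AHMAD's Fig. 3 flow and the claims describe: each iteration performs the "first processing" (forward pass, loss, backward pass, parameter update), the post-step data (loss, optimizer state, parameters) is periodically persisted to a location other processes can read, and a second process can resume from that data after an abnormality. All names (`train`, `SHARED_DIR`, the toy quadratic objective) are illustrative only and do not come from the application or the cited references.

```python
import os
import pickle
import tempfile

# Stand-in for the shared / non-volatile memory of the claims.
SHARED_DIR = tempfile.mkdtemp()
CKPT_PATH = os.path.join(SHARED_DIR, "checkpoint.pkl")

def save_checkpoint(step, params, opt_state, loss):
    # Persist everything needed to resume: step, parameters,
    # optimizer state, and the most recent loss.
    with open(CKPT_PATH, "wb") as f:
        pickle.dump({"step": step, "params": params,
                     "opt_state": opt_state, "loss": loss}, f)

def load_checkpoint():
    # A second process reads the same shared location to take over.
    if not os.path.exists(CKPT_PATH):
        return None
    with open(CKPT_PATH, "rb") as f:
        return pickle.load(f)

def train(total_steps, checkpoint_every=10):
    # Resume from the last checkpoint if one exists (e.g. after a crash).
    state = load_checkpoint()
    step = state["step"] if state else 0
    params = state["params"] if state else 0.0
    opt_state = state["opt_state"] if state else 0.0
    while step < total_steps:
        # "First processing": forward pass, loss, backward pass,
        # parameter update (toy quadratic objective, momentum optimizer).
        loss = (params - 3.0) ** 2
        grad = 2.0 * (params - 3.0)
        opt_state = 0.9 * opt_state + grad
        params -= 0.1 * opt_state
        step += 1
        if step % checkpoint_every == 0:
            save_checkpoint(step, params, opt_state, loss)
    return step, params
```

Calling `train(20)` twice illustrates the recovery property: the second call finds the step-20 checkpoint in the shared location and returns immediately instead of redoing the training, which is the "reduce downtime, avoid recalculation" motivation the Office Action cites for combining the references.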

Prosecution Timeline

Sep 06, 2023
Application Filed
Mar 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12477016
AUTOMATION OF VISUAL INDICATORS FOR DISTINGUISHING ACTIVE SPEAKERS OF USERS DISPLAYED AS THREE-DIMENSIONAL REPRESENTATIONS
2y 5m to grant; granted Nov 18, 2025
Patent 12468969
METHODS FOR CORRELATED HISTOGRAM CLUSTERING FOR MACHINE LEARNING
2y 5m to grant; granted Nov 11, 2025
Patent 12419611
PATIENT MONITOR, PHYSIOLOGICAL INFORMATION MEASUREMENT SYSTEM, PROGRAM TO BE USED IN PATIENT MONITOR, AND NON-TRANSITORY COMPUTER READABLE MEDIUM IN WHICH PROGRAM TO BE USED IN PATIENT MONITOR IS STORED
2y 5m to grant; granted Sep 23, 2025
Patent 12153783
User Interfaces and Methods for Generating a New Artifact Based on Existing Artifacts
2y 5m to grant; granted Nov 26, 2024
Patent 12120422
SYSTEMS AND METHODS FOR CAPTURING AND DISPLAYING MEDIA DURING AN EVENT
2y 5m to grant; granted Oct 15, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 29%
With Interview: 48% (+19.0%)
Median Time to Grant: 3y 2m
PTA Risk: Low
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
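The projection arithmetic can be reproduced from the figures the report itself cites: the grant probability is the examiner's career allow rate (31 granted of 108 resolved), and the with-interview figure adds the interview lift. A minimal sketch, assuming (as the note above states) that grant probability equals the career allow rate and that the +19.0% lift is additive in percentage points:

```python
granted, resolved = 31, 108    # examiner's career record cited above
interview_lift_pts = 19.0      # "+19.0% Interview Lift"

allow_rate_pct = 100 * granted / resolved                      # ~28.7%
grant_probability = round(allow_rate_pct)                      # 29
with_interview = round(allow_rate_pct + interview_lift_pts)    # 48

print(grant_probability, with_interview)  # prints: 29 48
```

This matches the headline 29% and 48% figures once rounded to whole percentage points.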
