Prosecution Insights
Last updated: April 19, 2026
Application No. 18/061,470

METHOD AND DEVICE FOR CONTROLLING A SYSTEM USING AN ARTIFICIAL NEURAL NETWORK BASED ON CONTINUAL LEARNING

Non-Final OA · §103, §112
Filed
Dec 04, 2022
Examiner
CHOKSHI, PINKAL R
Art Unit
2425
Tech Center
2400 — Computer Networks
Assignee
Université De Chambéry - Université Savoie Mont Blanc
OA Round
1 (Non-Final)
Grant Probability: 60% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
Grant Probability with Interview: 90%

Examiner Intelligence

Career Allow Rate: 60% (305 granted / 505 resolved; +2.4% vs TC avg)
Interview Lift: strong, +29.6% on resolved cases with an interview
Typical Timeline: 3y 5m average prosecution; 29 applications currently pending
Career History: 534 total applications across all art units

Statute-Specific Performance

§101: 4.6% (-35.4% vs TC avg)
§103: 59.6% (+19.6% vs TC avg)
§102: 12.3% (-27.7% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Tech Center average is an estimate • Based on career data from 505 resolved cases

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 13 is objected to because of the following informalities: Claim 13, line 9, beginning with “[Math 14]”, is missing a required equation as shown in claim 6. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-5, 7-12, and 14-16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 1 and 9, the limitation “…iterative sampling one of the base data samples towards one of the decision boundaries to generate one or more modified base data samples” is unclear. It is ambiguous what Applicant means by sampling base data samples towards the decision boundary, since the claim does not define how the modified base data sample is generated. Applicant is asked to clarify the claim. For the purpose of examination, it is the Examiner’s position that any distance reads on the above limitation, which is in accordance with the broadest reasonable interpretation and from the perspective of one having ordinary skill in the art.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-12, and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over the NPL titled “Beneficial Effect of Combined Replay for Continual Learning” (“Solinas”) in view of US PG Pub 2021/0216857 to Zhang (“Zhang”).

Regarding claim 1, “A control system comprising: a computation device implementing at least a first artificial neural network” reads on the system disclosed by Solinas, which comprises large memory resources where high performance is the priority, combining the core ideas of the rehearsal and pseudo-rehearsal methods into a hybrid approach that improves the information retrieval process using memory buffers (pg. 206, 2nd paragraph and pg. 207, 3rd paragraph), as represented in Fig. 2 (Network 1).
As to “the first artificial neural network having a first state after having been trained to classify input data samples into a plurality of known classes separated by one or more decision boundaries, the computation device comprising a memory buffer storing one or more base data samples, each base data sample comprising an input data sample and a corresponding class among the plurality of known classes,” Solinas discloses (abstract; section 2; pg. 208, 3rd paragraph) that the device employs the data stored in tiny memory buffers as seeds to enhance the pseudo-sample generation process by combining this method with the reinjection sampling procedure of a specific pseudo-rehearsal method; (pg. 209, section 3.1) while the auto-associative output indicates how well the model is capable of reproducing a given input, the hetero-associative output indicates how well the model has built the decision boundaries for classification. Also, it is inherent in the operation of a trained artificial neural network to classify input data into known classes.

As to “one or more sensors configured to capture input data samples,” Solinas discloses (pg. 211) in Algorithm 1 that the images, captured of classes s,…,t, are trained.

As to “wherein the computation device is configured to: generate at least one pseudo-data sample,” Solinas discloses (pg. 208, section 3; pg. 209, section 3.2) that the system uses two ANNs, where during the first learning phase 1 the knowledge from the first ANN, named Net 1, is “transferred” to the second ANN, named Net 2, through pseudo-samples. That is, Net 2 is trained with the knowledge of Net 1, Net 1 being the model used to generate a pseudo-dataset, as represented in Fig. 2 and Fig. 4.
As to “train the first artificial neural network to learn one or more new classes in addition to the plurality of known classes using the at least one pseudo-data sample; classify a new input data sample using the first artificial neural network,” Solinas discloses (pg. 208, section 3; pg. 209, section 3.2) that the system uses two ANNs, where during the first learning phase 1 the knowledge from the first ANN, named Net 1, is “transferred” to the second ANN, named Net 2, through pseudo-samples. That is, Net 2 is trained with the knowledge of Net 1, Net 1 being the model used to generate a pseudo-dataset; Net 1 learns the new classes, but also the pseudo-dataset generated by Net 2. In that section, Solinas presents the ANN architecture employed in the dual-memory system of Figure 2, the sampling procedure used to generate pseudo-data, and the knowledge-transfer procedure that employs distillation to transfer the knowledge from one ANN to another.

As to “wherein the computation device is configured to generate each of the at least one pseudo-data sample by: a) iterative sampling one of the base data samples towards one of the decision boundaries to generate one or more modified base data samples; and b) selecting one or more of the modified base data samples to form the pseudo-data sample,” Solinas discloses (pg. 209, section 3.1) that while the auto-associative output indicates how well the model is capable of reproducing a given input, the hetero-associative output indicates how well the model has built the decision boundaries for classification; (pg. 210-211, sections 3.3 and 4, and Algorithm 1) when an ANN classifier infers the soft label of a sample, the classifier delivers probabilities for all the classes.
The information delivered by the probabilities of all the classes is useful because a new classifier can build similar decision boundaries by learning the real samples and their corresponding soft labels; the standard classification and replication loss for the new samples (Equation (1)) encourages classifying and replicating the new set of classes. The distillation loss for the pseudo-samples and their corresponding soft labels (logits) ensures that the information previously learned is not lost during the new learning stage.

Solinas meets all the limitations of the claim except “one or more sensors configured to capture input data; one or more actuators; control the one or more actuators as a function of the classification of the new input data sample.” However, Zhang discloses (¶0074, ¶0078, ¶0080) that the control system comprises a sensor such as an optic/imaging sensor; (¶0068, claim 31) the trained machine is used for classifying an input signal provided to a machine learning system and generating an actuator control signal, by an actuator control system, depending on an output signal of the machine learning system, the output signal being generated depending on which class the input signal has been classified into.

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to modify Solinas’s system by controlling the actuator as a function of the classification of the input data sample, as taught by Zhang, in order to generate training samples which may then be used to train a machine learning system. This machine learning system may then be used to control an actuator, thus rendering the control more reliable and/or more precise (Zhang, ¶0035).
Regarding claim 2, “The control system of claim 1, wherein the computation device further implements a second artificial neural network, the second artificial neural network also having been trained to classify the input data samples into the plurality of known classes separated by one or more decision boundaries, or having been programmed to memorize the first state of the first artificial neural network,” Solinas discloses (pg. 208, section 3) that the system utilizes two ANNs; Figure 2 illustrates the two ANNs and the two learning phases of this approach. During the first learning phase 1, the knowledge from the first ANN, named Net 1, is “transferred” to the second ANN, named Net 2, through pseudo-samples. That is, Net 2 is trained with the knowledge of Net 1, Net 1 being the model used to generate a pseudo-dataset that represents the knowledge we want to transfer. As both ANNs are identical, we use a simpler way to transfer the knowledge: we duplicate the parameters of Net 1 into Net 2 instead of using pseudo-samples in phase 1. During the second learning phase 2, new classes have to be integrated without degrading previously learned knowledge. Net 1 learns the new classes, but also the pseudo-dataset generated by Net 2, as represented in Fig. 2.
Regarding claim 3, “The control system of claim 2, wherein the computation device is further configured, prior to generating the at least one pseudo-data sample, to at least partially transfer knowledge held by the first artificial neural network to the second artificial neural network, wherein the at least one pseudo-data sample is generated using the second artificial neural network, and wherein the training of the first artificial neural network is performed at least partially in parallel with the generation of one or more pseudo-data samples by the second artificial neural network,” Solinas discloses (pg. 208, section 3) that the system utilizes two ANNs; Figure 2 illustrates the two ANNs and the two learning phases of this approach. During the first learning phase 1, the knowledge from the first ANN, named Net 1, is “transferred” to the second ANN, named Net 2, through pseudo-samples. That is, Net 2 is trained with the knowledge of Net 1, Net 1 being the model used to generate a pseudo-dataset that represents the knowledge we want to transfer. As both ANNs are identical, we use a simpler way to transfer the knowledge: we duplicate the parameters of Net 1 into Net 2 instead of using pseudo-samples in phase 1. During the second learning phase 2, new classes have to be integrated without degrading previously learned knowledge. Net 1 learns the new classes, but also the pseudo-dataset generated by Net 2, as represented in Fig. 2.
Regarding claim 4, “The control system of claim 1, wherein the one or more sensors comprise an image sensor, the input data samples being one or more images captured by the image sensor, and the computation device being configured to perform said classification of the new input data sample by image processing of the new input data sample using the first artificial neural network,” Solinas discloses (pg. 211) in Algorithm 1 that the images, which are used as input, captured of classes s,…,t, are trained, and Zhang discloses (¶0074, ¶0078, ¶0080) that the control system comprises a sensor such as an optic/imaging sensor.

Regarding claim 5, “The control system of claim 1, wherein the computation device is configured to repeat the operations a) and b) for each class (c) previously learnt by the first artificial neural network, except the class of the base data sample,” Solinas discloses (pg. 208-211, sections 3-4) that in the combined replay method, Net 1 learns a new set of classes and its previous knowledge, which is captured by Net 2 through reinjections, as shown in Figure 5. Algorithm 1 lists the steps behind combined replay (Figure 5). We consider that “initially” Net 2 has already been trained on previous classes. The tiny memory buffer and the samples of the new classes are provided. For each training batch, we randomly draw samples from the tiny memory buffer Dold and from the currently available training set Dnew, 1 and 3 respectively; a buffer and a pre-updated classifier are used to perform distillation to capture previous knowledge. There, the samples of the buffer and their distilled outputs are jointly learned with the new samples and their ground-truth labels. Whereas the classification loss encourages the classification of the newly observed classes, the distillation loss ensures that the previously learned information is not lost. The differences here are the model architecture and the way the buffer samples are used.
That is, we do not train a classifier; instead, we train an Auto-Hetero associative ANN and perform reinjections to capture previous knowledge using the same buffer, as represented in Fig. 5.

Regarding claim 7, “The control system of claim 1, wherein the computation device is further configured to: detect, using a novelty detector, whether one or more new input data samples correspond to a class that is not already known to the first artificial neural network,” Solinas discloses (pg. 208-211, sections 3-4) that the system duplicates the parameters of Net 1 into Net 2 instead of using pseudo-samples in phase 1. During the second learning phase 2, new classes have to be integrated without degrading previously learned knowledge. Net 1 learns the new classes, but also the pseudo-dataset generated by Net 2; in the combined replay method, Net 1 learns a new set of classes and its previous knowledge, which is captured by Net 2 through reinjections, as shown in Figure 5. Algorithm 1 lists the steps behind combined replay (Figure 5). We consider that “initially” Net 2 has already been trained on previous classes. The tiny memory buffer and the samples of the new classes are provided.
For each training batch, we randomly draw samples from the tiny memory buffer Dold and from the currently available training set Dnew, 1 and 3 respectively.

Regarding claim 8, “The control system of claim 1, wherein the computation device is configured to perform the iterative sampling over a plurality of iterations until an iteration I at which a stop condition is met, the stop condition being one of the following, or a combination thereof: 1) iteration I corresponds to a maximum number N of iterations, where N is for example between 4 and 30; 2) a class boundary between the base class of the base data sample and the target class has been reached and/or crossed by the modified base data sample of the iteration I; 3) the activation value at the output of the first artificial neural network resulting from the modified base data sample of iteration I has exceeded a threshold,” Solinas discloses (pg. 206, 1st paragraph) that iterative sampling consists in injecting an input sample into a replicator ANN and reinjecting its output multiple times until a stop condition is reached, as shown at the end of Algorithm 1.

Regarding claim 9, see the rejection of claim 1.
Regarding claim 10, see the rejection of claim 2.
Regarding claim 11, see the rejection of claim 3.
Regarding claim 12, see the rejection of claim 4.
Regarding claim 14, see the rejection of claim 7.
Regarding claim 15, see the rejection of claim 8.
Regarding claim 16, “The method of claim 15, wherein the stop condition is that the iteration I corresponds to a maximum number N of iterations, the method further comprising a calibration phase before the iteratively sampling, the calibration phase determining a value of N based on a number of iterations taken to reach a class boundary,” Solinas discloses (pg. 206, 1st paragraph) that iterative sampling consists in injecting an input sample into a replicator ANN and reinjecting its output multiple times until a stop condition is reached, as shown at the end of Algorithm 1. Furthermore, the Examiner takes the position that claim 16 is meant to further narrow claim 15. The limitation is optional and not positively required, since the preceding limitation of claim 15 (the stop condition being one of elements 1, 2, and 3) requires finding only one of the three elements.

Allowable Subject Matter

Claims 6 and 13 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, along with overcoming the §112 rejection above, and further amending claim 13 to include the missing required equation as noted in the Claim Objections above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PINKAL R CHOKSHI, whose telephone number is (571) 270-3317. The examiner can normally be reached Monday - Friday, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, BRIAN T PENDLETON, can be reached at (571) 272-7527.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PINKAL R CHOKSHI/
Primary Examiner, Art Unit 2425
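For orientation, the reinjection procedure the rejection relies on (injecting a stored sample into a replicator ANN and reinjecting its output until a stop condition is reached) can be sketched in a few lines. The sketch below is hypothetical: `reinject`, `replicator`, and `classifier` are toy stand-ins invented here, and the three stop conditions simply mirror those recited in claim 8; this is not Solinas's or the applicant's actual implementation.

```python
# Hypothetical sketch of iterative sampling by reinjection: a base sample is
# passed through a replicator network repeatedly, and each intermediate
# output becomes a candidate pseudo-sample.  Toy models only.

def reinject(sample, replicator, classifier, base_class,
             max_iters=10, threshold=0.9):
    """Reinject `sample` until one of three stop conditions is met:
    1) a maximum number of iterations,
    2) the class boundary is crossed (predicted class changes), or
    3) the top activation exceeds a threshold."""
    pseudo_samples = []
    current = sample
    for _ in range(max_iters):                  # stop condition 1
        current = replicator(current)           # one reinjection step
        pseudo_samples.append(current)
        probs = classifier(current)
        predicted = max(probs, key=probs.get)
        if predicted != base_class:             # stop condition 2
            break
        if probs[predicted] >= threshold:       # stop condition 3
            break
    return pseudo_samples

# Toy 1-D example: the replicator drifts the sample toward the class
# boundary at x = 0.5; class "a" is x < 0.5, class "b" is x > 0.5.
replicator = lambda x: x + 0.1 * (0.5 - x) + 0.08
classifier = lambda x: {"a": 1.0 - x, "b": x}
samples = reinject(0.2, replicator, classifier, base_class="a")
```

Framed this way, the §112(b) ambiguity the examiner flags is exactly the unstated choice of replicator and of how far "towards" the boundary the loop is allowed to drift.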

Prosecution Timeline

Dec 04, 2022: Application Filed
Aug 06, 2025: Examiner Interview (Telephonic)
Aug 08, 2025: Non-Final Rejection, §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598332: PROCESSING OF MULTI-VIEW VIDEO (granted Apr 07, 2026; 2y 5m to grant)
Patent 12593114: APPARATUS AND A METHOD FOR SIGNALING INFORMATION IN A CONTAINER FILE FORMAT (granted Mar 31, 2026; 2y 5m to grant)
Patent 12593084: VIDEO STREAMING SYSTEM AND VIDEO STREAMING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12581144: A METHOD OF PROVIDING A TIME-SYNCHRONIZED MULTI-STREAM DATA TRANSMISSION (granted Mar 17, 2026; 2y 5m to grant)
Patent 12574599: METHOD AND SYSTEM FOR REDACTING UNDESIRABLE DIGITAL CONTENT (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 60% (90% with interview, a +29.6% lift)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 505 resolved cases by this examiner. Grant probability derived from career allow rate.
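The projection arithmetic can be reproduced from the counts cited on this page. The sketch below assumes the interview lift is additive in percentage points, which matches the displayed jump from 60% to 90%; it is a hypothetical reconstruction, not the vendor's actual model.

```python
# Reproducing the dashboard's derived figures from the counts it cites.
granted, resolved = 305, 505   # examiner's career record (from this page)
interview_lift = 0.296         # "+29.6%" interview lift (from this page)

career_allow_rate = granted / resolved                # ~0.604, shown as 60%
with_interview = career_allow_rate + interview_lift   # ~0.900, shown as 90%
```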
