Prosecution Insights
Last updated: April 19, 2026
Application No. 18/207,520

METHOD AND DEVICE FOR DOMAIN GENERALIZED INCREMENTAL LEARNING UNDER COVARIATE SHIFT

Non-Final OA (§103)
Filed: Jun 08, 2023
Examiner: LI, LIANG Y
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: LG Electronics Inc.
OA Round: 1 (Non-Final)
Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 61% of resolved cases (167 granted / 273 resolved; +6.2% vs TC avg)
Interview Lift: +69.1% in resolved cases with an interview (strong)
Typical Timeline: 3y 5m average prosecution; 26 applications currently pending
Career History: 299 total applications across all art units

Statute-Specific Performance

§101: 16.9% (-23.1% vs TC avg)
§103: 48.6% (+8.6% vs TC avg)
§102: 21.2% (-18.8% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 273 resolved cases.

Office Action

§103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to pending claims 1-20 filed 6/8/2023.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-16 are rejected under 35 U.S.C. 103 as being unpatentable over Ferdinand ("Attenuating catastrophic forgetting by joint contrastive and incremental learning", published 8/23/2022) in view of He ("Momentum contrast for unsupervised visual representation learning", published 2020).
For claim 1, Ferdinand discloses a computer-implemented method for training a classification model (fig. 1 gives an overview of model training, with §5 and §5.1 disclosing various computer vision tasks), the method comprising:
- obtaining a labeled data set of images comprising data from a rehearsal memory and new input data (fig. 1, "augmented minibatch"; §4.2 ¶1: augmented image minibatches are obtained; §4.1 ¶1, §5.2 ¶2: rehearsal memory and implementation);
- augmenting the labeled data set to generate a first data set and a second data set, wherein each image of the first data set corresponds to a corresponding image of the second data set (ibid.: contrastive and complementary image sets are obtained);
- inputting the first data set into a query encoder and the second data set into a key encoder to obtain encodings output by the query encoder and the key encoder (fig. 1: augmented input is passed to identically structured query (student) and key (teacher) encoders, each of which generates respective projection (gamma) and feature (F) outputs; see §4.2 ¶1-2);
- obtaining a contrastive loss based on the encodings using a sum of a first contrastive loss function and a second contrastive loss function (§4 gives an overview of the total loss as a sum of contrastive losses; see also fig. 1 outlining the contrastive loss generated from supervised data on the query network (L_con) and the contrastive distillation loss (L_DCon));
- updating parameters of the query encoder based on the obtained contrastive loss (§4: updating parameters based on losses); and
- updating parameters of the key encoder based on parameters of the query encoder (§2.1 ¶2: transferring parameters from query to key at each incremental step, with §5.1 ¶2 disclosing 250 epochs for each step).

Ferdinand does not disclose: wherein the key encoder is a momentum encoder.
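The claim-1 training step mapped above (two corresponding augmented data sets fed to a query encoder and a key encoder, with the total loss formed as a sum of two contrastive terms) can be illustrated with a minimal NumPy toy. The linear encoders, the InfoNCE-style and supervised-contrastive loss forms, and all dimensions here are illustrative assumptions, not the applicant's or the cited references' actual implementations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "encoders"; the key encoder starts as a copy of the query encoder.
theta_q = rng.standard_normal((8, 16))   # query-encoder parameters (assumed shapes)
theta_k = theta_q.copy()                 # key-encoder parameters

def encode(theta, x):
    """Embed a batch (n, 16) to L2-normalized vectors (n, 8)."""
    z = x @ theta.T
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def augment(x, noise=0.1):
    """Stand-in for image augmentation: add small Gaussian noise."""
    return x + noise * rng.standard_normal(x.shape)

def log_softmax_rows(logits):
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    return logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

def view_contrastive(q, k, tau=0.2):
    """First term: two views of the same image form the anchor-positive pair."""
    return -np.mean(np.diag(log_softmax_rows((q @ k.T) / tau)))

def class_contrastive(q, k, labels, tau=0.2):
    """Second term: samples sharing a label act as anchor-positive pairs."""
    log_prob = log_softmax_rows((q @ k.T) / tau)
    pos = (labels[:, None] == labels[None, :]).astype(float)
    return -np.mean((pos * log_prob).sum(axis=1) / pos.sum(axis=1))

# Labeled minibatch drawn from rehearsal memory plus new input data.
batch = rng.standard_normal((4, 16))
labels = np.array([0, 0, 1, 1])

view1, view2 = augment(batch), augment(batch)  # two corresponding augmented sets
q = encode(theta_q, view1)                     # query-encoder embeddings
k = encode(theta_k, view2)                     # key-encoder embeddings

# Total contrastive loss as a sum of two contrastive loss functions
# (the gradient step updating theta_q is omitted for brevity).
loss = view_contrastive(q, k) + class_contrastive(q, k, labels)
```

In the claimed arrangement, only the query encoder would be updated by gradient descent on this loss; the key encoder is updated from the query encoder's parameters rather than by backpropagation.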
He discloses: wherein the key encoder is a momentum encoder (§3.2, "Momentum update", gives an overview of a momentum encoder with an exponentially weighted moving average per iteration; see eq. (2)). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Ferdinand by incorporating the momentum encoding of He. Both concern the art of contrastive incremental learning, and the incorporation would have, according to He, improved performance in contrastive loss applications by better sampling the underlying visual space and hence providing more consistent representations (§1 ¶3-4).

For claim 2, Ferdinand modified by He discloses the method of claim 1, as described above. Ferdinand further discloses: updating the rehearsal memory with samples of the new input data (§5.1 ¶2: overview of the rehearsal memory implementation with a memory bank, updated using herding sampling).

For claim 3, Ferdinand modified by He discloses the method of claim 2, as described above. Ferdinand modified by He further discloses: wherein the rehearsal memory is updated based on a balanced-fine tuning such that a number of samples of data existing in the rehearsal memory from previous tasks is equal to a number of samples of the new input data to be stored in the rehearsal memory (He §3.2, "Dictionary as a Queue", discloses storing the rehearsal memory as a queue, with the oldest samples removed and the newest added; hence, the number of samples from previous tasks will equal the number of new input samples, as both are one minibatch in size).

For claim 4, Ferdinand modified by He discloses the method of claim 3, as described above. Ferdinand modified by He further discloses: wherein images of the samples of data existing in the rehearsal memory from previous tasks and images of the samples of the new input data to be stored in the rehearsal memory are both selected randomly (He §3.2 ¶1).
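The two He §3.2 mechanisms cited above can likewise be sketched: the momentum update of eq. (2), theta_k <- m*theta_k + (1-m)*theta_q, and the queue-structured memory whose enqueue/dequeue keeps old and new sample counts balanced. The momentum coefficient, memory size, and tuple-based sample format below are hypothetical values for illustration:

```python
import numpy as np
from collections import deque

def momentum_update(theta_k, theta_q, m=0.999):
    """He eq. (2): exponentially weighted moving average, applied per iteration."""
    return m * theta_k + (1.0 - m) * theta_q

theta_q = np.ones(4)     # current query-encoder parameters (toy values)
theta_k = np.zeros(4)    # key-encoder parameters
theta_k = momentum_update(theta_k, theta_q)   # drifts 0.1% toward theta_q per step

# "Dictionary as a queue": a fixed-capacity FIFO rehearsal memory. Enqueueing a
# new minibatch evicts an equal number of the oldest samples, so the counts of
# retained old-task samples and newly stored samples stay balanced each step.
memory = deque(maxlen=8)
memory.extend(("old-task", i) for i in range(8))   # samples from previous tasks
memory.extend(("new-task", i) for i in range(4))   # new-task minibatch of 4
# memory now holds 4 old-task and 4 new-task samples
```

With m close to 1, the key encoder evolves slowly and smoothly, which is the consistency property He cites as the motivation for the momentum design.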
For claim 5, Ferdinand modified by He discloses the method of claim 1, as described above. Ferdinand further discloses: wherein the query encoder and the momentum encoder have a same size and configuration (§2.1 ¶2: as the key encoder is the previous step's query encoder, they are identically sized and configured).

For claim 6, Ferdinand modified by He discloses the method of claim 1, as described above. Ferdinand further discloses: wherein the first contrastive loss function is configured to identify encodings of different views of a same input image as anchor-positive pairs in a feature space (fig. 1: the contrastive distillation loss is performed on augmented versions of each image; see §4.3 ¶3; hence, different views or augmentations act as anchor-positive pairs).

For claim 7, Ferdinand modified by He discloses the method of claim 6, as described above. Ferdinand further discloses: wherein the second contrastive loss function is configured to identify encodings of two different sample images from a same class as anchor-positive pairs in the feature space (fig. 1: the contrastive loss pushes together all images of a class; hence, two different images would act as anchor-positive pairs; see §4.2 ¶1: minimizing according to samples of the same labels in set P).

For claim 8, Ferdinand modified by He discloses the method of claim 6, as described above. He further discloses: wherein parameters of the momentum encoder are updated based on exponentially weighted moving averages of the parameters of the query encoder (§3.2, eq. (2), shows a step-wise exponential moving average).

Claims 9-16 recite computer media corresponding to the above limitations and are hence rejected for the same reasons.

Claims 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ferdinand ("Attenuating catastrophic forgetting by joint contrastive and incremental learning", published 8/23/2022) in view of He ("Momentum contrast for unsupervised visual representation learning", published 2020), and further in view of Dunne (US 20200272899 A1).

For claim 17, Ferdinand discloses:
- obtain a labeled data set of images comprising data from a rehearsal memory stored in the memory and new input data (fig. 1, "augmented minibatch"; §4.2 ¶1: augmented image minibatches are obtained; §4.1 ¶1, §5.2 ¶2: rehearsal memory and implementation);
- augment the labeled data set to generate a first data set and a second data set, wherein each image of the first data set corresponds to a corresponding image of the second data set (ibid.: contrastive and complementary image sets are obtained);
- input the first data set into a query encoder and the second data set into a key encoder to obtain encodings output by the query encoder and the key encoder (fig. 1: augmented input is passed to identically structured query (student) and key (teacher) encoders, each of which generates respective projection (gamma) and feature (F) outputs; see §4.2 ¶1-2);
- obtain a contrastive loss based on the encodings using a sum of a first contrastive loss function and a second contrastive loss function (§4 gives an overview of the total loss as a sum of contrastive losses; see also fig. 1 outlining the contrastive loss generated from supervised data on the query network (L_con) and the contrastive distillation loss (L_DCon));
- update parameters of the query encoder based on the obtained contrastive loss (§4: updating parameters based on losses); and
- update parameters of the key encoder based on parameters of the query encoder (§2.1 ¶2: transferring parameters from query to key at each incremental step, with §5.1 ¶2 disclosing 250 epochs for each step).

Ferdinand does not disclose: wherein the key encoder is a momentum encoder; a computing device for training a classification model to be
provided to an edge device, the computing device comprising: a transceiver; a memory; and one or more processors configured to: provide the classification model including parameters of the query encoder to the edge device via the transceiver.

He discloses: wherein the key encoder is a momentum encoder (§3.2, "Momentum update", gives an overview of a momentum encoder with an exponentially weighted moving average per iteration; see eq. (2)). It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Ferdinand by incorporating the momentum encoding of He. Both concern the art of contrastive incremental learning, and the incorporation would have, according to He, improved performance in contrastive loss applications by better sampling the underlying visual space and hence providing more consistent representations (§1 ¶3-4).

Ferdinand modified by He does not disclose the remaining limitations.

Dunne discloses: a computing device for training a classification model to be provided to an edge device (fig. 1 gives an overview of model training and deployment to an edge device), the computing device comprising: a transceiver (fig. 1B gives a networking overview, with fig. 4 showing sending and receiving between edge devices and a centralized training device via a network; hence, a transceiver); a memory (fig. 14: 1402); and one or more processors (fig. 4: 1401) configured to: provide the classification model including parameters of the query encoder to the edge device via the transceiver (fig. 3: 310 discloses deployment of the classification model to an edge device via the transceiver; see fig. 4 showing sending and receiving of data; the combination with Ferdinand and He yields the provision of parameters of the trained query encoder to the edge device).

It would have been obvious before the effective filing date to a person of ordinary skill in the art to modify the method of Ferdinand modified by He by incorporating the edge deployment of Dunne. Both concern the art of machine learning, and the incorporation would have, according to Dunne, addressed current limitations in AI technology, such as computing power limitations (¶0002-0003, ¶0047).

Claims 18-19 recite devices corresponding to the methods of claims 2 and 3 and are hence rejected for the same reasons.

For claim 20, Ferdinand modified by He and Dunne discloses the method of claim 17, as described above. Ferdinand further discloses: wherein the first contrastive loss function is configured to identify encodings of different views of a same input image as anchor-positive pairs in a feature space (fig. 1: the contrastive distillation loss is performed on augmented versions of each image; see §4.3 ¶3; hence, different views or augmentations act as anchor-positive pairs), and wherein the second contrastive loss function is configured to identify encodings of two different sample images from a same class as anchor-positive pairs in the feature space (fig. 1: the contrastive loss pushes together all images of a class; hence, two different images would act as anchor-positive pairs; see §4.2 ¶1: minimizing according to samples of the same labels in set P).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lin ("Continual contrastive learning for image classification", published 2022) discloses a contrastive loss with distillation and a momentum encoder; see fig. 2.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LIANG LI, whose telephone number is (303) 297-4263. The examiner can normally be reached Mon-Fri 9-12p, 3-11p MT (11-2p, 5-1a ET). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. The examiner is available for interviews Mon-Fri 6-11a, 2-7p MT (8-1p, 4-9p ET).

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions about access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/LIANG LI/
Primary Examiner, Art Unit 2143

Prosecution Timeline

Jun 08, 2023: Application Filed
Mar 21, 2026: Non-Final Rejection, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596463: METHOD AND APPARATUS FOR IMAGE-BASED NAVIGATION (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585716: INTELLIGENT RECOMMENDATION METHOD AND APPARATUS, MODEL TRAINING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585375: GENERATING SNAPPING GUIDE LINES FROM OBJECTS IN A DESIGNATED REGION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580000: MULTITRACK EFFECT VISUALIZATION AND INTERACTION FOR TEXT-BASED VIDEO EDITING (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561566: NEURAL NETWORK LAYER FOLDING (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 61%
With Interview: 99% (+69.1% lift)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 273 resolved cases by this examiner; grant probability derived from career allow rate.
