Prosecution Insights
Last updated: April 19, 2026
Application No. 18/211,511

Iterative Distillation into Memory for Incremental Domain Adaptation

Non-Final OA: §102, §103
Filed: Jun 19, 2023
Examiner: FEATHERSTONE, MARK D
Art Unit: 2111
Tech Center: 2100 (Computer Architecture & Software)
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
OA Rounds: 1-2
To Grant: 4y 6m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 58% (178 granted / 305 resolved; +3.4% vs TC avg)
Interview Lift: strong, +25.0% for resolved cases with an interview vs. without
Typical Timeline: 4y 6m average prosecution; 10 applications currently pending
Career History: 315 total applications across all art units
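The headline projections follow from simple arithmetic on these career statistics. A minimal sketch, assuming the dashboard's "with interview" figure is just the career allow rate plus the observed interview lift:

```python
granted, resolved = 178, 305
allow_rate = granted / resolved            # 0.5836 -> displayed as 58%
interview_lift = 0.25                      # +25.0% lift with an interview
with_interview = allow_rate + interview_lift
print(f"base {allow_rate:.0%}, with interview {with_interview:.0%}")
# base 58%, with interview 83%
```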

Statute-Specific Performance

§101: 10.5% (-29.5% vs TC avg)
§103: 52.2% (+12.2% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 9.3% (-30.7% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 305 resolved cases
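The deltas are internally consistent: subtracting each "vs TC avg" figure from the examiner's rate recovers the same Tech Center baseline, roughly 40%, for every statute. A minimal check, assuming delta = examiner rate minus TC average:

```python
rates  = {"§101": 10.5, "§103": 52.2, "§102": 21.1, "§112": 9.3}
deltas = {"§101": -29.5, "§103": 12.2, "§102": -18.9, "§112": -30.7}
for s in rates:
    # TC average estimate = examiner rate - displayed delta
    print(f"{s}: TC avg = {rates[s] - deltas[s]:.1f}%")  # 40.0% for each
```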

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-6 and 17-18 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Perez Vallejo et al. (U.S. 2024/0020535), hereinafter Perez.

With regard to claim 1, Perez teaches a system for incremental domain adaptation, the system comprising: an iterative knowledge distillation module configured to adapt machine learning models to new tasks sequentially through multiple iterations of knowledge distillation ([0016], fig. 1, a transformer model 130 trained to learn parameters for improved parallel output token prediction); and an external memory bank configured to store parameters of the machine learning models pertaining to the new tasks (fig. 1, training data 140; [0024], training data 140, which are parameters used to train the model).

With regard to claim 2, Perez teaches the system of claim 1. Perez further teaches wherein the machine learning models comprise a transformer architecture ([0016], transformer model architecture; [0022], architecture of the transformer model).

With regard to claim 3, Perez teaches the system of claim 2. Perez further teaches wherein a layer in the transformer architecture comprises a multi-head attention module ([0034], multiheaded attention) and a feed-forward layer downstream (fig. 2, feed-forward layer) from the multi-head attention module, and wherein the external memory bank is attached to the transformer architecture between the multi-head attention module and the feed-forward layer via a residual connection (fig. 2, input data; [0024], input token sequence of the training data; fig. 1, connection to training data 140).
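For orientation, the layer arrangement the examiner reads onto claim 3 (multi-head attention, then an external memory bank read, then the feed-forward layer, with the memory joined in via a residual connection) could be sketched as follows. This is a hypothetical PyTorch illustration; the class and parameter names (MemoryAugmentedLayer, num_slots, mem_read) are assumptions, and nothing here is taken from the application, Perez, or Riley.

```python
import torch
import torch.nn as nn

class MemoryAugmentedLayer(nn.Module):
    """One transformer layer with an external memory bank attached between
    the multi-head attention module and the downstream feed-forward layer,
    joined via a residual connection (the claim 3 arrangement)."""

    def __init__(self, d_model: int, n_heads: int, num_slots: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # External memory bank: learnable slot vectors the layer reads from.
        self.memory = nn.Parameter(torch.randn(num_slots, d_model) * 0.02)
        self.mem_read = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multi-head self-attention sub-layer with a residual connection.
        h = self.norm1(x)
        a, _ = self.attn(h, h, h)
        x = x + a
        # Memory read between attention and feed-forward: the hidden states
        # attend over the memory slots; the result is added back residually.
        mem = self.memory.unsqueeze(0).expand(x.size(0), -1, -1)
        m, _ = self.mem_read(self.norm2(x), mem, mem)
        x = x + m
        # Feed-forward sub-layer downstream of the memory read.
        return x + self.ffn(self.norm3(x))
```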
With regard to claim 4, Perez teaches the system of claim 1. Perez further teaches wherein for each of the multiple iterations the machine learning models comprise a current machine learning model and an adapted machine learning model, and wherein the iterative knowledge distillation module is further configured to adapt the current machine learning model to a given one of the new tasks using a training dataset that represents the given new task to produce the adapted machine learning model ([0074], new input tokens for inference, corresponding to new tasks using the training dataset; teacher-student model for learning).

With regard to claim 5, Perez teaches the system of claim 4. Perez further teaches wherein the iterative knowledge distillation module is further configured to distill the parameters of the adapted machine learning model pertaining to the given new task to the external memory bank ([0016], teacher-student distillation outputs tokens in fewer iterations).

With regard to claim 6, Perez teaches the system of claim 5. Perez further teaches wherein the parameters of the adapted machine learning model pertaining to the given new task are stored in memory slots of the external memory bank ([0058], the token distributions are stored for each selected token, inherently in memory slots; figs. 4-5).

Claim 17 corresponds to claim 1, and is analyzed accordingly.

With respect to claim 18, Perez teaches the method of claim 17. Perez further teaches: adapting, for each of the multiple iterations, a current one of the machine learning models to a given one of the new tasks using a training dataset that represents the given new task to produce an adapted one of the machine learning models; and distilling the parameters of the adapted one of the machine learning models pertaining to the given new task to the external memory bank ([0016], iterative model with teacher-student distillation to output the tokens in few iterations).
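Claims 4-6 and 18 recite the adapt-then-distill loop, and the §103 claims below add frozen, task-varied memory slots. A minimal sketch of that loop under stated assumptions: the `extra_slots` forward hook, the `slots_for` heuristic, and all other names are hypothetical, not taken from the application or the cited references.

```python
import copy
import torch
import torch.nn.functional as F

def slots_for(dataset, base=8):
    # Hypothetical heuristic: slot count varies task-by-task with the
    # number of task instances (cf. claims 9 and 14).
    return base + len(dataset) // 1000

def incremental_adaptation(model, memory_slots, tasks, d_model):
    """tasks: iterable of DataLoaders, one per new task, in arrival order.
    memory_slots: list of frozen per-task slot tensors (the memory bank)."""
    for task_loader in tasks:
        # Grow the external memory bank with fresh slots for this task.
        new = torch.nn.Parameter(
            torch.randn(slots_for(task_loader.dataset), d_model) * 0.02
        )

        # Adapt a copy of the current model to the new task (claim 4).
        teacher = copy.deepcopy(model)
        opt = torch.optim.Adam(teacher.parameters(), lr=1e-4)
        for x, y in task_loader:
            opt.zero_grad()
            F.cross_entropy(teacher(x), y).backward()
            opt.step()

        # Distill the adapted model's task knowledge into the new slots
        # (claim 5): only the slots are trained, so the memory-augmented
        # student matches the teacher's outputs on the task data.
        slot_opt = torch.optim.Adam([new], lr=1e-3)
        for x, _ in task_loader:
            slot_opt.zero_grad()
            with torch.no_grad():
                target = F.softmax(teacher(x), dim=-1)
            pred = model(x, extra_slots=new)  # assumed hook into the layer
            F.kl_div(F.log_softmax(pred, dim=-1), target,
                     reduction="batchmean").backward()
            slot_opt.step()

        new.requires_grad_(False)   # filled slots are frozen (claim 7)
        memory_slots.append(new)    # bank adds slots per new task (claim 8)
    return model, memory_slots
```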
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 7-16 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Perez Vallejo et al. (U.S. 2024/0020535), in view of Riley et al. (U.S. 2022/0253338), hereinafter Riley.

With respect to claims 7, 8, and 9, Perez teaches the system of claim 6. Perez fails to specifically teach wherein the memory slots, when filled, are frozen, wherein the external memory bank is further configured to add additional memory slots for the new tasks, and wherein a number of the memory slots added is varied on a task-by-task basis.

Riley teaches wherein the memory slots, when filled, are frozen, wherein the external memory bank is further configured to add additional memory slots for the new tasks, and wherein a number of the memory slots added is varied on a task-by-task basis ([0018], [0019], memory is locked when executing a task, new tasks can use memory, and the tasks vary in how much memory is used). It would have been obvious to one of ordinary skill in the art at the time the invention was effectively filed to modify the system of Perez, which uses memory to execute tasks, with Riley's teaching of adding, locking, and varying memory for tasks, in order to use the available memory efficiently, as taught by Riley in the cited sections and throughout.

Claim 10 corresponds to claims 1 and 9, and is analyzed accordingly. Claim 11 corresponds to claims 10 and 3, and is analyzed accordingly. Claim 12 corresponds to claims 10 and 4, and is analyzed accordingly. Claim 13 corresponds to claims 12 and 5, and is rejected accordingly.

With respect to claim 14, Perez in view of Riley teaches the system of claim 12. Perez further teaches wherein the number of the memory slots allocated to each of the new tasks is a function of at least one of a number of instances of the given new task in the training dataset, and discrepancy in zero-shot performance and fine-tuning performance on the given new task ([0016], teacher-student model for iterative learning and training for improved prediction; [0049], performed by setting the contribution of the subsequent tokens to zero when combining their respective values).

Claims 15-16 correspond to claims 10 and 7-8, and are analyzed accordingly. Claim 19 corresponds to claims 17 and 8-9, and is analyzed accordingly. Claim 20 corresponds to claims 19, 9, and 14, and is rejected accordingly.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK D FEATHERSTONE, whose telephone number is (571) 270-3750. The examiner can normally be reached Monday-Friday, 9:00 AM - 5:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Cottingham, can be reached at 571-272-1400. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARK D FEATHERSTONE/
Supervisory Patent Examiner, Art Unit 2111

Prosecution Timeline

Jun 19, 2023
Application Filed
Feb 11, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596928: MOVEMENT OF TENSOR DATA DURING RESHAPE OPERATION
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12547882: DEEP LEARNING ACCELERATION WITH MIXED PRECISION
Granted Feb 10, 2026 (2y 5m to grant)

Patent 12517785: Non-Blocking Chipkill Recovery
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12493518: ERROR CORRECTION USING ON-DIE PARITY BIT STORAGE AND TRANSITIONAL SIGNALS
Granted Dec 09, 2025 (2y 5m to grant)

Patent 12475403: Fault Tolerant Quantum Error Correction Using Physical Transport of Qubits
Granted Nov 18, 2025 (2y 5m to grant)
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 58%
With Interview: 83% (+25.0%)
Median Time to Grant: 4y 6m
PTA Risk: Low

Based on 305 resolved cases by this examiner. Grant probability derived from career allow rate.
