Prosecution Insights
Last updated: April 19, 2026
Application No. 17/687,019

LEARNING APPARATUS AND LEARNING METHOD

Final Rejection (§101)
Filed: Mar 04, 2022
Examiner: Kassim, Imad Mutee
Art Unit: 2129
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Denso Ten Limited
OA Round: 2 (Final)
Grant Probability: 72% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 72% (116 granted / 160 resolved), 17.5% above the TC average
Interview Lift: +33.8% across resolved cases with an interview
Typical Timeline: 3y 8m average prosecution; 23 applications currently pending
Career History: 183 total applications across all art units

Statute-Specific Performance

§101: 22.6% (-17.4% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 11.8% (-28.2% vs TC avg)
§112: 12.9% (-27.1% vs TC avg)
Based on career data from 160 resolved cases; TC averages are estimates.

Office Action

§101
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 11/18/2025 have been fully considered. Regarding the §101 abstract-idea rejection, applicant argues that the claims are not abstract because they are complex and deal with training student and teacher models. Examiner disagrees: the claims merely compare performance and, based on the performance, choose a model to train. Absent further detail establishing the asserted complexity, the claims amount to comparing performance and choosing a model to train based on a value, a process that can be calculated or done mentally. The same reasoning applies to all independent claims, where the only difference is whether the comparison is based on a loss or on performance. For at least the above reasons, the §101 abstract-idea rejection is maintained. Regarding the §103 rejection, based on the amendments and arguments, the §103 rejection is withdrawn.

Claim Objections

Claim 19 is objected to because of the following informality: claim 19 depends from canceled claim 17. Claim 19 should depend from claim 4. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 3-7, 12, and 18-19 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims follows the 2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50-57 (January 7, 2019) ("2019 PEG").

Claim 3

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 3 recites a method.
Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. The claim recites, inter alia: "after the training of the student model with the teacher model, determining a performance difference between the teacher model and the student model; and when the performance difference becomes smaller than a predetermined threshold value, changing the teacher model used in training the student model to a teacher model with higher performance than the teacher model currently used and then training the student model." Computing a performance difference is a mathematical calculation; making a determination based on a computed value can be done mentally (decision-making, observation, and evaluation); and selecting which model to use for learning based on performance is a calculation. The claim as a whole can therefore be implemented mathematically or performed mentally and/or with pen and paper. Thus, this limitation is construed to be directed to the abstract idea of mathematical operations that could be done mentally and/or with pen and paper.

Step 2A, Prong Two: This judicial exception is not integrated into a practical application. The elements of the claim, such as the teacher/student models and "processor," are typically performed in mathematical modeling using generic computers. Merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception, as discussed in MPEP § 2106.05(f).

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. As an ordered whole, the claim is directed to a mathematical concept for changing a weight coefficient in calculating a loss. It neither improves the functioning of a computer, transforms an article into another article, nor is applied by a particular machine.
As such, the claim is not patent eligible.

Claim 4

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 4 recites a method.

Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. The claim recites, inter alia: "during the training, determining a loss in the student model according to expression (A) below to update a parameter of the student model; after the training, comparing performance of the student model with performance of the teacher model; performing the training while decreasing a value of X in expression (A) as the performance of the student model approaches the performance of the teacher model,

L = Leq + X · Ldstl   (A)

where L represents the loss in the student model; Leq represents a loss from learning data; Ldstl represents a loss from the teacher model; and X represents a weight coefficient for adjusting a balance between the loss from learning data and the loss from the teacher model; wherein the student model and the teacher model are models for object detection that include identifications of types and locations of objects." Computing a performance difference is a mathematical calculation; making a determination based on a computed value can be done mentally (decision-making, observation, and evaluation); selecting which model to use for learning based on performance is a calculation; and comparing model performance based on a loss is also a mathematical operation. The claim as a whole can therefore be implemented mathematically or performed mentally and/or with pen and paper. Thus, this limitation is construed to be directed to the abstract idea of mathematical operations that could be done mentally and/or with pen and paper.

Step 2A, Prong Two: This judicial exception is not integrated into a practical application. The elements of the claim, such as the teacher/student models and "processor," are typically performed in mathematical modeling using generic computers. Merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception, as discussed in MPEP § 2106.05(f).

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. As an ordered whole, the claim is directed to a mathematical concept for changing a weight coefficient in calculating a loss. It neither improves the functioning of a computer, transforms an article into another article, nor is applied by a particular machine. As such, the claim is not patent eligible.

Claim 5

Step 2A, Prong One: The claim recites "wherein the loss from the teacher model is an error in an inference result, or an error in an intermediate-layer feature map, between the teacher and student models." This limitation merely recites a mathematical manipulation.

Step 2A Prong Two / Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 6

Step 2A, Prong One: The claim recites "wherein the performance is performance of object detection." This limitation merely recites a mathematical manipulation.

Step 2A Prong Two / Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 7

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 7 recites a method.

Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Yes. The claim recites, inter alia: "a first loss, which is a loss in an inference result based on the teacher model with respect to learning data, and a second loss, which is a loss in an inference result based on the student model with respect to the learning data; if the first loss is smaller than the second loss, computing a loss in the student model by using the second loss and a loss from the teacher model and updating a parameter of the student model based on the computed loss; and if the first loss is equal to or larger than the second loss, updating the parameter of the student model with the second loss taken as a loss in the student model, wherein the student model and the teacher model are models for object detection that include identifications of types and locations of objects." Computing a performance difference is a mathematical calculation; making a determination based on a computed value can be done mentally (decision-making, observation, and evaluation); selecting which model to use for learning based on performance is a calculation; and comparing model performance based on a loss is also a mathematical operation. The claim as a whole can therefore be implemented mathematically or performed mentally and/or with pen and paper. Thus, this limitation is construed to be directed to the abstract idea of mathematical operations that could be done mentally and/or with pen and paper.

Step 2A, Prong Two: This judicial exception is not integrated into a practical application. The elements of the claim, such as the teacher/student models and "processor," are typically performed in mathematical modeling using generic computers. Merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception, as discussed in MPEP § 2106.05(f).

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. As an ordered whole, the claim is directed to a mathematical concept for changing a weight coefficient in calculating a loss. It neither improves the functioning of a computer, transforms an article into another article, nor is applied by a particular machine. As such, the claim is not patent eligible.

Claim 12

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: claim 12 recites a method.

Step 2A, Prong One: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes. The claim recites, inter alia: "computing: a first loss, which is a loss in an inference result based on the teacher model with respect to learning data, and a second loss, which is a loss in an inference result based on the student model with respect to the learning data; if the first loss is smaller than the second loss and a difference between the first and second losses is equal to or larger than a predetermined threshold value, determining to perform training using the teacher model; and if the first loss is smaller than the second loss and the difference between the first and second losses is less than the predetermined threshold value, or if the first loss is equal to or larger than the second loss, changing the teacher model or changing a weight coefficient for adjusting a balance between a loss from learning data and a loss from the teacher model, wherein the student model and the teacher model are models for object detection that include identifications of types and locations of objects." Computing a performance difference is a mathematical calculation; making a determination based on a computed value can be done mentally (decision-making, observation, and evaluation); selecting which model to use for learning based on performance is a calculation; and comparing model performance based on a loss is also a mathematical operation. The claim as a whole can therefore be implemented mathematically or performed mentally and/or with pen and paper. Thus, this limitation is construed to be directed to the abstract idea of mathematical operations that could be done mentally and/or with pen and paper.

Step 2A, Prong Two: This judicial exception is not integrated into a practical application. The elements of the claim, such as the teacher/student models and "processor," are typically performed in mathematical modeling using generic computers. Merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea does not integrate the exception, as discussed in MPEP § 2106.05(f).

Step 2B: The claim does not include additional elements sufficient to amount to significantly more than the judicial exception. As an ordered whole, the claim is directed to a mathematical concept for changing a weight coefficient in calculating a loss. It neither improves the functioning of a computer, transforms an article into another article, nor is applied by a particular machine. As such, the claim is not patent eligible.

Claim 18

Step 2A, Prong One: The claim recites "wherein X is decreased when a performance difference between the teacher and student models becomes smaller than a predetermined threshold value." This limitation merely recites a mathematical manipulation.

Step 2A Prong Two / Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Claim 19

Step 2A, Prong One: The claim recites "wherein X is decreased every predetermined number of epochs." This limitation merely recites a mathematical manipulation.
Step 2A Prong Two / Step 2B: This judicial exception is not integrated into a practical application. Mere recitation of generic computer components neither integrates the judicial exception into a practical application nor provides an inventive concept.

Allowable Subject Matter

Claims 3-7, 12, and 18-19 would be allowable if rewritten to overcome the rejection under 35 U.S.C. 101 (abstract idea) set forth in this Office action.

Related Art

Ishii et al. (US 20220366678 A1) teaches, at column 9, lines 4-26: "When the learning data X is inputted to the learning device 200, in the estimation unit 10a corresponding to the student model, the logit calculator 11a calculates the logit ya of the learning data X (step S21). In the estimation unit 10b corresponding to the teacher model, the logit calculator 11b calculates the logit yb of the learning data X (step S22). Next, the temperature calculator 21 determines the temperature parameter T from the logits ya and yb based on a predetermined function (step S23). The determined temperature parameter T is supplied to the activators 12a and 12b. Next, the activator 12a calculates the estimation result Ya from the logit ya using the temperature parameter T, and the activator 12b calculates the estimation result Yb from the logit yb using the temperature parameter T (step S24). Next, in the optimizing unit 30, the loss calculator 31 calculates the loss La of the estimation result Ya with respect to the correct labels Y, and the loss Lb of the estimation result Ya with respect to the estimation result Yb (step S25). Next, the weighted average calculator 32 calculates the weighted average Lay of the loss La and the loss Lb (step S26). Next, the parameter updater 33 updates the internal parameters PA of the logit calculator 11a based on the weighted averaged loss Lay (step S27)."

Wang et al. ("IDK Cascades: Fast Deep Learning by Learning not to Overthink", NPL, 2018) teaches, at page 2, Figure 1: "An IDK prediction cascade combines IDK classifiers of increasing accuracy and computational cost such that each will either render a high-accuracy prediction or return IDK passing the input to the next model in the cascade for a more accurate but higher cost prediction"; also the first paragraph of page 2: "When an IDK classifier predicts the IDK class the subsequent model in the cascade is invoked. The process is repeated until either a model in the cascade predicts a real class or the end of the cascade is reached at which point the last model must render a prediction."; also page 4, section 4: "We consider the k class multi class prediction problem in which we are given two pre-trained models: (1) a fast but less accurate model mfast and (2) an accurate but more costly model macc."; also page 5, section 4.1: "the confidence scores (i.e. probability over the predicted class) of mfast(x) and follow the IDK classifier design (Eq. 3). The intuition is that if the prediction of mfast is insufficiently confident than the more accurate classifier is invoked."; also section 4.2. Here the mfast model corresponds to the student model and macc to the teacher model, and based on performance the cascade triggers use of the teacher model (macc).

Yang et al. ("Snapshot Distillation: Teacher-Student Optimization in One Generation", NPL, 2019) teaches, at pages 2859-2860: "(i) the teacher model has been well optimized; (ii) the teacher and student models are sufficiently different from each other; and (iii) the teacher provides secondary information [49] for the student to learn. Summarizing these requirements leads to our solution that using a cyclic learning rate policy, in which the last snapshot of each cycle (which arrives at a high accuracy and thus satisfies (i)), serves as the teacher for all iterations in the next cycle (these iterations are pulled away from the teacher after a learning rate boost, which satisfies (ii))".

Choi et al. (US 20180268292 A1) teaches training the Faster R-CNN by learning a student model from a teacher model: employing a weighted cross-entropy loss layer for classification accounting for an imbalance between background classes and object classes, employing a boundary loss layer to enable transfer of knowledge of bounding box regression from the teacher model to the student model, and employing a confidence-weighted binary activation loss layer to train intermediate layers of the student model to achieve a distribution of neurons similar to that achieved by the teacher model.

Wang et al. (US 20200134506 A1) teaches training a student model corresponding to a teacher model. The teacher model is obtained through training by taking first input data as input data and corresponding output data as an output target. The method comprises training the student model by taking second input data as input data and the corresponding output data as an output target, where the second input data is obtained by changing the first input data.

Farhadi et al. (US 20220121855 A1) teaches improving the performance of a student model by combining an oracle model (considered the best possible model) with a student model (fast but of considerably lower accuracy than the oracle). The temporal knowledge of the oracle model is transferred to the student model at inference time; by transferring this knowledge, the student model adapts itself to the current environment or scene.

Haider et al. (US 20220335303 A1) teaches a student model trained using a Dropout-KD approach in which intermediate-layer selection is performed efficiently so that the skip, search, and overfitting problems in intermediate-layer KD may be solved. Teacher intermediate layers are selected randomly at each training epoch, with the layer order preserved to avoid breaking information flow; over the course of multiple training epochs, all of the teacher intermediate layers are used for knowledge distillation. A min-max data augmentation method is also described based on the intermediate-layer selection of the Dropout-KD training method.

Gou et al. ("Knowledge Distillation: A Survey") teaches that the success of deep learning is mainly due to its scalability to encode large-scale data and to maneuver billions of model parameters, but that it is a challenge to deploy these cumbersome deep models on devices with limited resources, e.g., mobile phones and embedded devices, because of both high computational complexity and large storage requirements. To this end, a variety of model compression and acceleration techniques have been developed. As a representative type of model compression and acceleration, knowledge distillation effectively learns a small student model from a large teacher model. The paper provides a comprehensive survey of knowledge distillation from the perspectives of knowledge categories, training schemes, teacher-student architecture, distillation algorithms, performance comparison, and applications.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IMAD M KASSIM, whose telephone number is (571) 272-2958. The examiner can normally be reached 10:30 AM-5:30 PM, M-F (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Michael J. Huntley, can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IMAD KASSIM/
Primary Examiner, Art Unit 2129
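The training scheme the examiner characterizes as abstract is easier to follow in code. Below is a minimal, illustrative Python sketch of claim 3's teacher switching and claim 4's combined loss with a decreasing weight X. Every function name, numeric value, and the "close half the gap" training stand-in are hypothetical illustrations, not the applicant's implementation; expression (A) is reconstructed from the claim's own definitions of L, Leq, Ldstl, and X (the equation text is garbled in the source).

```python
# Illustrative sketch only: claim 3 (switch to a stronger teacher when the
# performance gap drops below a threshold) and claim 4 (combined loss per
# expression (A), decreasing X as the student approaches the teacher).
# All names and numbers are hypothetical stand-ins.

def student_loss(leq, ldstl, x):
    """Expression (A), as reconstructed from the claim's definitions:
    L = Leq + X * Ldstl, where Leq is the loss from learning data, Ldstl
    the loss from the teacher model, and X a balancing weight coefficient."""
    return leq + x * ldstl

def distillation_loop(teacher_perfs, threshold=0.05, x=1.0, decay=0.5):
    """Train against each teacher in ascending order of performance; when
    the student's gap to the current teacher falls below `threshold`,
    switch to the next (stronger) teacher and reduce X."""
    student_perf = 0.50
    history = []
    t = 0  # index of the current teacher
    while t < len(teacher_perfs):
        # stand-in for one training round: the student closes half the gap
        student_perf += 0.5 * (teacher_perfs[t] - student_perf)
        if teacher_perfs[t] - student_perf < threshold:
            t += 1       # claim 3: promote to a higher-performance teacher
            x *= decay   # claim 4: rely less on the teacher loss
        history.append((round(student_perf, 3), round(x, 3), t))
    return history
```

Calling `distillation_loop([0.6, 0.7, 0.8])` steps the hypothetical student through three progressively stronger teachers, halving X at each switch; the claims themselves leave the exact schedule for X open beyond "decreasing".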
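Claims 7 and 12 turn on a comparison between a "first loss" (the teacher's inference loss) and a "second loss" (the student's). A minimal sketch of that decision logic follows; the function names and the returned action labels are hypothetical stand-ins for what the claims describe, not the applicant's code.

```python
# Illustrative decision logic from claims 7 and 12; names, thresholds, and
# action labels are hypothetical stand-ins.

def conditional_student_loss(first_loss, second_loss, teacher_loss, x=1.0):
    """Claim 7: if the teacher's inference loss (first) beats the student's
    (second), fold the loss from the teacher into the update; otherwise the
    student's own loss is used as the loss in the student model."""
    if first_loss < second_loss:
        return second_loss + x * teacher_loss
    return second_loss

def decide_training_action(first_loss, second_loss, threshold):
    """Claim 12: keep training with the current teacher only when it beats
    the student by at least `threshold`; otherwise change the teacher or
    the weight coefficient balancing the two loss terms."""
    if first_loss < second_loss and (second_loss - first_loss) >= threshold:
        return "train-with-teacher"
    return "change-teacher-or-weight"
```

Seen this way, the examiner's point is concrete: each branch is a comparison and an arithmetic combination, which is why the rejection treats the claims as mathematical operations absent additional integrating elements.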

Prosecution Timeline

Mar 04, 2022
Application Filed
Aug 21, 2025
Non-Final Rejection — §101
Oct 23, 2025
Applicant Interview (Telephonic)
Oct 24, 2025
Examiner Interview Summary
Nov 18, 2025
Response Filed
Feb 21, 2026
Final Rejection — §101
Apr 14, 2026
Applicant Interview (Telephonic)
Apr 15, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596923: MACHINE LEARNING OF KEYWORDS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12572843: AGENT SYSTEM FOR CONTENT RECOMMENDATIONS
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12572854: ROOT CAUSE DISCOVERY ENGINE
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12566980: SYSTEM AND METHOD HAVING THE ARTIFICIAL INTELLIGENCE (AI) ALGORITHM OF K-NEAREST NEIGHBORS (K-NN)
Granted Mar 03, 2026 (2y 5m to grant)

Patent 12566861: IDENTIFYING AND CORRECTING VULNERABILITIES IN MACHINE LEARNING MODELS
Granted Mar 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72% (99% with interview, a +33.8% lift)
Median Time to Grant: 3y 8m
PTA Risk: Moderate
Based on 160 resolved cases by this examiner; grant probability derived from the career allow rate.
