Prosecution Insights
Last updated: April 19, 2026
Application No. 17/928,644

LEARNING METHOD, LEARNING APPARATUS AND PROGRAM

Status: Final Rejection (§102)
Filed: Nov 30, 2022
Examiner: MOUNDI, ISHAN NMN
Art Unit: 2141
Tech Center: 2100 — Computer Architecture & Software
Assignee: NTT, Inc.
OA Round: 2 (Final)
Grant Probability: 12% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 4y 6m
Grant Probability With Interview: 46%

Examiner Intelligence

Career Allow Rate: 12% (2 granted of 16 resolved; -42.5% vs Tech Center average)
Interview Lift: +33.3% among resolved cases with an interview (a strong lift)
Avg Prosecution: 4y 6m typical timeline; 41 applications currently pending
Career History: 57 total applications across all art units

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 9.7% (-30.3% vs TC avg)
§112: 7.2% (-32.8% vs TC avg)
Tech Center average figures are estimates • Based on career data from 16 resolved cases

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Claims 1 and 6 have been amended. Claims 1-7 remain pending in the application. The amendment filed 11/11/2025 is sufficient to overcome the 35 U.S.C. 101 rejections of claims 1-7. The previous rejections have been withdrawn.

Response to Arguments

Argument 1, regarding the 101 rejections: applicant argues that the claims integrate the abstract ideas into the practical application of reducing error of a neural network by updating model parameters based on response variables and predicted values for those response variables. Examiner agrees, and the 35 U.S.C. 101 rejections have been withdrawn.

Argument 2, regarding the prior art rejections: applicant argues that none of the cited art teaches “wherein the parameters of the first neural network are learned such that an error between an estimation value calculated based on a query and a response variable is reduced”. Applicant argues that Ura is directed towards learning a high-performance prediction model using smaller sampling, whereas the present application incorporates multiple levels of neural network learning to improve accuracy while reducing error. Examiner respectfully disagrees, because Ura teaches this limitation (prediction performance of a model is estimated by a machine learning apparatus to determine parameter values that improve a model’s prediction performance, C7:L47-52, C28:L37-43. “The term ‘prediction performance’ denotes the model's ability to predict the result of an unknown instance correctly, which may thus be called ‘accuracy’.”, C9:L45-47. “Prediction performance may be indicated in terms of accuracy, precision, or mean square error (RMSE)”, C9:L58-59). Ura teaches determining parameter values that improve a model’s prediction performance, and prediction performance is a measure of accuracy and root mean squared error; therefore, Ura teaches determining parameters for a model that reduce error and improve model accuracy. The full prior art rejections are outlined below.

Claim Objections

Claims 1 and 6 are objected to because of the following informalities: the claims recite “generating a task vector representing a property of a task corresponding to a second subset using parameters of a second neural network; …predicted values of response variables for the explanatory variables using parameters of a second neural network”. The amendments made to claims 1 and 6 recite “a second neural network” twice. Examiner suggests amending the claims to recite “generating a task vector representing a property of a task corresponding to a second subset using parameters of a second neural network; …predicted values of response variables for the explanatory variables using parameters of the second neural network” instead. Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-7 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ura et al. (Pub. No.: US 11334813 B2), hereafter Ura.

Regarding claims 1 and 6, Ura teaches:

a memory; and a processor configured to execute (invention includes memory and processor, C8:L8-26):

receiving as input, when denoting a set of tasks as R and a set of indices representing response variables of a task r∈R as Cr, a data set Drc composed of pairs of the response variables corresponding to the indices and explanatory variables corresponding to the response variables for each index c∈Cr (“the process of machine learning assumes the use of a collection of unit datasets that represent known instances. These datasets may be collected by the machine learning apparatus 100 itself or another information processing apparatus, from various devices (e.g., sensor devices) via the network 114. The collected data may be called ‘big data’ because of its large data size. Each unit dataset normally includes two or more values of explanatory variables and one value of a response variable”, C9:L5-14);

sampling the task r from the set R, and then sampling an index c from the set Cr (explanatory variables and response variables are sampled as part of the unit datasets, C9:L21-33), and sampling a first subset from the data set Drc and a second subset from a set obtained by excluding the first subset from the data set Drc (a machine learning algorithm may be used to divide unit datasets into two distinct classes that exist in an N-dimensional space, C10:L27-30);

generating a task vector representing a property of a task corresponding to the first subset using parameters of a first neural network (“Column vector μ(s) used in equation (1) has a dimension of n, and its elements are μ(θ_1, s), μ(θ_2, s), . . . , μ(θ_n, s), as seen in equation (5). That is, column vector μ(s) is a collection of mean values of prediction performance, corresponding to the n hyperparameter values”, C30:L43-47, C30:L55);

generating a task vector representing a property of a task corresponding to the second subset using parameters of a second neural network (equation 1, including column vector μ(s), is calculated for each model, as are equations 2-8, C24:L32-29, figure 15);

calculating, from the task vector and explanatory variables included in the second subset, predicted values of response variables for the explanatory variables using parameters of a second neural network (“A learned model permits the machine learning apparatus 100 to predict a value of the response variable (outcome) from values of explanatory variables (causes) when an unknown instance is given as an input”, C9:L34-37. “When a hyperparameter value θ and a sample size s are given, equation (1) calculates the mean μ(θ, s) of prediction performance by using column vector κ(θ), matrix K, and column vector μ(s)”, C30:L7-10, C30:L19); and

updating the parameters of the first neural network and the parameters of the second neural network using an error between response variables included in the second subset and the predicted values of the response variables (parameter values are applied to the machine learning algorithm based on different measurements, one of them being root mean squared error (RMSE), C4:L61-67, C5:L1-8; RMSE may be calculated with the difference between an actual (response) value and a predicted value, C9:L55-67, C10:L1-8),

wherein the parameters of the first neural network are learned such that an error between an estimation value calculated based on a query and a response variable is reduced (prediction performance of a model is estimated by a machine learning apparatus to determine parameter values that improve a model’s prediction performance, C7:L47-52, C28:L37-43. “The term ‘prediction performance’ denotes the model's ability to predict the result of an unknown instance correctly, which may thus be called ‘accuracy’.”, C9:L45-47. “Prediction performance may be indicated in terms of accuracy, precision, or mean square error (RMSE)”, C9:L58-59).
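For orientation, the step recited in claims 1 and 6 reads like a standard episodic meta-learning update. The sketch below is only a plausible reconstruction from the claim language as quoted above: the module names, layer sizes, mean pooling, and the folding of the two recited task-vector generations into a single encoder pass are all illustrative assumptions, not the applicant's disclosed implementation or Ura's method.

```python
# Hedged sketch of the training step in claims 1 and 6 (illustrative
# reconstruction only; names, sizes, and pooling choice are assumptions).
import random
import torch
import torch.nn as nn

# "first neural network": encodes (explanatory, response) pairs into a task vector
enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 32))
# "second neural network": predicts a response from (task vector, explanatory variable)
dec = nn.Sequential(nn.Linear(32 + 1, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

def training_step(datasets, n_support=5):
    """datasets: dict mapping (task r, index c) -> (X, y); X is (N, 1), y is (N,), N > n_support."""
    X, y = datasets[random.choice(list(datasets))]  # sample task r from R and index c from Cr
    perm = torch.randperm(len(X))
    sup, qry = perm[:n_support], perm[n_support:]   # first subset / disjoint second subset
    pairs = torch.cat([X[sup], y[sup, None]], dim=1)
    task_vec = enc(pairs).mean(dim=0)               # task vector from the first subset
    inp = torch.cat([task_vec.expand(len(qry), -1), X[qry]], dim=1)
    pred = dec(inp).squeeze(-1)                     # predicted response variables
    loss = nn.functional.mse_loss(pred, y[qry])     # error on the second subset
    opt.zero_grad(); loss.backward(); opt.step()    # update both networks' parameters
    return loss.item()
```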
Regarding claim 2, Ura teaches the limitations of claim 1 as outlined above. Ura further teaches wherein the generating generates case vectors from respective pairs included in the first subset using the parameters of the first neural network, and generates the task vector by aggregating the case vectors (“Column vector κ(θ) used in equations (1) and (2) has a dimension of n, and its elements are k(θ, θ_1), k(θ, θ_2), . . . , k(θ, θ_n), as seen in equation (3). As will be described later, k(θ, θ_j) indicates the closeness between two hyperparameter values θ and θ_j. Matrix K used in equations (1) and (2) has a dimension of n rows by n columns, and k(θ_i, θ_j) represents the element at the ith row and jth column, as seen in equation (4).”, C30:L19-41).

Regarding claim 3, Ura teaches the limitations of claim 2 as outlined above. Ura further teaches wherein the generating generates an average vector, a total vector, or a maximum value vector of the case vectors; an output vector of a recursive neural network; or an output vector of an attention mechanism, as the task vector (“Column vector μ(s) used in equation (1) has a dimension of n, and its elements are μ(θ_1, s), μ(θ_2, s), . . . , μ(θ_n, s), as seen in equation (5). That is, column vector μ(s) is a collection of mean values of prediction performance, corresponding to the n hyperparameter values”, C30:L43-47).
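Claims 2 and 3 narrow how the task vector is formed: one case vector per pair, then a permutation-invariant pool. A minimal sketch of that aggregation, under the same illustrative assumptions as the sketch above (claim 3's recursive-network and attention options are noted but not implemented):

```python
import torch
import torch.nn as nn

# "first neural network" per claim 2: maps each (explanatory, response) pair to a case vector
case_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 32))

def task_vector(pairs, mode="mean"):
    """pairs: (n, 2) tensor of pairs from the first subset."""
    case_vecs = case_net(pairs)              # one case vector per pair (claim 2)
    if mode == "mean":                       # average vector (claim 3)
        return case_vecs.mean(dim=0)
    if mode == "sum":                        # total vector (claim 3)
        return case_vecs.sum(dim=0)
    if mode == "max":                        # maximum value vector (claim 3)
        return case_vecs.max(dim=0).values
    # claim 3 also allows an RNN output vector or an attention-mechanism output
    raise ValueError(f"unsupported mode: {mode}")
```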
Regarding claim 4, Ura teaches the limitations of claim 1 as outlined above. Ura further teaches wherein the second neural network includes a third neural network, a fourth neural network, and a fifth neural network, wherein the calculating calculates the predicted values through a Gaussian process using an average function defined by the third neural network, and a kernel function defined by the fourth neural network and the fifth neural network (predicted values may be calculated with a Gaussian process that uses a kernel function as described in equations 2, 3, and 4; equation 1 calculates the mean of the prediction performance, C30:L59-67, C30:L7-23, C30:L37, C30:L18. There are M total models, with M being an integer greater than one, C13:L20-32).

Regarding claim 5, Ura teaches the limitations of claim 4 as outlined above. Ura further teaches wherein the calculating calculates the predicted value of a response variable for one explanatory variable included in the second subset, using a value of the average function with respect to the task vector and the one explanatory variable, a value of the kernel function with respect to each explanatory variable included in the first subset, a value of the kernel function with respect to the one explanatory variable and said each explanatory variable included in the first subset, said each explanatory variable included in the first subset, and a value of the average function with respect to each explanatory variable included in the first subset and the task vector (the predicted value of the response variable is calculated using equation 1, which calculates the mean of the prediction with respect to column vector κ, kernel functions as described in equations 2, 3, and 4, and the average function calculation is incorporated with respect to each explanatory variable as described in equation 5, C30:L59-67, C30:L7-23, C30:L37, C30:L18, C30:L55).
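Claims 4 and 5 replace the plain decoder with a Gaussian-process read-out: a mean ("average") function from a third network and a kernel built from a fourth and a fifth network, with the GP posterior mean serving as the predicted value. The sketch below shows one plausible such construction; the specific kernel form k(a, b) = s(a)s(b)exp(-||e(a)-e(b)||²) and all dimensions are assumptions, not taken from the application.

```python
import torch
import torch.nn as nn

D = 32 + 1                                                   # assumed input: task vector + one explanatory variable
mean_net = nn.Linear(D, 1)                                   # "third neural network": average function
embed_net = nn.Sequential(nn.Linear(D, 32), nn.Tanh())       # "fourth neural network"
scale_net = nn.Sequential(nn.Linear(D, 1), nn.Softplus())    # "fifth neural network"

def kernel(a, b):
    # One plausible two-network kernel: k(a, b) = s(a) s(b) exp(-||e(a) - e(b)||^2)
    d2 = torch.cdist(embed_net(a), embed_net(b)).pow(2)
    return scale_net(a) * scale_net(b).T * torch.exp(-d2)

def gp_predict(task_vec, X_sup, y_sup, X_qry, noise=1e-2):
    """GP posterior mean m(x*) = mu(x*) + k(x*, X)(K + noise*I)^{-1}(y - mu(X)),
    using the average function and kernel over both subsets as claim 5 recites."""
    sup = torch.cat([task_vec.expand(len(X_sup), -1), X_sup], dim=1)
    qry = torch.cat([task_vec.expand(len(X_qry), -1), X_qry], dim=1)
    K = kernel(sup, sup) + noise * torch.eye(len(X_sup))     # kernel among first-subset points
    resid = y_sup[:, None] - mean_net(sup)                   # y - mu(X) on the first subset
    return mean_net(qry) + kernel(qry, sup) @ torch.linalg.solve(K, resid)
```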
Regarding claim 7, Ura teaches the limitations of claim 1 as outlined above. Ura further teaches a non-transitory computer-readable recording medium having computer-readable instructions stored thereon, which when executed, cause a computer to function as the learning apparatus according to claim 1 (“there is provided a non-transitory computer-readable medium storing a program that causes a computer to perform a procedure including…”, C2:L51-53).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISHAN MOUNDI, whose telephone number is (703) 756-1547. The examiner can normally be reached 8:30 A.M. - P.M. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Ell, can be reached at (571) 270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.M./ Examiner, Art Unit 2141
/MATTHEW ELL/ Supervisory Patent Examiner, Art Unit 2141

Prosecution Timeline

Nov 30, 2022: Application Filed
Aug 05, 2025: Non-Final Rejection (§102)
Oct 02, 2025: Interview Requested
Oct 15, 2025: Applicant Interview (Telephonic)
Oct 15, 2025: Examiner Interview Summary
Nov 11, 2025: Response Filed
Feb 04, 2026: Final Rejection (§102)
Apr 10, 2026: Examiner Interview Summary
Apr 10, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561970: METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR IMAGE RECOGNITION
Granted Feb 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 12%
Grant Probability With Interview: 46% (+33.3%)
Median Time to Grant: 4y 6m
PTA Risk: Moderate
Based on 16 resolved cases by this examiner. Grant probability derived from career allow rate.
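As a worked check on these figures (an inference, not stated on the page): 2 granted ÷ 16 resolved = 12.5%, displayed as 12%; treating the interview lift as additive percentage points gives 12.5% + 33.3% ≈ 46%, matching the with-interview estimate.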
