Prosecution Insights
Last updated: April 19, 2026
Application No. 17/823,913

LEARNING APPARATUS, LEARNING SYSTEM, AND LEARNING METHOD

Final Rejection — §101, §103
Filed: Aug 31, 2022
Examiner: KIM, HARRISON CHAN YOUNG
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Kabushiki Kaisha Toshiba
OA Round: 2 (Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 50% (3 granted / 6 resolved; -5.0% vs TC avg)
Interview Lift: +33.3% among resolved cases with an interview (strong)
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 39 across all art units (33 currently pending)

Statute-Specific Performance

§101: 37.9% (-2.1% vs TC avg)
§103: 50.5% (+10.5% vs TC avg)
§102: 4.9% (-35.1% vs TC avg)
§112: 5.8% (-34.2% vs TC avg)
Comparisons are against the Tech Center average estimate. Based on career data from 6 resolved cases.

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is made final. Claims 1, 3-12, and 14-18 are pending. Claims 1, 7, and 18 are independent claims.

Response to Arguments

With respect to the 35 U.S.C. 112(f) interpretation of claim 7 in the previous office action, the interpretation is withdrawn due to the amendments filed on 11/5/25.

With respect to the 35 U.S.C. 101 rejections of the previous office action, the applicant's arguments, filed 11/5/25, have been fully considered but are not persuasive. However, due to amendments, the scope of the claims has changed. See the updated 35 U.S.C. 101 rejections below.

The applicant argues that the claimed features of amended independent claim 1 do not encompass mental processes. However, in the previous rejection, the only feature described as a mental process was “generate a plurality of pieces of partial data from a mini-batch of learning data used for a plurality of learning processes for learning of a parameter of a neural network using an objective function”. As explained by the examiner, dividing data into a plurality of pieces is a mental process. The other limitations mentioned are described as mathematical calculations. Applicant further argues that the recited steps are “not claimed as mathematical steps per se”; however, the examiner argues that they clearly reflect mathematical calculations (i.e., calculating a gradient, updating a parameter, which is performed using a mathematical formula, finding an average, finding a variance).

The applicant argues that the claims integrate a practical application under Step 2A, Prong 2. The examiner argues that there are no additional elements recited in the previously submitted claim 1 or the amended claim 1, and therefore the claims cannot integrate the recited abstract ideas into a practical application. Applicant does not describe any specific limitations of claim 1 as additional elements. As stated in MPEP 2106.05(a), ¶6, “It is important to note, the judicial exception alone cannot provide the improvement. The improvement can be provided by one or more additional elements.” The applicant argues that the amended claim 1 recites specific technical improvements, but the recited limitations are mathematical calculations, i.e., abstract ideas, that cannot alone provide the improvement. The applicant reiterates that claim 1 “recites specific technical steps that go beyond mere mental processes or mathematical calculations” but does not explain how the limitations go beyond the mental and mathematical steps.

With respect to the 35 U.S.C. 103 rejections of the previous office action, the applicant's arguments, filed 11/5/25, have been fully considered but are not persuasive. However, due to amendments, the scope of the claims has changed. See the updated 35 U.S.C. 103 rejections below.

The applicant argues that the previous office action asserts “that Toda does not disclose [calculating the overall gradients] by using the average value and the variance value”. The examiner argues that Toda at least suggests the idea of using an average value and variance of gradients because, as noted in the previous 103 rejection, Toda references an embodiment using the Adam update method as an alternative to standard gradient descent (¶49, For example, learning methods (optimization algorithms) such as… Adam may be used).
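For orientation, since both the arguments and the rejections below repeatedly invoke it, the Adam update rule as presented in Ruder's overview (pg. 8, eq. 21, cited throughout this action) has the following standard form. This is a reconstruction of the well-known formulation, not a quotation from Ruder:

```latex
% Adam update rule (standard form; cf. Ruder pg. 8): g_t is the gradient at
% step t, \beta_1 and \beta_2 are decay rates, \eta is the step size, and
% \epsilon is a small constant that prevents division by zero.
\begin{aligned}
  m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
      &&\text{(running mean of gradients)}\\
  v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^{2}
      &&\text{(running mean of squared gradients)}\\
  \hat{m}_t &= \frac{m_t}{1 - \beta_1^{t}}, \quad
  \hat{v}_t = \frac{v_t}{1 - \beta_2^{t}}
      &&\text{(bias correction)}\\
  \theta_{t+1} &= \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \epsilon}\, \hat{m}_t
      &&\text{(parameter update, eq. 21)}
\end{aligned}
```

Here m̂t is a bias-corrected running mean of the gradients and v̂t a bias-corrected running mean of their squares, which is why the action treats them as an “average” and a “variance” of gradients.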
The applicant argues that Ruder does not teach or suggest that the parameters of the objective function may be updated based on an average value of the plurality of partial gradients corresponding to the pieces of partial data and a variance for the partial gradients, because Ruder only operates on gradients in a general sense without referencing the idea that gradients are derived from distinct pieces of partial data. The applicant further argues that Toda explicitly limits the calculation of the overall gradient to an average of partial gradients, without incorporating variance. The examiner again argues that Toda suggests using the Adam update rule (as referenced above), and therefore does not “explicitly limit” the overall gradient calculation to using the mean of gradients. The examiner agrees that Ruder's formula for the Adam update rule does not explicitly operate on gradients derived from distinct pieces of partial data, but argues that Toda teaches operations involving gradients derived from distinct pieces of partial data; the examiner further argues that the fact that the gradients originate from distinct pieces of partial data would not prevent the Adam update formula described in Ruder (or mentioned in Toda) from being used.

The applicant argues that a person of ordinary skill in the art would not have been motivated to modify Toda to incorporate the variance of the partial gradients into the overall gradient calculation, because Toda allegedly limits the gradient calculation to an average without incorporating variance, and Ruder fails to mention partial gradient data or partial data. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, as Toda explicitly refers to using the Adam update rule and Ruder defines the Adam update rule, the examiner concludes that the motivation to combine the references is present.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-12 and 14-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1:

Step 1: This part of the eligibility analysis evaluates whether the claim falls within any statutory category. See MPEP 2106.03. Claim 1 is directed to an apparatus (Step 1: YES).

Step 2A, Prong 1: Does the claim recite a judicial exception?
Claim 1 recites: A learning apparatus comprising processing circuitry configured to: generate a plurality of pieces of partial data from a mini-batch of learning data used for a plurality of learning processes for learning of a parameter of a neural network using an objective function (dividing data into a plurality of pieces is a mental process); calculate a partial gradient that is a gradient related to the parameter of the objective function for each of the pieces of partial data (calculating a partial gradient that corresponds to partial data is a mathematical calculation); calculate an average value of the plurality of partial gradients corresponding to the pieces of partial data (taking the average value of gradients is a mathematical formula); calculate a variance for the plurality of partial gradients (finding the variance of gradients is a mathematical formula); calculate an overall gradient that is a gradient of the objective function for the mini-batch by using the average value and the variance (calculating an overall gradient is a mathematical calculation); and update the parameter based on the overall gradient (updating the parameter is a mathematical calculation, i.e., adding a gradient component). These steps can be performed mentally or are mathematical calculations (Step 2A, Prong 1: YES). For orientation, a code sketch of these recited steps appears below.

Step 2A, Prong 2: Does the claim recite additional elements? Do those additional elements, considered individually and in combination, integrate the judicial exception into a practical application? Claim 1 does not recite any additional elements (Step 2A, Prong 2: NO).

Step 2B: Since there are no additional elements, the abstract ideas are not integrated into a practical application. These limitations, taken either alone or in combination, fail to provide an inventive concept (Step 2B: NO). Thus, the claim is not patent eligible.

Regarding claims 3-6, they recite limitations which further narrow the abstract idea by specifying more details of the mental and mathematical processes that occur (claim 3, calculating the overall gradient as described is a mathematical calculation; claim 4, calculating the overall gradient as described is a mathematical calculation; claim 5, calculating noise and calculating partial gradients are mathematical calculations; claim 6, calculating noise by using a difference between gradients is a mathematical calculation).

Regarding claim 7, it is an apparatus similar to the one described in claim 1 and is rejected on the same grounds.

Regarding claims 8-12, they recite limitations which further narrow the abstract idea by specifying more details of the mental and mathematical processes that occur (claim 8, sharing data by transmitting data over a network is an additional element that is well-understood, routine, and conventional activity that does not integrate the judicial exception into a practical application, see MPEP 2106.05(d)(II); claim 9, specifying the type of data being transmitted is still transmitting data; claim 10, calculating the square of a gradient is a mathematical calculation; claim 11, transmitting data at different timings is still transmitting data, i.e., well-understood, routine, and conventional activity that does not integrate the judicial exception into a practical application, and selecting timings can be performed mentally; claim 12, transmitting data at the same timing is still transmitting data, and selecting a timing for transmission is a mental process).
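As flagged in the claim 1 analysis above, the following minimal NumPy sketch lays out the six recited operations in order. It is hypothetical and illustrative only: grad_fn, the learning rate, and the eps safeguard are assumptions rather than anything disclosed in the application, and step (5) uses the specific combination recited in dependent claim 3, since claim 1 itself only requires that the average value and the variance be used.

```python
import numpy as np

def update_parameter(theta, mini_batch, grad_fn, lr=0.01, n_pieces=4, eps=1e-8):
    """Illustrative sketch of the operations recited in claim 1.

    theta      : current parameter vector of the neural network
    mini_batch : array of learning data for one update step
    grad_fn    : callable(theta, data) -> gradient of the objective
                 function with respect to theta (assumed to be given)
    """
    # (1) generate a plurality of pieces of partial data from the mini-batch
    pieces = np.array_split(mini_batch, n_pieces)

    # (2) calculate a partial gradient for each piece of partial data
    partial_grads = np.stack([grad_fn(theta, piece) for piece in pieces])

    # (3) calculate the average value of the partial gradients
    avg = partial_grads.mean(axis=0)

    # (4) calculate a variance for the partial gradients
    var = partial_grads.var(axis=0)

    # (5) calculate the overall gradient using the average and the variance;
    #     this follows the claim 3 combination avg / sqrt(avg^2 + var), with
    #     eps added here purely as a numerical safeguard (an assumption)
    overall = avg / np.sqrt(avg**2 + var + eps)

    # (6) update the parameter based on the overall gradient
    return theta - lr * overall
```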
Regarding claims 14-17, they recite similar limitations as claims 3-6 above and are rejected on the same grounds; see above. Regarding claim 18, it is a method similar to the steps implemented by the apparatus of claim 1 and is rejected on the same grounds; see above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3, 5, 6 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Toda et al. (US 20200234082 A1, INCLUDED IN IDS), herein Toda, in view of Ruder (“An overview of gradient descent optimization algorithms”, INCLUDED IN IDS).

Regarding claim 1, Toda teaches: A learning apparatus comprising processing circuitry configured to: generate a plurality of pieces of partial data from a mini-batch of learning data used for a plurality of learning processes for learning of a parameter of a neural network using an objective function (Abstract, processors generate a plurality of pieces of learning data to be used in a plurality of learning processes, respectively, to learn a parameter of a neural network using an objective function); calculate a partial gradient that is a gradient related to the parameter of the objective function for each of the pieces of partial data (Abstract, the processors calculate a first partial gradient using a partial data); calculate an average value of the plurality of partial gradients corresponding to the pieces of partial data (¶41, equation 1 is an average of partial gradients)… calculate an overall gradient that is a gradient of the objective function for the mini-batch (¶40, in each learning process, calculation of partial gradients for a plurality of partial mini-batches obtained by dividing a mini-batch, calculation of an overall gradient using the partial gradients, and the like are performed)… and update the parameter based on the overall gradient (¶43, updating the weight… using the overall gradient).

Toda fails to explicitly teach: calculate a variance for the plurality of partial gradients… calculate an overall gradient that is a gradient of the objective function for the mini-batch by using the average value and the variance. Toda suggests using the Adam method as an alternative to standard gradient descent (¶49, For example, learning methods (optimization algorithms) such as… Adam may be used), but Ruder more specifically teaches: calculate a variance for the plurality of partial gradients (pg. 8, eq. 21, the Adam update rule includes v̂t, which is a variance (of a plurality) of gradients)… calculate an overall gradient that is a gradient of the objective function for the mini-batch by using the average value and the variance (pg. 8, eq. 21, the Adam update rule is determined by m̂t and v̂t, the mean and variance of gradients respectively).
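One mathematical point underlies the examiner's characterization here and the “equivalent to” assertion made for claims 3, 4, and 10 below: v̂t in Adam and E[gt²] in RMSprop are estimates of the uncentered second moment of the gradients rather than the variance in the strict statistical sense. The two quantities are linked by the standard identity:

```latex
% Uncentered second moment = squared mean + variance
\mathbb{E}\!\left[g^{2}\right] \;=\; \bigl(\mathbb{E}[g]\bigr)^{2} + \operatorname{Var}(g)
```

Read this way, an update that divides by the square root of E[g²] is an update by “a product of the average value and a reciprocal of a square root of a sum of a square of the average value and the variance,” which appears to be the equivalence the examiner relies on.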
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the Adam update rule as disclosed by Ruder in the apparatus disclosed by Toda to achieve better performance (pg. 8, ¶2, works well in practice and compares favorably to other adaptive learning-method algorithms).

Regarding claim 3, Toda fails to teach: The learning apparatus according to claim 1, wherein the processing circuitry is further configured to calculate the overall gradient by a product of the average value and a reciprocal of a square root of a sum of a square of the average value and the variance. However, in the same field of endeavor, Ruder teaches: wherein the processing circuitry is further configured to calculate the overall gradient by a product of the average value and a reciprocal of a square root of a sum of a square of the average value and the variance (pg. 7, eq. 18, the update rule is determined by the gradient gt times the reciprocal of the square root of E[gt²], which is equivalent to the sum of a square of the average value and the variance). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate the overall gradient by scaling the average value of the gradient by the reciprocal of a square root of a sum of a square of the average value and the variance as disclosed by Ruder in the apparatus disclosed by Toda to achieve better performance (pg. 5, ¶2, to adapt our updates to each individual parameter to perform larger or smaller updates depending on their importance; and pg. 7, Section 4.5, RMSprop is an unpublished, adaptive learning rate method).

Regarding claim 5, Toda teaches: The learning apparatus according to claim 1, wherein the processing circuitry is further configured to calculate noise to be added to the parameter for each of the pieces of partial data, and calculate the partial gradient for the parameter to which the noise is added (Abstract, calculate a first partial gradient using a partial data and the parameter added with noise).

Regarding claim 6, Toda teaches: The learning apparatus according to claim 5, wherein the processing circuitry is further configured to calculate the noise by using a difference between an immediately preceding overall gradient calculated by an immediately preceding parameter update and an immediately preceding partial gradient of each of the pieces of immediately preceding partial data used at the time of the immediately preceding parameter update (¶47, the noise θt+1¹ is calculated based on a difference between the overall gradient Gt and a partial gradient gt¹).

Regarding claim 18, it recites similar limitations to claim 1 and is rejected on the same grounds; see above.

Claim(s) 4 is/are rejected under 35 U.S.C. 103 as being unpatentable over Toda in view of Ruder as applied to claim 1 above, and further in view of Ida et al. (US 20190156240 A1, INCLUDED IN IDS), herein Ida.

Regarding claim 4, Toda fails to teach: The learning apparatus according to claim 1, wherein the processing circuitry is further configured to calculate the overall gradient by a product of the average value and a reciprocal of a square root of a sum of a square of the average value and a moving average of the variance.
However, in the same field of endeavor, Ruder teaches: wherein the processing circuitry is further configured to calculate the overall gradient by a product of the average value and a reciprocal of a square root of a sum of a square of the average value and… the variance (pg. 7, eq. 18, the update rule is determined by the gradient gt times the reciprocal of the square root of E[gt²], which is equivalent to the sum of a square of the average value and the variance). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to calculate the overall gradient by scaling the average value of the gradient by the reciprocal of a square root of a sum of a square of the average value and the variance as disclosed by Ruder in the apparatus disclosed by Toda to achieve better performance (pg. 5, ¶2, to adapt our updates to each individual parameter to perform larger or smaller updates depending on their importance; and pg. 7, Section 4.5, RMSprop is an unpublished, adaptive learning rate method).

Toda in view of Ruder fails to teach: a moving average of the variance. However, in the same field of endeavor, Ida teaches: a moving average of the variance (fig. 2, S6, using the moving average of variance instead of variance). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use a moving average of variance as disclosed by Ida in the apparatus disclosed by Toda in view of Ruder to improve training efficiency (¶73, it is possible to achieve more efficient learning).

Claim(s) 7-10, 14, 16 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Toda in view of Kawai et al. (US 20210209443 A1, INCLUDED IN IDS), herein Kawai, in view of Ruder.
Regarding claim 7, Toda teaches: A learning system comprising: a plurality of learning apparatuses each comprising first processing circuitry configured to learn parameters of a neural network by using an objective function… generate a plurality of pieces of partial data from a mini-batch of learning data used for a plurality of learning processes for learning of the parameters and allocate the pieces of partial data to the corresponding learning apparatuses (Abstract, processors generate a plurality of pieces of learning data to be used in a plurality of learning processes, respectively, to learn a parameter of a neural network using an objective function), and the first processing circuitry of each of the learning apparatuses is configured to: calculate a partial gradient that is a gradient related to the parameter of the objective function for the allocated partial data (¶40, in each learning process, calculation of partial gradients for a plurality of partial mini-batches obtained by dividing a mini-batch), the first processing circuitry of a specific learning apparatus among the learning apparatuses is configured to: calculate an average value of a plurality of partial gradients corresponding to the pieces of partial data (¶41, equation 1 is an average of partial gradients)… calculate an overall gradient that is a gradient of the objective function for the mini-batch (¶40, in each learning process, calculation of partial gradients for a plurality of partial mini-batches obtained by dividing a mini-batch, calculation of an overall gradient using the partial gradients, and the like are performed)… and the first processing circuitry of each of the learning apparatuses is further configured to: update the parameter based on the overall gradient (¶43, updating the weight… using the overall gradient).

Toda fails to explicitly teach: and a management apparatus comprising second processing circuitry configured to manage the learning apparatuses, wherein the second processing circuitry of the management apparatus is configured to… However, in the same field of endeavor, Kawai teaches: and a management apparatus (fig. 22, aggregation processing node 101 handles aggregation computation) comprising second processing circuitry configured to manage the learning apparatuses (fig. 22, distributed processing nodes 100[1] through 100[N] handle distributed processing), wherein the second processing circuitry of the management apparatus is configured to… Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to distribute processing via an aggregation node and multiple distributed nodes as disclosed by Kawai in the system disclosed by Toda to increase processing efficiency (¶6, consequently, it is possible to increase, in proportion to the number of nodes, the number of sample data that can be processed in a unit time).

Toda in view of Kawai fails to explicitly teach: calculate a variance for the plurality of partial gradients… by using the average value and the variance, and the first processing circuitry of each of the learning apparatuses is further configured to: update the parameter based on the overall gradient. However, in the same field of endeavor, Ruder teaches: calculate a variance (pg. 8, eq. 21, v̂t is the variance of gradients) for the plurality of partial gradients… by using the average value and the variance (pg. 8, eq. 21, the update rule is determined by m̂t and v̂t, the mean and variance of gradients respectively).
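To make the proposed Toda, Kawai, and Ruder combination for claim 7 easier to picture, here is a hypothetical sketch of the division of labor between a management (aggregation) apparatus and the learning apparatuses. All class and function names are illustrative and not drawn from any of the references, and the mean-plus-variance combination again follows the claim language rather than Ruder's exact Adam formulas.

```python
import numpy as np

class ManagementApparatus:
    """Hypothetical aggregation node (the role Kawai's node 101 plays)."""

    def allocate(self, mini_batch, n_workers):
        # generate pieces of partial data and allocate one piece per
        # learning apparatus
        return np.array_split(mini_batch, n_workers)

class LearningApparatus:
    """Hypothetical distributed node (the role Toda's learning devices play)."""

    def __init__(self, grad_fn):
        # grad_fn(theta, data) -> gradient of the objective function;
        # assumed to be supplied by the caller
        self.grad_fn = grad_fn

    def partial_gradient(self, theta, piece):
        return self.grad_fn(theta, piece)

def distributed_step(manager, workers, theta, mini_batch, lr=0.01, eps=1e-8):
    # each learning apparatus computes a partial gradient for its piece
    pieces = manager.allocate(mini_batch, len(workers))
    grads = np.stack([w.partial_gradient(theta, p)
                      for w, p in zip(workers, pieces)])
    # a designated apparatus computes the average and variance of the
    # partial gradients, then the overall gradient from them
    avg, var = grads.mean(axis=0), grads.var(axis=0)
    overall = avg / np.sqrt(avg**2 + var + eps)
    # every apparatus then updates its copy of the parameter
    return theta - lr * overall
```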
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the Adam update rule as disclosed by Ruder in the apparatus disclosed by Toda in view of Kawai to achieve better performance (pg. 8, ¶2, works well in practice and compares favorably to other adaptive learning-method algorithms).

Regarding claim 8, Toda teaches: The learning system according to claim 7, wherein the first processing circuitries of the learning apparatuses share gradient information on update of the parameter by communicating with each other (fig. 9, learning devices 100-2-1 and 100-2-2 are able to communicate, as discussed in ¶77, the calculation units 102-2 of the respective learning devices 100-2 can acquire a sum of a plurality of partial gradients calculated by the plurality of learning devices 100-2 using a collective communication algorithm).

Regarding claim 9, Toda teaches: The learning system according to claim 8, wherein the gradient information includes information of the partial gradient (¶77, the calculation units 102-2 of the respective learning devices 100-2 can acquire a sum of a plurality of partial gradients calculated by the plurality of learning devices 100-2 using a collective communication algorithm).

Regarding claim 10, Toda fails to teach: The learning system according to claim 9, wherein the first processing circuitry of each of the learning apparatuses calculates a square of the partial gradient, and the gradient information further includes information of the square of the partial gradient. However, in the same field of endeavor, Ruder teaches: wherein the first processing circuitry of each of the learning apparatuses calculates a square of the partial gradient, and the gradient information further includes information of the square of the partial gradient (pg. 7, eq. 18, the update rule is determined by E[gt²] in RMSprop). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use the RMSprop learning method, which utilizes squared gradients, as disclosed by Ruder in the system disclosed by Toda in view of Kawai to achieve better performance (pg. 5, ¶2, to adapt our updates to each individual parameter to perform larger or smaller updates depending on their importance).

Regarding claims 14, 16 and 17, they recite limitations similar to claims 3, 5 and 6, respectively, and are rejected on the same grounds; see above.

Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Toda in view of Kawai and Ruder as applied to claim 10 above, and further in view of Seshadri et al. (US 20190362227 A1), herein Seshadri.

Regarding claim 11, Toda in view of Kawai and Ruder fails to explicitly teach: The learning system according to claim 10, wherein the first processing circuitries of the learning apparatuses share the information of the partial gradient and the information of the square of the partial gradient at different timings. However, in the same field of endeavor, Seshadri teaches: wherein the first processing circuitries of the learning apparatuses share the information of the partial gradient and the information of the square of the partial gradient at different timings (¶53, after completing backward work for a minibatch, each stage asynchronously sends the gradients to the previous stage, while starting computation for another minibatch).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to asynchronously share gradient information as disclosed by Seshadri in the system disclosed by Toda in view of Kawai and Ruder to reduce learning apparatus downtime (¶52, each worker 112 is, therefore, busy either doing the forward pass or backward pass for a minibatch in the steady state).

Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over Toda in view of Kawai and Ruder as applied to claim 10 above, and further in view of Kadav et al. (US 20160103901 A1), herein Kadav.

Regarding claim 12, Toda in view of Kawai and Ruder fails to explicitly teach: The learning system according to claim 10, wherein the first processing circuitries of the learning apparatuses share the information of the partial gradient and the information of the square of the partial gradient at the same timing. However, in the same field of endeavor, Kadav teaches: wherein the first processing circuitries of the learning apparatuses share the information of the partial gradient and the information of the square of the partial gradient at the same timing (¶18, first, units may train independently and synchronize parameters when all parallel units finish training by exhausting their training data). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to synchronously share data updates as disclosed by Kadav in the system disclosed by Toda in view of Kawai and Ruder to reduce communication cost (¶18, these methods are commonly used to train over Hadoop, where communication costs are prohibitive).

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Toda in view of Kawai and Ruder as applied to claim 10 above, and further in view of Ida. Regarding claim 15, it recites limitations similar to claim 4 and is rejected on the same grounds; see above.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HARRISON CHAN YOUNG KIM, whose telephone number is (571) 272-0713. The examiner can normally be reached Monday - Thursday, 10:00 am - 7:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Cesar Paula, can be reached at (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HARRISON C KIM/
Examiner, Art Unit 2145

/CESAR B PAULA/
Supervisory Patent Examiner, Art Unit 2145

Prosecution Timeline

Aug 31, 2022
Application Filed
Aug 23, 2025
Non-Final Rejection — §101, §103
Oct 16, 2025
Interview Requested
Oct 28, 2025
Applicant Interview (Telephonic)
Nov 05, 2025
Response Filed
Nov 05, 2025
Examiner Interview Summary
Jan 30, 2026
Final Rejection — §101, §103 (current)

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 50%
With Interview: 83% (+33.3%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
