Prosecution Insights
Last updated: April 19, 2026
Application No. 17/690,647

System and Method for Resource Efficient Natural Language Processing

Non-Final OA · §101 · §103 · §112 · Other
Filed: Mar 09, 2022
Examiner: KEATON, SHERROD L
Art Unit: 2148
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Siemens Aktiengesellschaft
OA Round: 3 (Non-Final)
Grant Probability: 52% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 6m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 52% of resolved cases (295 granted / 563 resolved; -2.6% vs TC avg)
Interview Lift: +36.1% for resolved cases with interview (strong lift)
Avg Prosecution: 4y 6m (32 applications currently pending)
Total Applications: 595 across all art units

Statute-Specific Performance

§101: 14.9% (-25.1% vs TC avg)
§103: 62.0% (+22.0% vs TC avg)
§102: 11.1% (-28.9% vs TC avg)
§112: 8.0% (-32.0% vs TC avg)
Tech Center averages are estimates. Based on career data from 563 resolved cases.

Office Action

DETAILED ACTION

This action is in response to the RCE filing of 10-15-2025. Claims 1-13 are pending and have been considered below.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-13 are method and system claims; therefore claims 1-13 are directed to either a process, machine, manufacture, or composition of matter.

Regarding claim 1:

2A Prong 1: "a feed-forward network configured to learn parameters, based on the input, that result in best function approximation that defines a second output". As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion; a user can determine parameters).

2A Prong 2: This judicial exception is not integrated into a practical application. Additional elements:

"a processor; and a non-transitory memory having stored thereon modules executed by the processor, the modules comprising" (mere instructions to apply the exception using a generic computer component);

"an encoder comprising: a multi-head attention block configured to perform nonlinear transformation of an input, so as to generate a first output" (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f));

"wherein the multi-head attention block and the feed-forward network are connected in parallel with each other to generate the first output and the second output at the same time as each other, the first output and the second output combined to produce a summed output; and an ordinary differential equation (ODE) solver configured to perform continuous depth integration of the summed output from the multi-head attention block and the feed-forward network" (see MPEP 2106.05(f), as above).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. The same additional elements identified under 2A Prong 2 are, at most, mere instructions to apply the exception using a generic computer component (MPEP 2106.05(f)).
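For readers unfamiliar with the claimed encoder topology, the parallel arrangement at issue in claim 1 (a multi-head attention block and a feed-forward network each consuming the same input, their outputs added into a single summed output) can be sketched as a toy example. All weights, dimensions, and the ReLU feed-forward below are invented for illustration; this is not code from the application or the cited art.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 4, 8, 2
d_head = d_model // n_heads

# Invented toy weights, purely for illustration.
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))
W1 = rng.standard_normal((d_model, 16)) * 0.1
W2 = rng.standard_normal((16, d_model)) * 0.1

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x):
    """Nonlinear transformation of the input: the 'first output'."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # Split into heads: (n_heads, seq_len, d_head).
    split = lambda t: t.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head)) @ v
    return attn.transpose(1, 0, 2).reshape(seq_len, d_model) @ Wo

def feed_forward(x):
    """Learned function approximation: the 'second output'."""
    return np.maximum(x @ W1, 0.0) @ W2

x = rng.standard_normal((seq_len, d_model))
# The two blocks are connected in parallel: both consume the same input x
# (rather than one feeding the other), and their outputs are combined
# into a single summed output.
summed_output = multi_head_attention(x) + feed_forward(x)
print(summed_output.shape)  # (4, 8)
```

The distinction the rejection turns on is visible in the last lines: in a standard Transformer the feed-forward sublayer consumes the attention sublayer's output sequentially, whereas here both read `x` directly and only their sum moves forward.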
Regarding claim 2:

2A Prong 1: No additional abstract ideas.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional element: "wherein the ODE solver uses an adjoint sensitivity method to run back propagation through black-box ODE solvers" (mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f)).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons.

Regarding claim 3:

2A Prong 1: No additional abstract ideas.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional element: "wherein the ODE solver uses a time-invariant differential equation to learn values of the multi-head attention block and feed-forward network" (see MPEP 2106.05(f), as above).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons.

Regarding claim 4:

2A Prong 1: No additional abstract ideas.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional element: "wherein the ODE solver uses a time-varying differential equation to learn values of the multi-head attention block and feed-forward network" (see MPEP 2106.05(f), as above).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons.

Regarding claim 5:

2A Prong 1: No additional abstract ideas.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional element: "wherein a tunable parameter determines the number of time steps over which integration is performed" (see MPEP 2106.05(f), as above).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons.

Regarding claim 6:

2A Prong 1: No additional abstract ideas.

2A Prong 2: This judicial exception is not integrated into a practical application. Additional element: "wherein a Runge-Kutta (RK4) numerical integrator uses a fourth-order formula for obtaining numerical solutions of differential equations" (see MPEP 2106.05(f), as above).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons.

Regarding claim 7:

2A Prong 1: "a second feed-forward network configured to learn parameters that result in best function approximation". As drafted, under the broadest reasonable interpretation, the claim covers mental processes (concepts performed in the human mind, including an observation, evaluation, judgment, or opinion; a user can determine parameters).

2A Prong 2: This judicial exception is not integrated into a practical application.
Additional elements:

"a decoder comprising: a first multi-head attention block configured to perform nonlinear transformation of encoder outputs" (adding the words "apply it" (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea; see MPEP 2106.05(f));

"a second multi-head attention block configured to perform nonlinear transformation of decoder outputs shifted right" (see MPEP 2106.05(f), as above);

"wherein the first multi-head attention block, the second multi-head attention block, and the feed-forward network are connected in parallel to produce a second summed output; and an ODE solver configured to perform continuous depth integration of the second summed output" (see MPEP 2106.05(f), as above).

2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception, for the same reasons.

Claims 8-13 are similar in scope to claims 1-6 and are analyzed and rejected under the same rationale.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1 and 8 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The claims recite "best function approximation"; however, the term "best" has not been quantitatively established and is therefore indefinite.
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5, 7-9 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Dalli et al. (US 20220198254 A1, "Dalli") in view of Li et al. (US 20220138536 A1, "Li"), Pu et al. (US 20210390873 A1, "Pu"), and Chen et al., Neural Ordinary Differential Equations, 12-14-2019, pages 1-18 ("Chen").
Claim 1: Dalli discloses a system for performing natural language processing, comprising: a processor (Paragraph 61; processor); and a non-transitory memory having stored thereon modules executed by the processor, the modules (Paragraph 229; memory and processing) comprising: an encoder (Paragraph 6) comprising: a multi-head attention block configured to perform nonlinear transformation of an input, so as to generate a first output (Figures 2 and 7: 215/240; Paragraphs 6, 124 (multi-head block), 76, 150 (model can perform non-linear transformation and provides a first output)); and a feed-forward network configured to learn parameters, based on the input, that result in best function approximation that defines a second output (Figure 2: 275 and Figure 7: 230/250; Paragraphs 117, 123-125; feed-forward model).

Dalli discloses parallelization (Figure 7; Paragraphs 123-125; embodiment with parallel methods for output); however, to further disclose wherein the multi-head attention block and the feed-forward network are connected in parallel with each other to generate the first output and the second output at the same time as each other, the first output and second output combined to produce a summed output, Li is provided. Li discloses a transformer encoder module including a multi-head attention block and feed-forward layer, and further discloses that each module can be executed in parallel (Paragraphs 84-85). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply a known technique to a known device ready for improvement and incorporate the parallel module configuration with the parallelization of Dalli. One would have been motivated to provide the functionality because this method provides speed optimization.

Pu also discloses a transformer block which can include a multi-head attention block and/or a feed-forward layer, and further that the configuration can be provided in parallel (Paragraph 84). The parallel capability can be applied to the transformers containing multi-head attention blocks and feed-forward layers of the modified Dalli. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply a known technique to a known device ready for improvement and incorporate a configuration of parallelization for the components found in Dalli. One would have been motivated to provide the parallel structure for maximized computation efficiency, enhanced training speed, and better gradient flow.

Dalli discloses the capability to use an ordinary differential equation (ODE) solver (Paragraph 247; ODE utilized) and a summed output of a multi-head attention block with a feed-forward network (Figure 7 and Paragraph 167), but may not explicitly disclose an ODE solver configured to perform continuous depth integration of the summed output from the multi-head attention block and the feed-forward network. Chen is provided because it discloses ODE solver functionality and further discloses that these models utilize depth integration (abstract; Page 2, Section 2, Paragraphs 1-2); it is understood that Neural ODE functionality includes continuous depth integration. Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply a known technique to a known device ready for improvement and incorporate the continuous depth models (integration) within the ODE models of Dalli. One would have been motivated to provide the functionality because this method reduces computation/memory utilization, which minimizes cost (Chen: abstract).
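The "continuous depth integration" that the rejection attributes to Chen's Neural ODEs can be illustrated in miniature: a block function is treated as the time derivative of the hidden state and integrated with a numerical solver (here the classic fourth-order Runge-Kutta scheme that claim 6 also recites). The dynamics function, dimensions, and step counts below are invented for this sketch; it is neither the claimed system nor Chen's code.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
W = rng.standard_normal((d, d)) * 0.1  # invented toy weights

def f(t, h):
    # Stand-in for the summed attention + feed-forward output,
    # treated as the time derivative dh/dt of the hidden state.
    return np.tanh(h @ W)

def rk4_integrate(f, h0, t0=0.0, t1=1.0, steps=10):
    """Classic fourth-order Runge-Kutta integration of dh/dt = f(t, h)."""
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        k1 = f(t, h)
        k2 = f(t + dt / 2, h + dt * k1 / 2)
        k3 = f(t + dt / 2, h + dt * k2 / 2)
        k4 = f(t + dt, h + dt * k3)
        h = h + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
    return h

h0 = rng.standard_normal(d)
h1 = rk4_integrate(f, h0)                  # continuous-depth "forward pass"
h1_fine = rk4_integrate(f, h0, steps=100)  # finer integration for comparison
print(h1.shape)  # (4,)
```

The solver's step count plays the role of network depth: instead of a fixed stack of layers, the hidden state evolves continuously and the integrator chooses how finely to discretize, which is the premise of the "tunable parameter" in claim 5.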
Claim 2: Dalli, Li, Pu and Chen disclose the system of claim 1, wherein the ODE solver uses an adjoint sensitivity method to run back propagation (Dalli: Paragraphs 78, 87, 119) through black-box (Dalli: Paragraphs 118, 166) ODE solvers (Dalli: Paragraph 247). Dalli's XTT model uses gradient descent as a back-propagation method along with the ODE solvers (Chen: Page 2, Section 2, Paragraph 2; compute as a black box and through adjoint sensitivity).

Claim 5: Dalli, Li, Pu and Chen disclose the system of claim 1, wherein a tunable parameter determines the number of time steps over which integration is performed (Dalli: Paragraphs 181, 253, 258; parameters modifiable; Chen: Page 5, Section 4, Time-dependent dynamics; parameters can be specified in order to learn a normalizing flow (determines how to incorporate)).

Claim 7: Dalli, Li, Pu and Chen disclose the system of claim 1, further comprising: a decoder comprising: a first multi-head attention block configured to perform nonlinear transformation of encoder outputs; a second multi-head attention block configured to perform nonlinear transformation of decoder outputs shifted right; a second feed-forward network configured to learn parameters that result in best function approximation (Dalli: Figures 2 and 6-7; added with explanation to produce the summed output); wherein the first multi-head attention block, the second multi-head attention block, and the feed-forward network are connected in parallel to produce a second summed output (Li discloses a transformer encoder module including a multi-head attention block and feed-forward layer, and further that multiple modules can be executed in parallel (Paragraphs 84-85)); and another ODE solver configured to perform continuous depth integration of the second summed output (Chen discloses ODE solver functionality and further discloses that these models utilize depth integration (abstract; Page 2, Section 2, Paragraphs 1-2)).
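The adjoint sensitivity method cited for claim 2 (running backpropagation through a black-box ODE solver without storing the forward trajectory) can be checked on a scalar ODE whose gradient is known in closed form. The toy problem, constants, and simple Euler discretization below are invented for illustration only.

```python
import math

# Toy problem: dz/dt = theta * z, z(0) = z0, loss L = z(T).
# Analytic gradient: dL/dtheta = z0 * T * exp(theta * T).
theta, z0, T, N = 0.3, 1.5, 1.0, 4000
dt = T / N

# Forward solve (stands in for the black-box ODE solver).
z = z0
for _ in range(N):
    z += dt * theta * z

# Backward (adjoint) solve from t = T down to t = 0, re-integrating z
# in reverse instead of storing the forward trajectory:
#   da/dt = -a * df/dz = -a * theta        (adjoint a(t) = dL/dz(t))
#   dL/dtheta = integral_0^T a(t) * df/dtheta dt = integral a * z dt
a, g = 1.0, 0.0  # a(T) = dL/dz(T) = 1, gradient accumulator g
for _ in range(N):
    g += dt * a * z
    z -= dt * theta * z  # run the state backward in time
    a += dt * theta * a  # adjoint grows going backward

analytic = z0 * T * math.exp(theta * T)
print(abs(g - analytic) < 1e-2)  # adjoint gradient matches the closed form
```

The appeal of this construction, as described in the Chen reference, is constant memory cost: only the terminal state is kept, and the state is reconstructed on the fly during the backward pass.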
Claim 8 is similar in scope to claim 1 and is therefore rejected under the same rationale. Claim 9 is similar in scope to claim 2 and is therefore rejected under the same rationale. Claim 12 is similar in scope to claim 5 and is therefore rejected under the same rationale.

Claims 3-4 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Dalli, Li, Pu and Chen, further in view of Han et al. (US 8935137 B1, "Han").

Claim 3: Dalli, Li, Pu and Chen disclose the system of claim 1, but may not explicitly disclose wherein the ODE solver uses a time-invariant differential equation to learn values of the multi-head attention block and feed-forward network. Han is provided because it discloses a processing method whose system utilizes an ODE solver with both a time-varying system and a time-invariant method (Column 16, Lines 5-35). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply a known technique to a known device ready for improvement and incorporate the different time methods with the ODE solver of Dalli. One would have been motivated to provide the functionality because this method expands computation capabilities, providing a more robust system.

Claim 4: Dalli, Li, Pu and Chen disclose the system of claim 1, but may not explicitly disclose wherein the ODE solver uses a time-varying differential equation to learn values of the multi-head attention block and feed-forward network. Han discloses an ODE solver with both time-varying and time-invariant methods (Column 16, Lines 5-35); the same rationale and motivation as for claim 3 apply.

Claim 10 is similar in scope to claim 3 and is therefore rejected under the same rationale. Claim 11 is similar in scope to claim 4 and is therefore rejected under the same rationale.

Claims 6 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Dalli, Li, Pu and Chen, further in view of Kanderian et al. (US 20160019324 A1, "Kanderian").

Claim 6: Dalli, Li, Pu and Chen disclose the system of claim 1, but may not explicitly disclose wherein a Runge-Kutta (RK4) numerical integrator uses a fourth-order formula for obtaining numerical solutions of differential equations. Kanderian is provided because it discloses a processing method whose system utilizes an ODE Runge-Kutta integrator (Paragraphs 112 and 377). Therefore it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to apply a known technique to a known device ready for improvement and incorporate the numerical analysis with the ODE solver of Dalli. One would have been motivated to provide the functionality because this method expands computation capabilities, providing a more robust system.

Claim 13 is similar in scope to claim 6 and is therefore rejected under the same rationale.

Response to Arguments

Applicant's arguments have been fully considered but they are not persuasive.
Regarding the §101 rejection, the functionality of learning parameters and the non-linear transformations are still seen as a form of evaluation, and the "transformation" could also be interpreted as a mathematical concept. Further, these features can be implemented with generic computer components. The claims only recite components and processing with no further integration. If Applicant intends to link the processes to NLP, there should be some detail as to how this functionality is integrated to affect the outcome of NLP capabilities; the claim merely recites NLP in the preamble.

Regarding the §103 rejection, Li and Pu provide transformers found within an encoder. Each transformer can include a multi-head attention block and/or a feed-forward layer, and the transformers (which would include the modules) can be built out and executed in parallel (Li: Paragraphs 84-85; Pu: Paragraph 84). Each transformer containing modules with these layers connected in parallel would read on the claim limitations.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure: US 20220028139 A1, Mitra et al., [0105]. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)).

In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03.
All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice. Applicant is reminded that Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail. If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHERROD KEATON, whose telephone number is 571-270-1697. The examiner can normally be reached 9:30am to 5:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, MICHELLE BECHTOLD, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHERROD L KEATON/
Primary Examiner, Art Unit 2148
2-6-2026

Prosecution Timeline

Mar 09, 2022: Application Filed
Mar 08, 2025: Non-Final Rejection (§101, §103, §112)
Jun 13, 2025: Response Filed
Jul 11, 2025: Final Rejection (§101, §103, §112)
Oct 15, 2025: Request for Continued Examination
Oct 27, 2025: Response after Non-Final Action
Feb 06, 2026: Non-Final Rejection (§101, §103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566823: SYSTEMS AND METHODS FOR INTERPOLATIVE CENTROID CONTRASTIVE LEARNING (granted Mar 03, 2026; 2y 5m to grant)
Patent 12547820: Automated Generation Of Commentator-Specific Scripts (granted Feb 10, 2026; 2y 5m to grant)
Patent 12530587: SYSTEMS AND METHODS FOR CONTRASTIVE LEARNING WITH SELF-LABELING REFINEMENT (granted Jan 20, 2026; 2y 5m to grant)
Patent 12524147: Modality Learning on Mobile Devices (granted Jan 13, 2026; 2y 5m to grant)
Patent 12524603: METHODS FOR RECOGNIZING AND INTERPRETING GRAPHIC ELEMENTS (granted Jan 13, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 52%
With Interview: 88% (+36.1%)
Median Time to Grant: 4y 6m
PTA Risk: High
Based on 563 resolved cases by this examiner. Grant probability derived from career allow rate.
