Prosecution Insights
Last updated: April 19, 2026
Application No. 18/446,125

System for Providing Step-by-Step Explanations of Pedagogical Exercises Using Machine-Learned Models

Non-Final OA: §101, §102, §103
Filed
Aug 08, 2023
Examiner
ROWLAND, STEVE
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Google LLC
OA Round
1 (Non-Final)
Grant probability: 78% (Favorable)
Expected OA rounds: 1-2
Time to grant: 2y 8m
Grant probability with interview: 95%

Examiner Intelligence

Career allow rate: 78%, above average (823 granted / 1059 resolved; +7.7% vs Tech Center average)
Interview lift: +17.6% for resolved cases with an examiner interview
Typical timeline: 2y 8m average prosecution; 24 applications currently pending
Career history: 1083 total applications across all art units
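The headline figures in this panel are simple ratios over resolved cases. A minimal sketch of how they could be derived; the granted/resolved totals come from the panel above, but the with/without-interview split below is hypothetical, since the underlying counts are not shown:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Career allow rate from the counts shown above: 823 / 1059 -> 77.7%,
# displayed as 78%.
career = allow_rate(823, 1059)

# "+7.7% vs TC avg" implies a Tech Center baseline of roughly 70%.
tc_average = career - 7.7

# Interview lift is the allow-rate difference between resolved cases with
# and without an examiner interview. The counts here are hypothetical
# placeholders to illustrate the calculation, not data from the dashboard.
lift = allow_rate(300, 330) - allow_rate(523, 729)
```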

Statute-Specific Performance

§101: 17.2% (-22.8% vs TC avg)
§103: 32.0% (-8.0% vs TC avg)
§102: 28.7% (-11.3% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Deltas shown relative to the Tech Center average estimate • Based on career data from 1059 resolved cases
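The "vs TC avg" deltas are internally consistent. Assuming each delta is the examiner's rate minus the Tech Center baseline, every statute implies the same baseline estimate, which can be checked in a few lines:

```python
# Per-statute examiner overcome rate and its stated delta vs the TC average,
# taken from the panel above.
rates = {
    "101": (17.2, -22.8),
    "103": (32.0, -8.0),
    "102": (28.7, -11.3),
    "112": (13.5, -26.5),
}

# delta = rate - tc_avg, so the implied baseline is rate - delta.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in rates.items()}
# All four statutes imply a single 40.0% Tech Center average estimate.
```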

Office Action

§101 §102 §103
Detailed Action

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. See Alice Corp. Pty. Ltd. v. CLS Bank Int'l, 573 U.S. 208, 134 S. Ct. 2347 (2014). The claims recite a method of organizing human activity using data storage and retrieval, which is practicably performable by a human using only their mind, pen, and paper. The claims recite, inter alia: obtaining first training data for a large language model, wherein the training data includes a plurality of example pedagogical exercises, the solutions to those exercises, and ground truth multi-step explanations for the solutions; and obtaining second training data, wherein the second training data includes a plurality of example pedagogical exercises and the solutions to those exercises without multi-step explanations.

Under the broadest reasonable interpretation, claims 1 and 10 recite limitations performable in the human mind. A human, using only their mind, pen, and paper, is capable of compiling and storing training data including problem solutions and corresponding steps, along with ground truth solutions to the problems, and of compiling and storing training data including problem solutions without step details. The abstract idea is not integrated into a practical application.
Claims 1-20 recite the limitations "a computing system comprising one or more processors," "updating a language model using the training dataset," "receiving a query from a user and determining that the query includes query data describing a pedagogical exercise to be solved," "providing the query data as input to an explanatory machine-learned model," "receiving, as output from the explanatory machine-learned model, a pedagogical response, the pedagogical response including a multi-step explanation of a solution to the pedagogical exercise," and "providing the pedagogical response for display to a user." These additional elements, whether considered individually or in combination, do not integrate the abstract idea into a practical application, for the following reasons.

Computing system comprising one or more processors: described generically in the specification: "The user computing device 102 can be any type of computing device, such as, for example, a personal computing device (e.g., laptop or desktop), a mobile computing device (e.g., smartphone or tablet), a gaming console or controller, a wearable computing device, an embedded computing device, or any other type of computing device ... the one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected." It is accordingly reasonable to interpret these as routine and conventional computing components.

Non-transitory computer-readable media: described generically in the specification: "The memory 114 can include one or more non-transitory computer-readable storage mediums, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause the user computing device 102 to perform operations."
Therefore, it would again be reasonable to interpret these as routine and conventional computing components.

Receiving a query from a user and determining that the query includes query data describing a pedagogical exercise to be solved; providing the query data as input to an explanatory machine-learned model; receiving, as output from the explanatory machine-learned model, a pedagogical response, the pedagogical response including a multi-step explanation of a solution to the pedagogical exercise; and providing the pedagogical response for display to a user: these can fairly be classified as mere pre- and post-solution activity, since they are not integral to the central inventive concept.

Large language model: the language model is described in the specification as: "For example, the machine-learned models 120 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models. Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks..." Thus, the language model described is of a conventional type, storing information in a structured way using rules and pathways to identify trends and tendencies in data and respond to queries in a human-like way. The storage and organization techniques used in its construction and application are mathematical algorithms analogous to those of Example 2, Claim 2 of the July 2024 Subject Matter Update, which was cited as an example of ineligible subject matter.1

Additional elements which were interpreted under Step 2A, Prong Two as extra-solution activity are re-evaluated in Step 2B, and here, the elements appear to have been well-understood, routine, and conventional at the time of filing.
The extra-solution activity corresponds to functions the courts have recognized as well-understood, routine, and conventional:

Receiving a query from a user and determining that the query includes query data describing a pedagogical exercise to be solved; providing the query data as input to an explanatory machine-learned model; receiving, as output from the explanatory machine-learned model, a pedagogical response, the pedagogical response including a multi-step explanation of a solution to the pedagogical exercise: storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

Providing the pedagogical response for display to a user: gathering and analyzing information using conventional techniques and displaying the result, TLI Communications, 823 F.3d at 612-13, 118 USPQ2d at 1747-48.

Providing the pedagogical exercises and the solutions to those exercises as input to a synthesis machine-learned model: similar to Example 2, Claim 2 of the July 2024 Subject Matter Update, the recitation lacks low-level details about how the training data is constructed or how AI is actually trained using the same. As recited, the claims appear to recite a conventional method of data storage and retrieval. Storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQd at 1092-93.

Claims 5-12 and 14-19 deal with how queries are gathered and how model output is formatted and presented to the user. Thus, these limitations constitute pre- or post-solution activity as noted supra, and cannot serve to integrate the invention into a practical application.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: (a) A person shall be entitled to a patent unless— (1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 13 and 15-19 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Symbolab.com (as evidenced by Symbolab.pdf and Youtube.com) ("SL").

Regarding claim 1, SL discloses a computing system comprising one or more processors and one or more non-transitory computer-readable media that collectively store instructions (p. 3: browser host computer inherently includes these features) that, when executed by the one or more processors, cause the computing system to perform operations, the operations comprising: receiving a query from a user (p. 3: enter the problem to be solved in the input box and click GO); determining that the query includes query data describing a pedagogical exercise to be solved and providing the query data (p. 4: immediately you're given a step-by-step solution) as input to an explanatory machine-learned model (p. 2: machine learning algorithms); receiving, as output from the explanatory machine-learned model, a pedagogical response, the pedagogical response including a multi-step explanation of a solution to the pedagogical exercise; and providing the pedagogical response for display to a user (p. 4: immediately you're given a step-by-step solution).
Regarding claim 2, SL discloses wherein the query data includes one or more of text data, image data, and audio data (p. 2: enter the problem to be solved in the input box and click GO).

Regarding claim 3, SL discloses determining that a query type associated with the query is an explanation query type, and extracting data describing the pedagogical exercise to be solved from the query (p. 4: if 'Hide steps' is selected, only the answer is shown ... otherwise all problem steps and the answer will be displayed).

Regarding claim 13, SL discloses a computer-implemented method (p. 2) comprising: receiving, by a computing system comprising one or more processors (p. 3: browser host computer inherently includes these features), an image that includes a pedagogical exercise (p. 3: enter the problem to be solved using the symbol generator and click GO); extracting, by the computing system, data describing the pedagogical exercise (p. 3: enter the problem to be solved in the input box and click GO); providing, by the computing system, the data describing the pedagogical exercise (p. 4: immediately you're given a step-by-step solution) as input to an explanatory machine-learned model (p. 2: machine learning algorithms); receiving, as output from the explanatory machine-learned model, a pedagogical response, the pedagogical response including a multi-step explanation of the solution to the pedagogical exercise; and providing the pedagogical response for display to a user (p. 4: immediately you're given a step-by-step solution).

Regarding claim 15, SL discloses providing the multi-step explanation in a format such that each respective step can be displayed in a respective collapsible section of the user interface (p. 5).

Regarding claim 16, SL discloses wherein a respective step in the multi-step process includes one or more of text, images, and rendered mathematical formulas (p. 5).
Regarding claim 17, SL discloses wherein rendered mathematical formulas are rendered based on rendering data output by the machine-learned model (p. 5).

Regarding claim 18, SL discloses wherein images can be generated based on a description of the characteristics of an image output by the machine-learned model (p. 5).

Regarding claim 19, SL discloses wherein the input to the explanatory machine-learned model can be multimodal (pp. 3-5).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
If this application names joint inventors, Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 4-11 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over SL in view of Pandey et al. (US 2024/0330796 A1).

Regarding claims 4 and 14, Pandey suggests—where SL does not disclose—wherein the explanatory machine-learned model is a large language model (¶ [0037]). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of SL and Pandey in order to provide for natural language queries and processing.

Regarding claim 5, SL discloses wherein the output of the explanatory machine-learned model includes formatting data for use in displaying the pedagogical response (pp. 4-5: output is presented in HTML format).

Regarding claim 6, SL discloses wherein the formatting data includes markup data (pp. 4-5: output is presented in HTML format).

Regarding claim 7, SL discloses wherein the formatting data causes each step in the multi-step explanation to be displayed in a distinct section of a user interface (pp. 4-5).

Regarding claim 8, SL discloses wherein each distinct section of the user interface is collapsible such that one or more steps in the multi-step explanation can be hidden (p. 5).
Regarding claim 9, Pandey suggests—where SL does not disclose—generating a machine-learned model prompt, wherein the prompt includes the query data, context information for the query data, and instructions to the machine-learned model (¶ [0148]: determining, for one or more groups of a user network, at least one group skill associated with each respective group of the one or more groups; generating a prompt based on one or more user skills of a first user registered in the user network, the generating the prompt comprising inserting at least one of the one or more user skills into a prompt template to generate the prompt, the prompt template including a request to generate one or more questions for the respective group). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of SL and Pandey in order to allow for more nuanced responses by the model.

Regarding claim 10, Pandey suggests—where SL does not disclose—wherein the contextual information includes user profile data describing a user current level of understanding (¶ [0148]: determining, for one or more groups of a user network, at least one group skill associated with each respective group of the one or more groups; generating a prompt based on one or more user skills of a first user registered in the user network, the generating the prompt comprising inserting at least one of the one or more user skills into a prompt template to generate the prompt). It would have been obvious to a person of ordinary skill in the art prior to the effective filing date of the invention to combine the disclosures of SL and Pandey in order to allow for more nuanced responses by the model.
Regarding claim 11, SL discloses wherein the output generated by the explanatory machine-learned model designates, for a respective step in the multi-step explanation, whether the respective step should initially be displayed as collapsed or expanded (p. 5).

Conclusion

Claims 12 and 20 are not subject to a prior art rejection but remain rejected as directed to ineligible subject matter under 35 USC § 101, as detailed supra.

The prior art considered pertinent to applicant's disclosure and not relied upon is made of record on the attached PTO-892 form.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Steve Rowland, whose telephone number is (469) 295-9129. The examiner can normally be reached Monday through Thursday and alternate Fridays, 8:30 am to 6:00 pm, Eastern Time. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Dmitry Suhol, can be reached at (571) 272-4430. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Applicant may choose, at his or her discretion, to correspond with the Examiner via Internet e-mail. A paper copy of any and all e-mail correspondence will be placed in the appropriate patent application file. E-mail communication must be authorized in advance. Without a written authorization by applicant in place, the USPTO will not respond via e-mail to any correspondence which contains information subject to the confidentiality requirement as set forth in 35 U.S.C. 122.
Authorization may be perfected by submitting, on a separate paper, the following (or similar) disclaimer: "Recognizing that Internet communications are not secure, I hereby authorize the USPTO to communicate with me concerning any subject matter of this application by electronic mail. I understand that a copy of these communications will be made of record in the application file." See MPEP 502.03 for more information.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/STEVE ROWLAND/
Primary Examiner, Art Unit 3715

1 At https://www.uspto.gov/sites/default/files/documents/2024-AI-SMEUpdateExamples47-49.pdf

Prosecution Timeline

Aug 08, 2023
Application Filed
Dec 16, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589308
GENERATIVE NARRATIVE GAME EXPERIENCE WITH PLAYER FEEDBACK
2y 5m to grant Granted Mar 31, 2026
Patent 12586441
SELECTIVE REDEMPTION OF GAMING ESTABLISHMENT TICKET VOUCHERS
2y 5m to grant Granted Mar 24, 2026
Patent 12582874
APPARATUS FOR ARTIFICIAL INTELLIGENCE EXERCISE RECOMMENDATION BY ANALYZING DATA COLLECTED BY POSTURE MEASUREMENT SENSOR AND DRIVING METHOD THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12579757
UPDATING A VIRTUAL REALITY ENVIRONMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12569763
VIRTUAL OBJECT CONTROL METHOD AND RELATED APPARATUS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA rounds: 1-2
Grant probability: 78% (95% with interview, +17.6%)
Median time to grant: 2y 8m
PTA risk: Low
Based on 1059 resolved cases by this examiner. Grant probability derived from career allow rate.
