Prosecution Insights
Last updated: April 19, 2026
Application No. 18/614,785

PERSONALIZED AND GAMIFIED LEARNING EXPERIENCE

Non-Final OA: §101, §102, §103
Filed
Mar 25, 2024
Examiner
GEBREMICHAEL, BRUK A
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
The Toronto-Dominion Bank
OA Round
1 (Non-Final)
Grant Probability: 22% (At Risk)
OA Rounds: 1-2
To Grant: 4y 5m
With Interview: 47%

Examiner Intelligence

Career Allow Rate: 22% (152 granted / 680 resolved; -47.6% vs TC avg)
Interview Lift: +25.0% across resolved cases with interview
Avg Prosecution: 4y 5m (61 currently pending)
Total Applications: 741 across all art units
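The headline figures above can be reproduced with simple arithmetic. This is a hypothetical reconstruction: the page does not disclose its model, so the sketch assumes the 47% "With Interview" figure is simply the career allow rate (152/680) plus the +25.0% absolute interview lift.

```python
# Hypothetical reconstruction of the examiner-level headline figures.
# Assumption (not stated by the source): "With Interview" = allow rate
# plus the absolute interview lift.
granted, resolved = 152, 680

allow_rate = granted / resolved               # 0.2235... -> displayed as 22%
interview_lift = 0.25                         # "+25.0% Interview Lift"
with_interview = allow_rate + interview_lift  # 0.4735... -> displayed as 47%

print(f"Career allow rate: {allow_rate:.0%}")      # 22%
print(f"With interview:    {with_interview:.0%}")  # 47%
```

Both printed values match the dashboard's rounded percentages, which suggests (but does not prove) that the lift is applied additively rather than multiplicatively.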

Statute-Specific Performance

§101: 23.8% (-16.2% vs TC avg)
§103: 36.6% (-3.4% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 27.9% (-12.1% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 680 resolved cases
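A quick consistency check on the statute-specific figures, assuming each "vs TC avg" delta is the examiner's rate minus the Tech Center average: under that assumption, all four statutes imply the same TC-average estimate of 40.0%, consistent with a single black reference line on the chart.

```python
# Hypothetical check: does each (rate, delta) pair imply the same
# Tech Center average? Assumes delta = examiner rate - TC average.
stats = {
    "§101": (23.8, -16.2),
    "§103": (36.6, -3.4),
    "§102": (6.4, -33.6),
    "§112": (27.9, -12.1),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"{statute}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

Every statute yields an implied average of 40.0%, so the deltas appear to be computed against one shared estimate rather than per-statute averages.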

Office Action

§101 §102 §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 101

3. Non-Statutory (Directed to a Judicial Exception without an Inventive Concept/Significantly More)

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

● Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

(Step 1) The current claims fall within one of the four statutory categories of invention (MPEP 2106.03).
(Step 2A) Prong One: The claim(s) recite a judicial exception, namely an abstract idea, as shown below:

Considering each of claims 1, 8 and 15 as representative claims, the following claimed limitations recite an abstract idea (note that an avatar is essentially a fictional or a pictorial character; the term “character” is therefore used below to provide proper context to the recited abstract idea): [collect] a characteristic of an account within an account profile; create a [character] for the account; generate and [show] pages of visual aids including the [character] as the account interacts with the pages of the visual aids; determine that the account has interacted with a visual aid and implemented training described within the visual aid based on the account interaction; and in response, change a position of the [character] within the pages of the visual aids to reflect the implemented training.

Thus, the limitations identified above recite an abstract idea, since they correspond to certain methods of organizing human activity and/or mental processes, which are part of the enumerated groupings of abstract ideas identified under the current eligibility standard (see MPEP 2106.04(a)). For instance, the current claims correspond to managing personal behavior, such as teaching. Although an attempt appears to be made, per the claims, to disguise the involvement of the user, the original specification reveals that the user is the one who is interacting when taking the training. In particular, during the onboarding/registration phase, the user provides information/characteristics that are used to build the user’s profile as part of the user’s account; the user is then presented with training content, including visual aids; and the user views and/or interacts with the visual aids as part of performing the training (e.g., see [0024]; [0025]; [0030]; [0033]; [0036], etc.).
Thus, when interpreting the claims in light of the specification, the claims do recite an abstract idea, such as the sub-grouping managing personal behavior under the group certain methods of organizing human activity. For instance, once the user has registered and created an account based on information that he/she has provided, the user is presented with one or more pages depicting content items (e.g., visual aids, a character, etc.); the user interacts with one or more of the content items as part of performing the training; and, based on the user’s performance (i.e., based on determining that the user has interacted with the visual aid and implemented the training within the visual aid), a relevant result is generated, such as reflecting the implemented training by changing the position of the character within the page of the visual aid.

Similarly, given the limitations that recite the process of: displaying pages of visual aids including the [character] as the [user] interacts with the pages of the visual aids; determining, based on the [user’s] interaction, that the [user] has interacted with a visual aid and implemented training described within the visual aid; and changing, in response to the determination above, the position of the [character] within the pages of the visual aids to reflect the implemented training, the claims also correspond to the group mental processes, such as an observation, an evaluation and/or a judgment.
(Step 2A) Prong Two: The claim(s) recite additional element(s), wherein a computer that comprises a processor, a memory, a display, etc., is utilized to facilitate the recited functions/steps regarding: collecting user information to generate an account (e.g., “onboarding an account with a software application, wherein the onboarding comprises storing a characteristic of the account within an account profile of the software application; creating an avatar for the account within the software application”); presenting a user with one or more content items (e.g., “generating and displaying pages of visual aids including the avatar on a user interface of the software application as the account interacts with the pages of the visual aids”); evaluating the user’s interaction with the presented content items (e.g., “determining that the account has interacted with a visual aid and implemented training described within the visual aid based on the account interaction with the software application”); and generating one or more relevant results based on the evaluation above (e.g., “changing a position of the avatar within the pages of the visual aids to reflect the implemented training”).

However, the claimed additional element(s) fail to integrate the abstract idea into a patent-eligible practical application, since the additional element(s) are utilized merely as a tool to facilitate the abstract idea. Accordingly, when each of the claims is considered as a whole, the additional element(s) fail to impose meaningful limits on practicing the abstract idea. For instance, when each of the claims is considered as a whole, none of the claims provides an improvement over the relevant existing technology. The observations above confirm that the claims are indeed directed to an abstract idea.
(Step 2B) Accordingly, when the claim(s) is considered as a whole (i.e., considering all claim elements both individually and in combination), the claimed additional elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amounts to “significantly more” than the abstract idea itself (also see MPEP 2106). The claimed additional elements are directed to conventional computer elements, which serve merely to perform conventional computer functions. Accordingly, none of the current claims, when considered as a whole, recites an element, or a combination of elements, directed to an inventive concept.

It is also worth noting, per the original disclosure, that the currently claimed apparatus/method is directed to a conventional and generic arrangement of the additional elements. For instance, the specification describes a system that implements one or more commercially available conventional computing devices (e.g., a general-purpose computer, i.e., a desktop computer, a laptop, etc.), wherein the devices communicate, via a conventional communication network (e.g., the Internet), with one or more online servers, and the system thereby presents a user with interactive training materials (e.g., see [0110] to [0120], etc.). In addition, the utilization of a conventional computer/network system to facilitate the presentation of interactive content items (e.g., educational materials) to a user, including generating, based on analysis of the user’s interaction, one or more results (e.g., one or more of: textual data, graphical data, etc.), is directed to a well-understood, routine, conventional activity in the art (e.g., see US 2017/0206797; US 2008/0254426; US 2008/0268418, etc.). The above observation confirms that the currently claimed invention fails to amount to “significantly more” than an abstract idea.
It is worth noting that the above analysis already encompasses each of the current dependent claims (i.e., claims 2-7, 9-14 and 16-20). In particular, each of the dependent claims also fails to amount to “significantly more” than the abstract idea, since each dependent claim is directed to a further abstract idea and/or further conventional computer element(s) utilized to facilitate the abstract idea. Accordingly, the findings above demonstrate that none of the claims implements an element, or a combination of elements, directed to an inventive concept (e.g., none of the current claims recites an element, or a combination of elements, that provides a technological improvement over the existing/conventional technology).

● Claims 15-20 further fail to comply with 35 U.S.C. 101 since these claims are directed to non-statutory subject matter. In particular, claims 15-20 are directed to a computer-readable storage medium. A computer-readable storage medium broadly covers both statutory and non-statutory subject matter (e.g., a signal per se); however, claims 15-20 do not positively exclude the non-statutory subject matter. See also MPEP 2106.03(I) (emphasis added): Non-limiting examples of claims that are not directed to any of the statutory categories include: • Products that do not have a physical or tangible form, such as information (often referred to as “data per se”) or a computer program per se (often referred to as “software per se”) when claimed as a product without any structural recitations. Accordingly, claims 15-20 further fail to comply with the statutory requirement of §101. It is also worth noting that the original specification does not necessarily exclude the non-statutory category, since it broadly asserts “or any other form of storage medium known in the art” (see [0108] of the specification).

Claim Rejections - 35 USC § 102

4. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Note that the one or more citations (paragraphs or columns) presented in this Office action regarding the teaching of a cited reference(s) are exemplary only. Accordingly, such citation(s) are not intended to limit/restrict the teaching of the reference(s) to the cited portion(s) only. Applicant is required to evaluate the entire disclosure of each reference, including additional portions that teach or suggest the claimed limitations.

● Claims 1, 3, 4, 6-10, 13-18 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Dohring 2014/0248597.

Regarding each of claims 1, 8 and 15, Dohring teaches the following claimed limitations: an apparatus comprising: a memory; and a processor coupled to the memory, the processor configured to (or “a computer-readable storage medium comprising instructions stored therein which when executed by a processor cause the processor to perform:”, per claim 15) ([0076]; [0084]; [0102]: e.g., a computer-based system/method for teaching a user, such as a child, one or more subjects, wherein the system comprises at least one computing device that includes basic computer components, including a processor, a memory, etc.) (or “a method comprising:”, per claim 8): onboard an account with a software application and store a characteristic of the account within an account profile of the software application (see [0213]: e.g., at least one authorized user, such as a mentor, creates an account for the child, and the mentor also instructs the child how to log in and navigate the computer environment.
Thus, the above corresponds to the process of onboarding an account with a software application and storing a characteristic of the account within an account profile of the software application); create an avatar for the account within the software application ([0170]: e.g., an avatar that represents the child is created. The above indicates the creation of an avatar associated with the account within the software application, because the child already has an account and the avatar represents the child); generate and display pages of visual aids including the avatar on a user interface of the software application as the account interacts with the pages of the visual aids; determine that the account has interacted with a visual aid and implemented training described within the visual aid based on the account interaction with the software application ([0171] to [0175]: e.g., the system presents to the child, along with the avatar that represents the child, interactive elements such as animals, plants, etc., which the child reveals and about which the child learns facts; and furthermore, based on monitoring the child’s interaction, the avatar progresses through a succession of individual lessons in a step-by-step path in the virtual environment.
Thus, besides generating/displaying pages of visual aids including the avatar on a user interface of the software application as the account interacts with the pages of the visual aids, the system also determines, based on the account interaction with the software application, that the account has interacted with a visual aid and implemented training described within the visual aid); and in response, change a position of the avatar within the pages of the visual aids to reflect the implemented training ([0172]; [0173]; also see FIG 34 to FIG 37: e.g., as already discussed above, the system moves the avatar through a succession of lessons, which are depicted as graphical elements, based on monitoring the child’s interaction. Thus, the above indicates the process of changing the position of the avatar within the pages of the visual aids to reflect the implemented training in response to the determination).

Dohring teaches the claimed limitations as discussed per claims 1, 8 and 15 above. Dohring further teaches:

Regarding claims 3, 10 and 17, receive an instruction input by the account on the user interface of the software application, and in response, change an order in which the pages of the visual aids are displayed ([0116] to [0118]; [0199] to [0202]: e.g., the system already provides the child with an option to sort learning activities according to one or more preferences, wherein the child selectively makes activities available based on each activity’s association with levels of learning, such as preschool, pre-K, kindergarten, first grade, etc.
Thus, the processor is already configured to receive an instruction input by the account on the user interface of the software application, and in response, change an order in which the pages of the visual aids are displayed).

Regarding claims 4 and 18, display an animated game with educational content therein via a page of the software application and move the avatar within the animated game to reflect the implemented training (see FIG 34 to FIG 37; [0102]; [0171] to [0175]: e.g., the system already generates an animated game environment for teaching the child one or more educational subjects, wherein the animated game environment includes a plurality of animated graphical elements and an avatar that represents the child; and wherein, based on the child’s interaction when performing a lesson, the avatar progresses, in a step-by-step fashion, from one position to another in the animated game environment).

Regarding claims 6, 13 and 20, move the avatar along a gameboard within a page of the software application to reflect the implemented training (FIG 34 to FIG 37: e.g., as already pointed out per claim 1, the system monitors the activities of the child; and subsequently, as the child completes a lesson, the system moves the avatar from one position to another position within the virtual environment, such as a virtual landscape.
Accordingly, the virtual landscape above corresponds to a virtual gameboard within the page of the software application, and the processor moves the avatar along the above gameboard to reflect the implemented training).

Regarding claims 7 and 14, receive a sharing input via the user interface of the software application, and in response, share the position of the avatar within the pages of the visual aids via a user interface of a different account of the software application ([0195]; [0196]: e.g., the system already implements various communication means, including a virtual mail that allows the child to present his/her educational work products to friends and mentors. In this regard, the child’s educational work products already encompass the page that depicts the progress that the child’s avatar has made, which reflects the one or more lessons that the child has completed. Thus, the above indicates the process of sharing, in response to a sharing input via the user interface, the position of the avatar within the pages of the visual aids via a user interface of a different account of the software application).

Regarding claims 9 and 16, receiving feedback about the position of the avatar within the pages of the visual aids and changing the position of the avatar again within the pages of the visual aids based on the feedback ([0172] to [0175]: e.g., as already discussed per claim 1, the system monitors the child’s interaction with one or more graphic elements, such as the child clicking a given graphical element to reveal and learn facts pertinent to the environment; and, based on such monitoring, the system keeps visually progressing the avatar from one position to another in a step-by-step fashion.
Accordingly, such a process already indicates receiving feedback regarding the position of the avatar within the pages of the visual aids, such as feedback regarding the most recent position of the avatar; and subsequently, the processor changes this position, such as progressing the avatar to the next position, based on the child’s interaction).

Claim Rejections - 35 USC § 103

5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Note that the one or more citations (paragraphs or columns) presented in this Office action regarding the teaching of a cited reference(s) are exemplary only. Accordingly, such citation(s) are not intended to limit/restrict the teaching of the reference(s) to the cited portion(s) only.
Applicant is required to evaluate the entire disclosure of each reference, including additional portions that teach or suggest the claimed limitations.

● Claims 2 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Dohring 2014/0248597 in view of Solomon 2017/0206797.

Regarding claim 2, Dohring teaches the claimed limitations as discussed above per claim 1. Dohring further teaches that the processor is further configured to receive feedback about the position of the avatar within the pages of the visual aids and change the position of the avatar again within the pages of the visual aids based on the feedback ([0172] to [0175]: e.g., as already discussed per claim 1, the system monitors the child’s interaction with one or more graphic elements, such as the child clicking a given graphical element to reveal and learn facts pertinent to the environment; and, based on such monitoring, the system keeps visually progressing the avatar from one position to another in a step-by-step fashion. Accordingly, such a process already indicates receiving feedback regarding the position of the avatar within the pages of the visual aids, such as feedback regarding the most recent position of the avatar; and subsequently, the processor changes this position, such as progressing the avatar to the next position, based on the child’s interaction).

Dohring does not describe that the creation of the avatar is performed by an artificial intelligence chatbot interacting with the software application. However, Solomon discloses a computer-based system that provides one or more educational games to a user, wherein the system implements one or more artificial intelligence algorithms to generate one or more virtual scenarios, including an avatar in the form of a chatbot conversational agent ([0035]; [0044]; [0051]).
Accordingly, given the above teaching, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Dohring in view of Solomon; for example, by incorporating one or more artificial intelligence algorithms for generating one or more scenarios in the virtual game environment, such as configuring the child’s avatar to interact with the child in a natural language, thereby allowing the child to easily interact with the avatar, including asking the avatar one or more questions related to one or more of the lessons that the child is learning, wherein the avatar provides the child with a proper response (e.g., audibly, and/or textually as part of the chat function [0195], etc.) in a more realistic manner (e.g., a natural dialog); such an implementation helps the child to easily and naturally engage with his/her studies.

Regarding claim 11, Dohring teaches the claimed limitations as discussed above per claim 8. Dohring further teaches that displaying the pages of the visual aids comprises displaying an animated game with educational content therein via a page of the software application and moving the avatar within the animated game to reflect the implemented training (FIG 34 to FIG 37; [0102]; [0171] to [0175]: e.g., the system already generates an animated game environment for teaching the child one or more educational subjects, wherein the animated game environment includes a plurality of animated graphical elements and an avatar that represents the child; and wherein, based on the child’s interaction when performing a lesson, the avatar progresses, in a step-by-step fashion, from one position to another in the animated game environment).
However, Solomon discloses a computer-based system that provides one or more educational games to a user, wherein the system implements one or more artificial intelligence algorithms to generate one or more virtual scenarios, including an avatar, in the virtual game environment ([0035]; [0044]; [0051]). Accordingly, given the above teaching, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Dohring in view of Solomon; for example, by incorporating one or more artificial intelligence algorithms for generating one or more scenarios in the virtual game environment, such as the child’s avatar and/or at least one additional avatar that interacts with the user in a natural language, thereby allowing the child to easily interact with the avatar, including asking the avatar one or more questions related to one or more of the lessons that the child is learning, wherein the avatar provides the child with a proper response (e.g., audibly, textually, etc.) in a more realistic manner (e.g., a natural dialog); such an implementation helps the child to easily and naturally engage with his/her studies.

● Claims 5, 12 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Dohring 2014/0248597 in view of Rao 2019/0286439.

Regarding claims 5, 12 and 19, Dohring teaches the claimed limitations as discussed above per claims 1, 8 and 15, respectively. Dohring already teaches the process of determining a progress of the account on the pages of the visual aids ([0171] to [0173]: e.g., the system monitors the child as the child performs each of the one or more lessons in the virtual environment, and thereby determines the child’s progress. The above indicates the process of determining a progress of the account on the pages of the visual aids).
Dohring does not expressly teach determining that a different account has not progressed as much as the account on the pages of the visual aids, and displaying a leaderboard via the user interface which comprises an identifier of the account and an identifier of the different account, where the identifier of the account is leading the identifier of the different account. However, Rao discloses a gamified virtual environment that allows a plurality of individuals to participate, wherein each of the one or more individuals performs a corresponding task ([0144]; [0148]; [0149]); and furthermore, the system implements a leaderboard feature, which allows a first individual (or a first team) to evaluate his/her ranking when compared to a second individual (or a second team), wherein the leaderboard shows, based on respective identifiers associated with each of the first individual and the second individual, whether the first individual is leading the second individual or not ([0168] to [0172]).

Accordingly, given the above teaching, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Dohring in view of Rao; for example, by incorporating additional features, including a graphical leaderboard, which is triggered automatically and/or based on the user’s request, so that the child would be able to easily view and evaluate his/her ranking against at least one friend (e.g., another child of the same age, etc.) who is conducting the same/similar type of training; the child would be able to easily determine whether he is ahead of, or lagging behind, his friend, and would thereby be motivated to improve his/her skills.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRUK A GEBREMICHAEL, whose telephone number is (571) 270-3079. The examiner can normally be reached 7:00AM-3:00PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DAVID LEWIS, can be reached at (571) 272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BRUK A GEBREMICHAEL/
Primary Examiner, Art Unit 3715

Prosecution Timeline

Mar 25, 2024
Application Filed
Feb 05, 2025
Response after Non-Final Action
Jan 23, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12165542: MOTION PLATFORM
Granted Dec 10, 2024 (2y 5m to grant)
Patent 12008914: SYSTEMS AND METHODS TO SIMULATE JOINING OPERATIONS
Granted Jun 11, 2024 (2y 5m to grant)
Patent 11990055: SURGICAL TRAINING MODEL FOR LAPAROSCOPIC PROCEDURES
Granted May 21, 2024 (2y 5m to grant)
Patent 11837105: PSEUDO FOOD TEXTURE PRESENTATION DEVICE, PSEUDO FOOD TEXTURE PRESENTATION METHOD, AND PROGRAM
Granted Dec 05, 2023 (2y 5m to grant)
Patent 11810467: FINGER RECOGNITION SYSTEM AND METHOD FOR USE IN TYPING
Granted Nov 07, 2023 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 22%
With Interview: 47% (+25.0%)
Median Time to Grant: 4y 5m
PTA Risk: Low
Based on 680 resolved cases by this examiner. Grant probability derived from career allow rate.
