Prosecution Insights
Last updated: April 19, 2026
Application No. 18/625,692

AUTOMATICALLY CREATING PSYCHOMETRICALLY VALID AND RELIABLE ITEMS USING GENERATIVE LANGUAGE MODELS

Final Rejection — §101, §103
Filed: Apr 03, 2024
Examiner: YIP, JACK
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: UNIVERSITY OF SOUTH FLORIDA
OA Round: 4 (Final)

Grant Probability: 33% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 4y 1m
Grant Probability With Interview: 70%

Examiner Intelligence

Career Allow Rate: 33% (229 granted / 702 resolved; -37.4% vs TC avg). This examiner grants only 33% of cases.
Interview Lift: +37.6% among resolved cases with an interview (a strong lift).
Typical Timeline: 4y 1m average prosecution; 51 applications currently pending.
Career History: 753 total applications across all art units.
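
How these headline numbers fit together is simple arithmetic. Below is a minimal sketch of the derivation, assuming per-case records with hypothetical field names (this is not the tool's actual schema):

```python
# Hypothetical resolved-case records; field names are illustrative only.
cases = [
    {"granted": True, "interview": True},
    {"granted": False, "interview": False},
    # ... one record per resolved application (702 total for this examiner)
]

def allow_rate(subset):
    """Fraction of resolved cases that ended in a grant."""
    return sum(c["granted"] for c in subset) / len(subset) if subset else 0.0

career = allow_rate(cases)                                   # 229 / 702 = 32.6%, shown as 33%
with_iv = allow_rate([c for c in cases if c["interview"]])
lift = with_iv - career                                      # shown as the +37.6% interview lift
print(f"career allow rate {career:.1%}, with interview {with_iv:.1%} ({lift:+.1%})")
```

Note that 33% plus the +37.6% lift lands at the 70% "with interview" figure shown elsewhere on this page, so the lift appears to be measured against the career rate.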

Statute-Specific Performance

§101: 22.8% (-17.2% vs TC avg)
§103: 42.4% (+2.4% vs TC avg)
§102: 15.0% (-25.0% vs TC avg)
§112: 12.4% (-27.6% vs TC avg)

Tech Center averages are estimates • Based on career data from 702 resolved cases.
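
The deltas above all imply the same baseline (22.8 + 17.2 = 42.4 - 2.4 = 15.0 + 25.0 = 12.4 + 27.6 = 40.0), so the Tech Center average here is evidently a single estimate of about 40%. A sketch of the per-statute computation under that assumption, again with illustrative field names:

```python
# Hypothetical per-case data: outcome plus the statutes cited in rejections.
cases = [
    {"granted": True, "rejections": {"101", "103"}},
    {"granted": False, "rejections": {"103", "112"}},
    # ... 702 resolved cases
]
TC_AVG = 0.40  # implied by the displayed deltas; an estimate, per the chart note

for statute in ("101", "103", "102", "112"):
    hit = [c for c in cases if statute in c["rejections"]]
    rate = sum(c["granted"] for c in hit) / len(hit) if hit else 0.0
    print(f"§{statute}: {rate:.1%} ({rate - TC_AVG:+.1%} vs TC avg)")
```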

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

In response to the amendment filed 10/15/2025: claims 1-7, 9 and 11-17 are pending; claims 8, 10, and 18-20 have been cancelled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-7, 9 and 11-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: Is the claimed invention a statutory category of invention? Claims 1 and 11 are directed to generating test questions (Step 1: Yes).

Step 2A, Prong 1: Does the claim recite an abstract idea? The limitation of steps: receive a plurality of items; place the items of the plurality of items in an item bank; for each item in the item bank, generate a score for at least one psychometric property of the item; sort each item in the item bank based on the generated scores to generate a set of top-k scored items and a set of bottom-k scored items; select a first item from the set of top-k scored items and a second item from the set of bottom-k scored items; receive a prompt for a generative large language model based on the sorted items, wherein the prompt indicates a desired number of new items and includes the selected first item and the selected second item, wherein the prompt is for the desired number of new items that are similar to the selected first item and dissimilar to the selected second item, and the prompt indicates the at least one psychometric property; provide the prompt to the generative large language model; receive a generated set of new items from the generative large language model based on the generated prompt, wherein none of the new items of the generated set of new items are in the item bank; add at least some of the generated set of new items to the item bank; and remove the item with the lowest generated score from the item bank, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of a computing device. This type of mental process can be practically performed in the human mind. Note that this is akin to the abstract idea of performing mental observations, evaluations, and judgments. The mere nominal recitation of a computing device performing these steps does not take the claim limitation outside of the mental processes grouping. Thus, the claim recites a mental process (Step 2A, Prong 1: Yes).

Step 2A, Prong 2: Does the claim recite additional elements that integrate the judicial exception into a practical application? Per the 2019 Revised Patent Subject Matter Eligibility Guidance, if a claim as a whole integrates the recited judicial exception into a practical application of that exception, the claim is not "directed to" a judicial exception. Alternatively, a claim that does not integrate a recited judicial exception into a practical application is directed to the exception.
Evaluating whether a claim integrates an abstract idea into a practical application is performed by (a) identifying whether there are any additional elements recited in the claim beyond the abstract idea, and (b) evaluating those additional elements individually and in combination to determine whether they integrate the abstract idea into a practical application, using one or more of the considerations laid out by the Supreme Court and the Federal Circuit. Exemplary considerations indicative of whether an additional element (or combination of elements) has or has not been integrated into a practical application are set forth in the 2019 PEG.

With respect to the instant claims, claims 1 and 11 recite the additional elements of: at least one computing device; and a non-transitory computer-readable medium with computer-executable instructions stored thereon that when executed by the at least one computing device cause the at least one computing device. It is particularly noted that the use of a computing device "as a tool" to perform an abstract method, and steps that only amount to extra-solution activity, are indicated in the 2019 PEG as examples that an additional element has not been integrated into a practical application. Even in combination, the recited additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits, such as an improvement to a computing system, on practicing the abstract idea (Step 2A, Prong 2: No).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? Claims 1 and 11 recite the additional elements of: at least one computing device; and a non-transitory computer-readable medium with computer-executable instructions stored thereon that when executed by the at least one computing device cause the at least one computing device, set forth above for Step 2A, Prong 2. Regarding these limitations: Applicant's specification only describes these features in a highly generic manner, stating that "Illustrative types of hardware components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), graphic processing units (GPUs), tensor processing units (TPUs), quantum computing devices, etc." (Applicant's specification, para. [0064]).

Dependent claims 2-7, 9 and 12-17 inherit the deficiencies of their respective parent claims through their dependencies and do not recite additional limitations sufficient to direct the claims to more than the claimed abstract idea, and are thus rejected for the same reasons.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9 and 11-17 are rejected under 35 U.S.C. 103 as being unpatentable over Taylor et al. (US 10,528,916 B1) in view of Khabiri et al. (US 2019/0205726 A1), Srinivasan et al. (US 11,582,174 B1), and Williams et al. (US 2024/0289851 A1).

Re claims 1, 11:

1. Taylor teaches [A] method (Taylor, Abstract) comprising: receiving a plurality of items by a computing device (Taylor, Abstract, "determines a list of questions"; fig. 7; col. 21, lines 9-18); placing the items of the plurality of items in an item bank by the computing device (Taylor, col. 8, lines 36-44, "candidates who have been presented with questions from the question bank"; col. 8, lines 20-24, "a bank of questions may be stored in a data storage location"); for each item in the item bank, generating a score for at least one psychometric property of the item by the computing device (Taylor, col. 8, line 55 – col. 9, line 9, "The question identified as impacting the competency score is then validated for use in an interview"; col. 11, lines 1-6, "each question has a weight for negative scoring and positive scoring"; col. 2, line 49 – col. 3, line 13, "Examples of some competencies may include drive, dedication, creativity, motivation, communication skills, teamwork, energy, enthusiasm, determination, reliability, honesty, integrity, intelligence, pride, dedication, analytical skills, listening skills, achievement profile, efficiency, economy, procedural awareness, opinion, emotional intelligence, etc."); sorting each item in the item bank based on the generated scores by the computing device (Taylor, col. 8, line 55 – col. 9, line 9, "The questions may also be ranked by the amount of impact they have on the competency score"; col. 17, lines 50-60, "the model 310 of the interview design program 110 may rank the list of questions based on the impact of the questions in the historical data"); receiving a prompt based on the sorted items by the computing device (Taylor, Abstract, "sends the list of questions and the ranking information to the first device and receives a selection of a set of desired questions"; col. 13, lines 14-41, "The user interfaces 124 presented to the campaign manager may allow for selecting and/or entering one or more competencies, questions, or prompts to be presented to candidates in the evaluation process"; col. 17, lines 50-60, "The list of questions is sent back to the agent at the company A client 302 for review. The agent then reviews the questions and approves or disapproves of each question or group of questions"); receiving a generated set of new items from the item generator based on the generated prompt by the computing device (Taylor, col. 2, lines 25-48, "to generate question set … a series of prompts or question"); and adding at least some of the generated set of new items to the item bank by the computing device (Taylor, col. 20, lines 1-14, "The agent may designate a competency to which the evaluation platform may add the new question"; col. 25, lines 12-36, "may add the questions").

11. Taylor teaches [A] system (Taylor, Abstract) comprising: at least one computing device; and a non-transitory computer-readable medium with computer-executable instructions stored thereon that when executed by the at least one computing device cause the at least one computing device (Taylor, fig. 8; col. 23, lines 4-63) to: receive a plurality of items (Taylor, Abstract, "determines a list of questions"; fig. 7; col. 21, lines 9-18); place the items of the plurality of items in an item bank (Taylor, col. 8, lines 36-44, "candidates who have been presented with questions from the question bank"; col. 8, lines 20-24, "a bank of questions may be stored in a data storage location"); for each item in the item bank, generate a score for at least one psychometric property of the item (Taylor, col. 8, line 55 – col. 9, line 9, "The question identified as impacting the competency score is then validated for use in an interview"; col. 11, lines 1-6, "each question has a weight for negative scoring and positive scoring"; col. 2, line 49 – col. 3, line 13, "Examples of some competencies may include drive, dedication, creativity, motivation, communication skills, teamwork, energy, enthusiasm, determination, reliability, honesty, integrity, intelligence, pride, dedication, analytical skills, listening skills, achievement profile, efficiency, economy, procedural awareness, opinion, emotional intelligence, etc."); sort each item in the item bank based on the generated scores (Taylor, col. 8, line 55 – col. 9, line 9, "The questions may also be ranked by the amount of impact they have on the competency score"; col. 17, lines 50-60, "the model 310 of the interview design program 110 may rank the list of questions based on the impact of the questions in the historical data"); receive a prompt based on the sorted items (Taylor, Abstract, "sends the list of questions and the ranking information to the first device and receives a selection of a set of desired questions"; col. 13, lines 14-41, "The user interfaces 124 presented to the campaign manager may allow for selecting and/or entering one or more competencies, questions, or prompts to be presented to candidates in the evaluation process"; col. 17, lines 50-60, "The list of questions is sent back to the agent at the company A client 302 for review. The agent then reviews the questions and approves or disapproves of each question or group of questions"); receive a generated set of new items from the item generator based on the generated prompt (Taylor, col. 2, lines 25-48, "to generate question set … a series of prompts or question"); and add at least some of the generated set of new items to the item bank (Taylor, col. 20, lines 1-14, "The agent may designate a competency to which the evaluation platform may add the new question"; col. 25, lines 12-36, "may add the questions").
Taylor does not explicitly disclose: sorting each item in the item bank based on the generated scores by the computing device to generate a set of top-k scored items and a set of bottom-k scored items; selecting a first item from the set of top-k scored items and a second item from the set of bottom-k scored items by the computing device; receiving a prompt for a model based on the sorted items by the computing device, wherein the prompt indicates a desired number of new items and includes the selected first item and the selected second item, wherein the prompt is for the desired number of new items that are similar to the selected first item and dissimilar to the selected second item, and the prompt indicates the at least one psychometric property; or providing the prompt to the model by the computing device.

Khabiri et al. (US 2019/0205726 A1) teaches an invention generally relating to computers and computer applications, and more particularly to a computer-implemented system and method for enhancing a display presentation with additional insight data for a question/answer system (Khabiri, Abstract). Khabiri teaches: sorting each item in the item bank based on the generated scores by the computing device to generate a set of top-k scored items and a set of bottom-k scored items (Khabiri, [0017], "the multi-dimensional insight ranking module to generate scores and sort the candidate questions"; fig. 7; [0034], "each of these factors is used to generate a score for the candidate expansion question, and a ranking is performed to generate a list of the candidate expansion questions, e.g., from a highest score to lowest score"; [0056], "the method obtains the highest ranked questions, e.g., a top-K insight (where K is a predetermined limit or number) questions having the highest scores from ranked insights"); selecting a first item from the set of top-k scored items and a second item from the set of bottom-k scored items by the computing device (Khabiri, [0017], "the multi-dimensional insight ranking module to generate scores and sort the candidate questions"; fig. 7; [0034], "each of these factors is used to generate a score for the candidate expansion question, and a ranking is performed to generate a list of the candidate expansion questions, e.g., from a highest score to lowest score"; [0056], "the method obtains the highest ranked questions, e.g., a top-K insight (where K is a predetermined limit or number) questions having the highest scores from ranked insights"); receiving a prompt for a model based on the sorted items by the computing device, wherein the prompt indicates a desired number of new items and includes the selected first item and the selected second item, wherein the prompt is for the desired number of new items that are similar to the selected first item and dissimilar to the selected second item (Khabiri, [0017], "the multi-dimensional insight ranking module to generate scores and sort the candidate questions"; fig. 7; [0034], "each of these factors is used to generate a score for the candidate expansion question, and a ranking is performed to generate a list of the candidate expansion questions, e.g., from a highest score to lowest score"; [0056], "the method obtains the highest ranked questions, e.g., a top-K insight (where K is a predetermined limit or number) questions having the highest scores from ranked insights"), and the prompt indicates the at least one psychometric property (Khabiri, [0007], "among the candidate expanded questions based upon one or more criteria"; [0038], "the User Preference-based template"); providing the prompt to the model by the computing device (Khabiri, [0035], "pre-defined criteria may be embodied in the form of one or more question expansion templates … use of a Machine learning model"; [0040], "a recursive machine learning (not shown) algorithm may be employed in system 100 for use as a prediction tool to generate a new related question that is most relevant given the user's history of questions that the user (or multiple users) has asked"; [0060], "natural language format"); receiving a generated set of new items from the item generator based on the generated prompt by the computing device, wherein none of the new items of the generated set of new items are in the item bank (Khabiri, [0040], "generate a new related question that is most relevant given the user's history of questions that the user (or multiple users) has asked"); and adding at least some of the generated set of new items by the computing device and removing the item with the lowest generated score (Khabiri, [0034], "top-K number of candidate expansion questions having the highest scores may be selected"; [0049], "a parameter 'n' may be the top 10 or top 100 insights and thus, the system will limit the number of generated candidate questions based on the parameter").

Therefore, in view of Khabiri, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Taylor by ranking and generating a number of candidate questions as taught by Khabiri, since the multi-dimensional insight ranking module will limit the number of related questions to a top-k amount, which will be the most relevant based on the user needs (Khabiri, [0033]).

Taylor does not explicitly disclose removing the item with the lowest generated score from the item bank by the computing device. Srinivasan (US 11,582,174 B1) teaches when to store content and when to refrain from storing content (Srinivasan, Abstract). Srinivasan teaches removing the item with the lowest generated score from the item bank by the computing device (Srinivasan, col. 12, lines 1-29, "weights to these values to generate a score for the piece of content. When storage space is needed, therefore, the content with the lowest score may be removed from the first storage location 158 to make room for the new (higher-scoring) piece of content").

Therefore, in view of Srinivasan, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Taylor by removing the lowest-scored content as taught by Srinivasan, in order to make space for new items (Srinivasan, col. 12, lines 1-29).

Taylor does not explicitly disclose a generative AI for ranking questions. Williams et al. (US 2024/0289851 A1) teaches systems and methods for identifying impactful elements in database information to generate a dialogue output (Williams, Abstract). Williams further teaches a generative AI for ranking questions (Williams, [0108], "the dialogue output may be a recommendation to condense a number of questions being asked to users by combining similar questions or removing questions that do not provide important or useful feedback. In some such embodiments, the generative AI or ML model may request access to data sources for a user and may automatically answer one or more questions without user input. The generative AI or ML model may then determine which questions remain unanswered and may prompt a user to ask such to a customer and/or provide such to a customer. In still further such embodiments, the generative AI or ML model prepares a hierarchy (e.g., assigns importance and/or ranking to questions) for questions based upon user data").

Therefore, in view of Williams, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the method/system described in Taylor by providing the generative AI of Williams, since the generative AI and/or ML model may modify or generate a new version of the questions personalized to the user (Williams, [0093]).

Re claims 2, 12:

2. The method of claim 1, further comprising: for each item in the generated set of new items, generating a score for at least one psychometric property of the item (Taylor, col. 8, line 55 – col. 9, line 9, "The question identified as impacting the competency score is then validated for use in an interview"; col. 11, lines 1-6, "each question has a weight for negative scoring and positive scoring"; col. 2, line 49 – col. 3, line 13, "Examples of some competencies may include drive, dedication, creativity, motivation, communication skills, teamwork, energy, enthusiasm, determination, reliability, honesty, integrity, intelligence, pride, dedication, analytical skills, listening skills, achievement profile, efficiency, economy, procedural awareness, opinion, emotional intelligence, etc."); filtering the generated set of new items to remove items with generated scores that are below a threshold; and adding the remaining new items from the generated set of new items to the item bank (Taylor, col. 10, line 44 – col. 11, line 6, "ratings below a threshold of 0.5 may be placed into the low matrix"; col. 24, line 55 – col. 25, line 11, "select questions which have an impact value above a certain threshold").

12. The system of claim 11, further comprising: for each item in the generated set of new items, generating a score for at least one psychometric property of the item (Taylor, col. 8, line 55 – col. 9, line 9, "The question identified as impacting the competency score is then validated for use in an interview"; col. 11, lines 1-6, "each question has a weight for negative scoring and positive scoring"; col. 2, line 49 – col. 3, line 13, "Examples of some competencies may include drive, dedication, creativity, motivation, communication skills, teamwork, energy, enthusiasm, determination, reliability, honesty, integrity, intelligence, pride, dedication, analytical skills, listening skills, achievement profile, efficiency, economy, procedural awareness, opinion, emotional intelligence, etc."); filtering the generated set of new items to remove items with generated scores that are below a threshold; and adding the remaining new items from the generated set of new items to the item bank (Taylor, col. 10, line 44 – col. 11, line 6, "ratings below a threshold of 0.5 may be placed into the low matrix"; col. 24, line 55 – col. 25, line 11, "select questions which have an impact value above a certain threshold").

Re claims 3, 13:

3. The method of claim 1, further comprising generating a test using at least some of the items in the item bank. 13. The system of claim 11, further comprising generating a test using at least some of the items in the item bank (Taylor, col. 25, lines 12-36, "the interview design program may add the questions to the digital interview"; Abstract, "The interview design program creates the digital interview with the set of desired questions").

Re claims 4, 14:

4. The method of claim 1, wherein the items are test items. 14. The system of claim 11, wherein the items are test items (Taylor, col. 8, lines 36-44, "candidates who have been presented with questions from the question bank"; col. 8, lines 20-24, "a bank of questions may be stored in a data storage location").

Re claims 5, 15:

5. The method of claim 1, wherein the psychometric property comprises one of difficulty, discrimination, and reliability. 15. The system of claim 11, wherein the psychometric property comprises one of difficulty, discrimination, and reliability (Taylor, col. 8, line 55 – col. 9, line 9, "The question identified as impacting the competency score is then validated for use in an interview"; col. 11, lines 1-6, "each question has a weight for negative scoring and positive scoring"; col. 2, line 49 – col. 3, line 13, "Examples of some competencies may include drive, dedication, creativity, motivation, communication skills, teamwork, energy, enthusiasm, determination, reliability, honesty, integrity, intelligence, pride, dedication, analytical skills, listening skills, achievement profile, efficiency, economy, procedural awareness, opinion, emotional intelligence, etc.").

Re claims 6-7, 16-17:

6. The method of claim 1, wherein the score is generated by an AI model. 7. The method of claim 1, wherein the score is generated by expert reviewer. 16. The system of claim 11, wherein the score is generated by an AI model. 17. The system of claim 11, wherein the score is generated by expert reviewer (Taylor, col. 3, line 62 – col. 4, line 28, "machine-learning system"; col. 3, lines 14-24, "subject matter experts").

Re claim 9:

9. The method of claim 1, wherein the items comprise text items, image items, and video items (Taylor, col. 11, lines 60-64; col. 25, lines 37-52).

Response to Arguments

Applicant's arguments filed 10/15/2025 have been fully considered but they are not persuasive.

Applicant argues that the Office Action alleges that the claims are directed to the abstract idea of a "mental process" (Office Action, Page 5).
Without conceding that the independent claims are directed to an abstract idea or a mental process, Applicant respectfully submits that the independent claims as amended recite additional elements that integrate the judicial exception into a practical application and are therefore eligible subject matter under 35 U.S.C. § 101.

It is unclear which elements are additional elements. The "generative large language model" is neither a physical element, nor does it inherently require a computing device to be implemented. A "generative large language model" resembles a mathematical model or an algorithm for ranking an item pool and selecting items. A human may readily perform such a mathematical model using pen and paper.

Applicant argues: First, Applicant respectfully submits that the cited references fail to teach or suggest "selecting a first item from the set of top-k scored items and a second item from the set of bottom-k scored items by the computing device." Applicant has reviewed the cited references and can find no teaching or suggestion of such a feature.

The Office maintains the position that Taylor teaches sorting each item in the item bank based on the generated scores by the computing device (Taylor, col. 8, line 55 – col. 9, line 9, "The questions may also be ranked by the amount of impact they have on the competency score"; col. 17, lines 50-60, "the model 310 of the interview design program 110 may rank the list of questions based on the impact of the questions in the historical data"). Furthermore, Khabiri teaches sorting each item in the item bank based on the generated scores by the computing device to generate a set of top-k scored items and a set of bottom-k scored items (Khabiri, [0017], "the multi-dimensional insight ranking module to generate scores and sort the candidate questions"; fig. 7; [0034], "each of these factors is used to generate a score for the candidate expansion question, and a ranking is performed to generate a list of the candidate expansion questions, e.g., from a highest score to lowest score"; [0056], "the method obtains the highest ranked questions, e.g., a top-K insight (where K is a predetermined limit or number) questions having the highest scores from ranked insights"). The ranking from the highest score to the lowest score yields both the top-k insight (highest scores) and the bottom-k (i.e., lowest scores).

Applicant argues: Second, Applicant respectfully submits that the cited references also fail to teach or suggest "receiving a prompt for a generative large language model based on the sorted items by the computing device, wherein the prompt indicates a desired number of new items and includes the selected first item and the selected second item, wherein the prompt is for the desired number of new items that are similar to the selected first item and dissimilar to the selected second item from the item bank with the lowest generated score, and the prompt indicates the at least one psychometric property." … Further, Applicant respectfully submits that none of the cited references teaches or suggests "providing the prompt to the generative large language model by the computing device."

In the response to arguments section of the Office Action, the Examiner writes that the use of the generative large language model is taught by Srinivasan by the "pre-established language models 1254 stored in an ASR model knowledge base" and by Khabiri by the "use of a machine learning model" and "natural language format" (Office Action page 16). The Office also cited Williams et al. (US 2024/0289851 A1) as teaching a generative AI for sorting questions based on user data.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACK YIP, whose telephone number is (571) 270-5048. The examiner can normally be reached Monday through Friday, 9:00 AM - 5:00 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, XUAN THAI, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACK YIP/
Primary Examiner, Art Unit 3715
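
Stripped of the legal framing, the method recited in the §101 rejection above is a concrete item-generation loop. Below is a minimal sketch of that claimed flow as code; score_item and llm_generate are hypothetical placeholders, not functions from the application or the cited references:

```python
import heapq

def refresh_item_bank(bank, k, n_new, prop, score_item, llm_generate):
    """One pass of the claimed loop: score, sort, prompt an LLM with a
    top-k exemplar and a bottom-k counterexample, then update the bank."""
    scores = {item: score_item(item, prop) for item in bank}           # score each item
    top_k = heapq.nlargest(k, bank, key=scores.get)                    # set of top-k scored items
    bottom_k = heapq.nsmallest(k, bank, key=scores.get)                # set of bottom-k scored items
    exemplar, counterexample = top_k[0], bottom_k[0]                   # select one item from each set

    # The prompt indicates the desired number of new items and the
    # psychometric property, and asks for items similar to the exemplar
    # but dissimilar to the counterexample (mirroring the claim language).
    prompt = (
        f"Write {n_new} new test items optimizing {prop}. "
        f"Similar to: {exemplar!r}. Dissimilar to: {counterexample!r}."
    )
    new_items = [q for q in llm_generate(prompt) if q not in bank]     # exclude already-banked items

    bank.extend(new_items)                                             # add generated items
    bank.remove(min(bank, key=lambda i: scores.get(i, float("inf"))))  # drop the lowest-scored original
    return bank
```

The dispute in the Response to Arguments centers on the top-k/bottom-k selection and the similar/dissimilar prompt, i.e., the middle third of this loop.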

Prosecution Timeline

Apr 03, 2024
Application Filed
Nov 16, 2024
Non-Final Rejection — §101, §103
Jan 27, 2025
Response Filed
Jan 31, 2025
Final Rejection — §101, §103
Mar 20, 2025
Response after Non-Final Action
Apr 15, 2025
Request for Continued Examination
Apr 17, 2025
Response after Non-Final Action
Jul 11, 2025
Non-Final Rejection — §101, §103
Oct 15, 2025
Response Filed
Feb 06, 2026
Final Rejection — §101, §103 (current)
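
The projections that follow are easier to read against the elapsed time; a quick sketch using the timeline dates above and Python's standard library:

```python
from datetime import date

filed = date(2024, 4, 3)
latest_oa = date(2026, 2, 6)  # current final rejection
elapsed_years = (latest_oa - filed).days / 365.25
print(f"{elapsed_years:.1f} years and 4 OA rounds so far")
# Against the 4y 1m median time to grant projected below, roughly two
# more years and 1-2 additional OA rounds would be typical.
```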

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12588859
SYSTEM AND METHOD FOR INTERACTING WITH HUMAN BRAIN ACTIVITIES USING EEG-FNIRS NEUROFEEDBACK
2y 5m to grant • Granted Mar 31, 2026
Patent 12592160
System and Method for Virtual Learning Environment
2y 5m to grant • Granted Mar 31, 2026
Patent 12558290
BLOOD PRESSURE LOWERING TRAINING DEVICE
2y 5m to grant • Granted Feb 24, 2026
Patent 12525140
SYSTEMS AND METHODS FOR PROGRAM TRANSMISSION
2y 5m to grant • Granted Jan 13, 2026
Patent 12512012
SYSTEM FOR EVALUATING RADAR VECTORING APTITUDE
2y 5m to grant • Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 33%
Grant Probability With Interview: 70% (+37.6%)
Median Time to Grant: 4y 1m
PTA Risk: High

Based on 702 resolved cases by this examiner. Grant probability derived from career allow rate.
