Prosecution Insights
Last updated: April 19, 2026
Application No. 17/733,301

NATURAL LANGUAGE INTERFACE FOR VIRTUAL ENVIRONMENT GENERATION

Non-Final OA — §101, §102
Filed
Apr 29, 2022
Examiner
GALKA, LAWRENCE STEFAN
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Microsoft Technology Licensing, LLC
OA Round
6 (Non-Final)
76%
Grant Probability
Favorable
6-7
OA Rounds
2y 11m
To Grant
95%
With Interview

Examiner Intelligence

Grants 76% — above average
76%
Career Allow Rate
649 granted / 851 resolved
+6.3% vs TC avg
Strong +19% interview lift
+18.6%
Interview Lift
Based on resolved cases with an interview
Typical timeline
2y 11m
Avg Prosecution
28 currently pending
Career history
879
Total Applications
across all art units

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 851 resolved cases

Office Action

§101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/2/26 has been entered, wherein claims 1, 9 and 15 have been amended. Claims 1-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
In particular, exemplary presented claims 1, 9 and 15 include the following underlined claim elements:

a method for programmatically affecting a virtual environment based on natural language input, the method comprising: obtaining user input to the virtual environment; providing the obtained user input and programmatic grounding context comprising a set of instructions associated with the virtual environment both as prompt input to a generative machine learning model the set of instructions including a natural language portion and associated programmatic code, causing the generative machine learning model to generate a model output comprising new and distinct lines of programmatic code that were not included in the associated programmatic code in the set of instructions associated with the virtual environment to affect the virtual environment, wherein the model output includes an update to a state bag associated with the virtual environment; and providing the model output for execution by a computing device, thereby causing the computing device to execute an instruction of the model output to update the virtual environment.

a method for affecting a virtual environment based on natural language input, the method comprising: receiving user input to the virtual environment; generating model output by processing the received user input and programmatic grounding context together as prompt input to a generative machine learning model, wherein the programmatic grounding context comprises a set of instructions associated with the virtual environment, the set of instructions includes a natural language portion and associated programmatic code, the model output comprises new and distinct lines of programmatic code that were not included in the associated programmatic code in the set of instructions associated with the virtual environment, and the generative machine learning model is configured to process the prompt input to generate the model output; and executing an instruction of the programmatic code of the model output to affect the virtual environment in response to the
received user input.

The claim elements underlined above concern the court-enumerated abstract ideas of Mental Processes involving concepts performable by the human mind, including observation, evaluation, and judgment, because they set forth steps for generating instructions and related game operations based on observation of natural language input, as well as Certain Methods of Organizing Human Activity following rules or instructions, because the claims set forth the interactions involving one or more parties in the context of game generation and presentation for play.

This judicial exception is not integrated into a practical application because the claims do not recite additional elements that would integrate the abstract idea into a practical application. The recited “computing device” merely embodies Applicant’s claimed abstract idea(s) in a video game application on a general-purpose computer, and/or does no more than generally link the use of a judicial exception to a particular technological environment or field of use. There is no improvement made to computer technology, since the claims merely affect a virtual environment based on a user input; this is not related to a long-standing problem in computer technology. Additionally, there is no practical application, as no particular machine is used to implement the claim language; only generic computer components are used to perform the invention. Also, there is no transformation of the machine used in the application into a different state or thing. Lastly, the claims do not attempt to apply the abstract idea in a meaningful way beyond simply using the claimed machine. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they do not recite significantly more than a generic information processing machine consisting of a computing device.
Therefore, the claims are directed to an abstract idea that lacks significantly more and thus are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kovacs et al. (pub. no. 20190197402).

Regarding claim 1, Kovacs discloses a system comprising: at least one processor; and memory storing instructions that, when executed by the at least one processor, causes the system to perform a set of operations (“FIG. 15 is a block diagram illustrating the data flow between components of an embodiment of a scalable framework for autonomous artificial intelligence (AI) characters. In some embodiments, the data flow of FIG. 15 illustrates how a conversational AI server affects the game environment. In the example shown, the scalable AI framework of FIG. 15 includes planning problem description 1501, parser 1503, planner 1505, action executor 1507, AI game framework 1509, percept listener 1511, and conversational AI server 1551”, [0131]; “In some embodiments, the game framework used in the Futurable life simulation game uses artificial intelligence (AI) game framework 1509. In some embodiments, conversational artificial intelligence (AI) server 1551 is conversational AI server 1051 of FIG. 10 and 1301 of FIG. 13.
In some embodiments, planning problem description 1501, parser 1503, planner 1505, action executor 1507, AI game framework 1509, and percept listener 1511 are implemented using game AI component 1401 and AI game framework 1451 of FIG. 14 using their respective corresponding components, planning problem description 1403, parser 1405, planner 1409, action executor 1411, AI game framework 1451, and percept listener 1407. In some embodiments, FIGS. 1-9 relate to a lightweight artificial intelligence (AI) framework for implementing the data flow of FIG. 15. In some embodiments, planner 1505 is trained as described by FIGS. 1-6 and used to solve AI planning problems as described by FIGS. 7 and 8. In some embodiments, game AI component 1011 and/or AI loop 1017 of FIG. 10, game AI platform 1101 of FIG. 11, AI processor 1201 of FIG. 12, and/or game AIs 1333 and 1343 of FIG. 13 are used to implement the data flow of FIG. 15”, [0135]), the set of operations comprising: receiving user input to a virtual environment (“In some embodiments, a player and a non-player character (NPC) agent can communicate using natural language. In some embodiments, NPCs can share information with one another. In various embodiments, the conversation between a player and an NPC can change the game. For example, a conversation between a player and an NPC changes the game environment including the NPC's understanding of the game world. In some scenarios, in the event NPCs share information with one another, for example, NPC1 shares information of a conversation with the player with NPC2, a receiving NPC, such as NPC2, can use the shared information to determine its next action. 
In some embodiments, when a character determines its next action set it can refer to the statements shared with it by other NPCs and/or by the player”, [0134]); generating model output by processing the received user input and programmatic grounding context both as prompt input to a generative machine learning model, wherein the programmatic grounding context comprises a set of instructions associated with the virtual environment, the set of instructions includes a natural language portion and associated programmatic code, the generative machine learning model is configured to process the prompt input to generate the model output, and the model output comprises new and distinct lines of programmatic code that were not included in the associated programmatic code in the set of instructions associated with the virtual environment (“In the example shown, conversational artificial intelligence (AI) server 1551 feeds into AI game framework 1509. By coupling conversational AI server 1551 with AI game framework 1509, agents (or NPCs) of AI game framework 1509 receive conversation input and update their context and state based on the conversations of conversational AI server 1551. In some embodiments, each conversation of conversational AI server 1551 affects the sender and receiver agents. For example, in some embodiments, all participants of a conversation are impacted by a conversation. As another example, when NPC1 talks to NPC2, the information of NPC1 affects the belief of NPC2. NPC2 updates its belief and generates a new plan to reach a new goal. At the same time, NPC1 updates its belief that it provided information to NPC2. In some embodiments, beliefs are used to represent the informational state of the agent. In some embodiments, the informational state of an agent is the agent's understanding or beliefs about the world and includes its beliefs about itself and other agents. 
In some embodiments, beliefs can include inference rules and allow forward chaining that results in new beliefs. In some embodiments, a belief system is used to represent an agent's understanding of the world. A belief may be held by the agent to be true but the belief itself may not necessarily be true and may change in the future. For example, NPC1 has a true belief that NPC2 is awake. However, at a later time, NPC2 is now asleep. Absent other input, with respect to NPC1, NPC1 still believes that NPC2 is awake even though the belief is no longer true”, [0133]; “During each iteration of artificial intelligence (AI) loop 1017, the first action of solution plan 1023 is extracted and encoded using first action extractor and encoder 1025. The extracted action is executed using action executor 1027. For example, the action may include a movement action, a speaking action, an action to end a conversation, etc. In various embodiments, since each non-player character (NPC) is an agent and utilizes its own AI game component 1011, each NPC acts as an individual. For example, each NPC in a life simulation game lives its own life that may include working, sleeping, eating, going to school, going to the gym, etc. by performing its actions from its own action list of its corresponding solution plan. In various embodiments, the actions performed by NPCs influence the game environment and in turn the environment also influences the NPCs during the next iteration of AI loop 1017”, [0104]; “In the example shown, game artificial intelligence (AI) component 1401 continuously generates AI problems and solutions. For example, for each iteration, game AI component 1401 generates and/or updates planning problem description 1403. 
Parser 1405 parses planning problem description 1403 to create an AI planning problem that planner 1409 solves”, [0127]; AI planning problem interpreted to be a set of instructions associated with the virtual environment; “In some embodiments, when game artificial intelligence (AI) component 1401 generates a problem, creates a solution, and performs an action based on the solution, the performed action impacts the game environment and the agents of AI game framework 1451 such as agents 1453 and 1459. In some embodiments, agents, such as agents 1453 and 1459, receive environmental influences in a form of a percept”, [0128]; the solution is interpreted to be programmatic code for the virtual environment; A voice input that the player gives to an NPC such as “They are giving out free Xboxes at the church” could cause the problem planner to generate a problem for that NPC such as “take advantage of the church Xbox giveaway”; this would be an example of a natural language portion and associated programmatic code; that the solution generated from this input is novel for the NPC is interpreted to mean new and distinct lines of programmatic code); and executing an instruction of the programmatic code of the model output to affect the virtual environment in response to the received user input (“In some embodiments, actions are executed as a result of environmental changes and the executed actions will influence the game environment. For example, actions are executed by action executor 1507 as a result of changes related to artificial intelligence (AI) game framework 1509 and detected by percept listener 1511. Once executed, the actions of action executor 1507 in turn impact AI game framework 1509. In various embodiments, in addition to environmental changes from AI framework 1509 and actions executed by action executor 1507, conversations also impact the game environment. In some embodiments, a percept includes conversations and can change AI planning problems and plans.
For example, in the way random sounds or noises can impact the game environment, so can conversations”, [0132]; actions interpreted to be instructions of the programmatic code).

Regarding claim 2, Kovacs discloses determining the model output does not correspond to the user input; and updating a generative platform associated with the generative machine learning model (“In some embodiments, planner 1505 is trained as described by FIGS. 1-6 and used to solve AI planning problems as described by FIGS. 7 and 8. In some embodiments, game AI component 1011 and/or AI loop 1017 of FIG. 10, game AI platform 1101 of FIG. 11, AI processor 1201 of FIG. 12, and/or game AIs 1333 and 1343 of FIG. 13 are used to implement the data flow of FIG. 15”, [0135]; “At 611, an initial DNN is generated. At 613, training parameters are provided. At 615, a DNN training environment is utilized to train the initial DNN of 611 based on the initial DNN of 611 and the training parameters of 613. At 617, a trained DNN is produced”, [0080]).

Regarding claim 3, Kovacs discloses providing an indication to update a prompt store and a training data store of the generative platform (“In some embodiments, an AI tool includes an expression editor such as a visual expression editor for assigning and modifying precondition and effects of AI actions. The parameters may reference the preconditions and effects. Preconditions and conditions of conditional effects of AI actions may evaluate to true or false (e.g., Boolean, numeric, or object-comparison, etc.), while effects of AI actions can be Boolean, numeric, and/or object assignments”, [0242]).

Regarding claim 4, Kovacs discloses the model output is associated with a state bag that indicates a state of an object in the virtual environment associated with the model output (“An autonomous AI character receives sensor inputs. For example, the autonomous AI character detects movement, voices, and obstacles in the game environment.
Based on the sensory input, the character updates its set of beliefs. For example, an autonomous AI character constructs a set of beliefs that are based on facts, including observed facts from its sensors, that correspond to its understanding of the game world. Facts may include its position and positions of other NPCs, objects, obstacles, etc. in the game world. Its beliefs may also include its understanding of the beliefs of other NPCs. In addition to a set of beliefs, the autonomous AI character identifies one or more goals. Goals may include game objectives such as traveling from one location to another, speaking with the game player to inform her or him of relevant information, refueling a vehicle to increase its potential range, etc”, [0039]; [0133]).

Regarding claim 5, Kovacs discloses updating the state bag based on an object associated with the model output, thereby enabling the object to be indirectly referenced by a subsequent input (“In some embodiments, the action of an NPC agent can influence the objects in the game environment. For example, an NPC can move an object, carry an object, and/or consume an object, etc. These changes may influence the agent performing the action on the object as well as other agents in the game. For example, in response to a box moving from one position to another, in the event the box is significant in the planning of an autonomous AI NPC, the autonomous AI NPC will detect that the box has moved and its next determined action will reflect the moving of the box”, [0114]; “Using a triple database, a player can talk with one or more NPC agents and the NPC agents will remember the newly learned information from the conversation. The information includes at least the sentences spoken and may include other information whether game related or not. When a new user starts a game, the information saved can be initialized and/or reset.
Once saved for that user, the information may be kept and read whenever that user resumes the game, for example, after stopping the game and later resuming it from the last played state. In some embodiments, all saved information is saved using a triple format for each NPC agent in a triple database. In some embodiments, each player associated with a client can only access the game via the client's own saved memory store (e.g., triple database). For example, a client cannot access another client's triple database and therefore cannot access the saved knowledge of NPCs of another client”, [0124]).

Regarding claim 6, Kovacs discloses executing the programmatic code of the model output to programmatically affect the virtual environment further comprises one of: generating a new object within the virtual environment; updating an existing object within the virtual environment; removing the existing object from the virtual environment; or causing an object of the virtual environment to perform an associated action ([0114]).

Regarding claim 7, Kovacs discloses the user input includes one or more of textual input, spoken input, or gestural input ([0134]).

Regarding claim 8, Kovacs discloses the set of instructions comprises programmatic code for a software library associated with the virtual environment for affecting the virtual environment (“In various embodiments, AI entity 1211 is controlled by an AI library via interface 1203. For example, an AI library may be a library of AI functionality implemented on an AI processor such as AI processor 1201. In some embodiments, the AI library is a centralized library. In the example shown, AI entity 1211 sends percepts to interface 1203 and receives actions from interface 1203. In various embodiments, the percepts and actions are communicated to/from AI processor 1201. For example, AI processor 1201 determines actions based on the received percepts. As labeled in FIG.
12, in some embodiments, AI entity 1211 includes world event properties 1213 and 1215 and is capable of consuming and generating world events such as world events 1231”, [0117]).

Regarding claim 9, Kovacs discloses a method for programmatically affecting a virtual environment based on natural language input, the method comprising: obtaining user input to the virtual environment ([0134]); providing the obtained user input and programmatic grounding context comprising a set of instructions associated with the virtual environment both as prompt input to a generative machine learning model the set of instructions including a natural language portion and associated programmatic code, causing the generative machine learning model to generate a model output comprising new and distinct lines of programmatic code that were not included in the associated programmatic code in the set of instructions associated with the virtual environment to affect the virtual environment ([0133], [0135], [0127] & [0128]; AI planning problem interpreted to be a set of instructions associated with the virtual environment; The solution is interpreted to be programmatic code for the virtual environment; A voice input that the player gives to an NPC such as “They are giving out free Xboxes at the church” could cause the problem planner to generate a problem for that NPC such as “take advantage of the church Xbox giveaway”; this would be an example of a natural language portion and associated programmatic code; that the solution generated from this input is novel for the NPC is interpreted to mean new and distinct lines of programmatic code), wherein the model output includes an update to a state bag associated with the virtual environment ([0039]; [0133]); and providing the model output for execution by a computing device, thereby causing the computing device to execute an instruction of the model output to update the virtual environment ([0132]; actions interpreted to be instructions of the
programmatic code).

Regarding claim 10, Kovacs discloses the state bag is updated to include an object associated with the model output, thereby enabling the object to be indirectly referenced by a subsequent input ([0114], [0124]).

Regarding claim 11, Kovacs discloses the model output is generated based at least in part on a prompt ([0133]; [0127]) associated with a library for the virtual environment ([0117]).

Regarding claim 12, Kovacs discloses the library comprises a scripting language framework for affecting the virtual environment through programmatic code (“In some embodiments, in-game environment 1221 includes properties 1223 and 1225. In-game environment 1221 provides AI entity 1211 perceptions (e.g., based on a query) and in-game environment 1221 generates world events based on received actions from AI entity 1211. For example, in-game environment 1221 receives a list of actions performed by an AI entity represented by AI entity 1211. In some embodiments, the generated world events are world events 1231. In some embodiments, an autonomous artificial intelligence (AI) non-player character (NPC) perceives itself and its local environment. For example, the AI entity corresponding to the NPC, such as AI entity 1211, receives perceptions of the game environment (e.g., from in-game environment 1221) and sends the perceptions to an AI processor, such as AI processor 1201 via interface 1203, to generate an appropriate action based on the current context. In response, the AI entity, such as AI entity 1211, receives an action to be performed by the NPC and generates world events, such as world events 1231”, [0118] & [0119]).
Regarding claim 13, Kovacs discloses the programmatic grounding context comprises a text comment associated with a software instruction of the set of instructions to affect the virtual environment according to the text comment (“In some embodiments, triple databases 1311 and 1321 are databases for storage and retrieval of triples through semantic queries. In some embodiments, triple databases 1311 and 1321 are triple-stores and purpose-built databases. In various embodiments, a triple is a set of three entities that codify a statement about semantic data in the form of a subject-predicate-object expression. For example, “John is 35” and “John knows Gordon” are each subject-predicate-object triples. In the example shown, each memory of agent 1313, 1315, 1319, 1323, 1325, and 1329 has core information regarding the respective agent such as the agent's name, age, etc. In various embodiments, a player can talk to an agent. In various embodiments, each agent is a non-player character (NPC). Using a triple database, a player can talk with one or more NPC agents and the NPC agents will remember the newly learned information from the conversation. The information includes at least the sentences spoken and may include other information whether game related or not. When a new user starts a game, the information saved can be initialized and/or reset. Once saved for that user, the information may be kept and read whenever that user resumes the game, for example, after stopping the game and later resuming it from the last played state. In some embodiments, all saved information is saved using a triple format for each NPC agent in a triple database. In some embodiments, each player associated with a client can only access the game via the client's own saved memory store (e.g., triple database). For example, a client cannot access another client's triple database and therefore cannot access the saved knowledge of NPCs of another client”, [0123] & [0124]). 
Regarding claim 14, Kovacs discloses obtaining a subsequent input, wherein the subsequent input references an object of the state bag; generating a subsequent model output based on the subsequent input and the state bag; and providing the subsequent model output for processing by the computing device to update the virtual environment according to the subsequent input ([0133] & [0127]).

Regarding claim 15, Kovacs discloses a method for affecting a virtual environment based on natural language input, the method comprising: receiving user input to the virtual environment ([0134]); generating model output by processing the received user input and programmatic grounding context together as prompt input to a generative machine learning model, wherein the programmatic grounding context comprises a set of instructions associated with the virtual environment, the set of instructions includes a natural language portion and associated programmatic code, the model output comprises new and distinct lines of programmatic code that were not included in the associated programmatic code in the set of instructions associated with the virtual environment, and the generative machine learning model is configured to process the prompt input to generate the model output ([0133], [0135], [0127] & [0128]; AI planning problem interpreted to be a set of instructions associated with the virtual environment; The solution is interpreted to be programmatic code for the virtual environment; A voice input that the player gives to an NPC such as “They are giving out free Xboxes at the church” could cause the problem planner to generate a problem for that NPC such as “take advantage of the church Xbox giveaway”; this would be an example of a natural language portion and associated programmatic code; that the solution generated from this input is novel for the NPC, because the NPC had never travelled to the church, is interpreted to mean new and distinct lines of programmatic code); and executing an
instruction of the programmatic code of the model output to affect the virtual environment in response to the received user input ([0132]; actions interpreted to be instructions of the programmatic code).

Regarding claim 16, Kovacs discloses determining the model output does not correspond to the user input; and updating a generative platform associated with the generative machine learning model ([0135], [0080]).

Regarding claim 17, Kovacs discloses updating the multimodal generative platform further comprises: providing an indication to update a prompt store and a training data store of the generative platform ([0242]).

Regarding claim 18, Kovacs discloses the model output is associated with a state bag that indicates a state of an object in the virtual environment associated with the model output ([0039], [0133]).

Regarding claim 19, Kovacs discloses updating the state bag based on an object associated with the model output, thereby enabling the object to be indirectly referenced by a subsequent input ([0114], [0124]).

Regarding claim 20, Kovacs discloses executing the programmatic code of the model output to programmatically affect the virtual environment further comprises one of: generating a new object within the virtual environment; updating an existing object within the virtual environment; removing the existing object from the virtual environment; or causing an object of the virtual environment to perform an associated action ([0114]).

Response to Arguments

Applicant’s arguments filed on March 2, 2026 have been fully considered but they are not entirely persuasive. On pages 8 & 9, Applicant argues that the amended claims overcome the prior art of record because Kovacs fails to disclose inputting natural language and programmatic code associated with the virtual environment to output new lines of programmatic code. Examiner respectfully disagrees.
Kovacs discloses natural language input that causes NPC beliefs and goals to be adjusted, which in turn affects the AI planning problem; these are interpreted to be the natural language input and programmatic code associated with the virtual environment. This in turn is input into a generative model to create a solution, which can be novel according to the adjusted goals and beliefs of the NPC, as detailed above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE STEFAN GALKA whose telephone number is (571)270-1386. The examiner can normally be reached M-F 6-9 & 12-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis, can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAWRENCE S GALKA/Primary Examiner, Art Unit 3715
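For orientation, the flow recited in claims 9 and 15 can be sketched in a few lines: user input and programmatic grounding context (natural language paired with code) go to a generative model together as one prompt, and the model returns new programmatic code plus a state-bag update that is then executed. This is a minimal illustration under stated assumptions, not the applicant's implementation; `generate`, `GROUNDING_CONTEXT`, and the shape of `state_bag` are all hypothetical names introduced here.

```python
# Hedged sketch of the claimed flow. All identifiers are illustrative
# assumptions, not the applicant's actual implementation.

GROUNDING_CONTEXT = [
    # The "set of instructions": a natural language portion paired with
    # associated programmatic code.
    {"nl": "Spawn a cube at the given position.",
     "code": "env.spawn('cube', pos)"},
]

def generate(prompt):
    # Stand-in for the generative machine learning model. A real system
    # would call a code-generation model; we return canned output here.
    return {
        # New, distinct lines of code not present in the grounding context.
        "code": ["obj = env.spawn('cube', (0, 1, 0))"],
        # Update to the state bag, so the object can be referenced later.
        "state_bag_update": {"last_object": "cube"},
    }

def handle_input(user_input, state_bag):
    # 1. User input and grounding context are provided together as prompt input.
    prompt = {"input": user_input,
              "context": GROUNDING_CONTEXT,
              "state": state_bag}
    # 2. The model output comprises new programmatic code and a state-bag update.
    out = generate(prompt)
    state_bag.update(out["state_bag_update"])
    # 3. A real system would now execute an instruction of the model output
    #    in the environment's scripting runtime; here we just return it.
    return out["code"], state_bag

code, bag = handle_input("make a cube", {})
```

The state-bag update is what lets a subsequent input refer to "it" (the cube) indirectly, as claims 5, 10, and 14 describe.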

Prosecution Timeline

Apr 29, 2022
Application Filed
Feb 24, 2024
Non-Final Rejection — §101, §102
Apr 26, 2024
Interview Requested
May 29, 2024
Response Filed
Aug 22, 2024
Final Rejection — §101, §102
Nov 27, 2024
Request for Continued Examination
Dec 01, 2024
Response after Non-Final Action
Dec 05, 2024
Non-Final Rejection — §101, §102
May 12, 2025
Response Filed
Jun 25, 2025
Final Rejection — §101, §102
Sep 11, 2025
Examiner Interview Summary
Sep 11, 2025
Applicant Interview (Telephonic)
Sep 29, 2025
Request for Continued Examination
Oct 02, 2025
Response after Non-Final Action
Oct 18, 2025
Non-Final Rejection — §101, §102
Dec 23, 2025
Response after Non-Final Action
Dec 23, 2025
Notice of Allowance
Jan 15, 2026
Response after Non-Final Action
Mar 02, 2026
Request for Continued Examination
Mar 17, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589294
SYSTEMS AND METHODS FOR ELECTRONIC GAME CONTROL WITH VOICE DETECTION AND AUDIO STREAM PROCESSING
2y 5m to grant Granted Mar 31, 2026
Patent 12576334
RECEPTION APPARATUS, TRANSMISSION APPARATUS, AND INFORMATION PROCESSING METHOD
2y 5m to grant Granted Mar 17, 2026
Patent 12569764
INPUT ANALYSIS AND CONTENT ALTERATION
2y 5m to grant Granted Mar 10, 2026
Patent 12569756
CLOUD APPLICATION-BASED DEVICE CONTROL METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM
2y 5m to grant Granted Mar 10, 2026
Patent 12573270
CONTROLLING A USER INTERFACE
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

6-7
Expected OA Rounds
76%
Grant Probability
95%
With Interview (+18.6%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 851 resolved cases by this examiner. Grant probability derived from career allow rate.
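The headline figures are internally consistent with the career data shown above; the exact rounding and the interview-lift baseline are assumptions about how the dashboard computes them.

```python
# Reproducing the dashboard's headline numbers from the career data.
# Assumes the lift is additive in percentage points (an assumption).
granted, resolved = 649, 851          # examiner's career history above
allow_rate = granted / resolved       # career allow rate
interview_lift = 18.6                 # percentage-point lift with interview

with_interview = allow_rate * 100 + interview_lift
print(f"{allow_rate:.0%} base, {with_interview:.0f}% with interview")
# prints "76% base, 95% with interview"
```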
