Prosecution Insights
Last updated: April 19, 2026
Application No. 18/648,265

GESTURE TRAINING FOR SKILL ADAPTATION AND ACCESSIBILITY

Non-Final OA: §102, §103, Double Patenting
Filed: Apr 26, 2024
Examiner: LEGGETT, ANDREA C.
Art Unit: 2171
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)

Grant Probability: 76% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 4m
Grant Probability With Interview: 96%

Examiner Intelligence

Career Allow Rate: 76% (above average; +20.7% vs TC avg), 484 granted of 639 resolved
Interview Lift: strong, +20.7% in resolved cases with an interview
Typical Timeline: 3y 4m average prosecution; 32 applications currently pending
Career History: 671 total applications across all art units

Statute-Specific Performance

§101: 14.0% (-26.0% vs TC avg)
§103: 45.0% (+5.0% vs TC avg)
§102: 34.8% (-5.2% vs TC avg)
§112: 4.6% (-35.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 639 resolved cases.

Office Action

Rejections: §102, §103, Double Patenting
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the claims filed on April 26, 2024. Claim 1 is amended, and claims 1-21 are pending and examined below.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 7-25-2024 was filed. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Double Patenting

Claims 1-21 of this application are patentably indistinct from claims 1-21 of Application No. 17/817,445. Pursuant to 37 CFR 1.78(f), when two or more applications filed by the same applicant or assignee contain patentably indistinct claims, elimination of such claims from all but one application may be required in the absence of good and sufficient reason for their retention during pendency in more than one application. Applicant is required to either cancel the patentably indistinct claims from all but one application or maintain a clear line of demarcation between the applications. See MPEP § 822.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-21 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-21 of copending Application No. 17/817,445 (reference application). Although the claims at issue are not identical, they are not patentably distinct from each other because they name the same inventive entity and are commonly assigned. This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented.

The conflicting claims are compared below. "Instant" refers to Application No. 18/648,265; "reference" refers to copending Application No. 17/817,445.

Claim 1 (instant): An assembly, comprising: at least one processor configured with instructions to: identify a first gesture correlatable in at least one computer simulation to a command to control a character in a computer game and/or control a weapon in the computer game; identify a second gesture; input the first and second gestures to at least one machine learning (ML) model along with an identification of the command to train the ML model; after training the ML model, during execution of the computer simulation, identify a user gesture; input the user gesture to the ML model; and receive from the ML model, responsive to the input, at least one command.

Claim 1 (reference): An assembly, comprising: at least one processor configured with instructions to: present a first prompt for a person to make a first gesture, the first gesture being correlatable in at least one computer simulation to a command to control a character in a computer game and/or control a weapon in the computer game; identify a first gesture made in response to the prompt; present at least a second prompt for a person to make the first gesture; identify a second gesture made in response to the prompt; input the first and second gestures to at least one machine learning (ML) model along with an identification of the command to train the ML model; after training the ML model, during execution of the computer simulation, identify a user gesture; input the user gesture to the ML model; and receive from the ML model, responsive to the input, at least one command.

Claim 2 (instant): The assembly of claim 1, wherein the processor is configured with instructions to present prompts visibly on at least one video display to make the gestures.

Claim 2 (reference): The assembly of Claim 1, wherein the prompts are presented visibly on at least one video display.

Claim 3 (instant): The assembly of claim 1, wherein the processor is configured with instructions to present prompts audibly on at least one speaker to make the gestures.

Claim 3 (reference): The assembly of Claim 1, wherein the prompts are presented audibly on at least one speaker.

Claim 4 (identical in both applications): The assembly of Claim 1, wherein the first and second gestures each comprise a respective hand motion made once.

Claim 5 (identical in both applications): The assembly of claim 1, wherein the first and second gestures each comprise a number of respective displayed button object presses greater than one.

Claim 6 (identical in both applications): The assembly of claim 1, wherein the instructions are executable to: receive a signal from at least one physiological sensor; input the signal to the ML model; and receive output from the ML model indicating further learning is required.

Claim 7 (identical in both applications): The assembly of claim 1, wherein the instructions are executable to: receive a signal from at least one physiological sensor; input the signal to the ML model; and based at least in part on output from the ML model, alter execution of the computer simulation at least in part by changing a playback speed of the computer simulation and/or changing a skill level of the computer simulation.

Claim 8 (identical in both applications): The assembly of claim 1, wherein the instructions are executable to: receive a signal from at least one motion sensor; input the signal to the ML model; and receive output from the ML model indicating further learning is required.

Claim 9 (identical in both applications): The assembly of claim 1, wherein the instructions are executable to: receive a signal from at least one motion sensor; input the signal to the ML model; and based at least in part on output from the ML model, alter execution of the computer simulation at least in part by changing a playback speed of the computer simulation and/or changing a skill level of the computer simulation.

Claim 10 (instant): A method, comprising: training at least one machine learning (ML) model to recognize at least one gesture correlated to at least one command to control a character in a computer game and/or control a weapon in the computer game; during execution of the computer simulation, sending information representing at least one gesture made by the person to the ML model; receiving from the ML model at least one indication of at least one command; and executing the computer simulation according to the at least one command to control a character in the computer game and/or control the weapon in the computer game.

Claim 10 (reference): A method, comprising: training at least one machine learning (ML) model to recognize a manner in which a person makes at least one gesture correlated to at least one command to at least one computer simulation to control a character in a computer game and/or control a weapon in the computer game; during execution of the computer simulation, sending information representing at least one gesture made by the person to the ML model; receiving from the ML model at least one indication of at least one command; and executing the computer simulation according to the at least one command to control a character in the computer game and/or control a weapon in the computer game.

Claim 11 (identical in both applications): The method of claim 10, comprising training the ML model to recognize player motion.

Claim 12 (identical in both applications): The method of claim 10, comprising training the ML model to recognize player physiological state.

Claim 13 (instant): A device comprising: at least one computer readable storage apparatus that is not a transitory signal and that comprises instructions executable by at least one processor to: identify at least one gesture in free space; correlate the gesture to at least one command to control a character in a computer game and/or control a weapon in the computer game; and execute the at least one command to control the character in the computer game and/or control the weapon in the computer game.

Claim 13 (reference): A device comprising: at least one computer readable storage apparatus that is not a transitory signal and that comprises instructions executable to: train at least one machine learning (ML) model to recognize a manner in which a person makes at least one gesture correlated to at least one command to at least one computer simulation; during execution of the computer simulation, send information representing at least one gesture made by the person to the ML model; receive from the ML model at least one indication of at least one command; and execute the computer simulation according to the at least one command.

Claim 14 (identical in both applications): The device of claim 13, wherein the instructions are executable to: present a first prompt for a person to make a first gesture, the first gesture being correlated in at least one computer simulation to a command; identify a first prompted gesture made in response to the prompt; present at least a second prompt for a person to make the first gesture; identify a second prompted gesture made in response to the prompt; and input the first and second prompted gestures to at least one ML model along with an identification of the command to train the ML model.

Claim 15 (identical in both applications): The device of claim 14, wherein the prompts are presented visibly on at least one video display.

Claim 16 (identical in both applications): The device of claim 14, wherein the prompts are presented audibly on at least one speaker.

Claim 17 (identical in both applications): The device of claim 14, wherein the first and second prompted gestures each comprise a respective hand motion made once.

Claim 18 (identical in both applications): The device of claim 14, wherein the first and second prompted gestures each comprise a number of respective button object presses greater than one.

Claim 19 (identical in both applications): The device of claim 13, wherein the instructions are executable to: receive a signal from at least one physiological sensor; input the signal to the ML model; and receive output from the ML model indicating further learning is required.

Claim 20 (identical in both applications): The device of claim 13, wherein the instructions are executable to: receive a signal from at least one physiological sensor; input the signal to the ML model; and based at least in part on output from the ML model, alter execution of the computer simulation at least in part by changing a playback speed of the computer simulation and/or changing a skill level of the computer simulation.

Claim 21 (instant): The assembly of claim 2, wherein the first prompt to make the first gesture comprises a prompt to press a particular control a specified number of times as quickly as possible.

Claim 21 (reference): The assembly of claim 1, wherein the first prompt to make the first gesture comprises a prompt to press a particular control a specified number of times as quickly as possible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-5, 10-18 and 21 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Gurumurthy et al. (U.S. 2020/0269136).
With regard to claim 1, Gurumurthy teaches an assembly [abstract], comprising: at least one processor ([0037] a processor 334 (or a processor of the training manager 312 or virtual coach 318) will be a central processing unit (CPU)) configured with instructions to: identify a first gesture (Fig. 4, 402) correlatable in at least one computer simulation to a command to control a character in a computer game and/or control a weapon in the computer game (Fig. 1; [0020] In the example of FIG. 1, the state of the game is analyzed and a determination made as to one or more actions that should (or at least advantageously could) be taken by a user to achieve a determined or specified goal. In the figure, a graphical representation of a location 104 in the world can be provided, such as through an overlay or rendering within the game display); identify a second gesture (Figs. 2A-2F; [0020] In this example, the game might coach the player to move the avatar 102 to that location 104, then ready a weapon, lean around the corner, and fire at the gameplay element 106. Other actions might be provided as well, such as to wait until the gameplay element moves, jump or climb above the gameplay element, etc. In this way, the game can help to coach the player through the level, with the amount or level of coaching capable of varying based on a number of different factors); input the first and second gestures to at least one machine learning (ML) model ([abstract] This data can be used to train a machine learning model for the game; [0029] such an approach can help to provide a machine learning—or artificial intelligence-based virtual coach, which can assist players, such as e-sports gamers, in learning and/or improving their gameplay for at least certain games; [0032] Once the strategies are learned, such as by training machine learning models using the experienced player data) along with an identification of the command to train the ML model (Fig. 2D; Fig. 4; [abstract] This data can be used to train a machine learning model for the game. Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0012] provide a virtual coach that can help teach, train, or improve the skills of users of an application, such as a gaming application. Data can be obtained that demonstrates how skilled users utilize an application, such as how professional players play a specific game. This data can be used to train one or more machine learning models, for example, that can then provide inferences as to actions that should be taken in the game based on that training); after training the ML model, during execution of the computer simulation, identify a user gesture ([0043] Information for the determined game state can then be provided 408 as input to a trained machine learning model for the game, where the model can have been trained using data obtained for skilled players or other such sources); input the user gesture to the ML model (Fig. 4, 408-416; [0044] Using the trained model, one or more actions for the player to take in the game can be inferred 408 or otherwise determined. As mentioned, these can include short term actions or strategies, such as a next move to make, or longer term strategies, such as a location to which to relocate over time); and receive from the ML model, responsive to the input, at least one command ([0028] For example, the image 260 of FIG. 2F just provides a pointer 262 overlay indicating a potentially best option to travel based on the current game state, and that overlay may only be provided periodically or upon request of the user. In some embodiments the overlay might only appear when the player is about to take one action, such as to travel in a first direction, and it is determined that a different option would be better based upon various goals or criteria, etc.; [0073] a user can input a command to the device… such a device might not include any buttons at all and might be controlled only through a combination of visual and audio commands such that a user can control the device without having to be in contact with the device).

With regard to claim 2, the limitations are addressed above and Gurumurthy teaches wherein the processor is configured with instructions to present prompts visibly on at least one video display to make the gestures (Figs. 2B-2F; [0021] FIGS. 2A through 2F illustrate examples of advice or guidance that a virtual coach might provide to a gamer in accordance with various embodiments. As mentioned, the types of advice provided can depend in part upon factors such as the type of player, player skill level, player or game goal, or whether the advice is provided in a real-time or offline fashion, among other such options).

With regard to claim 3, the limitations are addressed above and Gurumurthy teaches wherein the processor is configured with instructions to present prompts audibly on at least one speaker to make the gestures ([abstract] The information can be conveyed to the player using visual, audio, or haptic guidance during gameplay, or can be provided offline, such as with video or rendered replay of the game session; [0073] a user can input a command to the device… such a device might not include any buttons at all and might be controlled only through a combination of visual and audio commands such that a user can control the device without having to be in contact with the device).

With regard to claim 4, the limitations are addressed above and Gurumurthy teaches wherein the first and second gestures each comprise a respective hand motion made once (Figs. 2A-2F; [abstract] The information can be conveyed to the player using visual, audio, or haptic guidance during gameplay, or can be provided offline, such as with video or rendered replay of the game session).

With regard to claim 5, the limitations are addressed above and Gurumurthy teaches wherein the first and second gestures each comprise a number of respective displayed button object presses greater than one ([0018] As is common for such games, at least a portion of a player avatar 102 can be displayed, and gameplay can involve manipulating that avatar through a virtual 3D world to accomplish one or more goals. The player can thus provide input, such as by tapping keys of a keyboard or pressing buttons of a joypad controller, to cause the player avatar to move through the world; [0021] For novice players, the advice may include instructions on switching to the grenade, such as the next key or button to press to take that action).

With regard to claim 10, Gurumurthy teaches a method, comprising: training at least one machine learning (ML) model (Fig. 2D; Fig. 4; [abstract] This data can be used to train a machine learning model for the game. Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0012] provide a virtual coach that can help teach, train, or improve the skills of users of an application, such as a gaming application. Data can be obtained that demonstrates how skilled users utilize an application, such as how professional players play a specific game. This data can be used to train one or more machine learning models, for example, that can then provide inferences as to actions that should be taken in the game based on that training) to recognize at least one gesture (Fig. 4, 402) correlated to at least one command to control a character in a computer game and/or control a weapon in the computer game (Fig. 1; [0020] In the example of FIG. 1, the state of the game is analyzed and a determination made as to one or more actions that should (or at least advantageously could) be taken by a user to achieve a determined or specified goal. In the figure, a graphical representation of a location 104 in the world can be provided, such as through an overlay or rendering within the game display); during execution of the computer simulation, sending information representing at least one gesture made by the person to the ML model ([abstract] This data can be used to train a machine learning model for the game; [0029] such an approach can help to provide a machine learning—or artificial intelligence-based virtual coach, which can assist players, such as e-sports gamers, in learning and/or improving their gameplay for at least certain games; [0032] Once the strategies are learned, such as by training machine learning models using the experienced player data); receiving from the ML model at least one indication of at least one command ([0028] For example, the image 260 of FIG. 2F just provides a pointer 262 overlay indicating a potentially best option to travel based on the current game state, and that overlay may only be provided periodically or upon request of the user. In some embodiments the overlay might only appear when the player is about to take one action, such as to travel in a first direction, and it is determined that a different option would be better based upon various goals or criteria, etc.; [0073] a user can input a command to the device… such a device might not include any buttons at all and might be controlled only through a combination of visual and audio commands such that a user can control the device without having to be in contact with the device); and executing the computer simulation according to the at least one command to control a character in the computer game and/or control the weapon in the computer game (Figs. 2A-2F; [0020] In the example of FIG. 1, the state of the game is analyzed and a determination made as to one or more actions that should (or at least advantageously could) be taken by a user to achieve a determined or specified goal. In the figure, a graphical representation of a location 104 in the world can be provided, such as through an overlay or rendering within the game display).

With regard to claim 11, the limitations are addressed above and Gurumurthy teaches comprising training the ML model to recognize player motion ([0017] displaying information as to a location where the player should move, indicating an action the player should make in the game, etc.…guidance provided for a novice player may indicate basic moves and actions that should be taken, in order to quickly enable the player to be competitive in the game; [0018] The player can thus provide input, such as by tapping keys of a keyboard or pressing buttons of a joypad controller, to cause the player avatar to move through the world).

With regard to claim 12, the limitations are addressed above and Gurumurthy teaches comprising training the ML model to recognize player physiological state (Fig. 2D; Fig. 4; [abstract] This data can be used to train a machine learning model for the game. Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0012] provide a virtual coach that can help teach, train, or improve the skills of users of an application, such as a gaming application. Data can be obtained that demonstrates how skilled users utilize an application, such as how professional players play a specific game. This data can be used to train one or more machine learning models, for example, that can then provide inferences as to actions that should be taken in the game based on that training).
With regard to claim 13, Gurumurthy teaches a device [abstract] comprising: at least one computer readable storage apparatus that is not a transitory signal and that comprises instructions executable by at least one processor ([0037] a processor 334 (or a processor of the training manager 312 or virtual coach 318) will be a central processing unit (CPU)) to: identify at least one gesture in free space (Fig. 4, 402); correlate the gesture to at least one command to control a character in a computer game and/or control a weapon in the computer game (Fig. 1; [0020] In the example of FIG. 1, the state of the game is analyzed and a determination made as to one or more actions that should (or at least advantageously could) be taken by a user to achieve a determined or specified goal. In the figure, a graphical representation of a location 104 in the world can be provided, such as through an overlay or rendering within the game display); and execute the at least one command to control the character in the computer game and/or control the weapon in the computer game ([0028] For example, the image 260 of FIG. 2F just provides a pointer 262 overlay indicating a potentially best option to travel based on the current game state, and that overlay may only be provided periodically or upon request of the user. In some embodiments the overlay might only appear when the player is about to take one action, such as to travel in a first direction, and it is determined that a different option would be better based upon various goals or criteria, etc.; [0073] a user can input a command to the device… such a device might not include any buttons at all and might be controlled only through a combination of visual and audio commands such that a user can control the device without having to be in contact with the device). 
With regard to claim 14, the limitations are addressed above and Gurumurthy teaches wherein the instructions are executable to: present a first prompt for a person to make a first gesture (Fig. 4, 402), the first gesture being correlated in at least one computer simulation to a command (Fig. 1; [0020] In the example of FIG. 1, the state of the game is analyzed and a determination made as to one or more actions that should (or at least advantageously could) be taken by a user to achieve a determined or specified goal. In the figure, a graphical representation of a location 104 in the world can be provided, such as through an overlay or rendering within the game display); identify a first prompted gesture made in response to the prompt (Figs. 2A-2F; [0020] In this example, the game might coach the player to move the avatar 102 to that location 104, then ready a weapon, lean around the corner, and fire at the gameplay element 106. Other actions might be provided as well, such as to wait until the gameplay element moves, jump or climb above the gameplay element, etc. In this way, the game can help to coach the player through the level, with the amount or level of coaching capable of varying based on a number of different factors); present at least a second prompt for a person to make the first gesture (Figs. 2A-2F; [0020] In this example, the game might coach the player to move the avatar 102 to that location 104, then ready a weapon, lean around the corner, and fire at the gameplay element 106. Other actions might be provided as well, such as to wait until the gameplay element moves, jump or climb above the gameplay element, etc. 
In this way, the game can help to coach the player through the level, with the amount or level of coaching capable of varying based on a number of different factors); identify a second prompted gesture made in response to the prompt ([0020] In this example, the game might coach the player to move the avatar 102 to that location 104, then ready a weapon, lean around the corner, and fire at the gameplay element 106. Other actions might be provided as well, such as to wait until the gameplay element moves, jump or climb above the gameplay element, etc. In this way, the game can help to coach the player through the level, with the amount or level of coaching capable of varying based on a number of different factors); and input the first and second prompted gestures to at least one ML model ([abstract] This data can be used to train a machine learning model for the game; [0029] such an approach can help to provide a machine learning—or artificial intelligence-based virtual coach, which can assist players, such as e-sports gamers, in learning and/or improving their gameplay for at least certain games; [0032] Once the strategies are learned, such as by training machine learning models using the experienced player data) along with an identification of the command to train the ML model (Fig. 2D; Fig. 4; [abstract] This data can be used to train a machine learning model for the game. Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0012] provide a virtual coach that can help teach, train, or improve the skills of users of an application, such as a gaming application. Data can be obtained that demonstrates how skilled users utilize an application, such as how professional players play a specific game. This data can be used to train one or more machine learning models, for example, that can then provide inferences as to actions that should be taken in the game based on that training). 
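The claim 14 mapping describes prompting for the same gesture twice and feeding both prompted samples, labeled with the identified command, to an ML model for training. A minimal training sketch follows; the nearest-centroid classifier and the feature vectors are assumptions standing in for whatever model and features the reference actually uses.

```python
import math
from collections import defaultdict

class GestureModel:
    """Toy nearest-centroid classifier: each command's centroid is the
    mean of the gesture feature vectors collected for that command."""

    def __init__(self):
        self.samples = defaultdict(list)

    def train(self, gesture_features, command):
        # The first and second prompted gestures are both added here,
        # each labeled with the identification of the command.
        self.samples[command].append(gesture_features)

    def _centroid(self, vectors):
        n = len(vectors)
        return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

    def predict(self, gesture_features):
        # Return the command whose centroid is nearest to the input.
        def dist(command):
            return math.dist(self._centroid(self.samples[command]),
                             gesture_features)
        return min(self.samples, key=dist)

model = GestureModel()
# Two prompted samples of the same gesture, slightly different each time:
model.train([1.0, 0.2], "fire_weapon")
model.train([0.9, 0.3], "fire_weapon")
model.train([0.0, 1.0], "jump")
```

Training on two (or more) prompted repetitions of the same gesture, as the claim recites, gives the model per-user variation to average over.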
With regard to claim 15, the limitations are addressed above and Gurumurthy teaches wherein the prompts are presented visibly on at least one video display (Figs. 2B-2F; [0021] FIGS. 2A through 2F illustrate examples of advice or guidance that a virtual coach might provide to a gamer in accordance with various embodiments. As mentioned, the types of advice provided can depend in part upon factors such as the type of player, player skill level, player or game goal, or whether the advice is provided in a real-time or offline fashion, among other such options).

With regard to claim 16, the limitations are addressed above and Gurumurthy teaches wherein the prompts are presented audibly on at least one speaker ([abstract] The information can be conveyed to the player using visual, audio, or haptic guidance during gameplay, or can be provided offline, such as with video or rendered replay of the game session; [0073] a user can input a command to the device… such a device might not include any buttons at all and might be controlled only through a combination of visual and audio commands such that a user can control the device without having to be in contact with the device).

With regard to claim 17, the limitations are addressed above and Gurumurthy teaches wherein the first and second prompted gestures each comprise a respective hand motion made once (Figs. 2A-2F; [abstract] The information can be conveyed to the player using visual, audio, or haptic guidance during gameplay, or can be provided offline, such as with video or rendered replay of the game session).

With regard to claim 18, the limitations are addressed above and Gurumurthy teaches wherein the first and second prompted gestures each comprise a number of respective button object presses greater than one ([0018] As is common for such games, at least a portion of a player avatar 102 can be displayed, and gameplay can involve manipulating that avatar through a virtual 3D world to accomplish one or more goals. 
The player can thus provide input, such as by tapping keys of a keyboard or pressing buttons of a joypad controller, to cause the player avatar to move through the world; [0021] For novice players, the advice may include instructions on switching to the grenade, such as the next key or button to press to take that action).

With regard to claim 21, the limitations are addressed above and Gurumurthy teaches wherein the first prompt to make the first gesture comprises a prompt to press a particular control a specified number of times as quickly as possible ([0023] the map and advice might activate, update, or appear automatically when there is advice to be given, or when it is likely to be needed, such as when a player has failed to complete a task for a number of times or has been attempting a specific task for at least a minimum period of time, etc.).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 
Claims 6-9 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Gurumurthy et al. (U.S. 2020/0269136) in view of Froy et al. (U.S. 2017/0169662).

With regard to claim 6, the limitations are addressed above and Gurumurthy teaches wherein the instructions are executable to: receive a signal ([abstract] Gameplay data for an identified player can be obtained, and related information provided as input to the trained model. The model can infer one or more actions or strategies to be taken by the player in order to achieve a determined goal); input the signal to the ML model ([abstract] This data can be used to train a machine learning model for the game. Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0029] such an approach can help to provide a machine learning—or artificial intelligence-based virtual coach, which can assist players, such as e-sports gamers, in learning and/or improving their gameplay for at least certain games; [0032] Once the strategies are learned, such as by training machine learning models using the experienced player data); and receive output from the ML model indicating further learning is required ([0038] the player data is run through a parser that outputs the underlying events that the player actually played, actions the player took in the game, input the user provided, etc.; [0053] The learning algorithm finds patterns in the training data that map the input data attributes to the target, the answer to be predicted, and a machine learning model is output that captures these patterns).

However, Gurumurthy does not specifically teach:
- from at least one physiological sensor

Froy teaches an electronic gaming machine that allows players to play an interactive game using their player eye gaze [abstract]. Froy also teaches at least one physiological sensor ([0080] Display device 12, 14 may also include a camera, sensor, and other hardware input devices; [0093] the EGM 10 may include at least one data capture camera device, which may be one or more cameras that detect one or more spectra of light, one or more sensors (e.g. optical sensor); [0109] The at least one data capture camera device and/or a sensor (e.g. an optical sensor) may also be configured to detect and track the position(s) of a player's eyes or more precisely, pupils, relative to the screen of the EGM 10). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the electronic gaming application of Gurumurthy with the electronic gaming machine of Froy having a physiological sensor, to achieve an electronic game providing advice or coaching depending upon the goals, skill level, and preferences of the player.

With regard to claim 7, the limitations are addressed above and Gurumurthy teaches wherein the instructions are executable to: receive a signal ([abstract] Gameplay data for an identified player can be obtained, and related information provided as input to the trained model. The model can infer one or more actions or strategies to be taken by the player in order to achieve a determined goal); input the signal to the ML model ([abstract] This data can be used to train a machine learning model for the game. 
Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0029] such an approach can help to provide a machine learning—or artificial intelligence-based virtual coach, which can assist players, such as e-sports gamers, in learning and/or improving their gameplay for at least certain games; [0032] Once the strategies are learned, such as by training machine learning models using the experienced player data); and based at least in part on output from the ML model ([0038] the player data is run through a parser that outputs the underlying events that the player actually played, actions the player took in the game, input the user provided, etc.; [0053] The learning algorithm finds patterns in the training data that map the input data attributes to the target, the answer to be predicted, and a machine learning model is output that captures these patterns), alter execution of the computer simulation at least in part by changing a playback speed of the computer simulation ([0018] This can make it difficult for many novice players to quickly get up to speed with the game, as the players must not only learn the strategy of the game and figure out what to do, but must also attempt to learn the specific inputs and combinations that can trigger the desired actions) and/or changing a skill level of the computer simulation ([0012] The types of advice or coaching given can vary depending upon factors such as the goals, skill level, and preferences of the player, and the types of advice given to a specific player can change over time as that player's skill set or preferences change; [0036] The virtual coach 318 can provide the player and game data as input to the trained model, which can then infer one or more actions, inputs, directions, changes, or other such advice that should be provided to the player). 
However, Gurumurthy does not specifically teach:
- from at least one physiological sensor

Froy teaches an electronic gaming machine that allows players to play an interactive game using their player eye gaze [abstract]. Froy also teaches at least one physiological sensor ([0080] Display device 12, 14 may also include a camera, sensor, and other hardware input devices; [0093] the EGM 10 may include at least one data capture camera device, which may be one or more cameras that detect one or more spectra of light, one or more sensors (e.g. optical sensor); [0109] The at least one data capture camera device and/or a sensor (e.g. an optical sensor) may also be configured to detect and track the position(s) of a player's eyes or more precisely, pupils, relative to the screen of the EGM 10). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the electronic gaming application of Gurumurthy with the electronic gaming machine of Froy having a physiological sensor, to achieve an electronic game providing advice or coaching depending upon the goals, skill level, and preferences of the player. 
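The claim 7 flow mapped above (run a sensor signal through the ML model, then alter execution of the simulation by changing playback speed and/or skill level) can be sketched as follows. The dictionary keys, thresholds, and setting names are hypothetical; neither reference discloses this specific interface.

```python
def adapt_game(model_output: dict) -> dict:
    """Hypothetical sketch of the claim 7 flow: the output of an ML
    model (fed by a sensor signal) drives changes to the playback speed
    and/or skill level of the computer simulation."""
    settings = {"playback_speed": 1.0, "skill_level": 5}
    if model_output.get("struggling"):
        # Model infers the player is struggling: slow the simulation
        # down and ease the difficulty.
        settings["playback_speed"] = 0.75
        settings["skill_level"] = max(1, settings["skill_level"] - 1)
    elif model_output.get("mastered"):
        # Model infers mastery: keep full speed and raise difficulty.
        settings["playback_speed"] = 1.0
        settings["skill_level"] = min(10, settings["skill_level"] + 1)
    return settings
```

The "and/or" in the claim is reflected here: a single model inference can change either setting or both.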
With regard to claim 8, the limitations are addressed above and Gurumurthy teaches wherein the instructions are executable to: receive a signal from at least one motion ([0017] displaying information as to a location where the player should move, indicating an action the player should make in the game, etc.…guidance provided for a novice player may indicate basic moves and actions that should be taken, in order to quickly enable the player to be competitive in the game; [0018] The player can thus provide input, such as by tapping keys of a keyboard or pressing buttons of a joypad controller, to cause the player avatar to move through the world); input the signal to the ML model ([abstract] This data can be used to train a machine learning model for the game. Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0029] such an approach can help to provide a machine learning—or artificial intelligence-based virtual coach, which can assist players, such as e-sports gamers, in learning and/or improving their gameplay for at least certain games; [0032] Once the strategies are learned, such as by training machine learning models using the experienced player data); and receive output from the ML model indicating further learning is required ([0038] the player data is run through a parser that outputs the underlying events that the player actually played, actions the player took in the game, input the user provided, etc.; [0053] The learning algorithm finds patterns in the training data that map the input data attributes to the target, the answer to be predicted, and a machine learning model is output that captures these patterns).

However, Gurumurthy does not specifically teach:
- from at least one motion sensor

Froy teaches an electronic gaming machine that allows players to play an interactive game using their player eye gaze [abstract]. Froy also teaches at least one motion sensor ([0093] EGM 10 may also include hardware configured to provide eye, motion or gesture tracking...The at least one data capture camera device may be used for eye, gesture or motion tracking of player, such as detecting eye movement, eye gestures, player positions and movements, and generating signals defining x, y and z coordinates…An example type of motion tracking is optical motion tracking. The motion tracking may include a body and head controller. The motion tracking may also include an eye controller. EGM 10 may implement eye-tracking recognition technology using cameras, sensors (e.g. optical sensor), data receivers and other electronic hardware to capture various forms of player input). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the electronic gaming application of Gurumurthy with the electronic gaming machine of Froy having a motion sensor, to achieve an electronic game providing advice or coaching depending upon the goals, skill level, and preferences of the player. 
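The claim 6/8 flow mapped above ends with "receive output from the ML model indicating further learning is required." A minimal sketch of that decision follows; the confidence-based threshold is an assumption used only for illustration and does not appear in either reference.

```python
def needs_further_learning(model_confidences: list, threshold: float = 0.8) -> bool:
    """Hypothetical check for the claim 6/8 flow: after a sensor signal
    is run through the ML model, decide from the model's output whether
    further learning (more prompted training examples) is required."""
    if not model_confidences:
        return True  # no output yet, so keep training
    # A low peak confidence suggests the model cannot yet recognize the
    # gesture reliably, so further learning is indicated.
    return max(model_confidences) < threshold
```

In this sketch the model's per-command confidence scores are the "output from the ML model," and the boolean result is the indication that further learning is required.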
With regard to claim 9, the limitations are addressed above and Gurumurthy teaches wherein the instructions are executable to: receive a signal from at least one motion ([0017] displaying information as to a location where the player should move, indicating an action the player should make in the game, etc.…guidance provided for a novice player may indicate basic moves and actions that should be taken, in order to quickly enable the player to be competitive in the game; [0018] The player can thus provide input, such as by tapping keys of a keyboard or pressing buttons of a joypad controller, to cause the player avatar to move through the world); input the signal to the ML model ([abstract] This data can be used to train a machine learning model for the game. Gameplay data for an identified player can be obtained, and related information provided as input to the trained model; [0029] such an approach can help to provide a machine learning—or artificial intelligence-based virtual coach, which can assist players, such as e-sports gamers, in learning and/or improving their gameplay for at least certain games; [0032] Once the strategies are learned, such as by training machine learning models using the experienced player data); and based at least in part on output from the ML model ([0038] the player data is run through a parser that outputs the underlying events that the player actually played, actions the player took in the game, input the user provided, etc.; [0053] The learning algorithm finds patterns in the training data that map the input data attributes to the target, the answer to be predicted, and a machine learning model is output that captures these patterns), alter execution of the computer simulation at least in part by changing a playback speed of the computer simulation and/or changing a skill level of the computer simulation ([0012] The types of advice or coaching given can vary depending upon factors such as the goals, skill level, and preferences of 
the player, and the types of advice given to a specific player can change over time as that player's skill set or preferences change; [0036] The virtual coach 318 can provide the player and game data as input to the trained model, which can then infer one or more actions, inputs, directions, changes, or other such advice that should be provided to the player).

However, Gurumurthy does not specifically teach:
- from at least one motion sensor

Froy teaches an electronic gaming machine that allows players to play an interactive game using their player eye gaze [abstract]. Froy also teaches at least one motion sensor ([0093] EGM 10 may also include hardware configured to provide eye, motion or gesture tracking...The at least one data capture camera device may be used for eye, gesture or motion tracking of player, such as detecting eye movement, eye gestures, player positions and movements, and generating signals defining x, y and z coordinates…An example type of motion tracking is optical motion tracking. The motion tracking may include a body and head controller. The motion tracking may also include an eye controller. EGM 10 may implement eye-tracking recognition technology using cameras, sensors (e.g. optical sensor), data receivers and other electronic hardware to capture various forms of player input). Therefore, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which said subject matter pertains to have modified the electronic gaming application of Gurumurthy with the electronic gaming machine of Froy having a motion sensor, to achieve an electronic game providing advice or coaching depending upon the goals, skill level, and preferences of the player. 
With regard to claim 19, the device claim corresponds to claim 6 and is therefore rejected under the same rationale. With regard to claim 20, the device claim corresponds to claim 7 and is therefore rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Krasadakis (US 2017/0212583) teaches an eye-tracking user interface allowing users to interact with a gaming console. Francis et al. (US 2023/0400918) teaches a system for a hands-free scrolling interface based on detected user reading activity.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREA C. LEGGETT whose telephone number is (571) 270-7700. The examiner can normally be reached M-F 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kieu Vu, can be reached at 571-272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREA C LEGGETT/Primary Examiner, Art Unit 2171

Prosecution Timeline

Apr 26, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578478
Method for Checking the Integrity of GNSS Correction Data Provided without Associated Integrity Information
2y 5m to grant Granted Mar 17, 2026
Patent 12576855
ELECTRONIC DEVICE AND METHOD FOR UPDATING WEATHER INFORMATION BASED ON ACTIVITY STATE OF USER USING THE SAME
2y 5m to grant Granted Mar 17, 2026
Patent 12532148
METHODS, DEVICES, AND SYSTEMS FOR VEHICLE TRACKING
2y 5m to grant Granted Jan 20, 2026
Patent 12530962
SELECTING TRAFFIC ALGORITHMS TO GENERATE TRAFFIC DATA
2y 5m to grant Granted Jan 20, 2026
Patent 12529568
RIDE EXPERIENCE ENHANCEMENTS WITH EXTERNAL SERVICES
2y 5m to grant Granted Jan 20, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
96%
With Interview (+20.7%)
3y 4m
Median Time to Grant
Low
PTA Risk
Based on 639 resolved cases by this examiner. Grant probability derived from career allow rate.
