Prosecution Insights
Last updated: April 19, 2026
Application No. 18/698,668

Method for Controlling Virtual Objects in Virtual Environment, Medium, and Electronic Device

Non-Final OA: §101, §103, §112
Filed: Apr 04, 2024
Examiner: THAI, XUAN MARIAN
Art Unit: 3715
Tech Center: 3700 — Mechanical Engineering & Manufacturing
Assignee: Shanghai Lilith Technology Corporation
OA Round: 1 (Non-Final)
Grant Probability: 2% (At Risk)
OA Rounds: 1-2
To Grant: 3y 11m
With Interview: 8%

Examiner Intelligence

Grants only 2% of cases.

Career Allow Rate: 2% (4 granted / 175 resolved; -67.7% vs TC avg)
Interview Lift: +5.9% (moderate lift, based on resolved cases with interview)
Avg Prosecution: 3y 11m (typical timeline); 28 applications currently pending
Total Applications: 203 (career history, across all art units)
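The headline probabilities above are simple arithmetic on the examiner's record. A minimal sketch of how they appear to be derived (the formulas are inferred from the displayed numbers, not from a published methodology):

```python
# Reconstructing the headline figures from the examiner's record.
# Formulas are inferred from the displayed values; the tool's actual
# methodology may differ.

granted = 4
resolved = 175

allow_rate = granted / resolved               # career allow rate
interview_lift = 0.059                        # reported lift with interview

grant_probability = allow_rate                # ~2.3%, displayed rounded to 2%
with_interview = allow_rate + interview_lift  # ~8.2%, displayed as 8%

print(f"Career allow rate: {grant_probability:.1%}")  # 2.3%
print(f"With interview:    {with_interview:.1%}")     # 8.2%
```

The "8% With Interview" figure is consistent with simply adding the +5.9% lift to the base allow rate and rounding.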

Statute-Specific Performance

§101: 22.3% (-17.7% vs TC avg)
§103: 37.0% (-3.0% vs TC avg)
§102: 17.7% (-22.3% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)

Tech Center averages are estimates, based on career data from 175 resolved cases.
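The per-statute deltas shown are all consistent with a single Tech Center baseline of 40.0% for every statute. A quick check (the 40.0% baseline is an inference from the displayed figures, not a published value):

```python
# Verifying the "vs TC avg" deltas against an assumed common baseline.
# TC_AVG = 40.0 is inferred from the displayed deltas, not published.

TC_AVG = 40.0  # percent

statute_rates = {"§101": 22.3, "§103": 37.0, "§102": 17.7, "§112": 18.8}

for statute, rate in statute_rates.items():
    delta = round(rate - TC_AVG, 1)
    print(f"{statute}: {rate}% ({delta:+}% vs TC avg)")
```

Every delta on the page reproduces exactly under this assumption, which suggests the tool compares each statute-specific rate against one aggregate Tech Center figure rather than per-statute averages.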

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11 and 12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 11 encompasses a computer program product, which does not fall within one of the four statutory categories of invention because claim 11 is directed to software per se, which is ineligible subject matter under 35 U.S.C. 101. Claim 12 encompasses “A computer-readable storage medium,” which fails to fall within one of the four statutory categories of invention because, under the broadest reasonable interpretation of the instant claim, claim 12 encompasses transitory signals. Transitory signals are not within one of the four statutory categories (i.e., non-statutory subject matter). See MPEP 2106(I).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 3, 4 and 7-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 3 recites the limitation "the first matching label" in the last line. There is insufficient antecedent basis for this limitation in the claim (“first matching labels” was recited in the previous claims, but not “a first matching label”). Claim 4 recites the limitation "the second matching label" in the last line. There is insufficient antecedent basis for this limitation in the claim (“second matching labels” was recited in the previous claims, but not “a second matching label”). Claims 7 and 8 recite the limitation "the first reinforcement learning model". There is insufficient antecedent basis for this limitation in the claims. Claims 9 and 10 recite the limitation "the second reinforcement learning model". There is insufficient antecedent basis for this limitation in the claims.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 5, 6 and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Timm [US20150111641], in view of Beltran et al. [US12053704], hereinafter Beltran.
Regarding claim 1, Timm discloses a method for controlling virtual objects in a virtual environment, used for an electronic device, the virtual objects comprising a first virtual object controlled by a user and a second virtual object controlled by artificial intelligence (abstract, “When the user chooses a multi-player game, the user is prompted to select one or more customized AI characters”), wherein the method comprises: a first obtaining step for obtaining historical data of multiple historical plays of one or more first virtual objects in the virtual environment, and setting corresponding style labels for respective first virtual objects based on the historical data ([0029], “An AI character may be created based on a human player by using data about the human player, e.g., actions, behavior, and game play style, are recorded used to create an AI character that has the same characteristics”); a first training step for using the historical data of one or more first virtual objects belonging to respective style labels for training to obtain the second virtual objects corresponding to respective style labels ([0031], “The AI characters may be available in themed "packs" that are offered through the entertainment system. For example, one pack may have different AI characters that have a certain style, such as a "stealth" style or a "fighter" style”); and a matching step for determining matching labels corresponding to respective style labels by using experience scores of respective historical plays of one or more first virtual objects belonging to respective style labels, and selecting one or more corresponding second virtual objects based on the matching labels to join to a current play ([0031], “The pack may also include AI characters of different skill levels. Alternatively, a single AI character may have different styles and skill levels from which a user may choose”). 
However, Timm does not explicitly disclose a calculation step for calculating, for respective historical plays of each first virtual object, experience scores of respective historical plays by using the historical data of respective historical plays. Nevertheless, Beltran teaches, in a like invention, a calculation step for calculating, for respective historical plays of each first virtual object, experience scores of respective historical plays by using the historical data of respective historical plays (col. 22, lines 20-26, “success criteria may be defined to determine skill level of the player, to include how quick is the players' response time, how accurate is the player in targeting one or more targets (e.g., generally a skilled player has a fast trigger and moves from one target to another quickly, decisively, and accurately), how quick is the period between controller inputs, etc.” and col. 26, lines 23-26, “For example, the player profiler 144a of the analyzer 140 is configured to perform profiling of the player playing the gaming application (e.g., determine skill level of the player)”). Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by Timm to include the calculation step for calculating experience scores, as taught by Beltran, in order to make it more convenient for the user to match an AI character with a similar experience score so that the game play would be more fun.

Regarding claim 2, the combination of Timm and Beltran discloses the method according to claim 1, wherein the multiple historical plays comprise a first type of historical play and a second type of historical play, and the current play comprises a first type of current play and a second type of current play (Timm, [0031], “The AI characters may be available in themed "packs" that are offered through the entertainment system. For example, one pack may have different AI characters that have a certain style, such as a "stealth" style or a "fighter" style”), wherein, in the matching step, determining first matching labels corresponding to respective style labels by using the experience scores of respective first type of historical plays of one or more first virtual objects belonging to respective style labels, and selecting one or more corresponding second virtual objects based on the first matching labels to join the first type of current play, and determining second matching labels corresponding to respective style labels by using the experience scores of respective second type of historical plays of one or more first virtual objects belonging to respective style labels, and selecting one or more corresponding second virtual objects based on the second matching labels to join the second type of current play (Timm, Fig. 3B, “Themed Packs 325” and Fig. 3D, “Skill Level 335”).

Regarding claim 5, the combination of Timm and Beltran discloses the method according to claim 1, wherein, in the first obtaining step, corresponding style labels are set for respective first virtual objects by using a clustering algorithm, wherein each style label is corresponding to at least one first virtual object (Timm, [0031], “The AI characters may be available in themed "packs" that are offered through the entertainment system. For example, one pack may have different AI characters that have a certain style, such as a "stealth" style or a "fighter" style”).

Regarding claim 6, the combination of Timm and Beltran discloses the method according to claim 1, wherein the historical data in respective historical plays comprises feedback data in respective historical plays, wherein, in the calculation step, the experience scores of respective historical plays are calculated based on the feedback data in respective historical plays by using a predetermined calculation function (Beltran, col. 22, lines 20-26, “success criteria may be defined to determine skill level of the player, to include how quick is the players' response time, how accurate is the player in targeting one or more targets (e.g., generally a skilled player has a fast trigger and moves from one target to another quickly, decisively, and accurately), how quick is the period between controller inputs, etc.”).

Regarding claims 11-13, please refer to the claim rejection of claim 1.

Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Timm, in view of Beltran, further in view of Takiguchi et al. [US20190318740], hereinafter Takiguchi.

Regarding claim 7, the combination of Timm and Beltran discloses the method according to claim 1. However, the combination of Timm and Beltran does not explicitly disclose the method further comprising: a strength adjustment step for interfering with, in the current play, the second virtual object in real time by using the first reinforcement learning model to adjust strength of the second virtual object. Nevertheless, Takiguchi teaches, in a like invention, a strength adjustment step for interfering with, in the current play, the second virtual object in real time by using the first reinforcement learning model to adjust strength of the second virtual object ([0156], “So-called reinforcement learning can be used as the learning of the dialog content generation unit 37.” [0160], “The dialog content generation unit 37 acquires, every time it outputs the dialog content W, the change in the feeling of the dialog partner before and after the output of the dialog content W based on the input data Da, and subjects the neural network to reinforcement learning based on the dialog content W and the change in the feeling of the dialog partner.”).
Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by the combination of Timm and Beltran to include the reinforcement learning model that adjusts strength of the second virtual object in real time, as taught by Takiguchi, in order to adapt the AI-generated character in real time based on the player’s feedback and make the game play more fun.

Regarding claim 8, the combination of Timm, Beltran and Takiguchi discloses the method according to claim 7, wherein the strength adjustment step further comprises: a second obtaining step for obtaining first real-time play data of the first virtual object closest to the second virtual object during the current play; a second training step for inputting the first real-time play data into the first reinforcement learning model for training; and an interfering step for interfering with an input and/or output of the second virtual object in real time by using an output of the first reinforcement learning model, to adjust strength of the second virtual object (Takiguchi, [0160], “The dialog content generation unit 37 acquires, every time it outputs the dialog content W, the change in the feeling of the dialog partner before and after the output of the dialog content W based on the input data Da, and subjects the neural network to reinforcement learning based on the dialog content W and the change in the feeling of the dialog partner.”).

Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Timm, in view of Beltran, further in view of Merai et al. [US20190171897], hereinafter Merai.

Regarding claim 9, the combination of Timm and Beltran discloses the method according to claim 1. However, the combination of Timm and Beltran does not explicitly disclose the method further comprising: a label adjustment step for adjusting, in the current play, the style label in real time by using the second reinforcement learning model to obtain an updated style label, so as to change the second virtual object to an updated second virtual object corresponding to the updated style label. Nevertheless, Merai teaches, in a like invention, a label adjustment step for adjusting, in the current play, the style label in real time by using the second reinforcement learning model to obtain an updated style label, so as to change the second virtual object to an updated second virtual object corresponding to the updated style label ([0129], “In a second subclassification application, namely multi-label classification (i.e. classifying multiple objects within a same scene), systems and methods described herein can provide improvements through prompting the reinforcement learning vision system to continuously look for all the objects found within a scene (i.e. appropriately labelling all of the objects), and only stop when each object has been recognized/classified with the required accuracy”). Thus, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to have modified the method disclosed by the combination of Timm and Beltran to include the reinforcement learning model that obtains an updated style label in real time, as taught by Merai, in order to represent the play styles more accurately for better game play.

Regarding claim 10, the combination of Timm, Beltran and Merai discloses the method according to claim 9, wherein the label adjustment step further comprises: a pre-training step for using the historical data of the first virtual object for training to obtain the second reinforcement learning model; an action execution step for executing, in the current play, by the second virtual object, a current action corresponding to a current style label in the virtual environment, and generating one or more parameters in a current state; a second training step for inputting the current action and one or more parameters in a previous state generated by executing a previous action into the second reinforcement learning model for training; and an updating step for outputting, by the second reinforcement learning model, the updated style label, to change the second virtual object into an updated second virtual object corresponding to the updated style label (Merai, [0129], “In a second subclassification application, namely multi-label classification (i.e. classifying multiple objects within a same scene), systems and methods described herein can provide improvements through prompting the reinforcement learning vision system to continuously look for all the objects found within a scene (i.e. appropriately labelling all of the objects), and only stop when each object has been recognized/classified with the required accuracy”).

Examiner’s Note

The prior art does not demonstrate the features of claims 3 and 4 under the best understanding of claims 3 and 4. However, as illustrated in the 35 U.S.C. 112(b) section above, claims 3 and 4 are rejected under 35 U.S.C. 112(b).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUAN ZHANG, whose telephone number is (571) 272-1375. The examiner can normally be reached 8:00 - 4:30 M-F.
Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/YINGCHUAN ZHANG/
Primary Examiner, Art Unit 3715

Prosecution Timeline

Apr 04, 2024
Application Filed
Jan 14, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12551797: VIRTUAL OBJECT CONTROL METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM (granted Feb 17, 2026; 2y 5m to grant)
Patent 8657605: VIRTUAL TESTING AND INSPECTION OF A VIRTUAL WELDMENT (granted Feb 25, 2014; 2y 5m to grant)
Patent 8398404: SYSTEM AND METHOD FOR ELEVATED SPEED FIREARMS TRAINING (granted Mar 19, 2013; 2y 5m to grant)
Patent (number unavailable): Video display of high contrast graphics for newborns and infants (granted)
Patent (number unavailable): Device including a lens array (granted)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 2%
With Interview: 8% (+5.9%)
Median Time to Grant: 3y 11m
PTA Risk: Low
Based on 175 resolved cases by this examiner. Grant probability derived from career allow rate.
