Prosecution Insights
Last updated: April 19, 2026
Application No. 18/716,402

VIRTUAL OBJECT INTERACTION CONTROL METHOD, ELECTRONIC DEVICE, MEDIUM, AND COMPUTER PROGRAM PRODUCT

Non-Final OA (§101, §102)
Filed
Jun 04, 2024
Examiner
THAI, XUAN MARIAN
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Shanghai Lilith Technology Corporation
OA Round
1 (Non-Final)
2%
Grant Probability
At Risk
1-2
OA Rounds
3y 11m
To Grant
8%
With Interview

Examiner Intelligence

Grants only 2% of cases
2%
Career Allow Rate
4 granted / 175 resolved
-67.7% vs TC avg
+5.9%
Interview Lift
Moderate lift, based on resolved cases with interview
Typical timeline
3y 11m
Avg Prosecution
28 currently pending
Career history
203
Total Applications
across all art units
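The headline figures above can be reproduced from the raw career counts shown on this page. The sketch below is illustrative only (not the product's actual computation); it assumes the "with interview" figure is simply the career allow rate plus the stated interview lift.

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

# Raw counts from this page: 4 granted out of 175 resolved cases.
base = allow_rate(4, 175)

# Stated interview lift on this page: +5.9 percentage points.
with_interview = base + 5.9

print(f"{base:.0f}%")            # career allow rate, rounds to 2%
print(f"{with_interview:.0f}%")  # with interview, rounds to 8%
```

The 2% and 8% shown in the summary strip are consistent with these rounded values (4/175 ≈ 2.3%, plus 5.9 points ≈ 8.2%).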

Statute-Specific Performance

§101
22.3%
-17.7% vs TC avg
§103
37.0%
-3.0% vs TC avg
§102
17.7%
-22.3% vs TC avg
§112
18.8%
-21.2% vs TC avg
Black line = Tech Center average estimate • Based on career data from 175 resolved cases
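As a check on the "vs TC avg" deltas above: subtracting each delta from the examiner's rate implies a Tech Center average of 40.0% for every statute listed. The sketch below assumes that single implied baseline; 40.0% is back-computed from this page's figures, not an official USPTO number.

```python
# Implied Tech Center average, back-computed from the deltas on this page.
TC_AVG = 40.0  # assumption derived from the chart, not an official figure

# Examiner allowance/success rates per statute, as shown above.
examiner_rates = {"101": 22.3, "103": 37.0, "102": 17.7, "112": 18.8}

# Delta vs the implied TC average, rounded to one decimal place.
deltas = {s: round(r - TC_AVG, 1) for s, r in examiner_rates.items()}
print(deltas)  # {'101': -17.7, '103': -3.0, '102': -22.3, '112': -21.2}
```

Each computed delta matches the chart's "-17.7%", "-3.0%", "-22.3%", and "-21.2%" annotations.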

Office Action

§101 §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claim 10 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because claim 10 is directed to software per se, which is ineligible subject matter under 35 U.S.C. 101.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claim(s) 1-18 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by TAKAFUJI [US20210154581].

Regarding claim 1, TAKAFUJI discloses a virtual object interaction control method, applied to an electronic device, wherein the method comprises: a determining step of determining a first virtual object and a second virtual object to interact with each other ([0049], “Various behaviors such as motions of some of the characters in the game may be controlled in the game apparatus 100 based on operations of the other players on the game apparatuses 162. An image of a battle game may be thereby generated in the game apparatus 100 with the multiple players operating their respective characters”); a receiving step of receiving a first interaction value that the first virtual object performs on the second virtual object and a second interaction value that the second virtual object performs on the first virtual object ([0048], “A touch sensor 134 installed in the display 110 or the like is connected to the touch panel interface unit 124. The game apparatus 100 may receive operation instructions from the player through the touch sensor 134” and [0122], “Performing the aforementioned processing corrects the position of the actual character or the opponent character appropriately at each frame while taking the moving of the position due to the external factors (for example, environmental force such as gravity and wind, an action instruction made by the player, and the like) other than the animation data into consideration and a natural, appropriate image is generated”); and a display step of displaying a corresponding first interaction effect on the second virtual object based on the first interaction value and displaying a corresponding second interaction effect on the first virtual object based on the second interaction value ([0131], “When a scene includes multiple hits occurring at the same time and the multiple hits are hits involving different opponent characters, the positions of the multiple opponent characters are moved based on directions of moving and distances of moving that are obtained from the different opponent characters in a frame k of the scene. An image of the frame k is generated”).
Regarding claim 2, TAKAFUJI discloses the method according to claim 1, wherein the determining step further comprises determining a first hit rate of the first virtual object and a second hit rate of the second virtual object, and wherein the display step further comprises displaying the corresponding first interaction effect on the second virtual object based on the first hit rate and the first interaction value and displaying the corresponding second interaction effect on the first virtual object based on the second hit rate and the second interaction value ([0071], “FIG. 4 is a diagram illustrating an example of a functional block diagram 400 in one embodiment relating to an image generation device. An animation data table 402 defines animation data defining a series of actions of one or multiple characters involved in a scene. The animation data may define a series of actions of two characters in the case where the two characters are in a battle” and [0151], “FIG. 16 is a diagram illustrating a flowchart of processing in Embodiment 2. In Embodiment 2, control is performed such that a hit of a predetermined portion of a predetermined character and an opponent portion of an opponent character occurs at a hit timing. In this case, a vector from the predetermined portion to the opponent portion at the hit timing is used. Then, for example, the position of the opponent portion of the opponent character is controlled at each timing such that this vector becomes zero at the hit timing”). 
Regarding claim 3, TAKAFUJI discloses the method according to claim 2, wherein the display step further comprises displaying at least a portion of corresponding first interaction effects on the second virtual object in a first interaction cycle based on the first hit rate and a plurality of first interaction values received before the first interaction cycle, and displaying at least a portion of corresponding second interaction effects on the first virtual object in a second interaction cycle based on the second hit rate and a plurality of second interaction values received before the second interaction cycle ([0156], “In step S1606, an image of the scene is generated by moving the position of the opponent portion at each of multiple timings included in the scene by using the vector such that the predetermined portion hits the opponent portion at the hit timing.”).

Regarding claim 4, TAKAFUJI discloses the method according to claim 3, wherein the first interaction cycle and the second interaction cycle both comprise a hittable cycle and an unhittable cycle, wherein the at least a portion of the corresponding first interaction effects are displayed on the second virtual object in the hittable cycle of the first interaction cycle based on the first hit rate and the plurality of first interaction values received before the first interaction cycle, and the at least a portion of the corresponding second interaction effects are displayed on the first virtual object in the hittable cycle of the second interaction cycle based on the second hit rate and the plurality of second interaction values received before the second interaction cycle, and wherein the first hit rate and the second hit rate are respectively adjusted to 0 in the unhittable cycles of the first interaction cycle and the second interaction cycle ([0071], “FIG. 4 is a diagram illustrating an example of a functional block diagram 400 in one embodiment relating to an image generation device. An animation data table 402 defines animation data defining a series of actions of one or multiple characters involved in a scene. The animation data may define a series of actions of two characters in the case where the two characters are in a battle… In this case, the character for which no animation data is defined may be stationary or move depending on effects of environments such as gravity and wind, an instruction of the player, or the like”).

Regarding claim 5, TAKAFUJI discloses the method according to claim 4, wherein after all the corresponding first interaction effects are displayed on the second virtual object in the hittable cycle of the first interaction cycle based on the first hit rate and the plurality of first interaction values received before the first interaction cycle, the first hit rate is adjusted to 0, and after all the corresponding second interaction effects are displayed on the first virtual object in the hittable cycle of the second interaction cycle based on the second hit rate and the plurality of second interaction values received before the second interaction cycle, the second hit rate is adjusted to 0 ([0071], “An animation data table 402 defines animation data defining a series of actions of one or multiple characters involved in a scene. The animation data may define a series of actions of two characters in the case where the two characters are in a battle… In this case, the character for which no animation data is defined may be stationary or move depending on effects of environments such as gravity and wind, an instruction of the player, or the like”).
Regarding claim 6, TAKAFUJI discloses the method according to claim 5, wherein after all the corresponding first interaction effects are displayed on the second virtual object in the hittable cycle of the first interaction cycle based on the first hit rate and the plurality of first interaction values received before the first interaction cycle, when one or more new first interaction values are received, the first hit rate is restored and at least a portion of corresponding first interaction effects are displayed on the second virtual object based on the first hit rate and the one or more new first interaction values, and after all the corresponding second interaction effects are displayed on the first virtual object in the hittable cycle of the second interaction cycle based on the second hit rate and the plurality of second interaction values received before the second interaction cycle, when one or more new second interaction values are received, the second hit rate is restored and at least a portion of corresponding second interaction effects are displayed on the first virtual object based on the second hit rate and the one or more new second interaction values ([0113], “When k has exceeded M, the processing is terminated. Note that to take an impact of the hit into consideration, additional animation data or simulation processing that causes the character to move in a form similar to a motion of an actual human or the like may be applied to a motion of the character after the hit.”). 
Regarding claim 7, TAKAFUJI discloses the method according to claim 4, wherein after the at least a portion of the corresponding first interaction effects are displayed on the second virtual object in the hittable cycle of the first interaction cycle based on the first hit rate and the plurality of first interaction values received before the first interaction cycle, the remaining corresponding first interaction effects are continuously displayed on the second virtual object based on the first hit rate in the hittable cycle of a next first interaction cycle, and after the at least a portion of the corresponding second interaction effects are displayed on the first virtual object in the hittable cycle of the second interaction cycle based on the second hit rate and the plurality of second interaction values received before the second interaction cycle, the remaining corresponding second interaction effects are continuously displayed on the first virtual object based on the second hit rate in the hittable cycle of a next second interaction cycle ([0081], “In FIG. 11A, a series of scenes of a hit is formed of frames at five timings and is illustrated by overlapping five frames”).

Regarding claims 8-10, please refer to the claim rejection of claim 1.

Regarding claim 11, TAKAFUJI discloses the method according to claim 1, wherein the display step further comprises: displaying at least a portion of corresponding first interaction effects on the second virtual object in a first interaction cycle based on a plurality of first interaction values received before the first interaction cycle, and displaying at least a portion of the corresponding second interaction effects on the first virtual object in a second interaction cycle based on a plurality of second interaction values received before the second interaction cycle (Fig. 10, S1002 and S1004).

Regarding claims 12-18, please refer to the claim rejections of claims 2-7 and 11.
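As a reading aid for the anticipation mapping above, the three steps of claim 1 (determining, receiving, displaying) can be sketched in Python. This is purely an illustrative interpretation of the claim language; every name below (VirtualObject, interact, etc.) is hypothetical and appears in neither the application nor the TAKAFUJI reference.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    """Hypothetical stand-in for a claimed virtual object."""
    name: str
    effects: list = field(default_factory=list)

def interact(first: VirtualObject, second: VirtualObject,
             first_value: int, second_value: int) -> None:
    # Determining step: the two objects chosen to interact are the arguments.
    # Receiving step: first_value is the interaction value the first object
    # performs on the second; second_value is the reverse.
    # Display step: each object shows an effect based on the value the
    # *other* object performed on it.
    second.effects.append(("effect", first_value))
    first.effects.append(("effect", second_value))

a = VirtualObject("first")
b = VirtualObject("second")
interact(a, b, first_value=10, second_value=3)
print(a.effects, b.effects)  # [('effect', 3)] [('effect', 10)]
```

The §102 dispute effectively turns on whether TAKAFUJI's hit-timing and position-correction disclosures ([0122], [0131]) teach this bidirectional value-to-effect mapping, not on the mechanics sketched here.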
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YINGCHUAN ZHANG whose telephone number is (571) 272-1375. The examiner can normally be reached 8:00 - 4:30 M-F.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xuan Thai, can be reached at (571) 272-7147. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YINGCHUAN ZHANG/
Primary Examiner, Art Unit 3715

Prosecution Timeline

Jun 04, 2024
Application Filed
Feb 25, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12551797
VIRTUAL OBJECT CONTROL METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant · Granted Feb 17, 2026
Patent 8657605
VIRTUAL TESTING AND INSPECTION OF A VIRTUAL WELDMENT
2y 5m to grant · Granted Feb 25, 2014
Patent 8398404
SYSTEM AND METHOD FOR ELEVATED SPEED FIREARMS TRAINING
2y 5m to grant · Granted Mar 19, 2013
Patent (number unavailable)
Video display of high contrast graphics for newborns and infants
Granted
Patent (number unavailable)
Device including a lens array
Granted
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
2%
Grant Probability
8%
With Interview (+5.9%)
3y 11m
Median Time to Grant
Low
PTA Risk
Based on 175 resolved cases by this examiner. Grant probability derived from career allow rate.
