Prosecution Insights
Last updated: April 19, 2026
Application No. 18/751,157

INTERACTION PROCESSING METHOD AND APPARATUS FOR VIRTUAL SCENE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Non-Final OA §103
Filed
Jun 21, 2024
Examiner
TSWEI, YU-JANG
Art Unit
2614
Tech Center
2600 — Communications
Assignee
Tencent Technology (Shenzhen) Company Limited
OA Round
1 (Non-Final)
84%
Grant Probability
Favorable
1-2
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 84% — above average
84%
Career Allow Rate
376 granted / 447 resolved
+22.1% vs TC avg
Strong +17% interview lift
+17.0%
Interview Lift
resolved cases with interview
Typical timeline
2y 5m
Avg Prosecution
44 currently pending
Career history
491
Total Applications
across all art units
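The headline allow rate in the cards above is a simple ratio of the career counts and can be reproduced directly. The sketch below only assumes the 376 granted / 447 resolved split reported in the Examiner Intelligence card:

```python
# Reproducing the dashboard's career allow rate from the reported counts
# (376 granted out of 447 resolved, per the Examiner Intelligence card).
granted = 376
resolved = 447
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # prints "Career allow rate: 84.1%"
```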

Statute-Specific Performance

§101
5.5%
-34.5% vs TC avg
§103
66.4%
+26.4% vs TC avg
§102
5.6%
-34.4% vs TC avg
§112
7.1%
-32.9% vs TC avg
Tech Center average shown for comparison • Based on career data from 447 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. CN202211078598.7, filed on September 5, 2022. Should applicant desire to obtain the benefit of foreign priority under 35 U.S.C. 119(a)-(d) prior to declaration of an interference, a certified English translation of the foreign application must be submitted in reply to this action. 37 CFR 41.154(b) and 41.202(e).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 8, 12-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (US 20200293154 A1, hereinafter Qiu) in view of Spivack et al. (US 20190108686 A1, hereinafter Spivack).

Regarding Claim 13, Qiu teaches an interaction processing apparatus for a virtual scene, the apparatus comprising (Qiu, Fig. 15, Element 1500, an electronic device): processing circuitry (Qiu, Fig. 15, Element 1501, Processor) configured to:

display a virtual scene in an [[interaction]] interface (Qiu, Paragraphs [0003]-[0004], "The virtual scene included in the present disclosure may be used for simulating a virtual space ... The user may control a virtual object to move in the virtual scene"; "The user may perform a touch operation on the terminal ... The game data may include virtual scene data, behavioral data of a virtual object in the virtual scene ... The terminal may display the virtual scene in full screen on the current display interface"); [[the virtual scene including multiple groups, ]] and each of the groups including at least one virtual object (Qiu, Paragraph [0004], "The game data may include virtual scene data, behavioral data of a virtual object in the virtual scene");

display, based on an advance control operation on a first group, the first group advancing in the virtual scene (Qiu, Paragraph [0042], "the user may control a virtual object <read on first group> to move in the virtual scene"; it is noted that the controlled virtual object by default belongs to the first group);

display first prompt information of a first region based on the first group advancing to a first location and a distance between the first location and the first region in the virtual scene being less than a distance threshold (Qiu, Paragraph [0014], "a display module, configured to display a prompt signal <read on first prompt information> in a viewing angle image of a currently controlled object <read on first group>, the prompt signal <read on first prompt information> being used for prompting a location <read on first location> of the target scene area <read on first region> in the virtual scene"; [0087], "the target display condition may be that: a distance between the location <read on first location> of the currently controlled object <read on first group> and the target scene area <read on first region> is not more than a target distance <read on distance threshold>");

the first prompt information indicating that [[a first interaction event between the multiple groups has occurred]] in the first region (Qiu, Paragraph [0014], "the prompt signal <read on first prompt information> being used for prompting a location <read on first location> of the target scene area <read on first region> in the virtual scene").

But Qiu does not explicitly disclose interaction interface ... the virtual scene including multiple groups ... the first prompt information indicating that a first interaction event between the multiple groups has occurred in the first region.

However, Spivack teaches display first prompt information of a first region based on the first group advancing to a first location and a distance between the first location and the first region in the virtual scene being less than a distance threshold (Spivack, Paragraph [1080], "relative location or distance ranges in the alternate/augmented reality environment (e.g., within 3 meters of a computer-generated player, within 10 meters radius of another virtual object, etc.)"; [1081], "relative location or distance range(s) in the alternate/augmented reality environment from an event"; [0098], "In a movement process of the currently controlled object, the marker point location may be updated in real time, and when the marker point location in the global map is updated"), the first prompt information indicating that a first interaction event between the multiple groups has occurred in the first region (Spivack, Paragraph [1108], "the augmented reality environment is accessible by the human user and a second human user ... occurrence of a triggering event <read on first interaction event> associated with the virtual object is detected ... a notification <read on first prompt information> to notify the human user is generated via the alternate reality environment"; [1095], "occurrence of a digital event <read on first interaction event>, or a synthetic event in the alternate/augmented reality environment").

Qiu and Spivack are analogous art, since both deal with presenting information in a virtual/AR environment using location/distance-based criteria.
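As an illustration of the mapped limitation, the distance-based display condition quoted from Qiu [0087] combined with Spivack's event-triggered notification can be sketched in a few lines. This is a hypothetical sketch for the reader: the names (`Region`, `prompt_for`, `DISTANCE_THRESHOLD`) and the Euclidean distance metric are assumptions, not disclosures of either reference.

```python
import math
from dataclasses import dataclass

DISTANCE_THRESHOLD = 10.0  # hypothetical "target distance" per Qiu [0087]

@dataclass
class Region:
    name: str
    x: float
    y: float
    has_event: bool  # Spivack-style: an interaction event occurred here

def prompt_for(group_x: float, group_y: float, region: Region):
    """Return prompt text when the controlled group is within the distance
    threshold of a region where an interaction event has occurred; else None."""
    if not region.has_event:
        return None
    if math.hypot(region.x - group_x, region.y - group_y) >= DISTANCE_THRESHOLD:
        return None
    return f"Interaction event occurred in {region.name}"

# Group at the origin, event region 5 units away (inside the threshold).
print(prompt_for(0.0, 0.0, Region("first region", 3.0, 4.0, True)))
```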
Qiu provided a way of prompting a location in a virtual scene using a prompt signal and applying a distance condition between a controlled object's location and a target scene area. Spivack provided a way of generating a notification to a human user in an alternate reality environment responsive to the occurrence of an event, using distance ranges. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the event/notification prompting taught by Spivack into the modified invention of Qiu, such that the prompt information displayed based on distance can indicate the occurrence of an interaction event in the region.

Regarding Claim 1, it recites limitations similar in scope to the limitations of Claim 13, but as a method, and the combination of Qiu and Spivack teaches all the limitations of Claim 13. Therefore, it is rejected under the same rationale.

Regarding Claim 2, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination further teaches a first account that is configured to log in to the interaction interface to control the first group, a second account that has a social relationship with the first account, or a third account that reaches a popularity threshold (Spivack, Paragraph [1124], "the augmented reality environment is accessible by the human user <read on first account> and a second human user <read on second account>"; [1125], "interaction ... can include ... friending ... following ... between the human user <read on first account> and the second human user <read on second account> ..."; Paragraph [0558], "follower/follow mechanics ... mini celebrities <read on third account> forming ... big ones <read on popularity threshold> ..."; and Paragraph [1099], "criteria or requirements ... can include ... user score <read on popularity threshold>"). As explained in the rejection of Claim 1, the rationale for combining the social-network/user-account control taught by Spivack into Qiu is provided above.

Regarding Claim 3, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination further teaches displaying detail information of the first [[ interaction ]] event based on an operation on a detail command (Qiu, Paragraph [0101], "1301. The terminal obtains a touch event or a click event on a terminal screen"; [0102], "A user may perform an operation on the application interface"; [0078], "The prompt signal may be a text prompt, an icon prompt, and/or a special display effect"). Qiu does not explicitly disclose, but Spivack teaches, not displaying the first prompt information based on an operation on a close command (Spivack, Paragraph [0491], "user interfaces are depicted in the example figures of FIG. 2A-2L"; Paragraph [0493], "The social elements of the AR environment as enabled or actions/reactions/interactions facilitated therefor"; Paragraph [0543], "The message could be associated with the location it was left at for a specific user or a group of users meeting certain criteria. The intended recipient(s) are able to access it or respond to it"; Paragraph [1123], "an information halo associated with the human user can be depicted"). As explained in the rejection of Claim 1, the rationale for combining the AR-interface interaction controls taught by Spivack into Qiu is provided above.

Regarding Claim 8, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination further teaches, based on an operation on the first prompt information, displaying detail information [[ corresponding to the first interaction event of different types]] (Qiu, Paragraph [0101], "1301. The terminal obtains a touch event or a click event on a terminal screen"; [0102], "A user may perform an operation on the application interface"; [0078], "The prompt signal may be a text prompt, an icon prompt, and/or a special display effect"). But Qiu does not explicitly disclose detail information corresponding to the first interaction event of different types. However, Spivack teaches displaying information associated with such events (Spivack, Paragraph [1108], "occurrence of a triggering event associated with the virtual object is detected ... a notification to notify the human user is generated via the alternate reality environment <read on displaying detail information> for the detected interaction event") and displaying detail information corresponding to the first interaction event of different types (Spivack, Paragraph [1095], "occurrence of a digital event, or a synthetic event in the alternate/augmented reality environment (e.g., when a user wins an online game, when a ghost or goblin dies or is shot, or any other activity or reactivity by a virtual object, etc.) <read on interaction event of different types>"). As explained in the rejection of Claim 1, the rationale for combining the AR environment event/action detail types taught by Spivack into Qiu is provided above.

Regarding Claim 12, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination further teaches querying, based on an identifier of the first region, detail information of the first interaction event that has occurred in the first region from a database, wherein the database includes detail information of the interaction event that has occurred in each of the regions (Spivack, Paragraph [1114], "one or more identifiers of ... virtual objects ... can be submitted as search criterion"; and Paragraph [1113], "the set of virtual objects ... can be extracted from a database"). As explained in the rejection of Claim 1, the rationale for combining the AR-interface interaction controls taught by Spivack into Qiu is provided above.

Regarding Claim 14, it recites limitations similar in scope to the limitations of Claim 2 and is therefore rejected under the same rationale.

Regarding Claim 15, it recites limitations similar in scope to the limitations of Claim 3 and is therefore rejected under the same rationale.

Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 13, and the combination of Qiu and Spivack teaches all of those limitations. Further, Qiu discloses that these features can be implemented on a computer-readable storage medium (Qiu, Paragraph [0147], "the non-transient computer-readable storage medium in the memory 1502 is configured to store at least one instruction, the at least one instruction being configured to, when executed by the processor 1501, implement the following steps in the method for displaying a marker point location provided in the method embodiments of this application").

Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 2 and is therefore rejected under the same rationale.

Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 3 and is therefore rejected under the same rationale.

Claims 4, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (US 20200293154 A1, hereinafter Qiu) in view of Spivack et al. (US 20190108686 A1, hereinafter Spivack), as applied to Claims 1, 13, and 17 above, respectively, and further in view of Qiu et al. (US 20200316473 A1, hereinafter Qiu473).

Regarding Claim 4, the combination of Qiu and Spivack teaches the invention in Claim 1.
The combination does not explicitly disclose, but Qiu473 teaches, displaying the first prompt information of the first region in at least one of: a first time period of moving from the first location to a boundary of the first region; a second time period of staying in the first region; or a third time period of leaving from the first region to a second location (Qiu473, Paragraph [0043], "the virtual object in the virtual environment may further automatically move along a moving path <read on a first time period of moving> planned in advance"; [0123], "the path finding instruction includes location information of a starting point <read on first location>, an end point <read on second location>, and a detailed path point ..."; [0055], "control the virtual object to move along the moving path <read on a third time period of leaving ... to a second location> in the virtual environment").

Qiu473 and Qiu are analogous art, since both deal with controlling movement of a virtual object in a virtual scene/environment using location/distance-based techniques. Qiu provided a way of displaying prompt information based on a distance between a controlled object's location and a target scene area. Qiu473 provided a way of controlling a virtual object to move along a planned moving path between a starting point location and an end point location. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the moving-path-based movement periods (moving along a planned path between a starting point and an end point) taught by Qiu473 into the modified invention of Qiu, such that the prompt information can be displayed during movement of the first group from a first location toward a region boundary and/or while leaving toward a second location. The motivation is to improve automatic movement control and the user experience during movement in a virtual environment, as discussed by Qiu473 in Paragraphs [0005]-[0006], which describe the limitations of fixed paths and provide improved moving-path control.

Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 4 and is therefore rejected under the same rationale.

Regarding Claim 20, it recites limitations similar in scope to the limitations of Claim 4 and is therefore rejected under the same rationale.

Claims 5, 7, and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (US 20200293154 A1, hereinafter Qiu) in view of Spivack et al. (US 20190108686 A1, hereinafter Spivack), as applied to Claim 1 above, and further in view of Leppanen et al. (US 20180164588 A1, hereinafter Leppanen).

Regarding Claim 5, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination does not explicitly disclose, but Leppanen teaches, displaying, according to a region sorting manner, the first display prompt information of a next region of the first region (Leppanen, Paragraph [0037], "One or more example embodiments further perform receipt of information indicative of a visual notification selection input, and changing of the virtual information region location <read on displaying, according to a region sorting manner, the first display prompt information of a next region of the first region> of the virtual information region to a different virtual information region location that is within the field of view in response to the visual notification selection input"), based on detail information of the first interaction event not being viewed (Leppanen, Paragraph [0015], "In at least one example embodiment, the causation of rendering of the non-visual notification is performed absent display of any visual notification indicative of the virtual information region event").
The combination further teaches displaying, according to a region sorting manner, the first display prompt information of a next region of the first region based on detail information of the first interaction event being viewed and the detail information of the first interaction event reaching a display duration threshold (Leppanen, Paragraph [0030], "In at least one example embodiment, the termination of display of the visual notification is performed in response to the determination that the threshold duration has elapsed" <read on displaying ... based on detail information ... reaching a display duration threshold>; [0029], "One or more example embodiments further perform determination that a threshold duration has elapsed subsequent to display of the visual notification").

Leppanen and Qiu are analogous art, since both deal with displaying virtual information and notifications to users in a virtual or augmented reality environment. Qiu provided a way of displaying prompts for regions based on distance and location in a virtual scene. Leppanen provided a way of managing notifications by changing the region location based on input, or by terminating display after a threshold duration, to manage user attention. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the region sorting and display duration management taught by Leppanen into the modified invention of Qiu, such that the system can automatically manage which region's prompt is displayed based on whether the user has viewed the current one or whether a time limit has expired. The motivation is to "facilitate viewing particular information in an intuitive and simple manner", as discussed by Leppanen in Paragraph [0002].

Regarding Claim 7, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination does not explicitly disclose, but Leppanen teaches, displaying prompt information corresponding to multiple regions located on at least one side of an advance route of the first group (Leppanen, Paragraph [0040], "determination of occurrence of a different virtual information region event ... allocated to a different virtual information region that is at least partially beyond the field of view, the different virtual information region having a different virtual information region location that is in a different direction from the field of view" <read on displaying prompt information ... located on at least one side of an advance route>; [0095], "The apparatus may cause rendering of a non-visual notification by an output device mounted on the right side of the head mounted display to indicate that the virtual information region corresponding with region 312 has a direction that is rightward from the field of view ... The apparatus may cause rendering of a non-visual notification by an output device mounted on the left side ... to indicate that the virtual information region ... has a direction that is leftward" <read on displaying prompt information ... on at least one side>).

Leppanen and Qiu are analogous art, since both deal with providing user notifications for events occurring in a virtual environment. Qiu provided a way of displaying prompts for target scene areas based on distance. Leppanen provided a way of providing directional notifications for virtual events that occur outside the user's immediate field of view (e.g., to the left or right). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the directional notification for multiple regions taught by Leppanen into the modified invention of Qiu, such that the user is informed of events occurring on the sides of their route or view. The motivation is to allow a user to "perceive happenings that may have occurred in relation to at least part of the environment surrounding the user while the user's attention was directed away", as discussed by Leppanen in Paragraph [0074].

Regarding Claim 9, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination does not explicitly disclose, but Leppanen teaches:

displaying detail information of the first interaction event based on a number of the first interaction events that have occurred in the first region being less than a threshold (Leppanen, Paragraph [0035], "One or more example embodiments further perform determination that a threshold duration has failed to elapse between the rendering of the non-visual notification and the visual notification invocation input" <read on displaying detail information ... based on a number of the first interaction events ... being less than a threshold>; Paragraph [0036], "In at least one example embodiment, the determination of the visual notification is based, at least in part, on the determination that the threshold duration has failed to elapse");

stopping displaying the detail information of the first interaction event based on a close operation on the detail information (Leppanen, Paragraph [0031], "One or more example embodiments further perform receipt of information indicative of a notification termination input"; [0032], "In at least one example embodiment, the termination of display of the visual notification is performed in response to the notification termination input" <read on stopping displaying ... based on a close operation>); and

stopping displaying ... based on ... a display duration of the detail information reaching a display duration threshold (Leppanen, Paragraph [0030], "In at least one example embodiment, the termination of display of the visual notification is performed in response to the determination that the threshold duration has elapsed" <read on stopping displaying ... based on ... a display duration ... reaching a display duration threshold>).

Leppanen and Qiu are analogous art, since both deal with controlling the display of notifications in a virtual environment. Qiu provided a way of displaying prompts based on distance conditions. Leppanen provided a way of using time thresholds and user inputs to determine when to display or terminate notifications to avoid clutter. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the threshold and termination logic taught by Leppanen into the modified invention of Qiu, such that the displayed information is removed when no longer needed or when explicitly dismissed by the user. The motivation is to "perform termination of display of the visual notification", as discussed by Leppanen in Paragraph [0028], to prevent distraction, since "a visual notification may distract a user who is visually focused on a task unrelated to the virtual information region event, a visual notification may clutter the display", as discussed by Leppanen in Paragraph [0094].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (US 20200293154 A1, hereinafter Qiu) in view of Spivack et al. (US 20190108686 A1, hereinafter Spivack), further in view of Leppanen et al. (US 20180164588 A1, hereinafter Leppanen), as applied to Claim 5 above, and further in view of Mattar et al. (US 20200030700 A1, hereinafter Mattar).

Regarding Claim 6, the combination of Qiu, Spivack, and Leppanen teaches the invention in Claim 5.
The combination does not explicitly disclose, but Mattar teaches, the region sorting manner including ... sorting according to scores of regions in a descending order (Mattar, Paragraph [0037], "the location selection system 132 could identify popular locations within Los Angeles and then prioritize the locations based on one or more prioritization criteria" <read on sorting according to scores ... in a descending order>); the scores of the regions being determined according to at least one of: a number of times the first group has reached the regions (Mattar, Paragraph [0037], "The identified popular location can be based on previous user selections over a defined period of time"), or a similarity between the regions and the first region (Mattar, Paragraph [0047], "The asset matching system 136 may determine matching scores for matching the geographic entities and/or topographical elements to virtual assets based at least in part on a defined rule set, asset matching models, and/or other matching criteria" <read on scores ... determined according to ... a similarity between the regions and the first region>).

Mattar and Qiu are analogous art, since both deal with selecting, prioritizing, and presenting locations or virtual assets within a game environment. Qiu provided a way of displaying and managing prompts for regions in a sequential manner based on user interaction and time thresholds. Mattar provided a way of prioritizing locations and virtual assets based on popularity or matching scores to present the most relevant content to the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the prioritization and scoring logic taught by Mattar into the modified invention of Qiu, such that the regions displayed to the user are ordered by relevance, popularity, or similarity scores rather than just distance or sequence.
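The "region sorting manner" mapped to Mattar's prioritization reduces to ordering regions by a score in descending order. The sketch below is purely illustrative: the score weighting, field names, and values are assumptions for the reader, not anything disclosed by Mattar.

```python
# Hypothetical sketch of sorting regions by a descending score, where the
# score blends a visit count and a similarity to the first region.
regions = [
    {"name": "A", "visits": 2, "similarity": 0.9},
    {"name": "B", "visits": 5, "similarity": 0.4},
    {"name": "C", "visits": 1, "similarity": 0.7},
]

def score(region):
    # Toy weighting; any monotone combination of the criteria would do.
    return region["visits"] + 10 * region["similarity"]

ranked = sorted(regions, key=score, reverse=True)
print([r["name"] for r in ranked])
```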
The motivation is to "allow a game application to identify geographical information associated with real world locations", as discussed by Mattar in Paragraph [0014], and to prioritize them to enhance the user experience.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (US 20200293154 A1, hereinafter Qiu) in view of Spivack et al. (US 20190108686 A1, hereinafter Spivack), as applied to Claim 1 above, and further in view of Mattar et al. (US 20200030700 A1, hereinafter Mattar).

Regarding Claim 10, the combination of Qiu and Spivack teaches the invention in Claim 1. The combination does not explicitly disclose, but Mattar teaches, displaying second prompt information of a second region based on a similarity between the second region and the first region in the virtual scene (Mattar, Paragraph [0047], "The asset matching system 136 may determine matching scores for matching the geographic entities and/or topographical elements to virtual assets based at least in part on a defined rule set, asset matching models, and/or other matching criteria" <read on displaying second prompt information ... based on a similarity between the second region and the first region>); the similarity between the second region and the first region including at least one of terrain features (Mattar, Paragraph [0040], "Topographical information associated with the region may be used to define an underlying topography of a virtual environment map"; [0042], "The image recognition analysis can be used to identify attributes of a building ... a type of tree, the color of the landscape, and/or other information visually associated with the topography or entities" <read on similarity ... includes at least one of terrain features>), map outlines (Mattar, Paragraph [0040], "locational reference may include ... geospatial coordinates or areas ... which may use ... a grid location within a grid generated for the defined region"), and historical interaction events (Mattar, Paragraph [0037], "The identified popular location can be based on previous user selections over a defined period of time" <read on similarity ... includes ... historical interaction events>).

Mattar and Qiu are analogous art, since both deal with generating virtual environments and populating them with assets or regions. Qiu provided a way of displaying prompts for target scene areas. Mattar provided a way of matching real-world geographic entities to virtual assets based on similarity attributes such as topography, visual characteristics, and popularity. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the similarity matching based on terrain and history taught by Mattar into the modified invention of Qiu, such that the prompt information displayed corresponds to regions that are contextually relevant or similar to the current region. The motivation is to "recreate the selected geographical location using virtual assets from the game application", as discussed by Mattar in Paragraph [0014], effectively by ensuring virtual regions share similarity with real-world counterparts.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Qiu et al. (US 20200293154 A1, hereinafter Qiu) in view of Spivack et al. (US 20190108686 A1, hereinafter Spivack), as applied to Claim 1 above, and further in view of Rose et al. (US 20090005140 A1, hereinafter Rose).

Regarding Claim 11, the combination of Qiu and Spivack teaches the invention in Claim 1.
The combination does not explicitly disclose, but Rose teaches, displaying detail information of a new interaction event that has occurred in the first region, wherein the new interaction event occurs since the first group left the first region (Rose, Paragraph [0061], "This enables the mobile device to not only track the movement of the player in its virtual environment but also the movement and/or actions of other players in the same gaming landscape" <read on displaying detail information of a new interaction event ... since the first group left the first region>; [0011], "Movements of the player may be tracked in the real environment and the character in the gaming landscape is adjusted according to the movements of the player in the real environment").

Rose and Qiu are analogous art, since both deal with interactive multi-player virtual games involving player movement. Qiu provided a way of displaying prompt information based on the group's location and distance. Rose provided a way of tracking the movements and interactions of other players in the gaming landscape, even those remote from the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the tracking of other players' events in departed regions taught by Rose into the modified invention of Qiu, such that the user stays informed of activity in regions they have visited. The motivation is to "facilitate large scale social interaction in multi-player fantasy games played in both the real world and/or a virtual world", as discussed by Rose in Paragraph [0034].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20230381652 A1: COOPERATIVE AND COACHED GAMEPLAY
US 20220092792 A1: TRAINING MULTI-OBJECT TRACKING MODELS USING SIMULATION
US 20210331070 A1: METHOD, APPARATUS, AND TERMINAL FOR TRANSMITTING PROMPT INFORMATION IN MULTIPLAYER ONLINE BATTLE PROGRAM
US 20210031106 A1: CONTEXTUALLY AWARE COMMUNICATIONS SYSTEM IN VIDEO GAMES
US 20210023439 A1: BACKGROUND PROCESS FOR IMPORTING REAL-WORLD ACTIVITY DATA INTO A LOCATION-BASED GAME
US 20190321721 A1: SERVER AND METHOD FOR PROVIDING INTERACTION IN VIRTUAL REALITY MULTIPLAYER BOARD GAME
US 20190091574 A1: Information Processing Method and Apparatus, Electronic Device, and Storage Medium
US 20190030431 A1: Information Processing Method and Apparatus, Storage Medium and Electronic Device
US 20220179665 A1: Displaying user related contextual keywords and controls for user selection and storing and associating selected keywords and user interaction with controls data with user

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI, whose telephone number is (571) 272-6669. The examiner can normally be reached 8:30am-5:30pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YuJang Tswei/
Primary Examiner, Art Unit 2614

Prosecution Timeline

Jun 21, 2024: Application Filed
Jan 22, 2026: Non-Final Rejection (§103)
Mar 05, 2026: Examiner Interview Summary
Mar 05, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579805: AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR TRAVEL (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579838: Perspective Distortion Correction on Faces (granted Mar 17, 2026; 2y 5m to grant)
Patent 12567213: COMPUTER VISION AND ARTIFICIAL INTELLIGENCE METHOD TO OPTIMIZE OVERLAY PLACEMENT IN EXTENDED REALITY (granted Mar 03, 2026; 2y 5m to grant)
Patent 12567189: RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER (granted Mar 03, 2026; 2y 5m to grant)
Patent 12561930: PARAMETRIC EYEBROW REPRESENTATION AND ENROLLMENT FROM IMAGE INPUT (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 84%
With Interview: 99% (+17.0% lift)
Median Time to Grant: 2y 5m
PTA Risk: Low

Based on 447 resolved cases by this examiner. Grant probability derived from career allow rate.
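The projection figures here follow from simple arithmetic on the examiner's career record (376 granted of 447 resolved, +17.0% interview lift). A minimal Python sketch of that derivation; the function name and the 0.99 display ceiling are assumptions for illustration, since 84% + 17% would otherwise exceed 100%:

```python
def grant_projection(granted, resolved, interview_lift=0.17, cap=0.99):
    """Derive grant probability from an examiner's career allow rate.

    `interview_lift` mirrors the dashboard's +17.0% figure; `cap` is an
    assumed display ceiling, not a documented rule of the tool.
    """
    base = granted / resolved                      # career allow rate
    with_interview = min(base + interview_lift, cap)
    return round(base, 2), round(with_interview, 2)

# Examiner Tswei: 376 granted out of 447 resolved cases
base, boosted = grant_projection(376, 447)
print(base, boosted)  # 0.84 0.99
```

This reproduces the dashboard's 84% base and 99% with-interview figures from the stated career counts.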
