Prosecution Insights
Last updated: April 17, 2026
Application No. 18/664,172

SYSTEM AND METHOD TO DISPLAY CONTEXTUAL REAL-TIME GAME DATA IN A VIRTUAL REALITY ENVIRONMENT

Non-Final OA — §101, §102, §103, §112
Filed: May 14, 2024
Examiner: NGUYEN, TUAN S
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: unknown
OA Round: 1 (Non-Final)
Grant Probability: 65% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 65% (206 granted / 318 resolved; +9.8% vs TC avg) — grants 65% of resolved cases
Interview Lift: +38.4% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 9m average prosecution; 17 currently pending
Career History: 335 total applications across all art units
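The headline figures above are simple ratios over the examiner's career counts. As a sanity check (an illustrative sketch, not part of the source tool; the variable names and the back-calculated Tech Center average are my own), a few lines of Python reproduce them from the quoted numbers:

```python
# Career figures quoted above for this examiner.
granted = 206
resolved = 318          # granted or abandoned
pending = 17
total_applications = 335

allow_rate = granted / resolved         # 206/318 ≈ 0.648, shown as 65%
tc_delta = 0.098                        # "+9.8% vs TC avg"
implied_tc_avg = allow_rate - tc_delta  # inferred Tech Center baseline

print(f"Career allow rate: {allow_rate:.1%}")       # 64.8%
print(f"Implied TC average: {implied_tc_avg:.1%}")  # 55.0%

# The career totals are internally consistent:
assert resolved + pending == total_applications
```

The +38.4% interview lift cannot be re-derived this way, since the page does not quote the separate with-interview and without-interview allow rates it was computed from.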

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 54.6% (+14.6% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 318 resolved cases
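The per-statute deltas are internally consistent: subtracting each quoted delta from its rate yields the same Tech Center baseline (about 40%) in every row, which appears to be the single comparison line the chart used. A minimal Python check (illustrative only; the dictionary and its layout are my own):

```python
# (rate, delta vs TC avg) pairs quoted above, in percentage points.
stats = {
    "101": (8.7, -31.3),
    "103": (54.6, +14.6),
    "102": (17.2, -22.8),
    "112": (10.8, -29.2),
}

# Back out the Tech Center average each delta was measured against.
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: {rate}% ({delta:+}% vs TC) -> TC avg {tc_avg:.1f}%")
# Every statute implies the same ~40.0% Tech Center baseline.
```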

Office Action

§101 §102 §103 §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

DETAILED ACTION

The present application contains 20 claims. Claims 1, 15, and 20 are independent. Claims 1-20 are examined and rejected in the following detailed action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Independent claims 1, 15, and 20 are directed to capturing the user's air gesture with the HMD's camera and displaying the virtual hand(s) to interact with the point-of-interest virtual objects in the virtual reality environment. The invention simply coordinates the events associated with the virtual reality user interface to control and interact with the virtual objects in the virtual reality environment. Such concepts are analogous to those that have been identified as abstract by the courts as performing conventional functions in a VR environment, such as collecting information (i.e., detecting the user's air gesture as a user input), analyzing it (i.e., determining that the hand movement is equivalent to a predefined gesture), and displaying certain results of the collection (i.e., rendering the additional information), as in Electric Power Group, LLC v. Alstom S.A. Therefore, the claims are directed to an abstract idea.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, when considered both individually and as a combination, do not amount to significantly more than the abstract idea. In particular, the claims recite the additional elements of generic computer components (i.e., display, processor, memory, VR environment, etc.) performing conventional functions in the GUI art. Therefore, the claims are not patent eligible. Dependent claims 2-14 and 16-19 depend on independent claims 1 and 15, respectively, and are similarly rejected.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(B) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 2-4 and 16-17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claims 2-4 and 16-17 recite the limitations "…the virtual hand being raised in proximity to a browline or forehead of the user …" and "…the virtual hand is moved away from the browline or forehead of the user …". It is unclear how the virtual hand can be raised toward, or moved away from, the browline or forehead of the real user: the camera captures an image of the real user's hand performing that action, which indicates the real user's hand, not the virtual hand. Thus, the limitations fail to particularly point out and distinctly claim the subject matter of the "virtual hand" versus the "user hand", and are rejected as indefinite.
Examiner Notes

The prior art rejections below cite particular paragraphs, columns, and/or line numbers in the references for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 5-6, 9-10, 12-15, 19, and 20 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Mani et al. ("Mani", US PG-Pub. 2020/0117336 A1).

Re-claim 1, Mani teaches a virtual reality system comprising: a display configured to render a virtual reality environment for a user wearing the virtual reality system (Figs. 1, 2, [0022-0024]; Mani describes a virtual image processing and rendering system 100 executed on the user device (i.e., HMD 102-2) having a display to render a virtual reality environment); a processor (Figs. 1, 2, [0052]; Mani describes the user device CPU 252 shown in Fig. 2B); and a memory (Figs. 1, 2, [0052]; Mani describes the user device memory 256 shown in Fig. 2B) storing computer-executable instructions, the processor programmed to execute the instructions to: detect a hand movement of a virtual hand associated with the user in the virtual reality environment, wherein the virtual hand is rendered on the display when the user moves the virtual hand (Figs. 5, 6B, [0106]; Mani describes "…A representation of the hand gesture (620) can be rendered in real time in the 3-D virtual environment 604 … the representation of the hand gesture 620 is displayed in real time as the camera(s) 609 capture the hand gesture 618 in the kitchen"); determine that the hand movement is equivalent to a predefined gesture indicating that the user desires to view additional information associated with one or more points of interest (POI) in the virtual reality environment (Figs. 5, 6B, [0106]; Mani describes "…the second hand gesture is the user's hand gesture 618, which is performed by the user when the user is viewing the 3-D virtual environment 604 and intends to interact with the representation of the physical object 612 …"); and cause the display to render the additional information in the virtual reality environment responsive to determining that the hand movement is equivalent to the predefined gesture (Figs. 5, 6B, [0107]; Mani describes "…a second operation on the virtual aid template associated with the physical object (e.g., displaying a virtual circuit diagram 622 associated with the electronic component) in accordance with the first interaction with the representation of the physical object …").
Re-claim 5, in addition to what Mani teaches in the system of claim 1, Mani also teaches the system wherein the memory further comprises a POI marker module configured to store information associated with one or more markers associated with the one or more POI, and wherein the processor is further configured to cause the display to render the additional information in the virtual reality environment based on the information associated with the one or more markers (Fig. 4H, [0094]; Mani describes the virtual bridge 412-1 as a POI whose associated stored dimension data is displayed when the user selects the virtual bridge 412-1).

Re-claim 6, in addition to what Mani teaches in the system of claim 5, Mani also teaches the system wherein the information associated with the one or more markers comprises information associated with a visual appearance of the one or more markers (Fig. 4H, [0094]; Mani describes the virtual bridge 412-1's associated stored dimension data being displayed when the user selects the virtual bridge 412-1 shown in Fig. 4H).

Re-claim 9, in addition to what Mani teaches in the system of claim 5, Mani also teaches the system wherein the additional information further comprises an icon and a name associated with the one or more markers (Fig. 4H, [0094]; Mani describes the virtual bridge 412-1 shown as an icon, with its associated dimension data shown as its name label).

Re-claim 10, in addition to what Mani teaches in the system of claim 1, Mani also teaches the system wherein the memory further comprises a POI supplier module configured to store and supply at least one of a POI class, contextual game data, and non-player character-specific data to the processor, and wherein the additional information comprises the contextual game data and non-player character-specific data (Fig. 4H, [0094]; Mani describes the virtual bridge 412-1's associated dimension data).
Re-claim 12, in addition to what Mani teaches in the system of claim 10, Mani also teaches the system wherein the memory further comprises a POI display manager module configured to instantiate one or more available markers in the virtual reality environment, and wherein the processor is configured to execute instructions stored in the POI display manager module to determine the one or more available markers within a game scene being viewed by the user in the virtual reality environment based on information supplied by the POI supplier module (Fig. 4H, [0094]; Mani describes the preset virtual bridge 412-1 and virtual cabinets 406-1 and 406-2 as the POIs whose associated stored dimension data are displayed when the user interacts with them).

Re-claim 13, in addition to what Mani teaches in the system of claim 1, Mani also teaches the system wherein the predefined gesture comprises at least one of the user tapping a finger of the user's one virtual hand to a palm of the user's second virtual hand, or the user moving or waving the virtual hand from left to right or right to left in the virtual reality environment ([0084]; Mani describes "…the user's hand gesture includes moving a user's hand from a first location to a second location …").

Re-claim 14, in addition to what Mani teaches in the system of claim 1, Mani also teaches the system wherein the additional information is rendered in proximity to the POI in the virtual reality environment (Fig. 4H, [0094]; Mani describes the preset virtual bridge 412-1 and virtual cabinets 406-1 and 406-2 as the POIs whose associated stored dimension data are displayed in proximity when the user interacts with them).

Re-claim 15, it is a method claim having limitations similar in scope to claim 1; therefore, it is rejected under similar rationale.

Re-claim 19, in addition to what Mani teaches in claim 15, claim 19 is a method claim having limitations similar in scope to claim 13; therefore, it is rejected under similar rationale.
Re-claim 20, it is a non-transitory computer-readable storage medium claim having limitations similar in scope to claim 1; therefore, it is rejected under similar rationale.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2-3 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Mani in view of Nonoyama et al. ("Nonoyama", US PG-Pub. 2020/0258314 A1).

Re-claim 2, Mani teaches the system in claim 1, but Mani fails to teach a system wherein the predefined gesture comprises the virtual hand being raised in proximity to a browline or forehead of the user. However, Nonoyama teaches this limitation (Figs. 2, 24, 26, [0069, 0101, 0233, 0243]; Nonoyama describes the zooming hand gesture being raised in proximity to the browline or forehead of the user). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of Mani with the hand gesture within the field of view (FOV) of the HMD teaching of Nonoyama to indicate a looking or viewing action gesture for a target virtual object's information.
Re-claim 3, Mani-Nonoyama teaches the system in claim 2, but Mani fails to teach a system wherein the memory comprises an eye shielding detector module configured to detect that the virtual hand is raised in proximity to the browline or forehead of the user, and wherein the processor is configured to detect that the hand movement is equivalent to the predefined gesture by executing instructions stored in the eye shielding detector module. However, Nonoyama teaches this limitation (Figs. 21, 22, 24, 26, [0101, 0220-0221, 0233]; Nonoyama describes the focus and zooming hand-movement gestures performed in front of the user's eyes that can be detected and recognized by the HMD system). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of Mani with the hand gesture within the field of view (FOV) of the HMD teaching of Nonoyama to indicate a looking or viewing action gesture for a target virtual object's information.

Re-claim 16, in addition to what Mani teaches in claim 15, claim 16 is a method claim having limitations similar in scope to claim 2; therefore, it is rejected under similar rationale.

Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Mani in view of Nonoyama, and further in view of Shelton, IV et al. ("Shelton", US PG-Pub. 2022/0104910 A1).
Re-claim 4, Mani-Nonoyama teaches the system in claim 3, but Mani fails to teach a system wherein the processor is further configured to execute the instructions stored in the eye shielding detector module to detect that the virtual hand is moved away from the browline or forehead of the user, and wherein the processor causes the display to zoom the rendered object in the virtual reality environment responsive to detecting that the virtual hand is moved away from the browline or forehead of the user. However, Nonoyama teaches this limitation (Fig. 21, [0220]; Nonoyama describes the zoom-in action in response to detecting the hand gesture moving away from the HMD in the Z-axis direction). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of Mani with the hand gesture within the field of view (FOV) of the HMD teaching of Nonoyama to perform a designed action function associated with the VR environment.

Modified Mani fails to teach a system wherein the processor causes the display to stop rendering the additional information in the virtual reality environment responsive to detecting a gesture. However, Shelton teaches this limitation ([0299]; Shelton describes "…The gesture may be detected by the hub and the hub may instruct the first display to remove the data or stop displaying the data …"). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of modified Mani with the hand gesture to stop displaying the data teaching of Shelton to clear out the clutter of information for better viewing.

Re-claim 17, in addition to what Mani-Nonoyama teaches in claim 16, claim 17 is a method claim having limitations similar in scope to claim 4; therefore, it is rejected under similar rationale.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Mani in view of Dean et al. ("Dean", US PG-Pub. 2021/0216132 A1).

Re-claim 7, Mani teaches the system in claim 5, but Mani fails to teach a system wherein the processor is further configured to execute instructions stored in the POI marker module to detect a virtual distance of a user's real-time virtual location from the one or more POIs in the virtual reality environment, wherein the processor causes the display to scale the rendering of the additional information in the virtual reality environment based on the virtual distance, and wherein the additional information further comprises the virtual distance. However, Dean teaches this limitation (Fig. 3, [0024, 0029]; Dean describes "… the virtual reality environment module 110 can calculate a virtual distance between the virtual object and a virtual user in the virtual reality environment …" and that the virtual distance 306 between the virtual user 310 and virtual object 308 can be displayed in the virtual reality environment 302). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of Mani with the virtual distance determination teaching of Dean to provide the virtual distance information to the user for a better viewing experience.

Claims 8, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mani in view of Dean, and further in view of Nonoyama.

Re-claim 8, Mani-Dean teaches the system in claim 7, but Mani fails to teach a system wherein the processor is further configured to execute the instructions stored in the POI marker module to detect a field of view (FOV) or a camera angle of the user in the virtual reality environment, and cause the display to render the additional information in the virtual reality environment based on the FOV or the camera angle. However, Nonoyama teaches this limitation (Fig. 24, [0233]; Nonoyama describes the concept of moving the camera angle upward, downward, left, and right to render the focus range 631 on the observation screen 630).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of modified Mani with the hand gesture within the field of view (FOV) of the HMD teaching of Nonoyama to perform a designed action function associated with the VR environment.

Re-claim 11, Mani teaches the system in claim 10, but Mani fails to teach a system wherein the contextual game data comprises at least one of one or more waypoints, compass points, the user's real-time virtual location, an enemy's virtual location in the virtual reality environment, quest objectives, or locations. However, Dean teaches this limitation (Fig. 3, [0024, 0029]; Dean describes "… the virtual reality environment module 110 can calculate a virtual distance between the virtual object and a virtual user in the virtual reality environment …" and that the virtual distance 306 between the virtual user 310 and virtual object 308 can be displayed in the virtual reality environment 302). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of Mani with the virtual distance determination teaching of Dean to provide the virtual distance information to the user for a better viewing experience.

Modified Mani fails to teach icons associated with towns or animals. However, Nonoyama teaches icons associated with towns or animals (Fig. 24, [0113]; Nonoyama describes the target observation icons associated with animals within a GPS map).
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the real-time product interaction via VR environment teachings of Mani with the animal target object teaching of Nonoyama to provide a virtual view of hard-to-reach animals.

Re-claim 18, in addition to what Mani teaches in claim 15, claim 18 is a medium claim having limitations similar in scope to claim 11; therefore, it is rejected under similar rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN S NGUYEN, whose telephone number is (571) 270-7612. The examiner can normally be reached Monday-Friday, 9-5. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Fred Ehichioya, can be reached at 571-272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.
/TUAN S NGUYEN/ Primary Examiner, Art Unit 2179

Prosecution Timeline

May 14, 2024 — Application Filed
Mar 18, 2026 — Non-Final Rejection: §101, §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602157 — SIMULATION DEVICE SUITABLE FOR USE IN AUGMENTED-REALITY OR VIRTUAL-REALITY ENVIRONMENTS
2y 5m to grant • Granted Apr 14, 2026
Patent 12591354 — MEASURING DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12574957 — DISPLAY METHOD OF WIRELESS DEVICE FOR CONNECTION
2y 5m to grant • Granted Mar 10, 2026
Patent 12566914 — SYSTEM AND METHODS TO FACILITATE CONTENT GENERATION USING GENERATIVE ARTIFICIAL INTELLIGENCE MODELS
2y 5m to grant • Granted Mar 03, 2026
Patent 12568165 — NON-TERRESTRIAL NETWORK CONNECTION ICON
2y 5m to grant • Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 65%
With Interview (+38.4%): 99%
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 318 resolved cases by this examiner. Grant probability derived from career allow rate.
