Prosecution Insights
Last updated: April 19, 2026
Application No. 17/655,973

METHOD AND SYSTEM FOR VERIFYING PERFORMANCE-BASED ASSESSMENTS DURING VIRTUAL REALITY SESSIONS

Status: Non-Final OA (§103)
Filed: Mar 22, 2022
Examiner: GILLS, KURTIS
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Saudi Arabian Oil Company
OA Round: 5 (Non-Final)
Grant Probability: 57% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 3y 4m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 57% (307 granted / 536 resolved; +5.3% vs TC avg)
Interview Lift: strong, +29.4% allow rate for resolved cases with an interview vs. without
Typical Timeline: 3y 4m avg prosecution; 44 applications currently pending
Career History: 580 total applications across all art units

Statute-Specific Performance

§101: 37.5% (-2.5% vs TC avg)
§103: 42.7% (+2.7% vs TC avg)
§102: 6.5% (-33.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)
Deltas are measured against the Tech Center average estimate. Based on career data from 536 resolved cases.

Office Action

§103
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Notice to Applicant

In response to the communication received on 01/30/2026, the following is a Non-Final Office Action for Application No. 17/655,973.

Status of Claims

Claims 1 and 4-10 are pending. Claims 11-20 are withdrawn. Claims 2-3 are cancelled.

Response to Amendments

Applicant’s amendments have been fully considered.

Response to Arguments

Applicant’s arguments with respect to the claims have been considered but are moot in light of the new grounds of rejection, as necessitated by amendment.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 4-10 are rejected under 35 U.S.C. 103 as being unpatentable over Allen et al. (US 20220207830 A1), hereinafter referred to as Allen, in view of Beall et al. (US 10403050 B1), hereinafter referred to as Beall, in further view of Miller et al. (CA 3084169 A1), hereinafter referred to as Miller.

Allen teaches:

Claim 1. A method, comprising: obtaining first assessment data, by a first user device, from a server for a first user assessment that is performed by a first user, wherein the first user device comprises: a first touch controller including a first electromagnetic receiver, a first headset, a first tracking sensor including a first transmitter that transmits an electromagnetic sensing signal to the first touch controller, a first camera device, and a user assessment manager comprising a computer processor (¶0052 FIG. 2C is a flow chart of another method for generation of configuration constructs for virtual reality-based application development and deployment, according to some implementations. The method of FIG. 2C is similar to and expands on the process discussed above. At step 250, the computing device may receive the compiled virtual reality experience (e.g. application or data file generated at step 214). At step 252, the objects may be read from the virtual reality experience (e.g. extracted from object code, XML, data, or other such data structures), and the configuration construct may be generated (e.g. as a data array or data structure with keys corresponding to each object's GUID). ¶0066 At step 322, during execution, the system may detect an interaction of the user with an object. The interaction may comprise pressing a button, pulling a lever, rotating a knob or dial, etc., and may be performed in any suitable manner (e.g. by tracking a hand position of the user and determining an intersection between a corresponding hand position of a virtual avatar of the user and the object). ¶0082 As discussed above, in some implementations, virtual environments may be used for training and/or certification. For example, a trainer or skilled user may record a series of interactions with objects within the environment. Their interactions may be played within the environment, e.g. as a virtual or “ghost” avatar, within the view of a second user or student. This may allow the student user to go through the same motions at the same time as the recorded “ghost” avatar, allowing for intuitive learning by copying the instructor. The “ghost” avatar may be displayed in a semi-transparent form in some implementations, such that the user may view their own avatar overlapping or within the “ghost” avatar.);

determining, by the computer processor and based on the first assessment data, a first task associated with the first user assessment (¶0086 In some implementations, a sequence of interactions may be ordered—that is, the instructor's interactions may be performed in a particular order, and each of the user's interactions may be compared to a corresponding interaction. This may be useful in some implementations in which tasks need be performed in a particular order, or for when a specific object interaction may not be recorded. For example, in one such implementation, if an instructor rotated a dial (e.g. dial 2) as a fourth interaction, and a training user rotated a different dial (e.g. dial 3) as their fourth interaction, the distance between the position of the instructor's hand and the training user's hand may be significant (e.g. on a completely different dial). The distance measurement may automatically account for this.);

determining, by the first user device, position data regarding the first user during the first user assessment using the first electromagnetic receiver and the first electromagnetic transmitter (Fig. 6B and ¶0066 At step 322, during execution, the system may detect an interaction of the user with an object. The interaction may comprise pressing a button, pulling a lever, rotating a knob or dial, etc., and may be performed in any suitable manner (e.g. by tracking a hand position of the user and determining an intersection between a corresponding hand position of a virtual avatar of the user and the object; by tracking a position of a virtual “laser pointer” or other device). ¶0070 the tracking systems may track the user's head (e.g. position and orientation), the user's hands (e.g. via controllers or image recognition from cameras viewing the hands), and/or any other limbs or appendages of the user. ¶0085 The implementation illustrated shows events or interactions with objects within the virtual environment such as buttons and dials, along with a time within the scenario at which the instructor took the action (e.g. a relative time from the start of the scenario, rather than an absolute time), and a time that the user interacted with the object. A distance between a position of the training user and a position of the instructor user when the event or interaction was recorded for the respective user may be determined (e.g. a distance between a position or rotation of a hand of the user when turning a dial, and a position or rotation of a hand of the instructor when turning the same dial), and a score generated. The score may be calculated inversely proportional to the distance in some implementations (e.g. such that smaller distances receive a higher score). ¶0098 In another aspect, the present disclosure is directed to a method for providing virtual environment-based training and certification. The method includes (a) tracking, by a sensor of a computing device, a position of a user within a physical environment; (b) displaying, by the computing device via a virtual reality display to the user, an avatar corresponding to the tracked position of the user within a virtual environment; (c) detecting, by the computing device, an interaction of the avatar with a virtual object within the virtual environment; ¶0102 The processor is configured to: (a) track, via the sensor, a position of a user within a physical environment);

generating, by the computer processor and based on the first assessment data, a first virtual reality (VR) space comprising a virtual plant facility for performing the first task and a first VR plant component in the first VR space, wherein the first task is performed using the first VR plant component (¶0023 Virtual reality environments allow for training and certification of users and operators in environments that would be hazardous in reality, such as nuclear power or chemical processing plants, simulated emergencies such as fires or gas leaks, or other such environments. Such virtual reality environments may be highly immersive, with detailed simulations and photorealistic graphics, providing excellent training opportunities. ¶0025-0030 The virtual reality experience 152 may refer variously to the virtual environment (e.g. including objects, textures, images, etc.), a scenario for the virtual environment (e.g. including events that occur responsive to triggers or time that change one or more objects within the virtual environment), or the compiled virtual reality application. The virtual reality experience 152 may thus comprise an application or data executable by an application for providing an immersive three-dimensional virtual environment, and may be developed in the development environment 150. ¶0082 As discussed above, in some implementations, virtual environments may be used for training and/or certification. For example, a trainer or skilled user may record a series of interactions with objects within the environment. Their interactions may be played within the environment, e.g. as a virtual or “ghost” avatar, within the view of a second user or student. This may allow the student user to go through the same motions at the same time as the recorded “ghost” avatar, allowing for intuitive learning by copying the instructor. The “ghost” avatar may be displayed in a semi-transparent form in some implementations, such that the user may view their own avatar overlapping or within the “ghost” avatar. ¶0057 In many implementations, the private scheme URL may conform to the following template: com.example.tdxr://{xr-host}/{xr-portal}/{xr-id}(?action). xr-host may identify an authority (e.g. hostname plus port number) of a remote data server where the portal is found; xr-portal is the portal that owns the XR resource; and xr-id is a path component that uniquely identifies the XR resource within its owning portal. The xr-id may be any valid URL path component, but it is recommended to be a human-readable identifier for the resource. It does not have to match a file name or directory, but could be derived from one.);

presenting, by the computer processor and to the first user using the first headset, first VR image data corresponding to the first task in the first VR space (¶0024 For example, FIG. 1A is an illustration of a virtual reality environment 10 for training and certification, according to some implementations. The virtual reality environment 10 may comprise a three-dimensional environment and may be viewed from the perspective of a virtual camera, which may correspond to a viewpoint of a user or operator. ¶0084 The same functions may be used for certification purposes by disabling display of the instructor's “ghost” avatar 452. The training user's movements may still be tracked and compared to the instructor's recorded movements, and in some implementations, a score generated to determine the amount of deviation of the training user from the instructor's actions.);

obtaining, by the computer processor and from the first headset, a user input in response to presenting the first task in the first VR space (¶0024 The virtual reality environment 10 may comprise a three-dimensional environment and may be viewed from the perspective of a virtual camera, which may correspond to a viewpoint of a user or operator. In some implementations, the virtual camera may be controlled via a joystick, keyboard, or other such interface, while in other implementations, the virtual camera may be controlled via tracking of a head-mounted display (e.g. virtual reality goggles or headset) or similar head tracking such that the user's view within the virtual environment corresponds to their physical movements and orientation. ¶0070 Client device 400 may comprise or communicate with one or more sensors 408 for tracking movement of a user, and one or more displays including a virtual reality or augmented reality display 410. Although shown separate (e.g. outside-in tracking or tracking by measuring displacement of emitters or reflectors on a headset and/or controllers from separate sensors), in some implementations, sensors 408 and virtual reality/augmented reality display 410 may be integrated (e.g. for inside-out tracking or tracking by measuring translations between successive images of a physical environment taken from sensors on a headset). ¶0084 The same functions may be used for certification purposes by disabling display of the instructor's “ghost” avatar 452. The training user's movements may still be tracked and compared to the instructor's recorded movements, and in some implementations, a score generated to determine the amount of deviation of the training user from the instructor's actions.);

determining, by the computer processor, whether the first user satisfied the first user assessment based on the first user input (¶0087 The scores may be totaled, averaged, or otherwise aggregated and compared to a threshold for certification purposes. In some implementations, if the user's aggregated score is below a threshold, the virtual scenario may be automatically restarted, potentially in a training or guided mode, to provide further instruction. FIG. 4D is a flow chart of a method for virtual reality-based training and certification, according to some implementations. Upon detection of an interaction, the values for the interaction may be recorded at step 480 (e.g. relative time, object interacted with, a value for the object in some implementations such as a dial setting or switch position, position of the user, position of the user's hand or hands, or any other such information to save a state of the simulation at the time of the interaction). If the simulation is in a record mode (e.g. for an instructor), the values may be stored to a configuration construct at step 482 or in metadata of the objects. If the simulation is not in a record mode, at step 484, a difference between a previously recorded interaction and the new interaction may be compared (e.g. including differences in positions, timing, settings, objects, or any other data). At step 486, a score may be generated based on the difference, such as inversely proportional to the difference such that higher accuracy results in a higher score. During the simulation and/or once the simulation is complete, depending on implementation, the scores for interactions may be aggregated and compared to a threshold.); and

transmitting, to the server, a command that updates one or more user records in response to determining that the first user satisfied the first user assessment (¶0079 As discussed above, the request may identify a resource by a GUID or resource identifier and, in some implementations, a portal identifier, project identifier, or other such identifier. The local agent may determine if a local copy of the resource exists, and in some implementations, may compare the local copy to a remote copy to determine whether any updates or modifications have been made (e.g. by transmitting a request comprising an identification of a hash value of the resource or version number or other identifier of the local copy to a server, and either receiving a notification that no updates have been made, or receiving a new copy of the resource). ¶0084 The same functions may be used for certification purposes by disabling display of the instructor's “ghost” avatar 452. The training user's movements may still be tracked and compared to the instructor's recorded movements, and in some implementations, a score generated to determine the amount of deviation of the training user from the instructor's actions. For example, FIG. 4C is an example training and certification log for a virtual reality system for training and certification, according to some implementations. The implementation illustrated shows events or interactions with objects within the virtual environment such as buttons and dials, along with a time within the scenario at which the instructor took the action (e.g. a relative time from the start of the scenario, rather than an absolute time), and a time that the user interacted with the object. See also ¶0029, ¶0063, ¶0087.).
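To make the cited scoring scheme concrete: the mechanism the examiner maps to the "determining whether the first user satisfied" limitation (Allen ¶0085-0087) is a per-interaction score inversely proportional to the trainee's deviation from the instructor's recorded interaction, aggregated and compared to a certification threshold. A minimal sketch; the class, function names, scoring function, and threshold below are chosen purely for illustration and appear nowhere in Allen or the application:

from dataclasses import dataclass
import math

@dataclass
class Interaction:
    object_id: str                                 # e.g. "dial-2" (hypothetical ID)
    hand_position: tuple[float, float, float]      # hand position at the interaction
    rel_time: float                                # seconds from scenario start

def interaction_score(recorded: Interaction, observed: Interaction) -> float:
    # Score inversely proportional to hand-position distance (Allen ¶0085):
    # smaller distances receive a higher score.
    distance = math.dist(recorded.hand_position, observed.hand_position)
    return 1.0 / (1.0 + distance)

def certify(recorded: list[Interaction], observed: list[Interaction],
            threshold: float = 0.8) -> bool:
    # Compare interactions in order (the ordered-sequence variant of ¶0086);
    # interacting with the wrong object simply yields a large distance.
    # Aggregate per-interaction scores and compare to a threshold (¶0087);
    # a below-threshold run would be restarted, possibly in a guided mode.
    scores = [interaction_score(r, o) for r, o in zip(recorded, observed)]
    return sum(scores) / max(len(scores), 1) >= threshold

An inverse-distance score of this shape reproduces the property the examiner quotes: identical motion scores 1.0, and the score decays toward 0 as the trainee's hand diverges from the instructor's.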
Although not explicitly taught by Allen, Beall teaches in the analogous art of multi-user virtual and augmented reality tracking systems:

wherein the first user device comprises: a first touch controller including a first electromagnetic receiver, a first headset, a first tracking sensor including a first transmitter that transmits an electromagnetic sensing signal to the first touch controller, a first camera device, and a user assessment manager comprising a computer processor (C.14 L.1 FIG. 6 illustrates an example virtual reality client display in a Head Mounted Display (HMD) of a user/participant. The example illustrates how a controller, such as a Wand Controller, may be used to scroll through a displayed grouping of virtual worlds. Optionally, certain descriptive information 610 is displayed in association with each virtual world including information relating to one or more associated VR slide presentations and scheduled session times. Movement of the controller may be reflected in the display in the form of a hand or other pointer object. Movement of the controller is mirrored by movement of the hand, which can “touch” a virtual world and corresponding descriptive information may be emphasized (e.g., displayed in larger size). Optionally, the user can touch and select the descriptive information. C.7 L.47 the master server computing device 200 may host a master server software program 205, as illustrated in FIG. 2, comprising a single software program or a plurality of software programs or software modules including, for example, a render engine 610 configured to render and/or enable the rendering of VR scenes, a physics engine 615 (e.g., that provides a simulation of physical systems, such as rigid and soft body dynamics, collision detection, and fluid dynamics, and that provides an interface that hides the low-level details of the physics needed in virtual reality applications to enable application/game developers to focus on the higher-level functionalities of the application), a rules engine 620, a simulation control engine 625 (that coordinates simulation execution), a session manager 630, a simulation state synchronizer engine 635 (that, for example, synchronizes associated client viewpoints) and/or an error handling 640, a client-server communications manager 650 (that, for example, manages client server communications including over a data communication network (e.g., a low latency data communication network)), resource manager 655 (that, for example, manages resources, including shared resources (e.g., simulation objects, scenes, etc.)), speech-to-text conversion module 670, sensor analytics and inference module 675, video compression module 680, virtual reality tracking and marker identification software 660 (e.g., the Vizard VR™ toolkit and PPT Studio software from WorldViz LLC of Santa Barbara) by way of example, enhanced tracking sensor output processing and display 670, enhanced tracking analytics 675, and Voice over Internet Protocol 680 (e.g., for voice communications between session collaborators).);

obtaining, by the computer processor and from the first headset, a first user input (C.12 L.38 Virtual Reality software may include a hardware integration module 660 which may incorporate a visual tool for configuring devices that the VR software supports, including displays (e.g., head-mounted displays, multi-screen projection walls, consumer 3D monitors), trackers (head trackers, gloves, full body motion capture), input devices (e.g., wands, steering wheels, gamepads, joysticks, etc.), feedback devices (e.g., haptic feedback devices that simulate the sensation of force, pressure and/or resistance by using electric actuators, pneumatics hydraulics, and/or neuromuscular stimulators). C.64 L.34 An example of an action that may be treated as privileged is a rotation of a user's viewpoint display (e.g., a user wearing a head mounted display, configured with PPT emitters, looking to the right, left, up, down, etc.). It has been determined through research that if latencies in a visual rendering of a scene in response to a head movement exceed approximately 10 milliseconds, then the rendering of the virtual scene is no longer indistinguishable from a real scene from a participant's perspective. Depending on the individual and the task at hand, the discrepancy from a real scene as detected by the human visual system may lead to disorientation or sub-optimal performance. Therefore, in many types of simulations this action type may be configured to be privileged. Another example of an action which may be treated as privileged is tracked travel (e.g., walking) of a user.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the multi-user virtual and augmented reality tracking systems of Beall with the system for providing virtual reality environment-based training and certification of Allen for the following reasons: (1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings, e.g. Allen ¶0003 teaches that it is desirable to take advantage of the advanced capabilities and functionality of virtual reality; (2) a finding that there was reasonable expectation of success since the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference, e.g. Allen Abstract teaches systems for a dynamic reconfigurable virtual reality environment with in-environment access to external data and resources, and Beall Abstract teaches an example marker identification and position tracking system configured to interface and work in conjunction with a marker device and camera system and to provide high fidelity tracking of user and object motion in a virtual and/or augmented reality experience; and (3) whatever additional findings based on the Graham factual inquiries may be necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness, e.g. Allen at least the above cited paragraphs, and Beall at least the inclusively cited paragraphs. Therefore, it would be obvious to one skilled in the art at the time of the invention to combine the multi-user virtual and augmented reality tracking systems of Beall with the system for providing virtual reality environment-based training and certification of Allen. The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). See MPEP 2143(G).

Although not explicitly taught by Allen in view of Beall, Miller teaches in the analogous art of contextual-based rendering of virtual avatars:

generating a VR environmental factor in the first VR space; wherein the first task comprises a detection by the first user of the VR environmental factor prior to contacting the first VR plant component (¶0277 There may be three basic possibilities for things that Alice could be looking at. First, Alice could be looking at a physical object in her environment, such as, e.g., a chair or lamp. Second, Alice could be looking at a virtual object rendered in her AR/MR environment by the display 220 of her wearable device 902. Third, Alice could be looking at nothing in particular, such as, e.g., when lost in thought or thinking about something. ¶0278 At block 2144, the wearable system can determine whether the gaze intersects with a virtual object, and if so, a different head pose for Alice's avatar may need to be computed if the objects are in a different relative position from Alice's avatar's perspective (as compared to Alice's perspective). If not, the process 2130 goes back to the start block 2132. In certain implementations, the virtual object as determined from block 2144 can be an object of interest. ¶0279 At block 2146, the wearable system can extract semantic intent directives, such as interacting with a certain object.), and

determining whether the first user detects the VR environmental factor based on analyzing an eye gaze of an avatar corresponding to the first user (¶0281 Let e_W = e_H H_W be the point e_H expressed in the world frame W. Let g_W = f_W - e_W be a gaze direction ray pointing in the direction of the line of sight of the head looking towards the fixation point f_W and originating at e_W. The ray can be parameterized as g_W(t) = e_W + t(f_W - e_W), t in [0, infinity], and represents an infinite ray with t=0 corresponding to the point e_W and t=1 representing the fixation point f_W on this ray… For g_W, test intersection of this ray against P, S and D. Select the object O in the union of P, S, D that intersects at the smallest value of t. This coincides with the closest object among P, S, D that intersects the ray g_W(t)… If O is a member of S (static virtual objects), add the intent lookat(S) to I_avatar. If O is a member of D (dynamic virtual objects), add the intent lookat(D) to I_avatar. Set H_avatar = H_W. The output is the set of I_avatar and H_avatar. The output I_avatar and H_avatar can be communicated to Bob's wearable device for rendering Alice's avatar based on intent as shown in the block 2150.).
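The ¶0281 passage arrives garbled in the file wrapper; read as standard ray-casting notation, the construction appears to be the following (a reconstruction under that assumption, not a quotation of Miller):

\[
  e_W = e_H\,H_W, \qquad g_W(t) = e_W + t\,(f_W - e_W), \quad t \in [0, \infty),
\]
\[
  O^{*} = \operatorname*{arg\,min}_{O \,\in\, P \cup S \cup D} \,\{\, t \ge 0 : g_W(t) \in O \,\},
\]

with t = 0 at the eye point e_W, t = 1 at the fixation point f_W, and the avatar's look-at intent assigned from the closest intersected object O* among the physical (P), static virtual (S), and dynamic virtual (D) object sets.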
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the contextual-based rendering of virtual avatars of Miller with the system for providing virtual reality environment-based training and certification of Allen in view of Beall for the following reasons: (1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings, e.g. Allen ¶0003 teaches that it is desirable to take advantage of the advanced capabilities and functionality of virtual reality; (2) a finding that there was reasonable expectation of success since the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference, e.g. Allen Abstract teaches systems for a dynamic reconfigurable virtual reality environment with in-environment access to external data and resources, and Beall Abstract teaches an example marker identification and position tracking system configured to interface and work in conjunction with a marker device and camera system and to provide high fidelity tracking of user and object motion in a virtual and/or augmented reality experience, and Miller Abstract teaches systems configured to automatically scale an avatar or to render an avatar based on a determined intention of a user, an interesting impulse, environmental stimuli, or user saccade points; and (3) whatever additional findings based on the Graham factual inquiries may be necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness, e.g. Allen in view of Beall at least the above cited paragraphs, and Miller at least the inclusively cited paragraphs. Therefore, it would be obvious to one skilled in the art at the time of the invention to combine the contextual-based rendering of virtual avatars of Miller with the system for providing virtual reality environment-based training and certification of Allen in view of Beall. The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). See MPEP 2143(G).

Although not explicitly taught by Allen in view of Beall, Miller teaches in the analogous art of contextual-based rendering of virtual avatars:

Claim 4. The method of claim 1, wherein the eye gaze of the avatar is determined using a machine-learning model (¶0065 the wearable system can learn the behaviors of one or more users (e.g., including the avatar's human counterpart) and drive the avatar's animation based on such learning even though the human counterpart user may or may not be present (either remotely or in the same environment). ¶0281 Let e_W = e_H H_W be the point e_H expressed in the world frame W. Let g_W = f_W - e_W be a gaze direction ray pointing in the direction of the line of sight of the head looking towards the fixation point f_W and originating at e_W. The ray can be parameterized as g_W(t) = e_W + t(f_W - e_W), t in [0, infinity], and represents an infinite ray with t=0 corresponding to the point e_W and t=1 representing the fixation point f_W on this ray… For g_W, test intersection of this ray against P, S and D. Select the object O in the union of P, S, D that intersects at the smallest value of t. This coincides with the closest object among P, S, D that intersects the ray g_W(t)… If O is a member of S (static virtual objects), add the intent lookat(S) to I_avatar. If O is a member of D (dynamic virtual objects), add the intent lookat(D) to I_avatar. Set H_avatar = H_W. The output is the set of I_avatar and H_avatar. The output I_avatar and H_avatar can be communicated to Bob's wearable device for rendering Alice's avatar based on intent as shown in the block 2150.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the contextual-based rendering of virtual avatars of Miller with the system for providing virtual reality environment-based training and certification of Allen in view of Beall for the following reasons: (1) a finding that there was some teaching, suggestion, or motivation, either in the references themselves or in the knowledge generally available to one of ordinary skill in the art, to modify the reference or to combine reference teachings, e.g. Allen ¶0003 teaches that it is desirable to take advantage of the advanced capabilities and functionality of virtual reality; (2) a finding that there was reasonable expectation of success since the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference, e.g. Allen Abstract teaches systems for a dynamic reconfigurable virtual reality environment with in-environment access to external data and resources, and Beall Abstract teaches an example marker identification and position tracking system configured to interface and work in conjunction with a marker device and camera system and to provide high fidelity tracking of user and object motion in a virtual and/or augmented reality experience, and Miller Abstract teaches systems configured to automatically scale an avatar or to render an avatar based on a determined intention of a user, an interesting impulse, environmental stimuli, or user saccade points; and (3) whatever additional findings based on the Graham factual inquiries may be necessary, in view of the facts of the case under consideration, to explain a conclusion of obviousness, e.g. Allen in view of Beall at least the above cited paragraphs, and Miller at least the inclusively cited paragraphs. Therefore, it would be obvious to one skilled in the art at the time of the invention to combine the contextual-based rendering of virtual avatars of Miller with the system for providing virtual reality environment-based training and certification of Allen in view of Beall. The rationale to support a conclusion that the claim would have been obvious is that "a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention and whether there would have been a reasonable expectation of success in doing so." DyStar Textilfarben GmbH & Co. Deutschland KG v. C.H. Patrick Co., 464 F.3d 1356, 1360, 80 USPQ2d 1641, 1645 (Fed. Cir. 2006). See MPEP 2143(G).

Allen teaches:

Claim 5. The method of claim 1, further comprising: obtaining, from the first user device and using the server, a request to access the first VR space for the first user assessment; establishing, in response to the request, a network connection between the first user device and a second user device, wherein the second user device hosts the first VR space; and generating a first avatar and second avatar in the first VR space, wherein the first avatar corresponds to the first user performing the first user assessment, and wherein the second avatar corresponds to an evaluator of the first user assessment (¶0101-0102 In some implementations, the method includes displaying, within the virtual environment, the predetermined interaction associated with the virtual object as a second avatar. In a further implementation, the method includes recording the predetermined interaction while tracking, by the sensor of the computing device, a position of a second user within the physical environment. In another aspect, the present disclosure is directed to a system for providing virtual environment-based training and certification. The system includes a computing device comprising at least one sensor and a processor and in communication with a virtual reality display. The processor is configured to: (a) track, via the sensor, a position of a user within a physical environment; (b) display, via the virtual reality display to the user, an avatar corresponding to the tracked position of the user within a virtual environment; (c) detect an interaction of the avatar with a virtual object within the virtual environment; (d) measure a difference between the detected interaction and a predetermined interaction associated with the virtual object; and (e) generate a score inversely proportional to the measured difference).

Allen teaches:

Claim 6. The method of claim 1, further comprising: presenting second VR image data to a display device, wherein the second VR image data presents a third-person viewpoint of an avatar performing the first task in the first VR space; and transmitting, by a user device, assessment feedback over a network in response to presenting the second VR image data, wherein the assessment feedback determines a user’s score of the first user assessment (¶0024 For example, FIG. 1A is an illustration of a virtual reality environment 10 for training and certification, according to some implementations. The virtual reality environment 10 may comprise a three-dimensional environment and may be viewed from the perspective of a virtual camera, which may correspond to a viewpoint of a user or operator. ¶0084 The same functions may be used for certification purposes by disabling display of the instructor's “ghost” avatar 452. The training user's movements may still be tracked and compared to the instructor's recorded movements, and in some implementations, a score generated to determine the amount of deviation of the training user from the instructor's actions.).

Allen teaches:

Claim 7. The method of claim 1, wherein the first user assessment is a dynamic assessment that comprises a plurality of tasks comprising the first task and a second task, wherein the first task is a static task that corresponds to a predetermined right action and a predetermined wrong action for scoring the first user assessment, and wherein the second task is a branching task that is scored by a user device based on a plurality of scenarios (¶0029 Thus, the systems and methods discussed herein provide for delivery of dynamic content in virtual reality, with no need to recreate existing content, while providing real time updates of information and access to legacy data, such as documents, audio, video and other file types, which can still be utilized, as-is. The systems and methods allow for updating of URI addresses or endpoint resources through reconfiguration of the external configuration construct, without requiring programming knowledge or the need to recode or recompile an executable application. ¶0086 In some implementations, a sequence of interactions may be ordered—that is, the instructor's interactions may be performed in a particular order, and each of the user's interactions may be compared to a corresponding interaction. This may be useful in some implementations in which tasks need be performed in a particular order, or for when a specific object interaction may not be recorded. For example, in one such implementation, if an instructor rotated a dial (e.g. dial 2) as a fourth interaction, and a training user rotated a different dial (e.g. dial 3) as their fourth interaction, the distance between the position of the instructor's hand and the training user's hand may be significant (e.g. on a completely different dial). The distance measurement may automatically account for this.).

Allen teaches:

Claim 8. The method of claim 1, further comprising: transmitting, by an evaluator device, a command to a user device that is operating the first VR space, wherein the command adjusts the first VR space to produce an adjusted VR space, and wherein the adjusted VR space comprises a second VR plant component that is different from the first VR plant component and not located in the first VR space (¶0029 Thus, the systems and methods discussed herein provide for delivery of dynamic content in virtual reality, with no need to recreate existing content, while providing real time updates of information and access to legacy data, such as documents, audio, video and other file types, which can still be utilized, as-is. The systems and methods allow for updating of URI addresses or endpoint resources through reconfiguration of the external configuration construct, without requiring programming knowledge or the need to recode or recompile an executable application. ¶0086 the specific object interacted with or type of interaction may be recorded (e.g. which button is pressed), and the score may be calculated based on its conformity to the proper object or type of interaction, in addition to or instead of distance. Additionally, in some implementations, the score may be adjusted based on a difference between the recorded relative time of the instructor's interaction and the recorded relative time of the training user's interaction. These times may be calculated relative to the start of the scenario in some implementations (e.g. such that penalties for delays are continuously applied, encouraging the training user to speed up to recover after a delay), or may be calculated relative to a previous interaction.).

Allen teaches:

Claim 9. The method of claim 1, further comprising: presenting, to an evaluator device, second VR image data corresponding to a second task in the first user assessment; and transmitting, by the evaluator device, a command to a user device that is operating the first VR space, wherein the command adjusts the second task to produce an adjusted task, and wherein the adjusted task corresponds to a first user input that is different from a second user input for the second task (¶0086 the specific object interacted with or type of interaction may be recorded (e.g. which button is pressed), and the score may be calculated based on its conformity to the proper object or type of interaction, in addition to or instead of distance. Additionally, in some implementations, the score may be adjusted based on a difference between the recorded relative time of the instructor's interaction and the recorded relative time of the training user's interaction. These times may be calculated relative to the start of the scenario in some implementations (e.g. such that penalties for delays are continuously applied, encouraging the training user to speed up to recover after a delay), or may be calculated relative to a previous interaction.).
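The timing adjustment cited against claims 8 and 9 (Allen ¶0086) also lends itself to a one-function sketch. The two bases Allen describes behave differently: times measured from scenario start penalize a delay on every subsequent interaction, while times measured from the previous interaction penalize it once. The function name and penalty rate below are illustrative only, not drawn from Allen:

def time_penalty(instructor_times, trainee_times, per_second=0.05,
                 relative_to_previous=False):
    # Times arrive as seconds from scenario start (Allen ¶0086).
    if relative_to_previous:
        # Re-express as gaps since the previous interaction, so a single
        # delay is penalized only once.
        gaps = lambda ts: [b - a for a, b in zip([0.0] + list(ts[:-1]), ts)]
        instructor_times, trainee_times = gaps(instructor_times), gaps(trainee_times)
    return sum(abs(i - t) * per_second
               for i, t in zip(instructor_times, trainee_times))

# A trainee who falls 2 s behind at the second interaction and stays behind:
print(time_penalty([5, 10, 15], [5, 12, 17]))                             # 0.2
print(time_penalty([5, 10, 15], [5, 12, 17], relative_to_previous=True))  # 0.1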
Allen teaches:

Claim 10. The method of claim 1, further comprising: obtaining, by the first user device, a request to perform a second user assessment among a plurality of user assessments, wherein the plurality of user assessments further comprises the first user assessment; and obtaining, by the first user device, second assessment data that corresponds to the second user assessment; and generating, by the user device, a second VR space for the second user assessment based on the second assessment data, wherein the first assessment data and the second assessment data are stored on the user device, and wherein the first user assessment and the second user assessment are different types of user assessments (¶0082 As discussed above, in some implementations, virtual environments may be used for training and/or certification. For example, a trainer or skilled user may record a series of interactions with objects within the environment. Their interactions may be played within the environment, e.g. as a virtual or “ghost” avatar, within the view of a second user or student. This may allow the student user to go through the same motions at the same time as the recorded “ghost” avatar, allowing for intuitive learning by copying the instructor. The “ghost” avatar may be displayed in a semi-transparent form in some implementations, such that the user may view their own avatar overlapping or within the “ghost” avatar.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KURTIS GILLS whose telephone number is (571) 270-3315. The examiner can normally be reached M-F 8-5 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor, can be reached at 571-272-3955. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KURTIS GILLS/
Primary Examiner, Art Unit 3624

Prosecution Timeline

Mar 22, 2022
Application Filed
Jul 24, 2024
Non-Final Rejection — §103
Oct 11, 2024
Interview Requested
Oct 17, 2024
Applicant Interview (Telephonic)
Oct 17, 2024
Examiner Interview Summary
Oct 29, 2024
Response Filed
Dec 23, 2024
Final Rejection — §103
Feb 17, 2025
Interview Requested
Feb 24, 2025
Applicant Interview (Telephonic)
Feb 25, 2025
Examiner Interview Summary
Feb 28, 2025
Response after Non-Final Action
Mar 28, 2025
Request for Continued Examination
Mar 31, 2025
Response after Non-Final Action
May 13, 2025
Non-Final Rejection — §103
Jul 31, 2025
Interview Requested
Aug 06, 2025
Applicant Interview (Telephonic)
Aug 06, 2025
Examiner Interview Summary
Aug 14, 2025
Response Filed
Oct 29, 2025
Non-Final Rejection — §103
Jan 30, 2026
Response Filed
Feb 26, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner involving similar technology

Patent 12602664
INTELLIGENT MEETING TIMESLOT ANALYSIS AND RECOMMENDATION
2y 5m to grant; granted Apr 14, 2026
Patent 12572864
AVOIDING PROHIBITED SEQUENCES OF MATERIALS PROCESSING AT A CRUSHER USING PREDICTIVE ANALYTICS
2y 5m to grant; granted Mar 10, 2026
Patent 12572872
Mine Management System
2y 5m to grant; granted Mar 10, 2026
Patent 12567013
METHOD AND SYSTEM FOR SOLVING SUBSET SUM MATCHING PROBLEM USING DYNAMIC PROGRAMMING APPROACH
2y 5m to grant; granted Mar 03, 2026
Patent 12561703
SYSTEM AND METHOD FOR PERSONA GENERATION
2y 5m to grant; granted Feb 24, 2026
Study what changed to get these applications past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 57%
With Interview: 87% (+29.4%)
Median Time to Grant: 3y 4m
PTA Risk: High
Based on 536 resolved cases by this examiner. Grant probability derived from career allow rate.
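The headline projections can be reproduced directly from the career counts above. A minimal sketch, assuming the interview lift is additive in percentage points; the helper function and its defaults are illustrative, not this page's actual model:

def grant_probability(granted=307, resolved=536, interview_lift=0.294):
    # Career allow rate: 307 / 536 = 0.573 -> the "57% Grant Probability".
    base = granted / resolved
    # Additive interview lift: 0.573 + 0.294 = 0.867 -> "87% With Interview".
    return base, min(base + interview_lift, 1.0)

print(grant_probability())  # (0.5727..., 0.8667...)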
