Prosecution Insights
Last updated: April 19, 2026
Application No. 18/048,352

DEFERRED RENDERING ON EXTENDED REALITY (XR) DEVICES

Status: Non-Final Office Action (§103), OA Round 7
Filed: Oct 20, 2022
Examiner: BADER, ROBERT N.
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.

Predicted grant probability: 44% (Moderate); 70% with an examiner interview
Predicted OA rounds: 7-8
Predicted time to grant: 3y 1m

Examiner Intelligence

Career allow rate: 44% (173 granted / 393 resolved; -18.0% vs TC average)
Interview lift: +26.4% higher allowance rate for resolved cases with an interview
Typical timeline: 3y 1m average prosecution; 32 applications currently pending
Career history: 425 total applications across all art units

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 48.7% (+8.7% vs TC avg)
§102: 13.9% (-26.1% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)

Tech Center averages are estimates; based on career data from 393 resolved cases.
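The headline examiner figures above are simple arithmetic over career counts. A minimal sketch (the implied TC average is derived from the report's stated -18.0% delta, not a figure the report gives directly):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(173, 393)   # 173 granted of 393 resolved, approximately 44%
# The stated "-18.0% vs TC avg" implies a Tech Center average near 62%:
implied_tc_average = rate + 18.0
```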

Office Action (§103)
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. 

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/28/26 has been entered. 

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. 

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2020/0098186 A1 (hereinafter Xue) in view of “Design of a Component-Based Augmented Reality Framework” by Martin Bauer, et al. (hereinafter Bauer) in view of U.S. Patent Application Publication 2018/0218543 A1 (hereinafter Vembar) in view of U.S. Patent 5,841,439 (hereinafter Pose) in view of U.S. Patent 10,311,833 B1 (hereinafter Qiu). 
Regarding claim 1, the limitations “A method for deferred rendering on an extended reality (XR) device, the method comprising: establishing a transport session for content on the XR device with a server” are taught by Xue (Xue, e.g. abstract, paragraphs 24-165, describes a split XR rendering system, including user HMD(s) for displaying XR content, e.g. paragraphs 24-28, 40, 146, 147, i.e. the claimed XR device, in communication with an XR server, e.g. paragraphs 26, 27, 37-39, 45, 146, 148, 158-161, where the XR HMD establishes a transport session with the XR server for transferring sensor and pose data captured by the XR HMD to the server and transferring XR image content rendered by the XR server to the XR HMD, e.g. paragraphs 24, 30-32, 133-161, i.e. the claimed transport session for content between the XR device and server.) The limitations “performing [a] loop configuration for the content based on the transport session between the XR device and the server; providing pose information, based on [the] parameters of the loop configuration to the server” are taught by Xue (Xue, e.g. paragraphs 133-162, teaches that the transport session alternates between an XR HMD uplink mode and XR server downlink mode, wherein the XR HMD provides pose information to the server while in the uplink mode, i.e. there is a loop configuration for controlling how often the XR HMD provides pose information in order to avoid medium contention and congestion, e.g. paragraphs 141, 156. Xue, e.g. paragraphs 158-161, indicates a variety of mechanisms may be used to determine when the XR server should send a pose update trigger, and, e.g. paragraphs 6, 7, claim 15, that the XR HMD may be configured to send the pose update if a predetermined amount of time has passed since the last trigger, corresponding to the claimed loop configuration parameters, i.e. 
the period and/or condition parameters for causing the server to send a trigger and the predetermined time for sending an update without a trigger.) The limitation (addressed out of order) “selecting an operational mode of the XR device for an application related to the content, the selected operational mode at least partly used for a loop configuration” is partially taught by Xue (Xue, e.g. paragraphs 6, 7, 158-161, indicates a variety of mechanisms may be used to determine when the XR server should send a pose update trigger, and that the XR HMD may be configured to send the pose update if a predetermined amount of time has passed since the last trigger, corresponding to different pose delivery mode(s), i.e. loop configurations, which could be used by different operation modes of different XR content applications of the XR device, but Xue does not explicitly address selecting a pose delivery mode based on a selected operational mode of an XR content application of the XR HMD.) However, this limitation is taught by Bauer (Bauer, e.g. abstract, sections 3-5, describes a component based augmented reality framework for enabling augmented reality applications to be used on different augmented reality systems. Bauer, e.g. sections 3.1, 3.2, indicates that the low level components of the system that are used by a variety of applications, e.g. tracking, speech recognition, gesture recognition, are implemented using separate modules that can be used together, analogous to a set of tools. Bauer, e.g. sections 3.3, 3.4, further teaches that application developers write applications that interface with high-level services, such as a user interface engine, taskflow engine, and tracking manager, which in turn interact with the low-level service modules. Further, Bauer, e.g. 
sections 3.4, 5.2, indicates that the service manager will select from available services depending on the needs of an application, giving the example of an augmented reality navigation application which uses different tracking services for indoor and outdoor operation, i.e. the service is selected based on a current operational mode of an XR application executed by an XR device. Finally, Bauer, e.g. section 1, indicates that one of the major advantages of supporting a component based framework is that existing software/hardware components may be reused for a variety of applications.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xue’s split XR rendering system to include Bauer’s component-based augmented reality framework for supporting augmented reality applications in order to allow existing software/hardware components to be reused for a variety of augmented reality applications. In Xue’s modified split XR rendering system, the pose delivery mode(s) used by an augmented reality application operating on the XR HMD could be selected according to the need(s) of the current operational mode of the augmented reality application and/or the services on which it relies as taught by Bauer, e.g. sections 3.4, 5.2, i.e. Xue, paragraphs 158-161, indicates the modes have different advantages, such as being independent of renderer timing as in the periodic example of paragraph 159, or based on render timeline and Vsync for a game augmented reality application as in the example of paragraph 160. 
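As a purely illustrative aid to the pose-delivery behavior the rejection attributes to Xue (¶¶158-161), the loop can be sketched as an HMD that either sends pose updates on a fixed period or waits for a server trigger with a timeout fallback. All names (`PoseLoop`, the mode strings, the parameter values) are the editor's hypothetical choices, not anything from the record:

```python
class PoseLoop:
    """Sketch of an XR HMD's pose-update decision under a loop configuration."""

    def __init__(self, mode: str, period_s: float = 1 / 60):
        assert mode in ("periodic", "server_triggered")
        self.mode = mode
        self.period_s = period_s          # loop configuration parameter
        self.last_sent = float("-inf")    # time the last pose update was sent

    def should_send_pose(self, now: float, trigger_received: bool) -> bool:
        if self.mode == "server_triggered":
            # Send on a server trigger, or fall back to sending after a
            # predetermined time has passed since the last update (cf. Xue ¶¶6-7).
            return trigger_received or (now - self.last_sent >= self.period_s)
        # Periodic mode: send whenever the configured period has elapsed.
        return now - self.last_sent >= self.period_s

    def mark_sent(self, now: float):
        self.last_sent = now
```

Under this reading, the claimed "loop configuration parameters" map onto the mode selection plus the period/timeout values.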
When the augmented reality application(s) operating on Xue’s modified system specify a need for a particular pose delivery mode, or rely on a service which specifies a need for a particular pose delivery mode, where, as taught by Bauer, the specified need(s) may be dependent on a current operational mode of the augmented reality application, then as claimed, the selected operational mode of the XR content application of the XR device is at least partly used for the loop configuration. The limitations “receiving pre-rendered content based on the pose information from the server; and processing and displaying the pre-rendered content on the XR device” are taught by Xue (Xue, e.g. paragraphs 24, 27, 136-138, 148, teaches that the XR server transmits rendered images based on the received pose information to the XR HMD, where the XR HMD may further perform asynchronous time warping (ATW) or other processing before displaying the images on the XR HMD, e.g. paragraphs 138, 145, i.e. the claimed receipt of pre-rendered content based on the pose information, and processing and displaying thereof on the XR device.) The limitation “wherein the parameters of the loop configuration comprise a pose delivery mode” is taught by Xue (As discussed above, Xue, e.g. paragraphs 6, 7, 158-161, indicates a variety of mechanisms may be used to determine when the XR server should send a pose update trigger, and that the XR HMD may be configured to send the pose update if a predetermined amount of time has passed since the last trigger, corresponding to pose delivery mode(s), i.e. 
the pose update triggers may be sent based on link conditions as in paragraph 158, or based on a period as in paragraph 159, or according to timing parameters as in paragraph 160, along with the XR HMD being configured to either wait for a pose trigger update or alternately send pose data when a period of time has elapsed since the last trigger, corresponding to at least three server triggered pose delivery modes and two XR HMD pose delivery modes.) The limitation “wherein the parameters of the loop configuration comprise … a media session loop setting” is not explicitly taught by Xue (As discussed above, Xue, e.g. paragraphs 6, 7, 158-161, indicates a variety of pose delivery mode(s), but Xue does not explicitly address the use of a media session loop setting, per se, as defined in the disclosure, e.g. paragraph 74, indicating future transmission of the pose information to the server and pre-rendering of content by the server.) However, this limitation is taught by Vembar (Vembar, e.g. abstract, paragraphs 20-78, describes a system for augmented or virtual reality display using a head-mounted display (HMD) and a tethered computer to perform hybrid rendering, where Vembar’s HMD displays both virtual and augmented reality content, i.e. is an XR device, e.g. paragraphs 4, 5, 22, 63, and the tethered computer may be connected over a network, i.e. using a transport session with a server for XR content, e.g. paragraphs 29, 66. Further, analogous to Xue, Vembar’s system generally operates in a loop, as described in paragraphs 35-39, 50-54, 57-62, figures 2, 6, 7, where for each frame, the HMD obtains and sends pose information to the tethered computer, which is used by the tethered computer to render an updated XR image that is transmitted to the HMD, where the XR image is further processed to display one or more images to the user by the HMD. Finally, Vembar, e.g. 
paragraphs 50-54, 57-62, indicates that when the user’s head or eye motion rate(s) exceeds a threshold, instead of sending updated pose information to the tethered computer, the HMD will send a message instructing the tethered computer to stop rendering until a new pose is provided by the HMD, i.e. a loop control parameter/variable indicating both of the variables, future transmission of the pose information is or is not temporarily paused, and pre-rendering of the content by the server should continue or be paused.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xue’s split XR rendering system, including Bauer’s component-based augmented reality framework, to include Vembar’s HMD motion based update pausing feature in order to improve bandwidth use, e.g. Vembar, paragraphs 27, 50. In Xue’s modified system, in response to the XR device/head motion rate being above a threshold, the pose updating and server side rendering is paused, corresponding to the claimed media session loop setting parameter, i.e. when the XR device/head motion rate is high, the media session loop setting is a paused state, ceasing updates from both device and server, and when the XR device/head motion rate is not high, the media session loop setting is an active state, with pose updates and server side rendering continuing. The limitation “wherein the parameters of the loop configuration comprise … a frame recycling flag indicating whether the XR device performs frame recycling” are partially taught by Xue (Xue, e.g. paragraphs 77-80, 97-104, 138, 145, teaches that the system performs temporal and spatial frame warping, which can correct for a change in pose between the rendered image and the display time/pose, e.g. paragraph 78. 
While one of ordinary skill in the art of computer graphics processing would be aware that temporal and spatial frame warping can be used to generate a plurality of warped display frames from a single rendered frame in order to increase a relative display frame rate to rendered image frame rate, i.e. the claimed frame recycling, Xue does not explicitly address using the temporal and spatial frame warping for increasing the relative display rate, per se. Further, although Vembar does teach this feature, as discussed below, the modification above, wherein Xue’s system includes Vembar’s HMD motion based update pausing feature, does not include Vembar’s corresponding frame recycling feature, per se.) However, this limitation is taught by Vembar (Vembar, e.g. paragraphs 38-47, 50-54, 57-62, teaches that the HMD may use time warping to generate additional in-between frames to increase the display frame rate, e.g. as in paragraphs 46, 47, a server rendering rate of 30 fps or 45 fps can be increased to 120 fps, 90 fps, or 45 fps by the HMD generating a plurality of frames in between the server rendered frames, i.e. the claimed frame recycling. Vembar, e.g. paragraphs 38, 49, 52, 60, teaches that using the time warping to increase the frame rate is an optional feature, i.e. there is a variable used by the system indicating whether or not to perform the frame rate increase, corresponding to the claimed frame recycling flag. Furthermore, with respect to claim 2, which, as discussed in the above 112(b) rejection, is interpreted to require that the frame recycling flag indicates that the XR device performs frame recycling “when the difference between the pose of the previously-rendered frame and the pose of the subsequent frame is below a threshold”, Vembar, e.g. 
paragraphs 50-54, 57-62, teaches that when the head motion rate is above a threshold, the HMD will send a message instructing the tethered computer to stop rendering until a new pose is provided by the HMD, and instead the HMD locally renders a low detail view for display, e.g. paragraphs 50, 53, 61, i.e. when the rate of change in the pose is greater than a threshold, the system pauses the remote rendering to use a locally rendered low detail image, and conversely, when the rate of change in pose is below the threshold, as noted above, the HMD generates the plurality of frames in between the server rendered frames, i.e. the claimed frame recycling flag indicating the XR device is performing frame recycling when the difference in pose between the previously-rendered frame and the subsequent frame is below a threshold.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xue’s split XR rendering system, including Bauer’s component-based augmented reality framework, including Vembar’s HMD motion based update pausing feature, to include Vembar’s optional frame rate increase feature in order to further improve bandwidth use by having the XR device increase the display frame rate by performing time and space warping on server rendered frames to generate in-between frames, corresponding to the claimed frame recycling flag, as discussed above. In Xue’s modified system, when the HMD motion is less than the threshold, and the frame rate increase is enabled, i.e. the claimed frame recycling flag, then the XR device would perform time and space warping to generate in between frames. That is, the loop configuration would be dependent in part on the claimed frame recycling flag, e.g. as in Vembar, paragraphs 46, 47, where the rate of pose updates and corresponding rendered images received from the server required for a given display frame rate, e.g. 
60 fps, would be lower or higher depending on whether the frame rate increase is enabled, e.g. 30fps with frame rate increase to display at 60fps, compared to 60fps without frame rate increase to display at 60fps. The limitations “providing pose information, based on a send pose variable of parameters of the loop configuration, to the server”, “the media session loop is configured to control whether the XR device sends the pose information to the server and whether the XR device receives the pre-rendered content from the server and includes the send pose variable and a receive media variable” are partially taught by Xue in view of Vembar (As discussed above, Xue, e.g. paragraphs 6, 7, 158-161, indicates a variety of mechanisms may be used to determine when the XR server should send a pose update trigger, and that the XR HMD may be configured to send the pose update if a predetermined amount of time has passed since the last trigger, corresponding to pose delivery mode(s), but Xue does not explicitly address the use of a first “send pose” variable indicating future transmission of pose information to the server and/or a second “receive media” variable indicating pre-rendering of content by the server. Further, in Xue’s modified system including Vembar’s HMD motion based update pausing feature, in response to the XR device/head motion rate being above a threshold, the pose updating and server side rendering is paused, corresponding to the claimed media session loop setting parameter, i.e. when the XR device/head motion rate is high, the media session loop setting is a paused state, ceasing updates from both device and server, and when the XR device/head motion rate is not high, the media session loop setting is an active state, with pose updates and server side rendering continuing. Vembar only teaches using a single variable for pausing pose updating and rendering, rather than the claimed first and second variables.) 
However, this limitation is taught by Qiu (Qiu, e.g. abstract, col 8, line 50 - col 14, line 21, col 30, line 16 - col 45, line 16, describes an analogous augmented reality system for displaying augmented content rendered according to a tracked pose of a user’s HMD, and, further, suggests that when the HMD motion rate is below a threshold, i.e. substantially static, the system may enter a low power mode, e.g. col 32, line 53 - col 33, line 5, col 34, lines 1-26, col 38, line 10 - col 45, line 16. Qiu’s low power mode operations may stop pose tracking updates while continuing to render images using the last pose, e.g. col 38, lines 49-60, col 40, line 64 - col 41, line 9, col 43, lines 63-67, col 44, lines 12-16, 25-32, stop pose tracking updates and rendering updates and use the last rendered image, e.g. col 38, lines 42-48, col 41, lines 10-18, col 43, lines 56-61, col 44, lines 16-23, 32-37, or stop displaying altogether, i.e. Qiu teaches that it is advantageous to disable pose tracking updates and/or rendering in response to low HMD motion rates and also that it is advantageous to separately disable pose tracking updates and rendered image updating, i.e. as claimed, separate variables for disabling pose tracking updates and rendered image updating as opposed to Vembar’s single variable disabling or enabling both.) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xue’s split XR rendering system, including Bauer’s component-based augmented reality framework, including Vembar’s HMD motion based update pausing feature and optional frame rate increase feature, to include Qiu’s HMD power saving update feature in order to reduce power usage by the XR device when the user is not moving as taught by Qiu. 
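As the rejection combines the references, Vembar's single motion-based pause would be split into Qiu-style separate controls for pose updates and server rendering. A minimal, hypothetical sketch of that mapping (all names and thresholds are the editor's illustrative assumptions, not from any cited reference):

```python
from dataclasses import dataclass

@dataclass
class MediaSessionLoop:
    send_pose: bool = True      # HMD keeps sending pose updates to the server
    receive_media: bool = True  # server keeps pre-rendering/streaming content

def update_loop(motion_rate: float, pause_threshold: float,
                static_threshold: float) -> MediaSessionLoop:
    if motion_rate > pause_threshold:
        # High motion (as read onto Vembar): pause both pose updates and
        # server-side rendering until a fresh pose is supplied.
        return MediaSessionLoop(send_pose=False, receive_media=False)
    if motion_rate < static_threshold:
        # Near-static HMD (as read onto Qiu's low power mode): stop pose
        # updates separately while rendering continues with the last pose.
        return MediaSessionLoop(send_pose=False, receive_media=True)
    # Active state: both variables enabled.
    return MediaSessionLoop()
```

The two dataclass fields stand in for the claimed "send pose" and "receive media" variables of the media session loop setting.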
As noted above, Qiu teaches that it is advantageous to separately disable pose tracking updates and rendered image updating, such that in Xue’s modified system both features would be supported by adding separate variables for disabling pose tracking updates, i.e. in response to the XR device/head motion rate being above or below a threshold, and disabling rendered image updating, i.e. depending on the application’s power saving settings and the type of motion, corresponding to the media session loop setting parameter comprising the claimed send pose and receive media variables. The limitation “wherein the frame recycling flag is settable to indicate that the XR device performs frame recycling at least in response to an identification that one or more mesh objects in the pre-rendered content are static along one or more scene safe volume paths” is partially taught by Xue in view of Vembar (As discussed above, in Xue’s modified system, when the HMD motion is less than the threshold, and the frame rate increase is enabled, i.e. the claimed frame recycling flag, then the XR device would perform time and space warping to generate in between frames. Further, Vembar, e.g. paragraph 49, teaches that the frame rate increasing feature may be used/adjusted in response to determining that the scene is static, i.e. that the user’s head is not moving quickly and objects in the scene are still or moving slowly, such that the frame recycling flag/frame recycling is set/performed in response to determining that the objects in the pre-rendered content are static, where said objects may be represented by a mesh of primitives, e.g. Xue, paragraph 52. However, Vembar does not indicate any particular technique for determining that the objects in the scene are sufficiently still/slow, i.e. Vembar does not teach determining that the objects are static “along scene safe volume paths”, per se.) However, this limitation is taught by Pose (Pose, e.g. 
abstract, columns 1-39, describes a system for reducing latency in virtual reality rendering systems by performing prioritized rendering based on calculating object validity periods. Pose, columns 25-32 describes details of the prioritized rendering technique, which avoids re-rendering objects until they have changed more than a threshold amount, e.g. col 26, lines 2-8, 60 - col 27, line 19, col 28, line 19 - col 29, line 65. Further, Pose, e.g. col 29, calculates the validity period in consideration of translational validity, size validity, and animation validity, where the size validity is dependent on the volume of the object and the change in distance of the object relative to the viewpoint, as shown in figure 16. That is, Pose’s size validity corresponds to the claimed identification of objects which are static along one or more scene safe volume paths, i.e. the object’s radius represents the volume of the object, and the size validity measures the change in apparent size as the object travels along a translation path in consideration of the object’s radius/volume.) 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xue’s split XR rendering system, including Bauer’s component-based augmented reality framework, including Vembar’s HMD motion based update pausing feature and optional frame rate increase feature, including Qiu’s HMD power saving update feature separating variables for disabling pose tracking updates and disabling rendered image updating, to use Pose’s object validity period calculation technique to determine whether a given scene is a static scene as taught by Vembar in order to adjust the frame rate of the frame rate increasing feature because Vembar does not indicate any particular technique for determining that the objects in the scene are sufficiently still/slow, and Pose’s object validity periods are used to determine whether objects are sufficiently static for the same purpose of adjusting the rate at which the objects in the scene are re-rendered. As noted in the previous modification, in Xue’s modified system, when the HMD motion is less than the threshold, and the frame rate increase is enabled, i.e. the claimed frame recycling flag, then the XR device would perform time and space warping to generate in between frames, corresponding to the claimed loop configuration dependent in part on the claimed frame recycling flag, where as in Vembar, e.g. paragraph 49, the rate of time and space warping/frame recycling would be dependent on determining that the scene was a static scene using Pose’s object validity periods, where as noted above, Pose’s size validity corresponds to the claimed identification of objects which are static along one or more scene safe volume paths, i.e. the object’s radius represents the volume of the object, and the size validity measures the change in apparent size as the object travels along a translation path in consideration of the object’s radius/volume. 
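One possible reading of how Pose's size validity maps onto objects "static along a scene safe volume path" can be sketched as a check that an object of radius r, travelling a path segment, changes apparent size by less than a threshold. The angular-size formulation and the threshold are the editor's assumptions, not language from Pose:

```python
import math

def apparent_size(radius: float, distance: float) -> float:
    """Angular size (radians) of a sphere of the given radius at a distance."""
    return 2.0 * math.asin(min(1.0, radius / max(distance, radius)))

def static_along_path(radius: float, d_start: float, d_end: float,
                      threshold_rad: float) -> bool:
    """True if the change in apparent size over the path segment (from
    distance d_start to d_end from the viewpoint) stays under the threshold,
    i.e. the object need not be re-rendered yet."""
    change = abs(apparent_size(radius, d_start) - apparent_size(radius, d_end))
    return change < threshold_rad
```

The object's radius here plays the role of the spheroid bounding volume defined around each point of the path segment, and the start/end distances play the role of the segment's endpoints.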
The limitation “each of the one or more scene safe volume paths is defined by a bounding volume comprising at least one of: … a spheroid bounding volume defined by a radius in each dimension around each point along a path segment” is taught by Xue in view of Vembar and Pose (As discussed above, in Xue’s modified system, the rate of Vembar’s time and space warping/frame recycling would be dependent on determining that the scene was a static scene using Pose’s object validity periods, where as noted above, Pose’s size validity corresponds to the claimed identification of objects which are static along one or more scene safe volume paths, i.e. the object’s radius represents the volume of the object, and the size validity measures the change in apparent size as the object travels along a translation path in consideration of the object’s radius/volume. That is, as claimed, a spheroid bounding volume is defined using a radius in each dimension, i.e. the same radius is used in all directions/dimensions, defined at each point along the path segment, i.e. the start and end points of the line segment representing the maximum translation distance, as shown in figure 16, showing a 2D representation having circles instead of spheres, where a circle(sphere) is defined at each end of the path segment having the radius.) Regarding claim 2, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, i.e. as discussed above with respect to the frame recycling flag parameter, Xue’s system modified to include Vembar’s HMD motion based update pausing feature and Vembar’s optional frame rate increase feature, when the XR device/head motion is less than the threshold, and the frame rate increase is enabled, i.e. 
the claimed frame recycling flag, then the XR device would perform time and space warping to generate in between frames, and when the XR device/head motion is above the threshold, the XR device will send a message instructing the tethered computer to stop rendering until a new pose is provided by the XR device, and instead the XR device locally renders a low detail view for display, e.g. Vembar, paragraphs 50, 53, 61, corresponding to the requirement of claim 2, the claimed frame recycling flag indicating the XR device is performing frame recycling under conditions where reprojection does not result in occlusion holes or artifacts, e.g. as in paragraph 83 of Applicant’s disclosure, where the conditions are that the difference between adjacent frames is sufficiently small. Further, as discussed in the claim 1 rejection with respect to the use of Pose’s object validity periods, the rate of time and space warping/frame recycling would be dependent on determining that the scene was a static scene using Pose’s object validity periods, i.e. if the scene is determined to be a relatively static scene, these are also conditions indicating reprojection would not result in occlusion holes or artifacts. Regarding claim 3, the limitations “the pose delivery mode of the loop configuration comprises one of multiple pose delivery modes; and the multiple pose delivery modes comprise: an offline mode where the pose information is not sent to the server; a periodic mode where the pose information is periodically sent to the server; and a non-periodic mode where the pose information is sent to the server only when triggered by the XR device” are taught by Xue in view of Vembar (As discussed in the claim 1 rejections above, Xue, e.g. 
paragraphs 6, 7, 158-161, indicates a variety of mechanisms may be used to determine when the XR server should send a pose update trigger, and that the XR HMD may be configured to send the pose update if a predetermined amount of time has passed since the last trigger, corresponding to pose delivery mode(s), i.e. the pose update triggers may be sent based on link conditions as in paragraph 158, or based on a period as in paragraph 159, or according to timing parameters as in paragraph 160, along with the XR HMD being configured to either wait for a pose trigger update or alternately send pose data when a period of time has elapsed since the last trigger, corresponding to at least three server triggered pose delivery modes and two XR HMD pose delivery modes. The claimed periodic mode corresponds to the periodic server triggered mode as in paragraph 159, and the non-periodic mode corresponds to Xue’s other modes where pose information is sent when triggered by receiving a pose trigger update packet as in paragraphs 158, 160, 161. Further, as discussed in the claim 1 rejection above, in Xue’s modified system including Vembar’s HMD motion based update pausing feature, in response to the XR device/head motion rate being above a threshold, the pose updating and server side rendering is paused, which corresponds to the claimed offline mode, i.e. when pose updating and rendering is paused, pose information is not sent to the server until the pose updating and rendering is unpaused and a different pose delivery mode, e.g. periodic or non-periodic, is reactivated.) Regarding claim 4, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, i.e. 
the modification of Xue’s split XR rendering system to include Qiu’s HMD power saving update feature uses separate variables for disabling pose tracking updates and disabling rendered image updating, corresponding to the send pose and receive media variables as recited in claim 4. Regarding claim 5, the limitation “changing the operational mode of the XR device, and using the changed operational mode for the loop configuration” is taught by Xue in view of Bauer (As discussed in the claim 1 rejection above, in Xue’s modified split XR rendering system, pose delivery mode(s) used by an augmented reality application operating on the XR HMD could be selected according to the need(s) of the current operational mode of the augmented reality application and/or the services on which it relies as taught by Bauer, e.g. sections 3.4, 5.2, i.e. Xue, paragraphs 158-161, indicates the modes have different advantages, such as being independent of renderer timing as in the periodic example of paragraph 159, or based on render timeline and Vsync for a game augmented reality application as in the example of paragraph 160. When the augmented reality application(s) operating on Xue’s modified system specify a need for a particular pose delivery mode, or rely on a service which specifies a need for a particular pose delivery mode, where, as taught by Bauer, the specified need(s) may be dependent on a current operational mode of the augmented reality application, then as claimed, the selected operational mode of the XR content application of the XR device is at least partly used for the loop configuration, and by extension, when the user changes the operational mode of the augmented reality application, or changes to a different application, which also changes the operational mode of the XR device, the different operational mode of the augmented reality application, or different application, may specify a need for a different pose delivery mode, i.e. 
using the changed operational mode for the loop configuration.) Regarding claim 6, the limitation “wherein the operational mode comprises at least one of: a head’s up display (HUD) mode, a two-dimensional (2D) mode, a media mode, a desktop augmented reality (AR) mode, a room mixed reality (MR) mode, an area MR mode, and an outside MR mode” are taught by Xue in view of Bauer (While Xue does not explicitly detail types of XR applications, Xue, e.g. paragraph 146, indicates that XR is intended to cover augmented reality, mixed reality, and virtual reality applications, which would anticipate the claimed AR and MR modes, as well as a 2D/media mode, i.e. from a game engine rendering VR. Further, Bauer, e.g. section 5, describes an exemplary augmented reality application that works both indoors and outdoors, i.e. room/area indoor MR and outdoor MR modes, and includes a 2D HUD overlay, e.g. figure 10. That is, Xue’s modified system would be able to support XR applications that are VR, AR, or MR, indoors or outdoors, 2D, and in the style of a HUD, corresponding to the claimed operational mode comprising at least one of the claimed modes.) Regarding claim 7, “wherein the XR device includes at least one camera, at least one depth sensor, and at least one inertial measurement unit (IMU)” is partially taught by Xue as modified in the claim 1 rejection (Xue, e.g. paragraphs 42, 54, 59, that the XR HMD may include camera(s) and accelerometer(s), i.e. IMU(s), and other sensors, but does not explicitly mention depth sensor(s), per se. The modification in the claim 1 rejection did not include Vembar’s sensors.) However, this limitation is taught by Vembar (Vembar, e.g. abstract, paragraphs 20-78, describes a system for augmented or virtual reality display using an head mounted display (HMD) and a tethered computer to perform hybrid rendering, analogously to Xue’s split XR rendering system, as discussed in more detail in the claim 1 rejection above. Further, Vembar, e.g. 
paragraphs 27, 28, 34, 69, indicates that proximity sensors and range finders, i.e. depth sensors, may be used in addition to camera(s) and IMU(s).) Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Xue’s split XR rendering system, including Bauer’s component-based augmented reality framework, including Vembar’s HMD motion based update pausing feature and optional frame rate increase feature, including Qiu’s HMD power saving update feature separating variables for disabling pose tracking updates and disabling rendered image updating, using Pose’s object validity period calculation technique to determine whether a given scene is a static scene as taught by Vembar, to include additional sensors on the XR HMD as taught by Vembar, including depth sensors, because Xue indicates a variety of sensors may be used, and Vembar, describing an analogous system, suggests using depth sensor(s) in combination with camera(s) and IMU(s), i.e. to capture depth information for use in an XR application. The limitation "the operational mode is selected based on data from at least one of: the at least one camera, the at least one depth sensor, and the at least one IMU” is taught by Xue in view of Bauer (It is noted that Applicants disclosure, e.g. paragraph 106, indicates that the selection may be that certain modes are or are not available based on the availability of hardware resources. Xue’s modified split XR rendering system using Bauer’s component based augmented reality framework enforces an analogous availability requirement for enabling augmented reality applications, i.e. Bauer, e.g. section 3.4, paragraphs 3-5, indicates that the services have needs and abilities, wherein a service may only be used when its needs are met. 
For an exemplary augmented reality application which relies on a service which specifies the need for input from the camera, depth sensor, and/or the IMU, executing the exemplary application would be dependent on whether said camera, depth sensor, and IMU are available, i.e. as claimed the operational mode would be selected based on data from at least one of the camera, depth sensor, and/or IMU.) Regarding claims 8 and 15, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 1 above, with Xue, e.g. paragraphs 50-53, 163, describing the XR HMD device operation which includes execution of software instructions that may be stored on non-transitory media. Regarding claims 9 and 16, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 2 above. Regarding claims 10 and 17, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 3 above. Regarding claim 11, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 4 above. Regarding claims 12 and 18, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 5 above. Regarding claims 13 and 19, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 6 above. Regarding claims 14 and 20, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 7 above. Response to Arguments It is noted that Applicant asserts that the 112 rejections should be withdrawn, however, Applicant’s 12/29/25 claim amendments were entered in the 1/15/26 advisory action, and additionally the 112 rejections were withdrawn therein due to the cancellation of claim 21, i.e. 
the 112 rejections were already withdrawn prior to Applicant’s 1/28/26 response. Applicant's arguments filed 1/28/26 have been fully considered but they are not persuasive. In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). With respect to the prior art, Applicant’s argument is merely that Xue does not anticipate the amended limitations further defining the bounding volume used to define the scene safe volume paths. Although the scene safe volume path limitation was previously taught in combination with Pose’s disclosure, including referencing the volume defined by a radius as discussed in the independent claim rejection above, Applicant’s remarks do not address Pose’s disclosure beyond simply asserting that Pose does not cure the deficiency, without suggesting any reason why Pose’ sphere volumes defined using a radius at the start and end of the path are distinct from the claimed bounding volume including a spheroid bounding volume at each point along the path. Therefore Applicant’s remarks cannot be considered persuasive. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER whose telephone number is (571)270-3335. The examiner can normally be reached 11-7 m-f. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard can be reached at 571-272-7773. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ROBERT BADER/Primary Examiner, Art Unit 2611
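The pose delivery behavior the rejection maps onto the combined references, periodic delivery and trigger-driven delivery (Xue), motion-based pausing (Vembar), and separate flags for pose sending versus media receiving (the claim 4 discussion of Qiu), can be illustrated with a minimal sketch. Everything here is a hypothetical illustration: the names `PoseDeliveryMode`, `XRDeviceLoop`, `send_pose`, `receive_media`, and the threshold and period defaults are assumptions for clarity, not code from any cited reference or from the application.

```python
from enum import Enum, auto

class PoseDeliveryMode(Enum):
    PERIODIC = auto()      # pose sent every fixed interval (cf. Xue para. 159)
    NON_PERIODIC = auto()  # pose sent only on a server trigger packet
    OFFLINE = auto()       # pose updates paused, e.g. during rapid head motion

class XRDeviceLoop:
    """Hypothetical sketch of a client-side pose delivery loop."""

    def __init__(self, period_ms=11, motion_threshold=2.0):
        self.mode = PoseDeliveryMode.PERIODIC
        self.period_ms = period_ms
        self.motion_threshold = motion_threshold
        # Separate flags, as in the claim 4 discussion: pausing pose
        # tracking is independent of pausing rendered-media receipt.
        self.send_pose = True
        self.receive_media = True
        self._since_last_send_ms = 0

    def on_motion(self, head_motion_rate):
        # Above-threshold head motion pauses pose delivery ("offline");
        # when motion settles, a normal delivery mode is reactivated.
        if head_motion_rate > self.motion_threshold:
            self.mode = PoseDeliveryMode.OFFLINE
            self.send_pose = False
        elif self.mode is PoseDeliveryMode.OFFLINE:
            self.mode = PoseDeliveryMode.PERIODIC
            self.send_pose = True

    def should_send_pose(self, elapsed_ms, trigger_received=False):
        if not self.send_pose:
            return False  # offline: no pose reaches the server
        if self.mode is PoseDeliveryMode.PERIODIC:
            self._since_last_send_ms += elapsed_ms
            if self._since_last_send_ms >= self.period_ms:
                self._since_last_send_ms = 0
                return True
            return False
        # NON_PERIODIC: send only when the server's trigger arrives.
        return trigger_received
```

The point of the separate `send_pose` and `receive_media` flags is that disabling pose-tracking updates and disabling rendered-image updates are tracked independently, which is the distinction the claim 4 discussion draws.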

Prosecution Timeline

Oct 20, 2022
Application Filed
May 01, 2024
Non-Final Rejection — §103
Jul 12, 2024
Interview Requested
Jul 29, 2024
Applicant Interview (Telephonic)
Jul 29, 2024
Examiner Interview Summary
Aug 07, 2024
Response Filed
Aug 14, 2024
Final Rejection — §103
Oct 11, 2024
Response after Non-Final Action
Nov 07, 2024
Response after Non-Final Action
Nov 19, 2024
Request for Continued Examination
Dec 04, 2024
Response after Non-Final Action
Dec 12, 2024
Non-Final Rejection — §103
Feb 07, 2025
Interview Requested
Feb 14, 2025
Examiner Interview Summary
Feb 14, 2025
Applicant Interview (Telephonic)
Mar 17, 2025
Response Filed
Mar 26, 2025
Final Rejection — §103
Apr 25, 2025
Applicant Interview (Telephonic)
Apr 25, 2025
Examiner Interview Summary
Jun 02, 2025
Response after Non-Final Action
Jul 01, 2025
Request for Continued Examination
Jul 02, 2025
Response after Non-Final Action
Jul 05, 2025
Non-Final Rejection — §103
Sep 04, 2025
Applicant Interview (Telephonic)
Sep 04, 2025
Examiner Interview Summary
Oct 09, 2025
Response Filed
Oct 24, 2025
Final Rejection — §103
Nov 26, 2025
Examiner Interview Summary
Nov 26, 2025
Applicant Interview (Telephonic)
Dec 29, 2025
Response after Non-Final Action
Jan 28, 2026
Request for Continued Examination
Jan 30, 2026
Response after Non-Final Action
Feb 03, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586334
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12586335
SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE
2y 5m to grant Granted Mar 24, 2026
Patent 12541916
METHOD FOR ASSESSING THE PHYSICALLY BASED SIMULATION QUALITY OF A GLAZED OBJECT
2y 5m to grant Granted Feb 03, 2026
Patent 12536728
SHADOW MAP BASED LATE STAGE REPROJECTION
2y 5m to grant Granted Jan 27, 2026
Patent 12505615
GENERATING THREE-DIMENSIONAL MODELS USING MACHINE LEARNING MODELS
2y 5m to grant Granted Dec 23, 2025
Based on the 5 most recent grants.


Prosecution Projections

7-8
Expected OA Rounds
44%
Grant Probability
70%
With Interview (+26.4%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 393 resolved cases by this examiner. Grant probability derived from career allow rate.
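The with-interview figure is consistent with the base rate under a simple additive reading: a sketch assuming the reported +26.4% interview lift is expressed in percentage points added to the 44% career allow rate (the variable names are illustrative).

```python
base_grant_rate = 44.0   # career allow rate, percent
interview_lift = 26.4    # reported interview lift, percentage points

with_interview = base_grant_rate + interview_lift
print(round(with_interview))  # prints 70, matching the projection shown
```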
