Prosecution Insights
Last updated: April 19, 2026
Application No. 18/670,525

TILE PROCESSING AND TRANSFORMATION FOR VIDEO SEE-THROUGH (VST) EXTENDED REALITY (XR)

Non-Final OA (§103)
Filed: May 21, 2024
Examiner: PATEL, SHIVANG I
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 4m
Grant Probability with Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (309 granted / 415 resolved; +12.5% vs TC avg, above average)
Interview Lift: +18.5% among resolved cases with an interview (strong)
Typical Timeline: 2y 4m average prosecution; 22 applications currently pending
Career History: 437 total applications across all art units
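
The 74% headline rate follows directly from the counts above; a one-line sanity check of the arithmetic (Python, values copied from this panel):

    # Career allow rate from the resolved-case counts shown above.
    granted, resolved = 309, 415
    print(f"{granted / resolved:.1%}")  # -> 74.5%, shown rounded as 74%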

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§103: 57.8% (+17.8% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 415 resolved cases.
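
The deltas in this table let you back out the baseline the original chart compared against; a small sketch (Python, values copied from the table) shows every row implies the same ~40% Tech Center average:

    # Implied TC-average baseline per statute: examiner rate minus delta.
    rows = {
        "§101": (10.3, -29.7),
        "§103": (57.8, +17.8),
        "§102": (16.7, -23.3),
        "§112": (13.5, -26.5),
    }
    for statute, (rate, delta) in rows.items():
        print(statute, f"implied TC avg = {rate - delta:.1f}%")
    # Each row prints 40.0%, consistent with a single estimated baseline line.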

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Forutanpour et al. (US 20220392109 A1) in view of Fenney (US 20210042984 A1).

Regarding claim 1, Forutanpour discloses a method comprising: obtaining, using at least one imaging sensor of a video see-through (VST) extended reality (XR) device ([0026] XR/AR/VR content within headsets or head-mounted displays (HMDs).), (i) a first tile corresponding to a first portion of an image frame and (ii) a second tile corresponding to a second portion of the image frame after the first tile is obtained ([0051] an image can be divided or separated into different sections or tiles. After the division of the image, each section or tile can be rendered separately.); mapping, using at least one processing device of the VST XR device, the first tile onto a first distortion tile mesh, the first distortion tile mesh based on one or more characteristics of the first tile ([0064] an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices); mapping, using the at least one processing device, the second tile onto a second distortion tile mesh, the second distortion tile mesh based on one or more characteristics of the second tile ([0065] capture the distortion correction information at each iteration and combine it non-linearly by compositing mesh transformations); predicting, using the at least one processing device, a head pose of a user when the image frame will be displayed ([0066] utilize an eye-tracking camera, e.g., camera 420, to record a camera/eye position and gaze direction for a dynamically switching distortion mesh); transforming, using the at least one processing device, the first and second distortion tile meshes based on the predicted head pose, the second distortion tile mesh transformed after the first distortion tile mesh ([0067] may generate a new distortion mesh file. In order to do so, aspects of the present disclosure may reference locations indicating undistorted grid positions (e.g., positions that may need to be adjusted for camera position and orientation). Aspects of the present disclosure may also record distorted locations based on a camera position point of view); initiating, using the at least one processing device, display of the first and second rendered tiles on at least one display panel of the VST XR device ([0080] display lens 704 and 714. FIGS. 7A and 7B also depict the user's eye compared to a display panel, e.g., display panel).

Fenney discloses rendering, using the at least one processing device, the first and second tiles for display based on the first and second transformed distortion tile meshes, respectively, the second tile rendered after the first tile ([0081] Once processed, this may be rendered at render step 880. After rendering, the rendered content may be displayed at display 890.)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include rendering, using the at least one processing device, the first and second tiles for display based on the first and second transformed distortion tile meshes, respectively, the second tile rendered after the first tile, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 1.

Regarding claim 2, Forutanpour discloses a first thread executed by the at least one processing device maps the first tile, transforms the first distortion tile mesh, and renders the first tile ([0065] aspects of the present disclosure may detect a user's eye position and gaze direction in real time with eye tracking cameras and generate an interpolated correction mesh using a previously generated database); and a second thread executed by the at least one processing device maps the second tile, transforms the second distortion tile mesh, and renders the second tile ([0065] aspects of the present disclosure may detect a user's eye position and gaze direction in real time with eye tracking cameras and generate an interpolated correction mesh using a previously generated database).

Regarding claim 3, Forutanpour discloses the selected base distortion mesh for one of the distortion tile meshes is selected based on a region of the image frame on which eyes of the user are focused ([0064] aspects of the present disclosure may provide an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices.).

Fenney discloses each of the first and second distortion tile meshes is generated based on a selected base distortion mesh of a set of base distortion meshes having different resolutions ([0035] however, the tiles in the image space 702 may be of differing shapes and/or sizes. The projection plane (or virtual projection plane) 704 is however, not divided into equal shape or size tiles (as shown in FIG. 7B).)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include each of the first and second distortion tile meshes is generated based on a selected base distortion mesh of a set of base distortion meshes having different resolutions, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 3.

Regarding claim 4, Forutanpour discloses wherein each base distortion mesh is generated during initialization of the VST XR device ([0066] initialize with a ray tracing simulation centered distortion grid and/or utilize a pre-defined reference grid pattern) by: creating an initial base distortion mesh based on one or more characteristics of the at least one imaging sensor ([0045] determine a plurality of geometry meshes based on the lens calibration data, each of the plurality of geometry meshes including a set of texture coordinates); and transforming the initial base distortion mesh to correct for lens distortion, viewpoint differences between the eyes of the user and the at least one imaging sensor, and parallax ([0045] determine a render mesh including a plurality of coordinates based on the plurality of geometry meshes and the pixel map, each of the plurality of coordinates in the render mesh being associated with the weighting factor for each of the plurality of calibration points).

Regarding claim 5, Forutanpour discloses obtaining, using the at least one imaging sensor, a third tile corresponding to a third portion of the image frame after the second tile is obtained ([0051] an image can be divided or separated into different sections or tiles. After the division of the image, each section or tile can be rendered separately.); mapping, using the at least one processing device, the third tile onto a third distortion tile mesh, the third distortion tile mesh based on one or more characteristics of the third tile ([0064] an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices); transforming, using the at least one processing device, the third distortion tile mesh based on the predicted head pose, the third distortion tile mesh transformed after the second distortion tile mesh ([0067] may generate a new distortion mesh file. In order to do so, aspects of the present disclosure may reference locations indicating undistorted grid positions (e.g., positions that may need to be adjusted for camera position and orientation). Aspects of the present disclosure may also record distorted locations based on a camera position point of view); initiating, using the at least one processing device, display of the third rendered tile on the at least one display panel of the VST XR device ([0080] display lens 704 and 714. FIGS. 7A and 7B also depict the user's eye compared to a display panel, e.g., display panel).

Fenney discloses rendering, using the at least one processing device, the third tile for display on the at least one display panel based on the third transformed distortion tile mesh, the third tile rendered after the second tile ([0081] Once processed, this may be rendered at render step 880. After rendering, the rendered content may be displayed at display 890.)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include rendering, using the at least one processing device, the third tile for display on the at least one display panel based on the third transformed distortion tile mesh, the third tile rendered after the second tile, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 5.

Regarding claim 6, Forutanpour is silent as to wherein the first and second tiles partially overlap.

Fenney discloses wherein the first and second tiles partially overlap ([0039] a primitive may be in (e.g. may overlap) one or more of the tiles of the projection space 704 and the display list for a tile (which may alternatively be referred to as a control list or control stream) includes indications of primitives (i.e. primitive IDs) which are present in the tile).

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include wherein the first and second tiles partially overlap, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 6.

Regarding claim 7, Forutanpour is silent as to dynamically selecting a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles.

Fenney discloses dynamically selecting a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles ([0038] single linear transformation which is defined per tile (as represented by the single arrow between a tile in the image space 702 and the corresponding tile in the projection plane 704), such that the distortion that is applied is the same for all pixels within the tile).

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include dynamically selecting a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 7.

Regarding claim 8, Forutanpour discloses A video see-through (VST) extended reality (XR) device ([0026] XR/AR/VR content within headsets or head-mounted displays (HMDs).) comprising: at least one imaging sensor configured to (i) capture a first tile corresponding to a first portion of an image frame and (ii) capture a second tile corresponding to a second portion of the image frame after the first tile is captured ([0051] an image can be divided or separated into different sections or tiles. After the division of the image, each section or tile can be rendered separately.); and at least one processing device configured to: map the first tile onto a first distortion tile mesh, the first distortion tile mesh based on one or more characteristics of the first tile ([0064] an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices); map the second tile onto a second distortion tile mesh, the second distortion tile mesh based on one or more characteristics of the second tile ([0065] capture the distortion correction information at each iteration and combine it non-linearly by compositing mesh transformations); predict a head pose of a user when the image frame will be displayed ([0066] utilize an eye-tracking camera, e.g., camera 420, to record a camera/eye position and gaze direction for a dynamically switching distortion mesh); transform the first and second distortion tile meshes based on the predicted head pose, the at least one processing device configured to transform the second distortion tile mesh after the first distortion tile mesh ([0067] may generate a new distortion mesh file. In order to do so, aspects of the present disclosure may reference locations indicating undistorted grid positions (e.g., positions that may need to be adjusted for camera position and orientation). Aspects of the present disclosure may also record distorted locations based on a camera position point of view); initiate display of the first and second rendered tiles on at least one display panel of the VST XR device ([0080] display lens 704 and 714. FIGS. 7A and 7B also depict the user's eye compared to a display panel, e.g., display panel).

Fenney discloses rendering, using the at least one processing device, the first and second tiles for display based on the first and second transformed distortion tile meshes, respectively, the second tile rendered after the first tile ([0081] Once processed, this may be rendered at render step 880. After rendering, the rendered content may be displayed at display 890.)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include render the first and second tiles for display based on the first and second transformed distortion tile meshes, respectively, the at least one processing device configured to render the second tile after the first tile, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 8.

Regarding claim 9, Forutanpour discloses execute a first thread to map the first tile, transform the first distortion tile mesh, and render the first tile ([0065] aspects of the present disclosure may detect a user's eye position and gaze direction in real time with eye tracking cameras and generate an interpolated correction mesh using a previously generated database); and execute a second thread to map the second tile, transform the second distortion tile mesh, and render the second tile ([0065] aspects of the present disclosure may detect a user's eye position and gaze direction in real time with eye tracking cameras and generate an interpolated correction mesh using a previously generated database).

Regarding claim 10, Forutanpour discloses select the selected base distortion mesh for one of the distortion tile meshes based on a region of the image frame on which eyes of the user are focused ([0064] aspects of the present disclosure may provide an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices.).

Fenney discloses generate each of the first and second distortion tile meshes based on a selected base distortion mesh of a set of base distortion meshes having different resolutions ([0035] however, the tiles in the image space 702 may be of differing shapes and/or sizes. The projection plane (or virtual projection plane) 704 is however, not divided into equal shape or size tiles (as shown in FIG. 7B).)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include generate each of the first and second distortion tile meshes based on a selected base distortion mesh of a set of base distortion meshes having different resolutions, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 10.

Regarding claim 11, Forutanpour discloses the at least one processing device is further configured to generate each base distortion mesh during initialization of the VST XR device ([0066] initialize with a ray tracing simulation centered distortion grid and/or utilize a pre-defined reference grid pattern); and to generate each base distortion mesh, the at least one processing device is configured to: create an initial base distortion mesh based on one or more characteristics of the at least one imaging sensor ([0045] determine a plurality of geometry meshes based on the lens calibration data, each of the plurality of geometry meshes including a set of texture coordinates); and transform the initial base distortion mesh to correct for lens distortion, viewpoint differences between the eyes of the user and the at least one imaging sensor, and parallax ([0045] determine a render mesh including a plurality of coordinates based on the plurality of geometry meshes and the pixel map, each of the plurality of coordinates in the render mesh being associated with the weighting factor for each of the plurality of calibration points).

Regarding claim 12, Forutanpour discloses the at least one imaging sensor is further configured to capture a third tile corresponding to a third portion of the image frame after the second tile is captured ([0051] an image can be divided or separated into different sections or tiles. After the division of the image, each section or tile can be rendered separately.); and the at least one processing device is further configured to: map the third tile onto a third distortion tile mesh, the third distortion tile mesh based on one or more characteristics of the third tile ([0064] an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices); transform the third distortion tile mesh based on the predicted head pose, the at least one processing device configured to transform the third distortion tile mesh after the second distortion tile mesh ([0067] may generate a new distortion mesh file. In order to do so, aspects of the present disclosure may reference locations indicating undistorted grid positions (e.g., positions that may need to be adjusted for camera position and orientation). Aspects of the present disclosure may also record distorted locations based on a camera position point of view); and initiate display of the third rendered tile on the at least one display panel of the VST XR device ([0080] display lens 704 and 714. FIGS. 7A and 7B also depict the user's eye compared to a display panel, e.g., display panel).

Fenney discloses render the third tile for display based on the third transformed distortion tile mesh, the at least one processing device configured to render the third tile after the second tile ([0081] Once processed, this may be rendered at render step 880. After rendering, the rendered content may be displayed at display 890.)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include render the third tile for display based on the third transformed distortion tile mesh, the at least one processing device configured to render the third tile after the second tile, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 12.

Regarding claim 13, Forutanpour is silent as to wherein the first and second tiles partially overlap.

Fenney discloses wherein the first and second tiles partially overlap ([0039] a primitive may be in (e.g. may overlap) one or more of the tiles of the projection space 704 and the display list for a tile (which may alternatively be referred to as a control list or control stream) includes indications of primitives (i.e. primitive IDs) which are present in the tile).

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include wherein the first and second tiles partially overlap, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 13.

Regarding claim 14, Forutanpour is silent as to wherein the at least one processing device is further configured to dynamically select a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles.

Fenney discloses wherein the at least one processing device is further configured to dynamically select a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles ([0038] single linear transformation which is defined per tile (as represented by the single arrow between a tile in the image space 702 and the corresponding tile in the projection plane 704), such that the distortion that is applied is the same for all pixels within the tile).

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include wherein the at least one processing device is further configured to dynamically select a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 14.

Regarding claim 15, Forutanpour discloses A non-transitory machine readable medium containing instructions that when executed cause at least one processor of a video see-through (VST) extended reality (XR) device ([0031] the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded as one or more instructions or code on a computer-readable medium) to: obtain, using at least one imaging sensor of the VST XR device ([0026] XR/AR/VR content within headsets or head-mounted displays (HMDs).), (i) a first tile corresponding to a first portion of an image frame and (ii) a second tile corresponding to a second portion of the image frame after the first tile is obtained ([0051] an image can be divided or separated into different sections or tiles. After the division of the image, each section or tile can be rendered separately.); map the first tile onto a first distortion tile mesh, the first distortion tile mesh based on one or more characteristics of the first tile ([0064] an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices); map the second tile onto a second distortion tile mesh, the second distortion tile mesh based on one or more characteristics of the second tile ([0065] capture the distortion correction information at each iteration and combine it non-linearly by compositing mesh transformations); predict a head pose of a user when the image frame will be displayed ([0066] utilize an eye-tracking camera, e.g., camera 420, to record a camera/eye position and gaze direction for a dynamically switching distortion mesh); transform (i) the first distortion tile mesh based on the predicted head pose and (ii) the second distortion tile mesh based on the predicted head pose after the first distortion tile mesh is transformed ([0067] may generate a new distortion mesh file. In order to do so, aspects of the present disclosure may reference locations indicating undistorted grid positions (e.g., positions that may need to be adjusted for camera position and orientation). Aspects of the present disclosure may also record distorted locations based on a camera position point of view); initiate display of the first and second rendered tiles on at least one display panel of the VST XR device ([0080] display lens 704 and 714. FIGS. 7A and 7B also depict the user's eye compared to a display panel, e.g., display panel).

Fenney discloses render (i) the first tile for display based on the first transformed distortion tile mesh and (ii) the second tile for display based on the second transformed distortion tile mesh after the first tile is rendered ([0081] Once processed, this may be rendered at render step 880. After rendering, the rendered content may be displayed at display 890.)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include render (i) the first tile for display based on the first transformed distortion tile mesh and (ii) the second tile for display based on the second transformed distortion tile mesh after the first tile is rendered, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 15.

Regarding claim 16, Forutanpour discloses execute a first thread to map the first tile, transform the first distortion tile mesh, and render the first tile ([0065] aspects of the present disclosure may detect a user's eye position and gaze direction in real time with eye tracking cameras and generate an interpolated correction mesh using a previously generated database); and execute a second thread to map the second tile, transform the second distortion tile mesh, and render the second tile ([0065] aspects of the present disclosure may detect a user's eye position and gaze direction in real time with eye tracking cameras and generate an interpolated correction mesh using a previously generated database).

Regarding claim 17, Forutanpour discloses select the selected base distortion mesh for one of the distortion tile meshes based on a region of the image frame on which eyes of the user are focused ([0064] aspects of the present disclosure may provide an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices.).

Fenney discloses generate each of the first and second distortion tile meshes based on a selected base distortion mesh of a set of base distortion meshes having different resolutions ([0035] however, the tiles in the image space 702 may be of differing shapes and/or sizes. The projection plane (or virtual projection plane) 704 is however, not divided into equal shape or size tiles (as shown in FIG. 7B).)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include generate each of the first and second distortion tile meshes based on a selected base distortion mesh of a set of base distortion meshes having different resolutions, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 17.

Regarding claim 18, Forutanpour discloses further containing instructions that when executed cause the at least one processor to generate each base distortion mesh during initialization of the VST XR device ([0066] initialize with a ray tracing simulation centered distortion grid and/or utilize a pre-defined reference grid pattern); wherein the instructions that when executed cause the at least one processor to generate each base distortion mesh comprise instructions that when executed cause the at least one processor to: create an initial base distortion mesh based on one or more characteristics of the at least one imaging sensor ([0045] determine a plurality of geometry meshes based on the lens calibration data, each of the plurality of geometry meshes including a set of texture coordinates); and transform the initial base distortion mesh to correct for lens distortion, viewpoint differences between the eyes of the user and the at least one imaging sensor, and parallax ([0045] determine a render mesh including a plurality of coordinates based on the plurality of geometry meshes and the pixel map, each of the plurality of coordinates in the render mesh being associated with the weighting factor for each of the plurality of calibration points).

Regarding claim 19, Forutanpour discloses further containing instructions that when executed cause the at least one processor to: obtain a third tile corresponding to a third portion of the image frame after the second tile is obtained ([0051] an image can be divided or separated into different sections or tiles. After the division of the image, each section or tile can be rendered separately.); map the third tile onto a third distortion tile mesh, the third distortion tile mesh based on one or more characteristics of the third tile ([0064] an iterative feedback loop for capturing distortion patterns from different eye positions and gaze directions with respect to XR/AR/VR devices); transform the third distortion tile mesh based on the predicted head pose after the second distortion tile mesh is transformed ([0067] may generate a new distortion mesh file. In order to do so, aspects of the present disclosure may reference locations indicating undistorted grid positions (e.g., positions that may need to be adjusted for camera position and orientation). Aspects of the present disclosure may also record distorted locations based on a camera position point of view); initiate display of the third rendered tile on the at least one display panel of the VST XR device ([0080] display lens 704 and 714. FIGS. 7A and 7B also depict the user's eye compared to a display panel, e.g., display panel).

Fenney discloses render the third tile for display on the at least one display panel based on the third transformed distortion tile mesh after the second tile is rendered ([0081] Once processed, this may be rendered at render step 880. After rendering, the rendered content may be displayed at display 890.)

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include render the third tile for display on the at least one display panel based on the third transformed distortion tile mesh after the second tile is rendered, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 19.

Regarding claim 20, Forutanpour is silent as to further containing instructions that when executed cause the at least one processor to dynamically select a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles.

Fenney discloses further containing instructions that when executed cause the at least one processor to dynamically select a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles ([0038] single linear transformation which is defined per tile (as represented by the single arrow between a tile in the image space 702 and the corresponding tile in the projection plane 704), such that the distortion that is applied is the same for all pixels within the tile).

Forutanpour and Fenney are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the XR/AR/VR device of Forutanpour to include further containing instructions that when executed cause the at least one processor to dynamically select a number of tiles and a resolution of each of the tiles based on performance of a pipeline that obtains tiles for multiple image frames and generates rendered images based on the obtained tiles, as described by Fenney. The motivation for doing so would have been for mapping the geometry into the image space so as to counteract distortion introduced by an optical arrangement of the non-standard projection display (Fenney, [0006]). Therefore, it would have been obvious to combine Forutanpour and Fenney to obtain the invention as specified in claim 20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL whose telephone number is (571) 272-8964. The examiner can normally be reached on M-F 9-5am.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached on (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIVANG I PATEL/
Primary Examiner, Art Unit 2615
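
The claim 1 rejection turns on a tile-at-a-time distortion-correction pipeline: obtain each tile, map it onto its own distortion tile mesh, transform the meshes for a predicted head pose, then render and display the tiles in sequence. A minimal sketch of that sequence (Python; every name below is hypothetical and illustrative only, taken neither from the application nor from the cited references):

    from dataclasses import dataclass

    @dataclass
    class Tile:
        index: int       # capture order: the second tile arrives after the first
        region: tuple    # (x, y, w, h) portion of the image frame

    def build_tile_mesh(tile, grid=4):
        # Map the tile onto a distortion tile mesh based on its characteristics;
        # here the only characteristic used is the tile's region in the frame.
        x, y, w, h = tile.region
        return [(x + w * i / grid, y + h * j / grid)
                for j in range(grid + 1) for i in range(grid + 1)]

    def predict_head_pose(display_time_ms):
        # Stand-in predictor for the pose the user will have at display time.
        return (0.01 * display_time_ms, 0.0)

    def transform_mesh(mesh, pose):
        # Shift each vertex for the predicted pose; a real device would apply
        # a full reprojection rather than a translation.
        dx, dy = pose
        return [(u + dx, v + dy) for u, v in mesh]

    def render_tile(tile, mesh):
        return f"tile {tile.index}: rendered with {len(mesh)} mesh vertices"

    # Tiles are handled strictly in order, mirroring the claimed sequence.
    pose = predict_head_pose(display_time_ms=11)
    for tile in (Tile(0, (0, 0, 640, 480)), Tile(1, (640, 0, 640, 480))):
        mesh = transform_mesh(build_tile_mesh(tile), pose)
        print(render_tile(tile, mesh))   # print stands in for "initiate display"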
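Claims 4, 11, and 18 recite generating each base distortion mesh at device initialization: create an initial mesh from the imaging sensor's characteristics, then transform it to correct lens distortion, eye/camera viewpoint differences, and parallax. A sketch of the first two steps under assumed values (the one-coefficient radial model and the k1 value are illustrative assumptions, not from the record):

    def initial_base_mesh(grid=8):
        # Initial base distortion mesh: a regular grid of normalized (u, v)
        # coordinates covering the sensor image.
        return [(i / grid, j / grid)
                for j in range(grid + 1) for i in range(grid + 1)]

    def correct_lens_distortion(mesh, k1=-0.15):
        # One-coefficient radial pre-warp about the image center: each vertex
        # is scaled by (1 + k1 * r^2), the kind of inverse warp a VST renderer
        # could bake into its base mesh at initialization.
        warped = []
        for u, v in mesh:
            du, dv = u - 0.5, v - 0.5
            scale = 1.0 + k1 * (du * du + dv * dv)
            warped.append((0.5 + du * scale, 0.5 + dv * scale))
        return warped

    base = correct_lens_distortion(initial_base_mesh())
    # Viewpoint-difference and parallax corrections would compose onto `base`
    # in the same vertex-by-vertex way before the mesh is stored for reuse.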

Prosecution Timeline

May 21, 2024: Application Filed
Jan 16, 2026: Non-Final Rejection (§103)
Mar 24, 2026: Applicant Interview (Telephonic)
Mar 24, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602847: SYSTEMS AND METHODS FOR LAYERED IMAGE GENERATION (granted Apr 14, 2026; 2y 5m to grant)
Patent 12599838: APPARATUS AND METHODS FOR RECORDING AND REPORTING ABUSIVE ONLINE INTERACTIONS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592004: IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591947: DISTORTION-BASED IMAGE RENDERING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12584296: Work Machine Display Control System, Work Machine Display System, Work Machine, Work Machine Display Control Method, And Work Machine Display Control Program (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 93% (+18.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 415 resolved cases by this examiner. Grant probability is derived from the career allow rate.
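
The "with interview" projection is the base rate plus the interview lift reported in the examiner panel; a quick check of the arithmetic (Python, assuming the lift composes additively, which is how these numbers line up):

    base_rate = 74.5        # career allow rate: 309 / 415 resolved cases
    interview_lift = 18.5   # percentage-point lift with an interview
    print(f"{base_rate + interview_lift:.0f}%")  # -> 93%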
