Prosecution Insights
Last updated: April 19, 2026
Application No. 18/654,084

OPTIMIZING IMAGE RENDERING IN DISPLAY APPARATUS

Non-Final OA §103
Filed: May 03, 2024
Examiner: LU, WILLIAM
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Varjo Technologies OY
OA Round: 3 (Non-Final)
Grant Probability: 71% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
Grant Probability with Interview: 78%

Examiner Intelligence

Career Allow Rate: 71%, above average (425 granted / 595 resolved; +9.4% vs TC avg)
Interview Lift: +6.5% among resolved cases with interview (moderate lift)
Typical Timeline: 2y 8m average prosecution
Career History: 626 total applications across all art units; 31 currently pending

Statute-Specific Performance

§101: 5.2% (-34.8% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§112: 11.4% (-28.6% vs TC avg)
Based on career data from 595 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

Claims 1-15, filed October 14, 2025, are pending in the current action.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on October 14, 2025 has been entered.

Response to Arguments

Applicant's arguments filed May 12, 2025 have been fully considered but are not persuasive. Claims 1-15 remain rejected under 35 U.S.C. 103 as being unpatentable over Peuhkurinen et al. (US 2020/0348515) in view of Sztuk et al. (US 2021/0173474). Peuhkurinen ¶126-132 teaches reprojection of an image based on the predicted pose and a region of interest with high visual detail, but does not explicitly recite "content of interest remains in the user's field of view." In an analogous field of endeavor, Sztuk teaches that limitation: see Sztuk ¶56, where the display 118 is positioned to display within the field of view of the user, and Sztuk Figs. 13-14 and ¶100-103, where "scene render module 720 can adjust the content based on information from focus prediction module 708, vergence processing module 712, IMU 716, head tracking sensors 718, and eye prediction module 722. For example, upon receiving the content from engine 756, scene render module 720 adjusts the content based on the predicted state of optical block 320 received from focus prediction module 708." Thus, everything in the display image frame 401, 401P displayed on display 118 is within the user's field of view, and the high-resolution regions 430, 430P are determined from the content identified by the focus prediction module (i.e., the content of interest). It would have been obvious to one of ordinary skill in the art that the object within the field of view of Peuhkurinen ¶201 can be the content identified by the focus prediction module of Sztuk. The applicant's arguments are therefore not persuasive, and the §103 rejection over Peuhkurinen in view of Sztuk is maintained.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-15 are rejected under 35 U.S.C. 103 as being unpatentable over Peuhkurinen et al. (US 2020/0348515) in view of Sztuk et al. (US 2021/0173474).

Consider claim 1. Peuhkurinen teaches a method of optimizing image rendering implemented in at least one display apparatus, the method comprising: predicting a pose and a first gaze location of a user for a display target time (see Peuhkurinen Figs. 10A, 10B and ¶118-126, 230, where the gaze-tracking data and pose-tracking data are processed in steps 1002 and 1004); rendering an image frame based on the predicted pose and the first gaze location (see Peuhkurinen Figs. 3, 10A, 10B and ¶209-216, 230, where the system processes an input image to generate at least one image based on the predicted gaze position, direction, velocity, and acceleration, and an image is displayed during S3); determining a second gaze location after rendering of the image frame (see Peuhkurinen Figs. 3, 10A, 10B and ¶209-216, 230, where the processor performs a processing task S9 of receiving, from the server arrangement, the predicted gaze information and the predicted apparatus information); adjusting the rendered image frame such that an image area of the rendered image frame under the first gaze location is moved to the second gaze location (see Peuhkurinen Figs. 3, 10A, 10B and ¶209-216, 230, where the processor performs a processing task S13 of post-processing the at least one image prior to displaying, based on the predictions made at S11; between times t4 and t5, the processor determines whether a portion of at least one previous image, or the at least one Nth image, is to be displayed during an adjustment of a configuration of the at least one light source and the at least one optical element); and displaying the adjusted image frame on the at least one display apparatus (see Peuhkurinen Figs. 3, 10A, 10B and ¶209-216, 230, where a processing task S17 of displaying the at least one image via the at least one light source is performed after the adjustments made in S14 and S15).

Peuhkurinen teaches the first gaze location but does not explicitly teach the first gaze location and its surrounding region. In an analogous field of endeavor, Sztuk teaches the first gaze location and its surrounding region (see Sztuk Figs. 13-16 and ¶156-161, where a gaze location 317 can indicate the center of a high-resolution region 430 and a transitional region 440). It would therefore have been obvious to one of ordinary skill in the art to modify the image at the gaze point of Peuhkurinen by splitting the regions into a high-resolution region and a transitional region as taught by Sztuk. One of ordinary skill would have been motivated to make the modification to use known techniques to accommodate known artifacts in eye movements, yielding the predictable result of a stabilized image (see Sztuk ¶112-117).

Peuhkurinen ¶126-132 teaches reprojection of an image based on the predicted pose and a region of interest with high visual detail, but does not explicitly recite the limitation wherein content of interest remains in the user's field of view. In an analogous field of endeavor, Sztuk teaches that limitation, as set forth in the response to arguments above: everything in the display image frame 401, 401P displayed on display 118 is within the user's field of view (Sztuk ¶56, Figs. 13-14, ¶100-103), and the high-resolution regions 430, 430P are determined from the content identified by the focus prediction module, i.e., the content of interest. It would have been obvious to one of ordinary skill in the art that the object within the field of view of Peuhkurinen ¶201 can be the content identified by the focus prediction module of Sztuk, with the motivation of keeping the objects of interest within the scene as originally intended by Peuhkurinen.

Consider claim 2. Peuhkurinen in view of Sztuk teaches the method of claim 1, wherein the predicting of the pose and the first gaze location comprises analysing pose-tracking data and gaze-tracking data of the user wearing the at least one display apparatus when in operation (see Peuhkurinen ¶88, 230, where the term "display apparatus" refers to specialized equipment employed to present an extended-reality (XR) environment to the user when the display apparatus, in operation, is worn on the user's head).

Consider claim 3. Peuhkurinen in view of Sztuk teaches the method of claim 1, wherein the adjusting of the rendered image frame comprises warping the rendered image frame (see Peuhkurinen Figs. 7A, 7B and ¶227, where a distorted image is generated). Peuhkurinen does not, however, explicitly teach filtering saccadic movements of the user's eyes from the warping. In an analogous field of endeavor, Sztuk teaches such filtering (see Sztuk ¶112-117, where eye prediction module 722 may include a processing filter corresponding to each type of natural eye movement; for example, filtering module 800 may include a saccade filter 806, a smooth pursuit filter 808, a vestibulo-ocular filter 810, and a vergence filter 812). It would therefore have been obvious to one of ordinary skill in the art to modify the image at the gaze point of Peuhkurinen with a saccade filter as taught by Sztuk, with the motivation of using known techniques to accommodate known artifacts in eye movements, yielding the predictable result of a stabilized image (see Sztuk ¶112-117).

Consider claim 4. Peuhkurinen in view of Sztuk teaches the method of claim 1, wherein the adjusting of the rendered image comprises warping the rendered image frame, wherein the image area corresponds to a focus region of the rendered image frame that is moved from the first gaze location to the second gaze location to match the user's actual gaze direction in real time or near real time (see Peuhkurinen Figs. 7A, 7B and ¶109, 227, where warping the undistorted region within boundary 708 to the distorted region within boundary 706 is performed on the order of a couple of milliseconds, and thus substantially in real time).

Consider claim 5. Peuhkurinen in view of Sztuk teaches the method of claim 1, wherein the adjusting of the rendered image frame comprises warping the rendered image frame, wherein the image area corresponds to an entire area of the image frame that is moved from the first gaze location to the second gaze location to match the user's actual gaze direction in real time or near real time at the time of displaying (see Peuhkurinen Figs. 7A, 7B and ¶109, 227, where warping the undistorted region 704 to the distorted region 702 is performed on the order of a couple of milliseconds, and thus substantially in real time).

Consider claim 6. Peuhkurinen in view of Sztuk teaches the method of claim 1, wherein the adjusting of the rendered image frame comprises warping the rendered image frame without considering a pose delta that corresponds to any changes in the pose predicted at a first time to a subsequent second, third, fourth, or fifth time (see Peuhkurinen Fig. 3 and ¶208-216): the first time is the timepoint at which the pose and the first gaze location of the user are predicted (see Peuhkurinen ¶210, where at time t1 the processor performs a processing task S2 of sending, to the server arrangement, gaze information indicative of the gaze position, direction, velocity, and acceleration of the user, and apparatus information indicative of the position, orientation, velocity, and acceleration of the display apparatus); the second time is the time interval for which the image frame is rendered (see Peuhkurinen ¶210, where S3 is displaying the image); the third time is the timepoint at which the second gaze location is determined (see Peuhkurinen ¶212, where at time t3 the server arrangement performs a processing task S8 of completing the processing task S6, or post-processing the at least one image generated at step S6, and at or after time t3 the processor performs a processing task S9 of receiving, from the server arrangement, the predicted gaze information and the predicted apparatus information); the fourth time is the timepoint at which the rendered image frame is adjusted (see Peuhkurinen ¶213, where at time t4 the processor performs a processing task S12 of receiving, from the server arrangement, the at least one image; after completion of S11 the processor performs a processing task S13 of post-processing the at least one image prior to displaying, based on the predictions made at S11; and between times t4 and t5 the processor determines whether a portion of at least one previous image is to be displayed during an adjustment of a configuration of the at least one light source and the at least one optical element); and the fifth time is the timepoint at which the adjusted image frame is displayed (see Peuhkurinen ¶215, where at time t6 the processor performs a processing task S17 of displaying the at least one, potentially adjusted, image via the at least one light source).

Consider claim 7. Peuhkurinen in view of Sztuk teaches the method of claim 1, wherein the image area of the image frame under the first gaze location and its surrounding region is moved to the second gaze location using at least one of: a 3 degrees of freedom (DOF) warping, a 6DOF warping, a 9DOF warping, a reprojection operation, or a combination thereof (see Peuhkurinen ¶91, where the means for tracking the pose of the display apparatus is implemented as a true six degrees of freedom (6DoF) tracking system).
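The render-then-warp loop recited in claims 1-7 (predict pose and gaze, render, re-measure gaze, shift the image area, display) can be sketched as follows. This is an illustrative sketch only; every name in it (`Frame`, `predict_pose_gaze`, `render`, `warp_to`) is a hypothetical stand-in and appears in neither Peuhkurinen nor Sztuk.

```python
# Illustrative sketch of the claimed loop; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Frame:
    pixels: list   # placeholder for image data
    gaze: tuple    # gaze location the frame was rendered for

def predict_pose_gaze(t):
    """Stand-in tracker: predicted (pose, first gaze) for display time t."""
    return f"pose@{t}", (10, 20)

def render(pose, gaze):
    """Render a frame for the predicted pose and gaze location."""
    return Frame(pixels=["..."], gaze=gaze)

def warp_to(frame, new_gaze):
    """Move the image area under the old gaze location to the new one
    (the 3/6/9-DoF warp or reprojection recited in claim 7)."""
    dx = new_gaze[0] - frame.gaze[0]
    dy = new_gaze[1] - frame.gaze[1]
    return Frame(pixels=frame.pixels, gaze=new_gaze), (dx, dy)

# Claimed sequence: predict -> render -> re-measure gaze -> warp -> display.
pose, first_gaze = predict_pose_gaze(t=0)
frame = render(pose, first_gaze)
second_gaze = (12, 19)                        # gaze measured after rendering
adjusted, shift = warp_to(frame, second_gaze)
print(shift)                                  # -> (2, -1)
```

The point of contention in the rejection is not this loop itself (which Peuhkurinen's S2-S17 pipeline covers) but what region travels with the gaze and whether the content of interest stays in view.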
Consider claim 8. Peuhkurinen in view of Sztuk teaches the method of claim 1, further comprising: extracting motion vectors related to a gaze location, wherein the motion vectors indicate movement of an image region from one image frame to a next image frame (see Sztuk ¶98-100, where IMU 716 integrates measurement signals received from the accelerometers over time to estimate a velocity vector, and integrates the velocity vector over time to determine an estimated position of a reference point on head-mountable display device 102); comparing a gaze delta, indicative of a change between the first gaze location and the second gaze location of the user, with the extracted motion vectors (see Sztuk ¶98-100, where scene render module 720 adjusts the content based on the predicted state of optical block 320 received from focus prediction module 708, for example by applying a distortion correction to content to be displayed, e.g., by warping a rendered image frame prior to display based on the predicted state of optical block 320, to counteract any distortion of the displayed image frame that may be caused by the optical elements of optical block 320); and determining whether the user's gaze is in smooth pursuit tracking of a moving object based on the comparison (see Sztuk ¶114-121, where a smooth pursuit filter 808 is used in predicting future gaze locations). It would therefore have been obvious to one of ordinary skill in the art to modify the gaze tracking of Peuhkurinen with the gaze-movement filters taught by Sztuk, with the motivation of using known techniques to accommodate known artifacts in eye movements, yielding the predictable result of a stabilized image (see Sztuk ¶112-117).

Consider claim 9. Peuhkurinen in view of Sztuk teaches the method of claim 8, further comprising refraining from warping the image frame if the gaze delta aligns with the motion vectors and the image frame is displayed at or near the target display time (see Peuhkurinen Fig. 10B and ¶230, where a portion of the previous image is presented (step 1014) if the previous image is still present after the predicted adjustment).

Consider claim 10. Peuhkurinen teaches the method of claim 1, wherein the image frame is a video-see-through (VST) image frame (see Peuhkurinen ¶88, where the display apparatus acts as a device, such as an XR headset or a pair of XR glasses, operable to present a visual scene of an XR environment to the user; presenting the image on XR glasses would thus be a video-see-through image frame).

Consider claim 11. Peuhkurinen teaches a display apparatus comprising at least one processor configured to: predict a pose and a first gaze location of a user for a display target time; render an image frame based on the predicted pose and the first gaze location; determine a second gaze location after rendering of the image frame; adjust the rendered image frame such that an image area of the rendered image frame under the first gaze location is moved to the second gaze location; and display the adjusted image frame on the at least one display apparatus. These limitations mirror the method of claim 1 and are rejected on the same citations and rationale, including the combination with Sztuk for the first gaze location and its surrounding region (Sztuk Figs. 13-16, ¶156-161) and for the content of interest remaining in the user's field of view (Sztuk ¶56, ¶100-103, Figs. 13-14).

Consider claims 12-15. These apparatus claims correspond to method claims 2-5, respectively, and are rejected over Peuhkurinen in view of Sztuk on the same citations and rationale set forth above for those claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM LU, whose telephone number is (571) 270-1809. The examiner can normally be reached 10am-6:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Eason, can be reached at 571-270-7230. The fax number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center; unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

WILLIAM LU
Primary Examiner, Art Unit 2624
/WILLIAM LU/
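Claims 8-9 amount to a pursuit-aware warp gate: compute the gaze delta, compare it with the extracted content motion vector, and skip the warp when the eye is smoothly tracking a moving object and the frame is on schedule. A minimal sketch, with hypothetical function names and an assumed alignment tolerance (neither reference states a threshold):

```python
# Hypothetical sketch of the claim 8-9 logic; names and the tolerance
# value are assumptions, not taken from Peuhkurinen or Sztuk.

def is_smooth_pursuit(gaze_delta, motion_vector, tol=1.0):
    """True when the gaze delta aligns with the motion vector, i.e. the
    eye appears to be tracking the moving content (claim 8)."""
    return (abs(gaze_delta[0] - motion_vector[0]) <= tol and
            abs(gaze_delta[1] - motion_vector[1]) <= tol)

def maybe_warp(frame, gaze_delta, motion_vector, on_time):
    """Claim 9: refrain from warping when the delta matches the motion
    vector and the frame is displayed at or near the target time."""
    if is_smooth_pursuit(gaze_delta, motion_vector) and on_time:
        return frame, False          # display unmodified
    return frame, True               # warp before display

_, warped = maybe_warp("frame", gaze_delta=(3.0, 0.5),
                       motion_vector=(2.8, 0.4), on_time=True)
print(warped)   # -> False: the gaze follows the moving object, no warp
```

The examiner maps the "refrain" branch onto Peuhkurinen's step 1014 (reusing a portion of the previous image), so distinguishing claim 9 likely requires emphasizing the gaze-delta-to-motion-vector comparison itself.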

Prosecution Timeline

May 03, 2024
Application Filed
Feb 06, 2025
Non-Final Rejection — §103
May 12, 2025
Response Filed
Aug 12, 2025
Final Rejection — §103
Oct 14, 2025
Request for Continued Examination
Oct 20, 2025
Response after Non-Final Action
Nov 10, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592191: PIXEL DRIVING CIRCUIT AND DRIVING METHOD THEREFOR, AND DISPLAY PANEL AND DISPLAY APPARATUS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591307: APPARATUS AND METHOD FOR DETERMINING AN INTENT OF A USER (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585054: SUNROOF SYSTEM FOR PERFORMING PASSIVE RADIATIVE COOLING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12566328: OPTICAL SCANNING DEVICE AND IMAGE FORMING APPARATUS (granted Mar 03, 2026; 2y 5m to grant)
Patent 12566502: Methods and Systems for Controlling and Interacting with Objects Based on Non-Sensory Information Rendering (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 71%
With Interview: 78% (+6.5%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 595 resolved cases by this examiner. Grant probability derived from career allow rate.
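The projection figures follow directly from the examiner's career counts. A quick check, assuming (as the note suggests) that the with-interview figure is simply the career allow rate plus the stated +6.5% lift:

```python
# Reproduce the dashboard's headline numbers from the stated counts.
granted, resolved = 425, 595
allow_rate = granted / resolved          # career allow rate
interview_lift = 0.065                   # stated +6.5% lift with interview

print(round(allow_rate * 100))                       # -> 71
print(round((allow_rate + interview_lift) * 100))    # -> 78
```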
