DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This office action is responsive to the amendment received 01/21/2026.
In the response to the Non-Final Office Action mailed 10/22/2025, the applicant states that no new matter has been added in the current amendments.
Claims 1, 10, and 19-20 have been amended. Claim 17 has been cancelled. In summary, claims 1-16 and 18-20 are pending in the current application.
Response to Arguments
Applicant's arguments filed 01/21/2026 have been fully considered but they are not persuasive.
Regarding claim 1, the applicant argues that the cited art fails to teach or suggest “generating a modified first image based on the captured first image and the sensor readout, the generating including equalizing respective readout resolutions of the plurality of zones corresponding to the image sensor used to capture the first image” as recited in claim 1. The arguments have been fully considered, but they are not persuasive. The examiner cannot concur with the applicant for the following reasons:
Sztuk (‘474) discloses “generating a modified first image based on the captured first image and the sensor readout”. For example, in paragraph [0061], Sztuk (‘474) teaches that as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly. In paragraph [0067], Sztuk (‘474) teaches identifying the content associated with the gaze location; Sztuk (‘474) further teaches rendering display images, correcting display images, and modifying display images; Sztuk (‘474) further teaches that display pixels are operated to display a foveated display image frame 401. In paragraph [0068], Sztuk (‘474) teaches obtaining a predicted gaze location for a future display frame and generating a foveated display image frame 401 in advance. In paragraph [0151], Sztuk (‘474) teaches that the size and shape of the high-resolution region 430 and transition region 440 for each prediction are modified accordingly; Sztuk (‘474) further teaches that the size of the high-resolution region 430 and the transitional region 440 associated with each prediction are reduced and modified.
Sztuk (‘474) further discloses “the generating including equalizing respective readout resolutions of the plurality of zones corresponding to the image sensor used to capture the first image”. For example, in paragraph [0102], Sztuk (‘474) teaches rendering all zones of the high-resolution region 430 at a target resolution, e.g., a resolution corresponding to a fovea region of a human eye; Sztuk (‘474) further teaches that the high-resolution region 430 is upsampled via super-resolution techniques; Sztuk (‘474) furthermore teaches adjusting the content of the peripheral portion 400 based on a predicted gaze location and a confidence level for that predicted gaze location. In paragraph [0151], Sztuk teaches that the size and shape of the high-resolution region 430 and transition region 440 for each prediction are modified accordingly; Sztuk further teaches that all zones of the high-resolution region 430 have high resolution. In paragraph [0152], Sztuk (‘474) teaches that the first content in the high-resolution region is rendered at a relatively high, e.g., foveal, resolution; Sztuk (‘474) further teaches that the resolution of the high-resolution region, i.e., the foveal zone, is equal. In Fig. 16 and paragraph [0161], Sztuk teaches that the high-resolution region 430 has high resolution; Sztuk further teaches that all zones of the high-resolution region 430 have high resolution.
Claims 19 and 20 recite similar subject matter as claim 1. Therefore, claims 19 and 20 are not allowable for the same reasons discussed above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 10-14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Sztuk (US 20210173474 A1, hereafter ‘474) in view of Hua (US 20240013752 A1).
Regarding claim 1 (Currently Amended), Sztuk (‘474) discloses a method comprising, by a computing system (Fig. 1; [0047]: a head-mounted display; computing system; [0054]: head-mountable display device; Fig. 3; [0060]: display the high-resolution portion of the display image; display a transition region in which the resolution of the displayed image frame decreases with increasing distance from the gaze location 317; [0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; [0062]: a foveated display image frame; [0067]: the user's eye is tracked):
generating, based on a gaze, a foveated map that includes a plurality of foveal regions to determine a sensor readout for an image sensor of the computing system ([0046]: one or more system cameras, i.e. image sensors, capture the real-world content, i.e., sensor readout, and display the real-world content, i.e. sensor readout, to the user by the device; Fig. 3; [0057]: identify an eye movement; identify a change in a gaze location; [0058]: the gaze direction is defined herein as the direction of a foveal axis 364; Fig. 3; [0060]: a portion 319 of display panel 118 surrounding the user's gaze location displays the high-resolution portion of the display image; a portion 321 of display panel 118 surrounding the portion 319 displays a transition region in which the resolution of the displayed image frame decreases with increasing distance from the gaze location 317; the generated foveated map includes the high-resolution portion, a transition region, and low resolution portion; [0061]: the foveated map changes, as the user's gaze location 317 moves around the display panel; the foveated map includes the high-resolution, transitional, and peripheral portions of the image; Fig. 4; [0062]: a generated foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion;
[0126]);
determining the sensor readout comprising a plurality of zones corresponding to the image sensor based on the plurality of foveal regions ([0046]: one or more system cameras, i.e. image sensors, capture the real-world content, i.e., sensor readout, and display the real-world content with multiple zones to the user by the device; [0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; Fig. 4; [0062]: display a foveated display image frame; a foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion; [0063]: peripheral portion 400 of image frame 401 has a resolution that is lower than the resolution of high-resolution portion 430; [0067]: display high-resolution region 430 and transition region 440 centered on a current gaze location 317), and
capturing a first image using the image sensor ([0046]: one or more system cameras, i.e. image sensors, capture the real-world content, and display the real-world content with multiple zones to the user by the device; [0106]: imaging device 760 includes one or more cameras, one or more video cameras, other devices capable of capturing images; adjust one or more imaging parameters); and
generating a modified first image based on the captured first image and the sensor readout ([0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; [0067]: identify the content associated with the gaze location; render display images, and correct display images; display pixels are operated to display a foveated display image frame 401; [0068]: obtain a predicted gaze location for a future display frame and generate a foveated display image frame 401 in advance; [0069]), the generating including equalizing respective readout resolutions of the plurality of zones corresponding to the image sensor used to capture the first image (Sztuk (‘474); [0102]: the high-resolution region 430 is upsampled via super-resolution techniques).
Sztuk (‘474) fails to explicitly disclose:
each of the plurality of zones indicating a readout resolution for an area of the image sensor for the respective zone;
In the same field of endeavor, Hua teaches:
each of the plurality of zones indicating a readout resolution for an area of the image sensor for the respective zone ([0005]: foveal vision sensor and peripheral vision sensor were integrated to capture foveated images and the peripheral images; the high-resolution inset region is optically relocated to different locations; Fig. 19; [0050]: a captured image of a foveated image; [0056]: the foveated region offers a high angular resolution; the peripheral region offers a low resolution; [0057]: the multi-resolution foveation scheme; Fig. 19 A-C; [0096]: convolve the original image with the relative resolution distribution function as a filter; the captured image of the entire FOV for overall effects; capture zoomed-in views of three different viewing angles; the captured images show noticeable degradation from the center to the 80-degree angle);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sztuk (‘474) to include each of the plurality of zones indicating a readout resolution for an area of the image sensor for the respective zone as taught by Hua. The motivation for doing so would have been to apply foveation techniques in imaging and display applications; to capture foveated images; to implement a foveated HMD; to convolve the original image with the relative resolution distribution function as a filter as taught by Hua in paragraphs [0005]-[0006] and [0096].
Regarding claim 2 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, wherein the plurality of foveal regions comprises one or more of a central foveal region, a mid foveal region, or an outer foveal region (the claimed “or” renders the alternatives optional; Sztuk (‘474); Fig. 3; [0060]: a portion 319 of display panel 118 surrounding the user's gaze location displays the high-resolution portion of the display image; Fig. 4; [0062]: a foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion;
Fig. 4; [0065]: high-resolution portion 430 has a rectilinear border and is the central foveal region; the surrounding transition portion 440 is the mid foveal region).
Regarding claim 3 (Original), Sztuk (‘474) and Hua disclose the method of claim 2, wherein the plurality of zones comprises one or more of a first zone corresponding to the central foveal region, a second zone corresponding to the mid foveal region, or a third zone corresponding to the outer foveal region (the claimed “or” renders the alternatives optional; Sztuk (‘474); Fig. 3; [0060]: a portion 319 of display panel 118 surrounding the user's gaze location displays the high-resolution portion of the display image; Fig. 4; [0062]: a foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion;
Fig. 4; [0065]: high-resolution portion 430 has a rectilinear border and is a first zone corresponding to the central foveal region; the surrounding transition portion 440 is a second zone corresponding to the mid foveal region).
Regarding claim 4 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, wherein each of the plurality of zones further indicates a field-of-view (FOV) for the image sensor for the respective zone (Sztuk (‘474); [0056]: the field of view of the displayed content; [0060]: a portion 321 of display panel 118 surrounding the portion 319 displaying the high-resolution portion of the image frame displays a transition region in which the resolution of the displayed image frame decreases with increasing distance from the gaze location 317; Fig. 4; [0065]: an edge of their field of view; [0106]: a field of view of the imaging device).
Hua further discloses a field-of-view (FOV) for the image sensor (Hua; Fig. 12; [0090]: the full FOV of the system is 80 degrees; from the center of the field of view to the ±40° edge fields; [0095]: a foveated display with a 130-degree FOV; Fig. 19A; [0096]: FIG. 19A shows the captured image of the entire FOV for overall effects).
The same motivation as in claim 1 applies here.
Regarding claim 5 (Original), Sztuk (‘474) and Hua disclose the method of claim 1,
wherein the first readout resolution is a higher resolution than the second readout resolution (Sztuk (‘474); Fig. 4; [0062]: a foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion;
).
Sztuk (‘474) and Hua further disclose:
wherein a first zone of the plurality of zones indicates a first readout resolution for a first area of the image sensor and a second zone of the plurality of zones indicates a second readout resolution of a second area of the image sensor (Hua; [0005]: foveal vision sensor and peripheral vision sensor were integrated to capture foveated images and the peripheral images; the high-resolution inset region is optically relocated to different locations; Fig. 19; [0050]: a captured image of a foveated image; [0056]: the foveated region offers a high angular resolution; the peripheral region offers a low resolution; [0057]: the multi-resolution foveation scheme), and wherein the first readout resolution is a higher resolution than the second readout resolution (Hua; [0056]: the foveated region offers a high angular resolution; the peripheral region offers a low resolution).
The same motivation as in claim 1 applies here.
Regarding claim 6 (Original), Sztuk (‘474) and Hua disclose the method of claim 5, wherein the first area overlaps the second area (Sztuk (‘474); Fig. 4; [0062]: a foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion;
the high-resolution portion and the transition portion overlap as illustrated in Fig. 4).
Regarding claim 7 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, further comprising:
blending, using an image signal processor of the computing system, a first sensor readout corresponding to a first zone of the plurality of zones and a second sensor readout corresponding to a second zone of the plurality of zones (Sztuk (‘474); [0064]: transitional portion 440 is blended such that the resolution smoothly varies from the outer boundary 455 at the resolution of the low resolution portion 400; the transitional portion 450 may be faded to blend with background portion; [0103]: apply a blending function to adjust the resolution of transitional portion 440 to be rendered; upon rendering, the resolution smoothly transitions from a resolution of the high-resolution portion 430 of the image to the resolution of the background region; the blended transitional portion is used for the transitional region 440 of the composite content).
Regarding claim 8 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, further comprising:
omitting one or more areas corresponding to one or more zones of the plurality of zones to be readout in the sensor readout (Hua; Fig. 19B; [0096]: the captured images show noticeable degradation from the center to the 80-degree angle;
apply the filter to omit some details in the captured images at the 60-degree angle; Fig. 19C; [0096]: these images were obtained by convolving the original image with the human relative resolution distribution function as a filter).
Regarding claim 10 (Currently Amended), Sztuk (‘474) and Hua disclose the method of claim 1, further comprising:
determining an updated gaze (Sztuk (‘474); Fig. 3; [0057]: eye tracking unit 215 includes one or more cameras; identify an eye movement; identify a change in a gaze location; [0058]: the gaze direction is defined herein as the direction of a foveal axis 364; [0061]: user's gaze location 317 moves around the display panel due to rotation of the user's eye 350; Fig. 12; [0154]: determine and update a static, or current, gaze location using, for example, a latest (most recent) eye tracking data point); and
generating an updated foveated map based on the updated gaze, wherein the updated foveated map comprises a plurality of updated foveal regions (Sztuk (‘474); [0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; Fig. 4; [0062]; Fig. 12; [0155]: the size of the high-resolution region is further reduced to a minimum size for the foveated display frames to be displayed while the gaze location is static).
Regarding claim 11 (Original), Sztuk (‘474) and Hua disclose the method of claim 10, further comprising:
determining an updated sensor readout comprising a plurality of updated zones corresponding to the image sensor based on the plurality of updated foveal regions (Sztuk (‘474); [0046]: one or more system cameras, i.e. image sensors, capture the real-world content, i.e., sensor readout, and display the real-world content with multiple zones to the user by the device; [0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; Fig. 4; [0062]);
capturing a second image using the image sensor (Sztuk (‘474); [0046]: a real-world content is captured by one or more system cameras, i.e. image sensor, and is displayed to the user by the device; [0106]: imaging device 760 includes one or more cameras, one or more video cameras, other devices capable of capturing images; imaging device 760 receives one or more calibration parameters from console 750 to adjust one or more imaging parameters and to capture a second image); and
generating a modified second image based on the captured second image and the updated sensor readout (Sztuk (‘474); [0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; [0067]: content associated with the gaze location is identified; display images are rendered, and corrected; display pixels are operated to display a foveated display image frame 401; [0106]).
Regarding claim 12 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, further comprising:
performing, using an image signal processor of the computing system, noise reduction (Sztuk (‘474); [0056]: correct the display light for one or more additional optical errors; [0106]: increase signal to noise ratio, i.e. reducing noise; [0155]: reduce the impact of system latency and prevent visual errors or disruptions for the user), wherein a first noise reduction algorithm is used for a first zone of the plurality of zones and a second noise reduction algorithm is used for a second zone of the plurality of zones (Sztuk (‘474); [0103]: smooth the content with an appropriate smoothing filter; [0106]: one or more filters; increase signal to noise ratio; [0113]: filtering module 800 receives other data such as user calibration data 816, scene content information 818, and head tracking data 820; [0116-0117]).
Regarding claim 13 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, further comprising:
performing depth processing using a depth map of a scene captured by the first image to determine a mapping (Sztuk (‘474); Fig. 5; [0074]: the vergence depth may correspond to the virtual depth of virtual object 508; [0094]: achieve a corresponding image plane depth to maintain a zone of comfort for the user; [0104]: intentionally blur an object in the high-resolution region 430 that is at a different depth from another object in the high-resolution region on which the user's eyes are verged; render objects in the high-resolution region that are at depths that are far from the focal plane at a lower resolution; [0131]: identify the vergence depth); and
warping a first sensor readout corresponding to a first zone of the plurality of zones based on the mapping (Sztuk (‘474); [0100]: scene render module 720 applies a distortion correction to display content to be displayed; warp a rendered image frame prior to display based on the predicted state of optical block 320; counteract any distortion of the displayed image frame; a predicted three-dimensional gaze location is used to predict the distortion correction).
Regarding claim 14 (Original), Sztuk (‘474) and Hua disclose the method of claim 13, further comprising:
determining an inpainting algorithm based on a zone of the plurality of zones (Sztuk (‘474); [0100]: scene render module 720 applies a distortion correction to display content to be displayed; warp a rendered image frame prior to display based on the predicted state of optical block 320; counteract any distortion of the displayed image frame; a predicted three-dimensional gaze location is used to predict the distortion correction); and
performing a first inpainting algorithm on the warped first sensor readout based on the first zone (Sztuk (‘474); [0067]: display images are rendered, corrected, and processed; [0100]: scene render module 720 applies a distortion correction to display content to be displayed; warp a rendered image frame prior to display based on the predicted state of optical block 320; counteract any distortion of the displayed image frame; a predicted three-dimensional gaze location is used to predict the distortion correction).
Regarding claim 16 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, wherein the modified first image comprises a first image section corresponding to a first sensor readout of a first zone of the plurality of zones, a second image section corresponding to a second sensor readout of a second zone of the plurality of zones, and a third image section corresponding to a third sensor readout of a third zone of the plurality of zones (Sztuk (‘474); Fig. 3; [0060]: a portion 319 of display panel 118 surrounding the user's gaze location displays the high-resolution portion of the display image; Fig. 4; [0062]: a foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion; [0156]: display a foveated display image frame 401 for a current static gaze location; Fig. 13; [0157]: gaze location 317 is a current, static gaze location based on a most recent measurement of gaze direction 506 from eye tracking units 215 and eye tracking module 710; high-resolution region 430 and transitional region 440 of foveated display image frame 401 are circular, and have respective widths R_H and R_T).
Claims 9, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sztuk (US 20210173474 A1, hereafter ‘474) in view of Hua (US 20240013752 A1), and further in view of Barry (US 20170287447 A1).
Regarding claim 9 (Original), Sztuk (‘474) and Hua disclose the method of claim 1.
Sztuk (‘474) and Hua fail to explicitly disclose:
wherein a first zone of the plurality of zones is associated with a first frame rate and a second zone of the plurality of zones is associated with a second frame rate.
In the same field of endeavor, Barry teaches:
wherein a first zone of the plurality of zones is associated with a first frame rate and a second zone of the plurality of zones is associated with a second frame rate ([0040]: the resolution and frame-rate of the foveal region are different than those of the extra-foveal region; the extra-foveal region is represented at a lower resolution and lower frame-rate; [0041]: transfer display data corresponding to the foveal region at a relatively higher frame rate and resolution; transmit the extra-foveal data at a relatively lower frame rate and/or resolution; [0044]: extra-foveal frame buffers 308/312 and foveal frame buffers 310/314 are loaded at different frame rates).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sztuk (‘474) and Hua to include wherein a first zone of the plurality of zones is associated with a first frame rate and a second zone of the plurality of zones is associated with a second frame rate as taught by Barry. The motivation for doing so would have been to improve a head-mounted display design with separate left and right foveal display and lower resolution extra-foveal display buffers combined with alpha-blending; to represent the extra-foveal region at a lower resolution and lower frame-rate; to transfer, in parallel, display data corresponding to the foveal region at a relatively higher frame rate as taught by Barry in paragraphs [0034] and [0040]-[0041].
Regarding claim 19 (Currently Amended), Sztuk (‘474) discloses software that is operable when executed to (Fig. 1; [0047]: a head-mounted display; computing system; [0054]: head-mountable display device; Fig. 3; [0060]: display the high-resolution portion of the display image; display a transition region in which the resolution of the displayed image frame decreases with increasing distance from the gaze location 317; [0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; [0062]: a foveated display image frame; [0067]: the user's eye is tracked; [0109]: when a group of instructions is executed by a processor, content is generated for presentation to the user):
Sztuk (‘474) and Hua fail to explicitly disclose:
one or more computer-readable non-transitory storage media embodying.
In the same field of endeavor, Barry teaches:
one or more computer-readable non-transitory storage media embodying ([0018]: a non-transitory computer readable medium storing a computer-readable program for a head-mounted display device with eye-tracking of a user's two eyes; Fig. 2; [0033]: a memory 205 comprises the frame buffer 203 and the frame buffer 204; [0054]: enable compressed storage in a memory; Fig. 4; [0046]: the memory 406).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sztuk (‘474) and Hua to include one or more computer-readable non-transitory storage media embodying as taught by Barry. The motivation for doing so would have been to improve a head-mounted display design with separate left and right foveal display and lower resolution extra-foveal display buffers combined with alpha-blending; to represent the extra-foveal region at a lower resolution and lower frame-rate; to transfer, in parallel, display data corresponding to the foveal region at a relatively higher frame rate as taught by Barry in paragraphs [0034] and [0040]-[0041].
The remaining claim limitations are similar to those recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 19.
Regarding claim 20 (Currently Amended), Sztuk (‘474) discloses a system (Fig. 1; [0047]: a head-mounted display; computing system; [0054]: head-mountable display device; Fig. 3; [0060]: display the high-resolution portion of the display image; display a transition region in which the resolution of the displayed image frame decreases with increasing distance from the gaze location 317; [0061]: as the user's gaze location 317 moves around the display panel due to rotation of the user's eye 350, the portions 319, 321, and 323 of display panel 118 that correspond to the high-resolution, transitional, and peripheral portions of the image change accordingly; [0062]: a foveated display image frame; [0067]: the user's eye is tracked) comprising:
Sztuk (‘474) and Hua fail to explicitly disclose:
one or more processors; and
one or more computer-readable non-transitory storage media coupled to one or more of the processors and comprising instructions operable when executed by one or more of the processors to cause the system to:
In the same field of endeavor, Barry teaches:
one or more processors (Fig. 2; [0033]: a processor 206 controls the loading of pixels/image data into the frame buffers 203/204; Fig. 4; [0046]: the processor); and
one or more computer-readable non-transitory storage media coupled to one or more of the processors and comprising instructions operable when executed by one or more of the processors to cause the system to ([0018]: a non-transitory computer readable medium storing a computer-readable program for a head-mounted display device with eye-tracking of a user's two eyes; Fig. 2; [0033]: a memory 205 comprises the frame buffer 203 and the frame buffer 204; a processor 206 controls the loading of pixels/image data into the frame buffers 203/204; [0034]: the processor 304 loads into memory 306 images that correspond to what a user may see; [0054]: enable compressed storage in a memory; Fig. 4; [0046]: the memory 406; claim 20: a non-transitory computer readable medium storing a computer-readable program for a head mounted display device):
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sztuk (‘474) and Hua to include one or more processors, and one or more computer-readable non-transitory storage media coupled to one or more of the processors and comprising instructions operable when executed by one or more of the processors to cause the system to perform the claimed operations, as taught by Barry. The motivation for doing so would have been to improve a head-mounted display design with separate left and right foveal display and lower resolution extra-foveal display buffers combined with alpha-blending; to represent the extra-foveal region at a lower resolution and lower frame-rate; to transfer, in parallel, display data corresponding to the foveal region at a relatively higher frame rate as taught by Barry in paragraphs [0034] and [0040]-[0041].
The remaining claim limitations are similar to the claim limitations recited in claim 1. Therefore, the same rationale used to reject claim 1 is also used to reject claim 20.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Sztuk (US 20210173474 A1, hereafter ‘474) in view of Hua (US 20240013752 A1), and further in view of Kurlethimar (US 20190339770 A1).
Regarding claim 15 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, further comprising:
Sztuk (‘474) and Hua fail to explicitly disclose:
rendering one or more virtual objects in the modified first image based on the sensor readout.
In the same field of endeavor, Kurlethimar teaches:
rendering one or more virtual objects in the modified first image based on the sensor readout (Kurlethimar (US 20190339770 A1); [0037]: an augmented reality (AR) environment; the virtual objects are superimposed over the physical environment; [0038]: an augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information; transform one or more sensor images to impose a select perspective, e.g., viewpoint, different than the perspective captured by the imaging sensors; [0039]: a virtual or computer generated environment incorporates one or more sensory inputs from the physical environment).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sztuk (‘474) and Hua to include rendering one or more virtual objects in the modified first image based on the sensor readout, as taught by Kurlethimar. The motivation for doing so would have been to superimpose the virtual objects over the physical environment; to improve the overall accuracy of gaze prediction; and to improve the final displacement estimation, as taught by Kurlethimar in paragraphs [0037], [0070], and [0089].
Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Sztuk (US 20210173474 A1, hereafter ‘474) in view of Hua (US 20240013752 A1), and further in view of Zhang (US 20200058152 A1).
Regarding claim 18 (Original), Sztuk (‘474) and Hua disclose the method of claim 1, further comprising: adjusting a parameter level of the modified first image based on the foveated map (Sztuk (‘474); Fig. 4; [0062]: a foveated display image frame 401 includes a high-resolution portion 430, a peripheral portion 400, and a transition portion 440 extending between the high-resolution portion and the peripheral portion;
[image: media_image2.png, greyscale reproduction of Sztuk Fig. 4]
).
Sztuk (‘474) and Hua fail to explicitly disclose:
the parameter level being a brightness level.
In the same field of endeavor, Zhang teaches the parameter level being a brightness level ([0063]: the brightness of the projected images is modulated and adjusted based on the user's pupil dilation as determined by the gaze tracking sensors 224).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sztuk (‘474) and Hua to include the parameter level being a brightness level, as taught by Zhang. The motivation for doing so would have been to adjust and modulate the brightness of the projected images, and to improve the quality of the rendered and displayed frames, as taught by Zhang in paragraphs [0063] and [0097].
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hai Tao Sun, whose telephone number is (571) 272-5630. The examiner can normally be reached 9:00 AM - 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik, can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAI TAO SUN/Primary Examiner, Art Unit 2616