DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment and Argument
Applicant’s amendments and arguments with respect to pending claims 1-8 and 10-15, filed on 02/24/2026, have been fully considered, but the arguments are moot in view of the new grounds of rejection necessitated by the amendment of independent claims 1 and 12.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 and 10-15 are rejected under 35 U.S.C. 103 as being unpatentable over Young et al. (US 20170285736 A1) in view of Connor et al. (US 20210373657 A1) and Yee (US 20180077345 A1).
Regarding claim 1, Young discloses a system comprising at least one server that is communicably coupled to at least one display apparatus, wherein the at least one server is configured to: receive, from the at least one display apparatus, information indicative of a gaze direction of a user's eye (Figs. 1-3, ¶0024, 0027: The sensor 104 is preferably an image sensor, e.g., a digital camera that can produce an image of the eye E which may be analyzed to determine a gaze direction GD from the relative position of the pupil. This image may be produced with a local processor 120 or via the transmission of the obtained gaze tracking data to a remote computing device 160); process the information to detect a beginning of a saccade (¶0037, 0069: As discussed with respect to FIG. 1A, camera-based eye tracking can be augmented with other methods to update eye tracking during a blink phase…This information can also be used help detect the start and end of blinks and saccades, or to predict the duration of blinks and saccades); predict a target gaze location of the saccade, based on the information (¶0039: gaze tracking may also be analyzed to predict the user's gaze point on the display at the end of the saccade or blink and render the frame using foveated rendering).
Young does not explicitly disclose foveate a video stream according to the target gaze location starting after the beginning of the saccade and ending before an end of the saccade; refine the prediction of the target gaze location continuously during the saccade as the saccade progresses.
However, Connor teaches foveate a video stream according to the target gaze location starting after the beginning of the saccade and ending before an end of the saccade (Connor at para. [0117] discloses generating, during the detected saccadic movement, at least one image to be displayed by the display unit after the detected saccadic movement. Connor at para. [0129]-[0130] further discloses in response to detecting the start of the saccade at T1 generating image frames for display having the first image resolution Q1 lower than the second image resolution Q2, and a second image resolution Q2 higher than the first image resolution are generated for display just before the time T2 at which the saccade is predicted to end. See Figs. 16a-16b).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Young by incorporating the teaching of Connor as noted above, in order to provide a more efficient use of processing resources (Connor: ¶0095).
Furthermore, Yee teaches refine the prediction of the target gaze location continuously during the saccade as the saccade progresses (¶0088-0089: The predictive saccade detection module 193 then executes to identify predictive saccades using the velocity profiles…In identifying the predictive saccades the predictive saccade detection module 193 continuously monitors the saccade velocity profile…the predictive saccades identified at the step 230 can be further refined or filtered by noting that the velocity profile of predicative saccades are more skewed in comparison with the velocity profile of other types of other more symmetric saccades).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Young in view of Connor by incorporating the teaching of Yee as noted above, in order to effectively use predictive saccades to determine points or regions of future interest of the viewer based upon the determined direction of the saccades (Yee: ¶0092).
Regarding claim 2, Young discloses the system of claim 1, wherein the at least one server is further configured to: predict a duration of the saccade, based on the information (¶0037, 0069: As discussed with respect to FIG. 1A, camera-based eye tracking can be augmented with other methods to update eye tracking during a blink phase…This information can also be used help detect the start and end of blinks and saccades, or to predict the duration of blinks and saccades); and choose at least one encoding parameter to be used for encoding the video stream during the saccade (¶0039-0041, 0052, 0068-0069, 0083-0084: the images may be selectively compressed based on additional parameters determined from the gaze tracking data. For example and not by way of limitation, the quantization parameters may be determined for each foveal region of the image presented to the user, and this parameter may be used to selectively compress the foveal regions of the image before transmission and subsequent presentation to the user).
Regarding claim 3, Young discloses the system of claim 2, wherein the at least one encoding parameter is chosen based on the duration of the saccade (¶0037-0041, 0052, 0068-0069, 0083-0084: detect the start and end of blinks and saccades or to predict the duration of blinks and saccades...the images may be selectively compressed based on additional parameters determined from the gaze tracking data. For example and not by way of limitation the quantization parameters may be determined for each foveal region of the image presented to the user and this parameter may be used to selectively compress the foveal regions of the image before transmission and subsequent presentation to the user).
Regarding claim 4, Young discloses the system of claim 2, wherein the at least one server is further configured to reduce a bitrate of the video stream during the saccade (¶0039-0041, 0052, 0068-0069, 0083-0084: Using Foveated rendering images as input to compression, one can use varying levels of compression for each rendered region. The output is one or more compression streams with varying levels of compression or quality…for regions outside the fovea, the eye is less sensitive and therefore higher compression is acceptable. The result is a reduction in the bandwidth required for frame transmission while preserving quality of important regions).
Regarding claim 5, Young discloses the system of claim 4, wherein the at least one server is further configured to change the at least one encoding parameter and employ the at least one encoding parameter to increase the video stream bitrate before the saccade ends (¶0039-0041, 0052, 0068-0069, 0083-0084: Using Foveated rendering images as input to compression, one can use varying levels of compression for each rendered region. The output is one or more compression streams with varying levels of compression or quality. For the foveal ROI, the highest quality settings are used, giving minimal or no compression).
Regarding claim 6, Young discloses the system of claim 2, wherein the at least one server is further configured to prioritize at least one other data stream during the saccade (¶0039-0041, 0052, 0068-0069, 0083-0084: Using Foveated rendering images as input to compression, one can use varying levels of compression for each rendered region. The output is one or more compression streams with varying levels of compression or quality. For the foveal ROI, the highest quality settings are used, giving minimal or no compression. However, for regions outside the fovea, the eye is less sensitive and therefore higher compression is acceptable).
Regarding claim 7, Young discloses the system of claim 2, wherein the at least one server is further configured to increase a bitrate for encoding at least one other data stream during the saccade (¶0039-0041, 0052, 0068-0069, 0083-0084: Using Foveated rendering images as input to compression, one can use varying levels of compression for each rendered region. The output is one or more compression streams with varying levels of compression or quality. For the foveal ROI, the highest quality settings are used, giving minimal or no compression).
Regarding claim 8, Young discloses the system of claim 1, wherein the at least one server is further configured to: determine a region of interest within the video stream based on the target gaze location of the saccade; and choose different encoding parameters to be used for encoding the region of interest and a remaining region of the video stream (¶0039-0041, 0052, 0068-0069, 0083-0084: Using Foveated rendering images as input to compression, one can use varying levels of compression for each rendered region. The output is one or more compression streams with varying levels of compression or quality. For the foveal ROI, the highest quality settings are used, giving minimal or no compression. However, for regions outside the fovea, the eye is less sensitive and therefore higher compression is acceptable).
Regarding claim 10, Young discloses the system, wherein the at least one server is further configured to send, to a congestion control network device, information indicative of at least one of: the beginning of the saccade, a duration of the saccade (¶0050-0052, 0083-0084: foveated rendering may augment computational resource savings from leveraging knowledge of blinks or saccadic masking…The output is one or more compression streams with varying levels of compression or quality. For the foveal ROI, the highest quality settings are used, giving minimal or no compression. However, for regions outside the fovea, the eye is less sensitive and therefore higher compression is acceptable. The result is a reduction in the bandwidth required for frame transmission while preserving quality of important regions).
Regarding claim 11, Young discloses the system of claim 1, wherein the at least one server is further configured to choose at least one foveation parameter to be used for foveating the video stream during the saccade (¶0039-0041, 0052, 0068-0069, 0083-0084: the quantization parameters may be determined for each foveal region of the image presented to the user, and this parameter may be used to selectively compress the foveal regions of the image before transmission and subsequent presentation to the user).
Regarding claim 12, the claim is drawn to a method and recites limitations analogous to those of claim 1, and is therefore rejected for the same reasons set forth above with respect to claim 1.
Regarding claim 13, the claim is drawn to a method and recites limitations analogous to those of claim 2, and is therefore rejected for the same reasons set forth above with respect to claim 2.
Regarding claim 14, the claim is drawn to a method and recites limitations analogous to those of claims 3 and 5, and is therefore rejected for the same reasons set forth above with respect to claims 3 and 5.
Regarding claim 15, the claim is drawn to a method and recites limitations analogous to those of claim 8, and is therefore rejected for the same reasons set forth above with respect to claim 8.
The following prior art, made of record and not relied upon, is considered pertinent to applicant's disclosure.
Kurlethimar et al. (US 20190339770 A1) describes an "Electronic Device With Foveated Display And Gaze Prediction" (see Title).
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NATHNAEL AYNALEM, whose telephone number is (571) 270-1482. The examiner can normally be reached M-F, 9:00 AM-5:30 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, SATH PERUNGAVOOR, can be reached at 571-272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NATHNAEL AYNALEM/
Primary Examiner, Art Unit 2488