DETAILED ACTION
Claims 1-20, filed June 24, 2024, are pending in the current action.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 20 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent-eligible subject matter because it is directed to a computer-readable medium that covers a non-statutory embodiment. See applicant’s ¶95, where “In this manner, computer-readable media generally may correspond to: (1) tangible computer-readable storage media, which is non-transitory; or (2) a communication medium such as a signal or carrier wave.” See, e.g., Mentor Graphics v. EVE-USA, Inc., 851 F.3d at 1294-95, 112 USPQ2d at 1134 (claims to a "machine-readable medium" were non-statutory because their scope encompassed both statutory random-access memory and non-statutory carrier waves).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8, 10-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Wilairat et al. (US 2015/0331485) in view of Eble et al. (US 2021/0142443).
Consider claim 1, where Wilairat teaches an apparatus for display processing, comprising: a memory; and a processor coupled to the memory and, based at least in part on information stored in the memory, (See Wilairat Fig. 1, where the computing device comprises a memory and a processor) the processor is configured to: execute an application that displays a first scene to a display, wherein, while the application is executed, the processor is further configured to: receive a set of eye gaze data associated with an eye gaze of a user; (See Wilairat Fig. 5 and ¶43, where the gaze tracking system estimates an uncalibrated gaze location 508) display a calibration indicator to the display; (See Wilairat Fig. 6 and ¶44, where, in response to an input initiating calibration, the gaze location calibration program 46 may control the tablet computer 252 to display a guide visual 512 at the uncalibrated location 508) receive a calibration feedback from the user comprising an error correction indicator based on the displayed calibration indicator; (See Wilairat ¶45-47, where the gaze location calibration program 46 may control the tablet computer 252 to display the guide visual 512 at the calibrated location 514 that corresponds to the location of the button 502, and may then calculate an offset vector 520 based on the uncalibrated location 508 and the calibrated location 514) adjust a calibration of the display relative to the set of eye gaze data based on the error correction indicator; (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations)
and display a second scene to the display based on the adjusted calibration of the display and the set of eye gaze data. (See Wilairat Fig. 8 and ¶52, where, after calibration, the gaze tracking system 54 may generate an estimated gaze location 730 that more closely corresponds to an actual gaze location 734 of the viewer.)
Wilairat teaches eye gaze; however, Wilairat does not explicitly teach eye focus. In an analogous field of endeavor, Eble teaches eye focus. (See Eble Fig. 9 and ¶72-74, 98-101, where the field of focus information is derived from the eye gaze information.) Therefore, it would have been obvious to one of ordinary skill in the art to further derive the eye focus information, as taught by Eble, from the eye gaze information in Wilairat. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of performing dynamic foveation from a gaze point to yield predictable results. (See Eble ¶87)
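For illustration only, the offset-vector calibration Wilairat describes in ¶45-48 can be sketched as follows. No code appears in the reference; the function names and coordinates below are hypothetical.

```python
# Hypothetical sketch of Wilairat's offset-vector calibration (¶45-48).
# The offset between an uncalibrated gaze location (e.g., location 508)
# and a calibrated location (e.g., location 514) yields dx/dy components
# that can correct subsequent gaze estimates.

def offset_vector(uncalibrated, calibrated):
    """Return the (dx, dy) error between estimated and actual gaze."""
    return (calibrated[0] - uncalibrated[0], calibrated[1] - uncalibrated[1])

def apply_calibration(estimate, offset):
    """Shift a raw gaze estimate by the stored offset vector."""
    return (estimate[0] + offset[0], estimate[1] + offset[1])

# Example with made-up screen coordinates:
offset = offset_vector((100.0, 200.0), (112.0, 191.0))  # (12.0, -9.0)
corrected = apply_calibration((100.0, 200.0), offset)   # (112.0, 191.0)
```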
Consider claim 2, where Wilairat in view of Eble teaches the apparatus of claim 1, wherein, to receive the set of eye focus data, the processor is configured to: receive a set of sensor data from a set of sensors monitoring the user. (See Wilairat ¶59-66, where the gaze tracking system receives data from image sensors to capture an image of the wearer’s eyes.)
Consider claim 3, where Wilairat in view of Eble teaches the apparatus of claim 2, wherein the set of sensors comprises at least one of: a head pose sensor; and an eye focus sensor. (See Wilairat ¶59-66, where the gaze tracking system receives data from image sensors to capture an image of the wearer’s eye, and head pose and/or movement data may be determined based on sensor information from any combination of sensors mounted on the wearer and/or external to the wearer including, but not limited to, any number of gyroscopes, accelerometers, inertial measurement units, GPS devices, barometers, magnetometers, cameras (e.g., visible light cameras, infrared light cameras, time-of-flight depth cameras, structured light depth cameras, etc.), communication devices (e.g., WIFI antennas/interfaces), etc.)
Consider claim 4, where Wilairat in view of Eble teaches the apparatus of claim 1, wherein the processor is further configured to: estimate the eye focus based on the set of eye focus data. (See Wilairat Fig. 5 and ¶43, where the gaze tracking system estimates an uncalibrated gaze location 508.)
Consider claim 5, where Wilairat in view of Eble teaches the apparatus of claim 1, wherein the first scene comprises an outer quad (OQ) area and an inner quad (IQ) area, wherein the IQ area has a higher pixel per degree (PPD) than the OQ area, (See Eble Figs. 8A, 8B and ¶94-95, where there is a center region that forms a square, thus an inner quad, and a corner region surrounding the center region, thus an outer quad. See Eble ¶71-73, where within the field of focus the rendering resolution is S.sub.max, the maximum of the rendering resolution function (e.g., approximately 60 PPD), and drops off outside it. See Eble Fig. 6A.) wherein, to display the calibration indicator to the display, the processor is configured to: center the IQ area on the eye focus based on the set of eye focus data. (See Wilairat Fig. 8 and ¶52, where, after calibration, the gaze tracking system 54 may generate an estimated gaze location 730 that more closely corresponds to an actual gaze location 734 of the viewer.)
Consider claim 6, where Wilairat in view of Eble teaches the apparatus of claim 5, wherein the OQ area has a wider field of view (FOV) than the IQ area. (See Eble Figs. 8A, 8B and ¶94-95, where there is a center region that forms a square, thus an inner quad, and a corner region surrounding the center region, thus an outer quad with a larger field of view.)
Consider claim 7, where Wilairat in view of Eble teaches the apparatus of claim 1, wherein the error correction indicator comprises a vertical scale factor and a horizontal scale factor. (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations.)
Consider claim 8, where Wilairat in view of Eble teaches the apparatus of claim 7, wherein, to adjust the calibration of the display relative to the set of eye focus data based on the error correction indicator, the processor is configured to: determine a modified horizontal direction based on the horizontal scale factor; determine a modified vertical direction based on the vertical scale factor; and determine an adjusted eye focus based on the determined modified horizontal direction and the determined modified vertical direction. (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations.)
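For illustration only, one reading of the claimed per-axis adjustment in claim 8 can be sketched as follows. Neither reference discloses code; the function name, scale-factor semantics, and values below are hypothetical.

```python
# Hypothetical sketch of the claimed scale-factor adjustment (claim 8).
# A horizontal and a vertical scale factor independently modify the two
# directions of the eye-focus estimate; all names and values are made up.

def adjust_focus(focus_x, focus_y, h_scale, v_scale):
    """Return the adjusted eye focus after per-axis scaling."""
    return (focus_x * h_scale, focus_y * v_scale)

adjusted = adjust_focus(0.40, 0.25, 1.10, 0.90)  # roughly (0.44, 0.225)
```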
Consider claim 10, where Wilairat in view of Eble teaches the apparatus of claim 8, wherein the second scene comprises an outer quad (OQ) area and an inner quad (IQ) area, wherein the IQ area has a higher pixel per degree (PPD) than the OQ area, wherein, to display the second scene to the display based on the adjusted calibration of the display and the set of eye focus data, (See Eble Figs. 8A, 8B and ¶94-95, where there is a center region that forms a square, thus an inner quad, and a corner region surrounding the center region, thus an outer quad. See Eble ¶71-73, where within the field of focus the rendering resolution is S.sub.max, the maximum of the rendering resolution function (e.g., approximately 60 PPD), and drops off outside it. See Eble Fig. 6A.) the processor is configured to: center the IQ area on the determined adjusted eye focus. (See Wilairat Fig. 8 and ¶52, where, after calibration, the gaze tracking system 54 may generate an estimated gaze location 730 that more closely corresponds to an actual gaze location 734 of the viewer.)
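For illustration only, centering a higher-PPD inner quad on the adjusted eye focus, as mapped in claims 5 and 10, can be sketched as follows. Neither reference discloses code; the function name, display dimensions, and IQ size below are hypothetical.

```python
# Hypothetical sketch of centering a higher-PPD inner quad (IQ) on the
# adjusted eye focus inside a wider outer quad (OQ), per the claim 5/10
# mapping of Eble's foveated regions. Display and IQ sizes are made up.

def center_iq(focus_x, focus_y, iq_w, iq_h, display_w, display_h):
    """Return the IQ rectangle (left, top, w, h), clamped to the display."""
    left = min(max(focus_x - iq_w // 2, 0), display_w - iq_w)
    top = min(max(focus_y - iq_h // 2, 0), display_h - iq_h)
    return (left, top, iq_w, iq_h)

# A 400x400 IQ centered on a gaze point near the right edge is clamped:
rect = center_iq(1900, 500, 400, 400, 1920, 1080)  # (1520, 300, 400, 400)
```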
Consider claim 11, where Wilairat in view of Eble teaches the apparatus of claim 1, wherein the apparatus comprises a wireless communication device. (See Wilairat ¶94.)
Consider claim 12, where Wilairat in view of Eble teaches the apparatus of claim 1, wherein the processor is further configured to: output a first indication of the adjusted calibration of the display to a system calibration engine; receive a second indication that the system calibration engine received the first indication; and reset an adjustment of the calibration of the display in response to a reception of the second indication. (See Wilairat ¶52, where the gaze location calibration program 46 may calibrate the gaze tracking system 54 to generate updated estimated gaze locations of viewer Rebecca in subsequent iterations, and in some examples may utilize the calculated offset vectors to generate and apply a local transformation to gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations.)
Consider claim 13, where Wilairat in view of Eble teaches the apparatus of claim 12, wherein the processor is further configured to: receive a calibration verification indicator from the user, wherein, to output the first indication of the adjusted calibration of the display to the system calibration engine, the processor is configured to: output the first indication of the adjusted calibration of the display to the system calibration engine in response to a second reception of the calibration verification indicator. (See Wilairat ¶52, where the gaze location calibration program 46 may calibrate the gaze tracking system 54 to generate updated estimated gaze locations of viewer Rebecca in subsequent iterations, and in some examples may utilize the calculated offset vectors to generate and apply a local transformation to gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations. Thus, the subsequent calibrations are a second reception of the calibration verification.)
Consider claim 14, where Wilairat in view of Eble teaches the apparatus of claim 12, wherein the processor is further configured to: determine that an update to the calibration feedback has not been received for at least a threshold amount of time, wherein, to output the first indication of the adjusted calibration of the display to the system calibration engine, the processor is configured to: output the first indication of the adjusted calibration of the display to the system calibration engine in response to a determination that the update to the calibration feedback has not been received for at least the threshold amount of time. (See Wilairat ¶53-59, where the gaze location calibration program 46 may determine that the estimated gaze location 916 dwells within at least a portion of the selection region 904 for at least a dwell timeframe, and in some examples the period of the dwell timeframe may be 1 second (sec), 2 secs, 3 secs, or any other suitable timeframe. Thus, if the dwell timeframe is not achieved, then the calibration engine does not update.)
Consider claim 15, where Wilairat teaches a method of display processing, comprising: executing an application that displays a first scene to a display; while the application is executed: receiving a set of eye gaze data associated with an eye gaze of a user; (See Wilairat Fig. 5 and ¶43, where the gaze tracking system estimates an uncalibrated gaze location 508) displaying a calibration indicator to the display; (See Wilairat Fig. 6 and ¶44, where, in response to an input initiating calibration, the gaze location calibration program 46 may control the tablet computer 252 to display a guide visual 512 at the uncalibrated location 508) receiving a calibration feedback from the user comprising an error correction indicator based on the displayed calibration indicator; (See Wilairat ¶45-47, where the gaze location calibration program 46 may control the tablet computer 252 to display the guide visual 512 at the calibrated location 514 that corresponds to the location of the button 502, and may then calculate an offset vector 520 based on the uncalibrated location 508 and the calibrated location 514) adjusting a calibration of the display relative to the set of eye gaze data based on the error correction indicator; (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations) and displaying a second scene to the display based on the adjusted calibration of the display and the set of eye gaze data. (See Wilairat Fig. 8 and ¶52, where, after calibration, the gaze tracking system 54 may generate an estimated gaze location 730 that more closely corresponds to an actual gaze location 734 of the viewer.)
Wilairat teaches eye gaze; however, Wilairat does not explicitly teach eye focus. In an analogous field of endeavor, Eble teaches eye focus. (See Eble Fig. 9 and ¶72-74, 98-101, where the field of focus information is derived from the eye gaze information.) Therefore, it would have been obvious to one of ordinary skill in the art to further derive the eye focus information, as taught by Eble, from the eye gaze information in Wilairat. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of performing dynamic foveation from a gaze point to yield predictable results. (See Eble ¶87)
Consider claim 16, where Wilairat in view of Eble teaches the method of claim 15, wherein the first scene comprises an outer quad (OQ) area and an inner quad (IQ) area, wherein the IQ area has a higher pixel per degree (PPD) than the OQ area, (See Eble Figs. 8A, 8B and ¶94-95, where there is a center region that forms a square, thus an inner quad, and a corner region surrounding the center region, thus an outer quad. See Eble ¶71-73, where within the field of focus the rendering resolution is S.sub.max, the maximum of the rendering resolution function (e.g., approximately 60 PPD), and drops off outside it. See Eble Fig. 6A.) wherein displaying the calibration indicator to the display comprises: centering the IQ area on the eye focus based on the set of eye focus data. (See Wilairat Fig. 8 and ¶52, where, after calibration, the gaze tracking system 54 may generate an estimated gaze location 730 that more closely corresponds to an actual gaze location 734 of the viewer.)
Consider claim 17, where Wilairat in view of Eble teaches the method of claim 16, wherein the OQ area has a wider field of view (FOV) than the IQ area, wherein the error correction indicator comprises a vertical scale factor from the user and a horizontal scale factor, wherein adjusting the calibration of the display relative to the set of eye focus data based on the error correction indicator comprises: determining a modified horizontal direction based on the horizontal scale factor; determining a modified vertical direction based on the vertical scale factor; and determining an adjusted eye focus based on the determined modified horizontal direction and the determined modified vertical direction. (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations.)
Consider claim 19, where Wilairat in view of Eble teaches the method of claim 17, wherein the second scene comprises an outer quad (OQ) area and an inner quad (IQ) area, wherein the IQ area has a higher pixel per degree (PPD) than the OQ area, (See Eble Figs. 8A, 8B and ¶94-95, where there is a center region that forms a square, thus an inner quad, and a corner region surrounding the center region, thus an outer quad. See Eble ¶71-73, where within the field of focus the rendering resolution is S.sub.max, the maximum of the rendering resolution function (e.g., approximately 60 PPD), and drops off outside it. See Eble Fig. 6A.) wherein displaying the second scene to the display based on the adjusted calibration of the display and the set of eye focus data comprises: centering the IQ area on the determined adjusted eye focus. (See Wilairat Fig. 8 and ¶52, where, after calibration, the gaze tracking system 54 may generate an estimated gaze location 730 that more closely corresponds to an actual gaze location 734 of the viewer.)
Consider claim 20, where Wilairat teaches a computer-readable medium storing computer executable code, the code, when executed by a processor, causing the processor to: execute an application that displays a first scene to a display, wherein, while the application is executed, the code further causes the processor to: receive a set of eye gaze data associated with an eye gaze of a user; (See Wilairat Fig. 5 and ¶43, where the gaze tracking system estimates an uncalibrated gaze location 508) display a calibration indicator to the display; (See Wilairat Fig. 6 and ¶44, where, in response to an input initiating calibration, the gaze location calibration program 46 may control the tablet computer 252 to display a guide visual 512 at the uncalibrated location 508) receive a calibration feedback from the user comprising an error correction indicator based on the displayed calibration indicator; (See Wilairat ¶45-47, where the gaze location calibration program 46 may control the tablet computer 252 to display the guide visual 512 at the calibrated location 514 that corresponds to the location of the button 502, and may then calculate an offset vector 520 based on the uncalibrated location 508 and the calibrated location 514) adjust a calibration of the display relative to the set of eye gaze data based on the error correction indicator; (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations) and display a second scene to the display based on the adjusted calibration of the display and the set of eye gaze data. (See Wilairat Fig. 8 and ¶52, where, after calibration, the gaze tracking system 54 may generate an estimated gaze location 730 that more closely corresponds to an actual gaze location 734 of the viewer.)
Wilairat teaches eye gaze; however, Wilairat does not explicitly teach eye focus. In an analogous field of endeavor, Eble teaches eye focus. (See Eble Fig. 9 and ¶72-74, 98-101, where the field of focus information is derived from the eye gaze information.) Therefore, it would have been obvious to one of ordinary skill in the art to further derive the eye focus information, as taught by Eble, from the eye gaze information in Wilairat. One of ordinary skill in the art would have been motivated to perform the modification for the benefit of performing dynamic foveation from a gaze point to yield predictable results. (See Eble ¶87)
Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Wilairat in view of Eble as applied to claim 1 above, and further in view of Model (US 2014/0211995).
Consider claim 9, where Wilairat in view of Eble teaches the apparatus of claim 8, wherein, to adjust the calibration of the display relative to the set of eye focus data based on the error correction indicator, the processor is further configured to: normalize the adjusted eye focus further to have a unit norm based on the determined modified horizontal direction and the determined modified vertical direction. (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations.)
Wilairat teaches an offset vector; however, Wilairat does not expressly teach a unit norm vector. In an analogous field of endeavor, Model teaches a unit norm vector. (See Model ¶106-109, where the vectors are expressed as unit vectors.) Therefore, it would have been an obvious matter of simple substitution to use the unit vector as taught by Model in place of the scaled vector as taught by Wilairat, with little impact on the final invention.
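For illustration only, the unit-norm normalization mapped in claim 9 to Model's unit vectors can be sketched as follows. Neither reference discloses code; the function name and input values below are hypothetical.

```python
# Hypothetical sketch of normalizing the adjusted eye-focus direction to
# unit norm, per the claim 9 mapping of Model's unit vectors (¶106-109).
import math

def unit_norm(dx, dy):
    """Scale the (dx, dy) direction to length 1."""
    length = math.hypot(dx, dy)
    if length == 0.0:
        raise ValueError("zero-length vector has no direction")
    return (dx / length, dy / length)

ux, uy = unit_norm(3.0, 4.0)  # approximately (0.6, 0.8)
```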
Consider claim 18, where Wilairat in view of Eble teaches the method of claim 17, wherein adjusting the calibration of the display relative to the set of eye focus data based on the error correction indicator further comprises: normalizing the adjusted eye focus further to have a unit norm based on the determined modified horizontal direction and the determined modified vertical direction. (See Wilairat ¶47-48, where the offset vector may comprise a horizontal dx component and a vertical dy component that represent an error in the estimated gaze location, and the gaze location calibration program 46 may utilize these components in a local transformation of gaze computation logic utilized by the gaze tracking system 54 to calibrate the system to produce more accurate estimated gaze locations.)
Wilairat teaches an offset vector; however, Wilairat does not expressly teach a unit norm vector. In an analogous field of endeavor, Model teaches a unit norm vector. (See Model ¶106-109, where the vectors are expressed as unit vectors.) Therefore, it would have been an obvious matter of simple substitution to use the unit vector as taught by Model in place of the scaled vector as taught by Wilairat, with little impact on the final invention.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM LU whose telephone number is (571) 270-1809. The examiner can normally be reached from 10:00 am to 6:30 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
WILLIAM LU
Primary Examiner
Art Unit 2624
/WILLIAM LU/Primary Examiner, Art Unit 2624