DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55 (Korean Application KR10-2024-0003120, filed January 8, 2024).
Information Disclosure Statement
The information disclosure statements (IDS) submitted on January 10, 2025 and June 30, 2025 were filed before the mailing date of the First Action on the Merits (this Office Action). The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the Examiner.
Election/Restrictions
Applicant's election with traverse of Species IV in the reply filed on September 29, 2025 is acknowledged. The traversal is on the ground(s) that the Species are drawn to the same technical problem; Applicant's arguments are summarized and addressed below.
This is not found persuasive because the various technical features / problems claimed and illustrated require different searching techniques and have been shown to have different classifications; thus, the elements of search burden have been met.
Note: The Examiner's numbering of Applicant's arguments below begins with Section IB of the reply.
First, Applicant states an intention to elect with traverse [Section II: Page 6 lines 1 – 2] and provides a provisional election of Species IV, contending the species is drawn to claims 1 – 8 and 12 – 20 [Section IA: Page 2]. In providing the provisional election, Applicant reproduces Figure 10 (elected in Species IV) and annotates the Figure with their claim determinations.
Second, Applicant contends all Figures are drawn to the same invention, arguing the various embodiments are directed to one problem / solution [Section IIB: Page 3 lines 1 – 18]. The Examiner notes that in the Specification all Figures are listed as unique / different embodiments; thus, contrary to Applicant's position, multiple solutions to the common problem are proposed, and those solutions require different search techniques, supporting the restriction requirement (Reasons B and C at least).
Third, Applicant cites Specification Paragraph 93 to argue the Species should be grouped together in analyzing Species I – IV [Section IIB: Page 3 line 19 – Page 4 line 11]. While the various steps are detailed, the steps are labeled in the Specification as distinct / separate embodiments and thus are at least separable, even if usable together. The analysis therefore shows the Species are divergent (detailing different aspects, as shown by the distinct CPC symbols assigned) and require different searching techniques; thus, Reasons A), B), and C) for restriction have been shown. Further, claims 5 – 6 and 17 – 18 are drawn to Species II, in which the filter strength and non-linear input considerations are distinct from the other Species, require different search techniques / strategies, and have different classifications; thus, Reasons A – C for restriction are met.
Fourth, Applicant contends Species V is related to Species I – IV [Section IIB: Page 4 lines 12 – 33]. While the argument is not persuasive, the prior art relied upon covers claim 9, and thus the Examiner cites prior art against claim 9.
Fifth, Applicant cites Specification Paragraph 197 to relate Figure 13 to Figures 2 – 12 [Section IIB: Page 5 lines 1 – 20]. However, the relationship does not overcome the distinction that even the Specification recognizes (different embodiments), nor does it show that no search burden is present.
Sixth, Applicant contends the independent claims are generic [Section IIB: Page 5 lines 21 – 33]. However, a showing that the independent claims are generic does not alleviate the restriction requirement as to the dependent claims (while not listed, the various Figures correspond to various dependent claims). The argument does not show a lack of search burden but rather supports the divergence in the claimed subject matter and the different classifications assigned to the Species.
The requirement is still deemed proper and is therefore made FINAL.
Claims 5 – 6 and 17 – 18 are withdrawn from further consideration pursuant to 37 CFR 1.142(b), as being drawn to nonelected Species II, there being no allowable generic or linking claim. Applicant timely traversed the restriction (election) requirement in the reply filed on September 29, 2025.
The pending claims are 1 – 4, 7 – 9, 12 – 16, and 19 – 20.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: “No” and “Yes” [Figures 4, 6, and 8].
Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
The abstract of the disclosure is objected to because the Abstract exceeds 150 words and is written in legalese instead of a series of brief sentences in narrative format describing the inventive concept. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The lengthy specification has not been checked to the extent necessary to determine the presence of all possible minor errors. Applicant’s cooperation is requested in correcting any errors of which applicant may become aware in the specification.
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The disclosure is objected to because of the following informalities:
In Paragraph 159 line 5, reference character “230” should read as --320-- for correctness, correspondence to the drawings, and clarity.
Appropriate correction is required.
Claim Interpretation – Functional Analysis
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that use the word “means” or “step” but are nonetheless not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph because the claim limitation(s) recite(s) sufficient structure, materials, or acts to entirely perform the recited function.
Such claim limitation(s) is/are: “processors […] cause the electronic device to” in claim 13.
The Examiner notes that one of ordinary skill in the art would understand the claimed “memory” and “processor” to connote sufficient structure, and thus claim 13 does NOT invoke Functional Analysis under 112(f).
Because this/these claim limitation(s) is/are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are not being interpreted to cover only the corresponding structure, material, or acts described in the specification as performing the claimed function, and equivalents thereof.
If applicant intends to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to remove the structure, materials, or acts that performs the claimed function; or (2) present a sufficient showing that the claim limitation(s) does/do not recite sufficient structure, materials, or acts to perform the claimed function.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 – 2, 4, 7 – 9, 12 – 14, 16, and 19 – 20 are rejected under 35 U.S.C. 103 as being unpatentable over Sztuk et al. (US PG PUB 2021/0173474 A1, referred to as “Sztuk” throughout) [cited in Applicant's January 10, 2025 IDS], and further in view of Lee et al. (US Patent 10,115,204 B2, referred to as “Lee” throughout), Gotsch (CA 3038584 A1, referred to as “Gotsch” throughout), and Fattal et al. (WO 2023/219916 A1, referred to as “Fattal” throughout).
Regarding claim 1, see claim 13, which is the apparatus performing the steps of the claimed method.
Regarding claim 2, see claim 14, which is the apparatus performing the steps of the claimed method.
Regarding claim 4, see claim 16, which is the apparatus performing the steps of the claimed method.
Regarding claim 7, see claim 19, which is the apparatus performing the steps of the claimed method.
Regarding claim 8, see claim 20, which is the apparatus performing the steps of the claimed method.
Regarding claim 12, see claim 13, which is the apparatus performing the steps of the claimed program.
Regarding claim 13, Sztuk teaches a gaze / eye tracking system that uses position, velocity, and acceleration information to predict future gaze / eye locations. Lee teaches an eye tracking camera / imaging system with predictive capabilities that uses weights on the velocity information. Gotsch teaches using past information to predict future and interpolated information for eye locations / gaze directions. Fattal teaches the well-known use of the physics / relationships of position, velocity, and acceleration in eye tracking applications to compute future velocities and accelerations.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sztuk's prediction system for eyes with the camera / processor arrangement of Lee and Lee's weights on velocities / states for filtering / smoothing considerations, Gotsch's suggested equations to predict / interpolate future eye / gaze / head locations, and Fattal's teachings of filters and physics for the future state variables used in eye tracking (position, velocity, and acceleration). The combination teaches (an illustrative kinematic summary follows the limitation mapping below)
memory storing one or more computer programs [Lee Figure 2 (see at least reference character 240) as well as Column 3 line 41 – Column 4 line 50 (memory / RAM / ROM storing program code for a CPU / processor to execute) and Column 6 lines 31 – 50 (memory / processor in a system)]; and
one or more processors communicatively coupled to the memory [Lee Figure 2 (see at least reference character 230) as well as Column 3 line 41 – Column 4 line 50 (memory / RAM / ROM storing program code for a CPU / processor to execute) and Column 6 lines 31 – 50 (memory / processor in a system)],
wherein the one or more computer programs include computer-executable instructions that, when executed by the one or more processors individually or collectively [Lee Figure 2 (see at least reference character 230) as well as Column 3 line 41 – Column 4 line 50 (memory / RAM / ROM storing program code for a CPU / processor to execute) and Column 6 lines 31 – 50 (memory / processor in a system)], cause the electronic device to:
obtain position information of a target part corresponding to a past time point, and position information of the target part corresponding to a reference time point [Sztuk Figures 3, 7 (see at least reference character 710), and 10 – 11 as well as Paragraphs 88 (previous eye states recorded to predict future locations), 92 – 95, 100 – 104 (current eye position / gaze location) and 131 – 135 (current eye movement / position / location); Lee Column 7 lines 1 – 60 (current eye position obtained); Gotsch Figures 1 and 10 – 12 (see current “N” or “T” value for current position) as well as Paragraphs 52 – 54, 63 – 65, and 83 – 85 (current position determined from previous / use of tracks of eye positions)], from an image that includes a facial region of a viewer and is input through a camera [Sztuk Figures 1 – 3 as well as Paragraphs 50 – 53 and Lee Figures 1 – 3 (see at least reference character 210) as well as Column 6 lines 34 – 65 (user face including eye position of user imaged)],
obtain position change information of the target part corresponding to the reference time point, based on the position information of the target part corresponding to the past time point, and the position information of the target part corresponding to the reference time point [Sztuk Figures 7 and 10 – 12 as well as Paragraphs 111 – 114 and 133 – 138 (velocity as a change in location / position combinable with Lee Column 7 lines 1 – 60 and definitions in Fattal Paragraphs 69 – 74 (velocity as change in position over time)); Gotsch Figures 9 – 12 as well as 56 – 60 and 65 – 68 (position updates using acceleration and velocity information)],
predict a future velocity of the target part based on the position change information of the target part corresponding to the reference time point, and position change information of the target part corresponding to the past time point [Sztuk Figures 7 and 11 – 12 as well as Paragraphs 92 – 98 (velocity based on position changes), 111 – 112, and 135 – 138 (updates / computing future velocity in machine models); Lee Figures 5 – 7 as well as Column 7 lines 1 – 60 (future velocity determinations); Gotsch Figures 4 and 10 – 12 as well as Paragraphs 57 – 58 (velocity state information used in models for eye tracking with updates / techniques for future state parameters in Paragraphs 62 – 67); Fattal Paragraphs 70 – 74 (velocity computation as a position change over time)],
obtain velocity change information of the target part corresponding to the reference time point, based on the position change information of the target part corresponding to the reference time point, and the position change information of the target part corresponding to the past time point [Sztuk Figures 7 and 11 – 12 as well as Paragraphs 92 – 98 (velocity based on position changes), 111 – 112, and 135 – 138 (updates / computing future acceleration / velocity change in machine models); Lee Figures 5 – 7 as well as Column 7 lines 1 – 60 (future velocity determinations using changes / weights of previous velocities); Gotsch Figures 4 and 10 – 12 as well as Paragraphs 57 – 58 (acceleration state information used in models for eye tracking with updates / techniques for future state parameters in Paragraphs 62 – 67); Fattal Paragraphs 70 – 74 (acceleration computation as a velocity change over time)],
predict a future acceleration of the target part based on the velocity change information of the target part corresponding to the reference time point, and velocity change information of the target part corresponding to the past time point [Sztuk Figures 7 and 11 – 12 as well as Paragraphs 92 – 98 (accelerations considered / measured to predict and update), 111 – 112, and 135 – 138 (updates / computing future acceleration / velocity change in machine models); Gotsch Figures 4 and 10 – 12 as well as Paragraphs 57 – 58 (acceleration state information used in models for eye tracking with updates / techniques for future state parameters in Paragraphs 62 – 67); Fattal Paragraphs 70 – 74 (acceleration computation as a velocity change over time)],
predict future positions of both eyes corresponding to a target time point based on the future velocity and the future acceleration [Sztuk Figures 7 – 9 and 11 – 12 as well as Paragraphs 91 – 95 (current / predicted position, velocity, and accelerations used), 110 – 117 (using predictions of state variables / values or future values estimated and the four types of eye movement in Figure 11 computed which considers both eyes), and 134 – 140 (machine learning with the state information for eye tracking and updated based on confidence level thus future state values are used to update current / future predictions of state values); Lee Figures 5 and 7 – 9 as well as Column 7 lines 1 – 60 (computing / predicting future positions); Gotsch Figures 1 and 10 – 12 as well as Paragraphs 58 – 64 (use of current and future predicted data to determine future eye locations / gaze directions including velocity and acceleration predictions – combinable with Sztuk and Lee at least and may incorporate Fattal’s motion equations in Paragraphs 70 – 74)], and
output an image based on the future positions of the eyes corresponding to the target time point [Sztuk Figures 7 – 9 as well as Paragraphs 90 – 95 (output image based on future / predicted eye / gaze locations); Gotsch Figures 1 and 10 – 12 as well as Paragraphs 31 (adjusted pixel / image data based on the future / predicted eye position / gaze), 57 – 58 and 83 – 84 (output pupil locations)].
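For clarity of the record, the position / velocity / acceleration relationships cited above (e.g., Fattal Paragraphs 70 – 74) correspond to standard finite-difference kinematics. The following summary is the Examiner's own illustrative notation, assuming a uniform sampling interval $\Delta t$, and is not a verbatim reproduction of any reference:

$$v_k = \frac{x_k - x_{k-1}}{\Delta t}, \qquad a_k = \frac{v_k - v_{k-1}}{\Delta t}, \qquad \hat{x}_{k+1} \approx x_k + \hat{v}_{k+1}\,\Delta t + \tfrac{1}{2}\,\hat{a}_{k+1}\,\Delta t^2,$$

where $x_{k-1}$ and $x_k$ are the positions at the past and reference time points, and hatted quantities denote predicted future values derived from the current and past changes.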
The motivation to combine Lee with Sztuk is to combine features in the same / related field of invention of predicting eye positions [Lee Column 1 lines 16 – 31] in order to improve accuracy of predictions by accounting for time delays [Lee Column 1 lines 16 – 31 and Column 5 line 59 – Column 6 line 6 where the Examiner observes KSR Rationales (D) or (F) are also applicable].
The motivation to combine Gotsch with Lee and Sztuk is to combine features in the same / related field of invention of gaze tracking technologies [Gotsch Paragraphs 1 – 4] in order to improve real-time gaze tracking [Gotsch Paragraphs 2 – 4 where the Examiner observes KSR Rationales (D) or (F) are also applicable].
The motivation to combine Fattal with Gotsch, Lee, and Sztuk is to combine features in the same / related field of invention of predicting head movement [Fattal Paragraphs 20 – 21] in order to improve user viewing experience or real time performance / accuracy of predictions [Fattal Paragraphs 20 – 23 where the Examiner observes KSR Rationales (D) or (F) are also applicable].
This is the motivation to combine Sztuk, Lee, Gotsch, and Fattal which will be used throughout the Rejection.
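As a further illustration of the claim 13 mapping above, the following minimal sketch is the Examiner's own; all function and variable names are hypothetical, and the simple averaging rule for the predicted velocity is an assumption for illustration only, not a characterization of any reference's actual implementation:

```python
def predict_position(positions, dt):
    # positions: [x_{k-2}, x_{k-1}, x_k], each a 2D tuple, with x_k taken
    # at the reference time point; dt is an assumed uniform sampling interval.
    x_km2, x_km1, x_k = positions[-3], positions[-2], positions[-1]

    # Position change (velocity) at the past and reference time points.
    v_km1 = [(a - b) / dt for a, b in zip(x_km1, x_km2)]
    v_k = [(a - b) / dt for a, b in zip(x_k, x_km1)]

    # Predicted future velocity from the current and past position changes
    # (an assumed prediction rule; simple averaging for illustration).
    v_next = [(a + b) / 2.0 for a, b in zip(v_k, v_km1)]

    # Velocity change (acceleration) at the reference time point; with only
    # two velocities available, the predicted future acceleration is taken
    # equal to it for this sketch.
    a_next = [(a - b) / dt for a, b in zip(v_k, v_km1)]

    # Future position from the future velocity and future acceleration.
    return tuple(x + v * dt + 0.5 * a * dt * dt
                 for x, v, a in zip(x_k, v_next, a_next))

# Example: uniform horizontal drift of one unit per frame at 60 Hz.
print(predict_position([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], 1.0 / 60.0))
# -> (3.0, 0.0)
```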
Regarding claim 14, Sztuk teaches a gaze / eye tracking system that uses position, velocity, and acceleration information to predict future gaze / eye locations. Lee teaches an eye tracking camera / imaging system with predictive capabilities that uses weights on the velocity information. Gotsch teaches using past information to predict future and interpolated information for eye location / gaze directions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sztuk's prediction system for eyes with the camera / processor arrangement of Lee and the weights on velocities / states for filtering / smoothing considerations and Gotsch's suggested equations to predict / interpolate future eye / gaze / head locations. The combination teaches
predict a first future position of the target part corresponding to the target time point based on the future velocity and the future acceleration [Sztuk Figures 7 – 9 and 11 – 12 as well as Paragraphs 91 – 95 (current / predicted position, velocity, and accelerations used), 110 – 114 (using predictions of state variables / values or future values estimated), and 134 – 140 (machine learning with the state information for eye tracking and updated based on confidence level thus future state values are used to update current / future predictions of state values); Lee Figures 5 and 7 – 9 as well as Column 7 lines 1 – 60 (computing / predicting future positions); Gotsch Figures 1 and 10 – 12 as well as Paragraphs 58 – 64 (use of current and future predicted data to determine future eye locations / gaze directions including velocity and acceleration predictions – combinable with Sztuk and Lee at least and equations of Fattal in Paragraphs 70 – 74)],
obtain a second future position of the target part corresponding to the target time point [Sztuk Figures 10 – 12 as well as Paragraphs 88 – 92 (future times listed for prediction), 147 and 153 – 155 (one to ten frames predicted in advance for gaze / eye location); Gotsch Figures 1 and 11 – 12 (see at least reference character 119) as well as Paragraphs 57 – 58 (future prediction techniques for the eye locations)], based on the first future position of the target part corresponding to the target time point, and a future position of the target part corresponding to a first time point that is prior to the target time point [Sztuk Figures 10 – 12 as well as Paragraphs 88 – 92 (future times listed for prediction), 147 and 152 – 155 (one to ten frames predicted in advance for gaze / eye location with current / previous gaze / eye information); Gotsch Figures 1 and 10 – 12 (see at least reference character 109, 113, 119) as well as Paragraphs 56 – 59 (future prediction techniques for the eye locations combinable with Fattal Paragraphs 72 – 74) and 83 – 85], and
predict the future positions of the eyes corresponding to the target time point based on the second future position of the target part [Sztuk Figures 10 – 12 as well as Paragraphs 88 – 92 (future times listed for prediction), 147 and 152 – 155 (one to ten frames predicted in advance for gaze / eye location with current / previous gaze / eye information); Gotsch Figures 1 and 10 – 12 (see at least reference character 109, 113, 119) as well as Paragraphs 56 – 59 and 63 (future prediction techniques for the eye locations combinable with Fattal Paragraphs 72 – 74 and extrapolation of position / tracking in Fattal Paragraph 63) and 83 – 85].
See claim 13 for the motivation to combine Sztuk, Lee, Gotsch, and Fattal.
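For the record, one assumed way to express the two-stage prediction mapped above (Examiner's illustrative notation, with an assumed smoothing weight $\alpha \in [0, 1]$; not any reference's verbatim formula) is

$$\tilde{x}_T = \alpha\,\hat{x}_T + (1 - \alpha)\,\hat{x}_{t_1},$$

where $\hat{x}_T$ is the first future position predicted for the target time point $T$, $\hat{x}_{t_1}$ is the future position corresponding to a first time point $t_1$ prior to $T$, and $\tilde{x}_T$ is the resulting second future position from which the eye positions are predicted.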
Regarding claim 16, Sztuk teaches a gaze / eye tracking system that uses position, velocity, and acceleration information to predict future gaze / eye locations. Lee teaches an eye tracking camera / imaging system with predictive capabilities that uses weights on the velocity information. Gotsch teaches using past information to predict future and interpolated information for eye location / gaze directions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sztuk's prediction system for eyes with the camera / processor arrangement of Lee and the weights on velocities / states for filtering / smoothing considerations and Gotsch's suggested equations to predict / interpolate future eye / gaze / head locations. The combination teaches
obtain raw position information of the target part corresponding to the reference time point [Sztuk Figures 8 – 11 as well as Paragraphs 112 – 114 (raw position data from user eyes imaged obtained)], from the image including the facial region of the viewer [Sztuk Figures 1 – 3 as well as Paragraphs 50 – 53 and Lee Figures 1 – 3 (see at least reference character 210) as well as Column 6 lines 34 – 65 (user face including eye position of user imaged)], and
obtain the position information of the target part corresponding to the reference time point, based on the position information of the target part corresponding to the past time point, and the raw position information of the target part [Sztuk Figures 1 – 3 and 8 – 11 as well as Paragraphs 112 – 114 (raw data used for predictions) and 129 – 135 (computing current position / gaze information with position considerations); Gotsch Figures 1 (computing for time “N” with current / previous information) and 10 – 12 (see at least reference character 109, 113, 119) as well as Paragraphs 56 – 59 and 63 (future prediction techniques for the eye locations combinable with Fattal Paragraphs 72 – 74 and extrapolation of position / tracking in Fattal Paragraph 63), 67 (compute current position based on current data and previous data) and 83 – 85].
See claim 13 for the motivation to combine Sztuk, Lee, Gotsch, and Fattal.
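Illustratively (Examiner's assumed notation, not any reference's verbatim formula), the filtered position of claim 16 may be expressed as a weighted blend of the raw measurement and the past position,

$$x_k = \beta\,z_k + (1 - \beta)\,x_{k-1},$$

where $z_k$ is the raw position at the reference time point, $x_{k-1}$ is the position at the past time point, and $\beta \in [0, 1]$ is an assumed filter weight.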
Regarding claim 19, Sztuk teaches a gaze / eye tracking system that uses position, velocity, and acceleration information to predict future gaze / eye locations. Lee teaches an eye tracking camera / imaging system with predictive capabilities that uses weights on the velocity information. Gotsch teaches using past information to predict future and interpolated information for eye location / gaze directions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sztuk's prediction system for eyes with the camera / processor arrangement of Lee and the weights on velocities / states for filtering / smoothing considerations and Gotsch's suggested equations to predict / interpolate future eye / gaze / head locations. The combination teaches
obtain a scaling value for the future velocity based on a direction of the future velocity [Lee Figures 5 (see at least reference characters 510 and 520) and 7 – 9 (see at least reference characters 710 and 720) as well as Column 8 line 30 – Column 9 line 42 (weighting velocity) and Column 9 line 43 – Column 10 line 8 (weighting based on distance / direction traveled for normal / eccentric motion); Fattal Paragraphs 70 – 75 (velocity computation to scale / adjust)], and
predict the future positions of the eyes corresponding to the target time point, based on the future velocity, the future acceleration, and the scaling value for the future velocity [See previous limitation for the weight / scaling value feature claimed for citation as well as Sztuk Figures 7 – 9 and 11 – 12 as well as Paragraphs 91 – 95 (current / predicted position, velocity, and accelerations used), 110 – 114 (using predictions of state variables / values or future values estimated), and 134 – 140 (machine learning with the state information for eye tracking and updated based on confidence level thus future state values are used to update current / future predictions of state values); Lee Figures 5 and 7 – 9 as well as Column 7 lines 1 – 60 (computing / predicting future positions); Gotsch Figures 1 and 10 – 12 as well as Paragraphs 58 – 64 (use of current and future predicted data to determine future eye locations / gaze directions including velocity and acceleration predictions – combinable with Sztuk and Lee at least)].
See claim 13 for the motivation to combine Sztuk, Lee, Gotsch, and Fattal.
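To illustrate the direction-dependent weighting mapped above (e.g., Lee's normal / eccentric motion weighting), the following sketch applies a hypothetical rule that damps motion directed away from the screen center; the rule, the weight values, and all names are the Examiner's assumptions for illustration, not Lee's actual weights:

```python
def velocity_scale(v, x, center=(0.0, 0.0), inward=1.0, outward=0.7):
    # Hypothetical rule: motion directed away from the screen center
    # ("eccentric" motion) receives a smaller weight than motion toward it.
    to_center = (center[0] - x[0], center[1] - x[1])
    dot = v[0] * to_center[0] + v[1] * to_center[1]
    return inward if dot >= 0.0 else outward

def predict_with_scale(x, v, a, dt, scale):
    # Future position using a direction-dependent scaling of the velocity.
    return tuple(xi + scale * vi * dt + 0.5 * ai * dt * dt
                 for xi, vi, ai in zip(x, v, a))

# Example: outward motion at the display edge receives the damped weight.
s = velocity_scale(v=(5.0, 0.0), x=(10.0, 0.0))
print(s, predict_with_scale((10.0, 0.0), (5.0, 0.0), (0.0, 0.0), 1.0 / 60.0, s))
```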
Regarding claim 20, Sztuk teaches a gaze / eye tracking system that uses position, velocity, and acceleration information to predict future gaze / eye locations. Lee teaches an eye tracking camera / imaging system with predictive capabilities that uses weights on the velocity information. Gotsch teaches using past information to predict future and interpolated information for eye location / gaze directions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sztuk's prediction system for eyes with the camera / processor arrangement of Lee and the weights on velocities / states for filtering / smoothing considerations and Gotsch's suggested equations to predict / interpolate future eye / gaze / head locations. The combination teaches
obtain a future position of the eyes corresponding to a second time point that is subsequent to the target time point [Gotsch Figures 1 and 11 – 12 (see at least reference character 119) as well as Paragraphs 57 – 58 and 67 (future prediction techniques for the eye locations including interpolation teachings in Paragraph 67 for the time between two samples or extensions as N + 2 or N + 3 (and so on))],
predict future positions of the eyes with respect to the target time point, based on the future positions of the eyes corresponding to the target time point, and the future positions of the eyes corresponding to the second time point [Sztuk Figures 7 – 11 (see at least reference character 722) as well as Paragraphs 88 – 96 (extrapolating to various future times – combinable with Gotsch), 112 – 117 (using future information), 135 – 138 (machine learning / iterating future predictions for refinement using state information), and 152 – 155 (motion and position for future eye position / gaze estimates); Lee Figures 5 and 7 – 9 as well as Column 7 lines 1 – 60 (computing / predicting future positions); Gotsch Figures 1 and 11 – 12 (see at least reference character 119) as well as Paragraphs 50, 57 – 58 and 66 – 67 (future prediction techniques for the eye locations including interpolation teachings in Paragraphs 66 – 67 for the time between two samples including the use of the equations of Fattal Paragraphs 70 – 74)], and
output the image based on the future positions of the eyes with respect to the target time point [Sztuk Figures 7 – 9 as well as Paragraphs 90 – 95 (output image based on future / predicted eye / gaze locations); Gotsch Figures 1 and 10 – 12 as well as Paragraphs 31 (adjusted pixel / image data based on the future / predicted eye position / gaze), 57 – 58 and 83 – 84 (output pupil locations)].
See claim 13 for the motivation to combine Sztuk, Lee, Gotsch, and Fattal.
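Illustratively (Examiner's assumed notation, consistent with Gotsch's interpolation teachings in Paragraphs 66 – 67 but not a verbatim formula), predictions at the target time point $T$ and a subsequent second time point $T_2$ may be combined by linear interpolation,

$$\tilde{x}(t) = \hat{x}_T + \frac{t - T}{T_2 - T}\left(\hat{x}_{T_2} - \hat{x}_T\right), \qquad T \le t \le T_2.$$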
Regarding claim 9, Sztuk teaches a gaze / eye tracking system that uses position, velocity, and acceleration information to predict future gaze / eye locations. Lee teaches an eye tracking camera / imaging system with predictive capabilities that uses weights on the velocity information. Gotsch teaches using past information to predict future and interpolated information for eye location / gaze directions.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Sztuk's prediction system for eyes with the camera / processor arrangement of Lee and the weights on velocities / states for filtering / smoothing considerations and Gotsch's suggested equations to predict / interpolate future eye / gaze / head locations. The combination teaches
obtaining future positions of the eyes corresponding to a third time point that is between the target time point and the second time point [Gotsch Figures 1 and 11 – 12 (see at least reference character 119) as well as Paragraphs 57 – 58 and 66 – 67 (future prediction techniques for the eye locations including interpolation teachings in Paragraphs 66 – 67 for the time between two samples)],
wherein the predicting of the future positions of the eyes with respect to the target time point, based on the future positions of the eyes corresponding to the target time point, and the future positions of the eyes corresponding to the second time point comprises predicting the future positions of the eyes with respect to the target time point, based on the future positions of the eyes corresponding to the target time point, the future positions of the eyes corresponding to the second time point, and the future positions of the eyes corresponding to the third time point [Sztuk Figures 7 – 11 (see at least reference character 722) as well as Paragraphs 88 – 96 (extrapolating to various future times – combinable with Gotsch), 112 – 117 (using future information), 135 – 138 (machine learning / iterating future predictions for refinement using state information), and 152 – 155 (motion and position for future eye position / gaze estimates); Lee Figures 5 and 7 – 9 as well as Column 7 lines 1 – 60 (computing / predicting future positions); Gotsch Figures 1 and 11 – 12 (see at least reference character 119) as well as Paragraphs 50, 57 – 58 and 66 – 67 (future prediction techniques for the eye locations including interpolation teachings in Paragraphs 66 – 67 for the time between two samples including the use of the equations of Fattal Paragraphs 70 – 74)].
See claim 13 for the motivation to combine Sztuk, Lee, Gotsch, and Fattal.
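With a third predicted point at a time $T_3$ between $T$ and $T_2$, one assumed realization (Examiner's illustration only, not any reference's verbatim formula) is a quadratic Lagrange fit through the three predictions,

$$\tilde{x}(t) = \sum_{i \in \{T, T_3, T_2\}} \hat{x}_i \prod_{\substack{j \in \{T, T_3, T_2\} \\ j \neq i}} \frac{t - j}{i - j}.$$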
Allowable Subject Matter
Claims 3 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Claim 15 is taken as the representative claim, as claim 3 is the method performed by claim 15. The claims use two different measurements, IPD (inter-pupillary distance) and the center between the two eyes being imaged, for the prediction / future position considerations, which the cited references in combination, while they may suggest, do not outright render obvious. While other references teach IPD considerations and thus would render that feature obvious, the claimed combination of the IPD and center-between-the-eyes determinations in claim 15 is not obvious in view of the prior art.
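For completeness of the record, the two recited measurements reduce to simple geometry; the following sketch (Examiner's own, with hypothetical names and 2D image coordinates assumed) computes both from the two pupil positions:

```python
def ipd_and_center(left_eye, right_eye):
    # Inter-pupillary distance (Euclidean) and the midpoint between the eyes,
    # both taken from 2D image coordinates of the two pupils.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    ipd = (dx * dx + dy * dy) ** 0.5
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    return ipd, center

print(ipd_and_center((100.0, 200.0), (163.0, 200.0)))  # (63.0, (131.5, 200.0))
```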
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler W Sullivan whose telephone number is (571)270-5684. The examiner can normally be reached IFP.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TYLER W. SULLIVAN/Primary Examiner, Art Unit 2487