Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings were received on 2/15/2024. These drawings are accepted.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “gaze point detection unit,” “first gaze distance estimation unit,” “second gaze distance estimation unit,” and “hybrid estimation unit” in claim 20.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a mental process, a mathematical calculation, and data gathering without significantly more. The claims recite a mathematical calculation with no additional limitations because the steps of “estimating a first/second distance…based on a binocular convergence angle” involve data used in a calculation. Further, given the broadest reasonable interpretation, the claims recite merely a mental process because “determining a final gaze distance between the both eyes of the user and the gaze point” is recited without further details. Further, given the broadest reasonable interpretation, the claims recite merely a data gathering step used in a calculation because “detecting a gaze point of a user” acquires data for where the user is gazing. This judicial exception is not integrated into a practical application because the claimed limitations are merely calculations to arrive at an answer. The processes, given the broadest reasonable interpretation, could potentially be performed with pen and paper as a mental process. The claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the claimed limitations do not integrate the mathematical procedure into a practical application.
To distinguish ineligible claims that merely recite a judicial exception from eligible claims that require an implementation of a judicial exception, the Supreme Court uses a two-step framework: Step One (Step 2A), determine whether the claims at issue are directed to one of those patent-ineligible concepts; and Step Two (Step 2B), if so, ask "what else is there in the claims?" to determine whether the additional elements transform the nature of the claim into a patent-eligible application.
The first step, Prong One of Step One (Step 2A), to determine patent eligibility requires determining whether the claims at issue are directed to an enumerated patent-ineligible concept.
Prong One requires the determination of (a) the specific limitations in the claim under examination (individually or in combination) that the examiner believes recite an abstract idea and (b) whether the identified limitations fall within the enumerated subject matter groupings of abstract ideas.
The enumerated patent-ineligible concepts comprise:
(a) Mathematical Concepts - mathematical relationships, mathematical formulas or equations, mathematical calculations;
(b) Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions); and
(c) Mental processes - concepts performed in the human mind (including an observation, evaluation, judgment, opinion).
Claims 1-20 recite a series of steps for determining how far an object is from a person’s eyes based on the gaze direction of both eyes. This judicial exception is not integrated into a practical application because the data gathering steps, i.e., the acquiring-attribute-data steps, do not add a meaningful limitation to the system, method, or computer readable medium, as they are insignificant extra-solution data gathering activity and nothing more than generally linking the product to a particular technological environment.
Accordingly, the claims do not integrate the abstract idea into a practical application. Specifically, independent claims 1 and 20 recite only limitations directed at mathematical calculation and data gathering; they are not integrated into any practical application. Dependent claims 2-19 add only additional mathematical calculations, data manipulation, or data gathering.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Deng et al. (US20180095282, hereinafter “Deng”).
Claim 1. Deng teaches A method of estimating a gaze distance of a user,
the method comprising:
detecting a gaze point ([0037] “the interested point IPa of the target object OBJ1.” And fig. 3) of the user ([0025] “track positions and movements of two pupils of the user.”) in a front image, ([0024] “front image”) which is an image of an environment in front of a field of view of the user, ([0024] “front image covering the target object OBJ1 located in front of the head mounted device 200.”) from a binocular image, which is an image of both eyes of the user; ([0028] “FIG. 2A, according to the sequential images captured by the eye-tracking cameras 241 and 242, the control module 260 is able to detect that the user focuses on an interested point IPa on the target object OBJ1.” When there are two cameras capturing images it is understood to be a binocular image)
estimating a first gaze distance between both eyes of the user ([0037] “calculate the gap distance D1 between the pupils PL1/PL2 and the interested point IPa of the target object OBJ1.” And fig. 3) and the gaze point based on a binocular convergence angle of the user ([0037] “The gap distance D1 is calculated according to the interpupillary distance IPD1 and the convergence angle θc1 as shown in FIG. 2A”) estimated from the binocular image; ([0035] “FIG. 2A, the visions of pupils PL1 and PL2 are converged at the interested point IPa on the target object OBJ1.”)
estimating a second gaze distance between the both eyes of the user and the gaze point based on a depth value of the gaze point of the user ([0050] “obtain the gap distance D3 according the depth value” is understood to be the same as the claimed second gaze distance based on a depth value in light of instant specifications [0045]) estimated from a depth image ([0050] “at the corresponding position in the depth map.”) of the environment in front of the field of view of the user; ([0048] “depth map in front of the head mounted device 300.”) and
determining a final gaze distance between the both eyes of the user and the gaze point ([0056] “the gap distance D4 between the target object OBJ3 and the pupils PL1/PL2 can be obtained.”) based on the first gaze distance ([0053] “calculate the gap distance D4 according to a convergence angle between two visions of the pupils according to the positions of the pupils (similar to operations S311-S313 shown in FIG. 3).”) and the second gaze distance. ([0053] “control module 360 is not able to obtain a depth value of the target object OBJ3 from the depth map.” is understood to be the same as the claimed determine a final gaze distance based on the first gaze distance and the second gaze distance because the second gaze distance is only included if the depth value “may not be measured” in light of instant specifications [0088])
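Read together, the passages cited above describe a simple fallback combination: the depth-based (second) estimate is preferred, and the convergence-based (first) estimate is used when no depth value can be measured. A minimal sketch of that reading, with hypothetical function and variable names not taken from Deng or the instant specification:

```python
def final_gaze_distance(first_estimate, second_estimate):
    """Combine the two estimates: prefer the depth-based (second)
    estimate, and fall back to the convergence-based (first) estimate
    when no depth value could be measured (second_estimate is None)."""
    if second_estimate is not None:
        return second_estimate
    return first_estimate
```

For example, `final_gaze_distance(500.0, None)` returns the convergence-based 500.0 when the depth map yields no value at the gaze point, while `final_gaze_distance(500.0, 480.0)` returns the depth-based 480.0.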
Claim 2. Deng teaches The method of claim 1, wherein,
in the estimating of the first gaze distance, ([0037] “Operation S313 in FIG. 3 is performed by the control module 260 to calculate the gap distance D1”) the first gaze distance is estimated by detecting an angle ([0035] “sum of the first angle θ1 and the second angle θ2 is calculated by the control module 260 to obtain the convergence angle θc1”) between a normal line of a center point of a left eye pupil and a normal line of a center point of a right eye pupil ([0035] “between two visions of the pupils PL1 and PL2.” And fig. 2A ) shown in the binocular image as binocular convergence of the user, ([0035] “through the eye-tracking camera 241… through the eye-tracking camera 241”) and calculating a gaze distance between the both eyes of the user and the gaze point from a distance between the center point of the left eye pupil and the center point of the right eye pupil ([0037] “calculate the gap distance D1 between the pupils PL1/PL2 and the interested point IPa of the target object OBJ1…”) and the detected binocular convergence angle. ([0037] “gap distance D1 is calculated according to the …convergence angle θc1 as shown in FIG. 2A”)
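For context, the convergence-angle geometry quoted above ([0035], [0037]) admits a short worked computation. The sketch below assumes symmetric convergence, so the gaze distance follows from half the interpupillary distance and half the convergence angle; the function and parameter names are illustrative and are not drawn from Deng:

```python
import math

def gaze_distance_from_convergence(ipd_mm, convergence_deg):
    """Distance from the pupil baseline to the gaze point, assuming the
    two visual axes converge symmetrically on it:
        D = (IPD / 2) / tan(theta_c / 2)."""
    half_angle_rad = math.radians(convergence_deg) / 2.0
    return (ipd_mm / 2.0) / math.tan(half_angle_rad)

# A 63 mm interpupillary distance with a 7.2 degree convergence angle
# places the gaze point roughly 500 mm away; doubling the convergence
# angle brings the estimated gaze point roughly twice as close.
```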
[media_image1.png: 480 × 530, greyscale]
Claim 3. Deng teaches The method of claim 1, wherein
in the estimating of the second gaze distance, ([0050] “obtain the gap distance D3 according the depth value”) the second gaze distance is estimated by extracting a depth value of a pixel corresponding to the detected gaze point of the user from among depth values of a plurality of pixels shown in the depth image, ([0050] “depth value at the corresponding position in the depth map.”) and determining the extracted depth value as a gaze distance between the both eyes of the user and the gaze point. ([0050] “gap distance D3 between the target object OBJ2 (far from the user) and the pupils PL1/PL2 can be obtained.”)
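The depth-based estimate cited above ([0050]) amounts to indexing the depth image at the pixel of the detected gaze point. A minimal sketch, under the illustrative assumption (not one stated in Deng) that a non-positive value encodes "no depth measured":

```python
def second_gaze_distance(depth_image, gaze_pixel):
    """Extract the depth value at the pixel corresponding to the
    detected gaze point; return None when no depth was measured there."""
    row, col = gaze_pixel
    depth = depth_image[row][col]
    return depth if depth > 0.0 else None
```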
Claim 20. Deng teaches A device for estimating a gaze distance of a user, the device (Abstract “head mounted device … to obtain a gap distance between the pupils and the target object.”) comprising:
a gaze point detection unit configured to detect a gaze point ([0037] “the interested point IPa of the target object OBJ1.” And fig. 3) of the user ([0025] “track positions and movements of two pupils of the user.”) in a front image, ([0024] “front image”) which is an image of an environment in front of a field of view of the user, ([0024] “front image covering the target object OBJ1 located in front of the head mounted device 200.”) from a binocular image, which is an image of both eyes of the user; ([0028] “FIG. 2A, according to the sequential images captured by the eye-tracking cameras 241 and 242, the control module 260 is able to detect that the user focuses on an interested point IPa on the target object OBJ1.” When there are two cameras capturing images it is understood to be a binocular image)
a first gaze distance estimation unit configured to estimate a first gaze distance between both eyes of the user ([0037] “calculate the gap distance D1 between the pupils PL1/PL2 and the interested point IPa of the target object OBJ1.” And fig. 3) and the gaze point based on a binocular convergence angle of the user ([0037] “The gap distance D1 is calculated according to the interpupillary distance IPD1 and the convergence angle θc1 as shown in FIG. 2A”) estimated from the binocular image; ([0035] “FIG. 2A, the visions of pupils PL1 and PL2 are converged at the interested point IPa on the target object OBJ1.”)
a second gaze distance estimation unit configured to estimate a second gaze distance between the both eyes of the user and the gaze point based on a depth value of the gaze point of the user ([0050] “obtain the gap distance D3 according the depth value” is understood to be the same as the claimed second gaze distance based on a depth value in light of instant specifications [0045]) estimated from a depth image ([0050] “at the corresponding position in the depth map.”) of the front of the field of view of the user; ([0048] “depth map in front of the head mounted device 300.”) and
a hybrid estimation unit configured to determine a final gaze distance between the both eyes of the user and the detected gaze point ([0056] “the gap distance D4 between the target object OBJ3 and the pupils PL1/PL2 can be obtained.”) based on the first gaze distance ([0053] “calculate the gap distance D4 according to a convergence angle between two visions of the pupils according to the positions of the pupils (similar to operations S311-S313 shown in FIG. 3).”) and the second gaze distance. ([0053] “control module 360 is not able to obtain a depth value of the target object OBJ3 from the depth map.” is understood to be the same as the claimed determine a final gaze distance based on the first gaze distance and the second gaze distance because the second gaze distance is only included if the depth value “may not be measured” in light of instant specifications [0088])
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 6 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Deng et al. (US20180095282, hereinafter “Deng”) in view of Nishizawa et al. (US20160284129, hereinafter “Nishizawa”).
Claim 6. Deng teaches The method of claim 1, further comprising:
Deng does not explicitly teach detecting a pupil size of the user from the binocular image; and correcting the estimated first gaze distance based on the detected pupil size, wherein, in the determining of the final gaze distance, the final gaze distance is determined based on the corrected first gaze distance.
Nishizawa teaches detecting a pupil size of the user ([0236] “detecting a change in the pupil diameter or the pupil diameter”) from the binocular image; ([0236] “using a stereo camera”) and correcting the estimated first gaze distance based on the detected pupil size, ([0255] “calculates the gaze distance of the user by obtaining at least one of …the pupil diameter,”) wherein, in the determining of the final gaze distance, the final gaze distance is determined based on the corrected first gaze distance. ([0255] “Thus, it is possible to calculate the gaze distance with high accuracy from the state of both eyes of the user.”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify Deng to detect a pupil size and correct the first gaze distance and the final gaze distance as taught by Nishizawa to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been to obtain the “advantage of obtaining the gaze distance at a high degree of accuracy” (Nishizawa [0235]).
Claim 19. Deng teaches the method of claim 1.
Deng does not explicitly teach A computer-readable recording medium in which a program for performing the method of claim 1 using a computer is recorded.
Nishizawa teaches A computer-readable recording medium in which a program for performing the method of claim 1 using a computer is recorded. ([0161] “The control unit 140 controls each unit of the HMD 1 by reading and executing the program stored in the storage unit 120 or the ROM. In addition, the control unit 140 executes the programs, and functions as an operating system (OS) 150,”)
It would have been obvious to persons of ordinary skill in the art before the effective filing date of the claimed invention to modify Deng to have a computer-readable recording medium in which a program for performing the method is recorded as taught by Nishizawa to arrive at the claimed invention discussed above. The motivation for the proposed modification would have been to obtain the “advantage of obtaining the gaze distance at a high degree of accuracy” (Nishizawa [0235]).
Allowable Subject Matter
Claims 4-5 and 7-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if rewritten to overcome the 35 USC 101 rejection.
In regards to Claims 4-5: Lu et al. (US11748906) teaches pixel remapping being used to rectify epipolar lines in a second image based on the first image, which has rectangular coordinates, resulting in a linear relationship; however, Lu does not disclose one of the images being obtained from a depth value of a depth image to estimate the gaze distance.
In regards to Claims 7-8: Di et al. (US20250139911) teaches enabling a correction model if the user indicates negative feedback; however, Di does not disclose correcting the distance based on the pupil size. Similarly, Nishizawa (US20160284129) teaches correcting a gaze distance based on pupil size; however, Nishizawa does not teach applying a weight that changes according to a change in pupil size.
In regards to Claims 9-10: Strandborg et al. (US20250076974) teaches updating gaze convergence distance based on a threshold distance; however, Strandborg does not disclose updating it based on the distance estimated at the time point immediately before the current time point.
In regards to Claims 11-15: Nilsson et al. (US20160323540) teaches determining gaze based on camera images and a depth sensor and correcting gaze tracking based on the depth camera; however, it teaches neither correcting the gaze distance nor correcting the depth distance based on the gaze point.
In regards to Claims 16-18: Klingström (US20200183490) teaches correcting gaze tracking based on confidence levels of the gaze positions and the distance between them; however, Klingström does not teach gaze distances.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Koo et al. (US20210248766) teaches determining distance based on stereoscopic cameras and a depth sensor.
Martin et al. (US20210248399) teaches gaze distance determination.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OWAIS MEMON whose telephone number is (571)272-2168. The examiner can normally be reached M-F (7:00am - 4:00pm) CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Gregory Morse can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OWAIS I MEMON/Examiner, Art Unit 2663