DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-19, 31 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over HEMPEL et al., Pub. No. US 2024/0290112 A1 (hereinafter “Hempel”) in view of Raphael et al., Pub. No. US 2018/0272978 A1 (hereinafter “Raphael”).
Regarding Claim 1, Hempel discloses a computer-implemented method for performing gaze tracking in a vehicle space (see abstract), the method comprising:
obtaining face image data, eye region image data, and head pose data for one or more occupants within a field of view of one or more cameras within a vehicle space (see paragraph [0090]: Cameras with a field of view that include portions of the interior environment within the cabin of the vehicle 1200 (e.g., such as one or more OMS sensors 101) may be used for an occupant monitoring system (OMS) such as, but not limited to, a driver monitoring system (DMS). For example, OMS sensors (e.g., such as one or more OMS sensor 101) may be used (e.g., by controller(s) 1236) to track an occupant's and/or driver's gaze direction, head pose, and/or blinking. See paragraph [0040]: capturing (e.g., using a calibrated OMS sensor 101) sensor data 102 (e.g., image data comprising one or more image frames) of a test driver's eyes and gaze direction), wherein the face image data, eye region image data, and head pose data is reflected from one or more surfaces within the vehicle space (see paragraph [0054]: projected gaze targets 620 may be produced at projection points on various surfaces of the cabin interior);
Hempel fails to disclose:
evaluating the face image data, the eye region image data, and the head pose data for image quality; and
for image data meeting or exceeding one or more image quality parameters, determining eye tracking information for each of the one or more occupants based on the face image data, the eye region image data, and head pose data.
In analogous art, Raphael teaches:
evaluating the face image data, the eye region image data, and the head pose data for image quality (see paragraphs [0056-0057]: a first image 401 shows an image of a subject that is captured by direct imaging. A second image 402 is an image of subject captured by imaging the reflection of the subject. As can be seen, the direct imaging image 401 is much clearer than the image of the reflection 402… A third image 403 captured by an exemplary embodiment is clearer than the second image 402.); and
for image data meeting or exceeding one or more image quality parameters, determining eye tracking information for each of the one or more occupants based on the face image data, the eye region image data, and head pose data (see paragraphs [0057-0058]: A third image 403 captured by an exemplary embodiment is clearer than the second image 402 and almost as clear and detailed as the direct imaging image 401. The clarity of this image enables the use of reflectance imaging to produce an image that may be used to perform functions based on an analysis of the image or the actions of the subject in the image…..According to an example shown in FIG. 4, facial features 404, among other features, may be detected in the third image and used to determine facial expressions of a subject and/or gaze of a subject. The analysis of the third image 403 or the facial expressions 404 may be used to perform functions based on the third image 403.).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Hempel with the teachings of Raphael in order to provide greater flexibility in capturing images of an occupant and analyzing the captured image to determine at least one from among a gesture of a subject, a direction of a subject's gaze, facial tracking of a subject, and a motion of a subject.
Regarding Claim 2, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Hempel further discloses
wherein the one or more cameras comprises at least one of a digital camera with a wide field-of-view (FOV) (see paragraph [0090]), a plurality of cameras directed at one or more reflective surfaces within the vehicle space, or a plurality of cameras capturing one or more of direct and reflected images of the one or more occupants (see paragraph [0040]).
Regarding Claim 3, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Raphael further discloses
selecting one or more optimal views of each of the one or more occupants (see paragraph [0057]); and estimating a position of at least one of the one or more occupants based on the selecting one or more optimal views of each of the one or more occupants, for multi-view localization (see paragraph [0052]).
Regarding Claim 4, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 3. Raphael further discloses
wherein the multi-view localization is performed using camera triangulation of reflected image data captured by a single camera (see fig.3 and paragraph [0055]).
Regarding Claim 5, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Hempel further discloses
wherein the one or more surfaces comprises at least one of a diffuse surface or a specular surface (see paragraph [0031]).
Regarding Claim 6, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Hempel further discloses
wherein the one or more surfaces within the vehicle space comprises: one or more of a highly reflective surface, a mirrored surface, a metal-coated surface, or a reflective plastic surface (see paragraph [0031]).
Regarding Claim 7, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Raphael further discloses
wherein the one or more image quality parameters comprises at least one of eye landmark detectability, image contrast, minimal intensity, image sharpness, or image resolution (see fig.4).
Regarding Claim 8, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Raphael further discloses
selecting image data from the one or more cameras based on the evaluating the face image data, the eye region image data, and the head pose data for image quality (see paragraph [0058]).
Regarding Claim 9, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 8. Raphael further discloses
the selecting image data from the one or more cameras (see paragraph [0044]) based on the evaluating the face image data, the eye region image data, and the head pose data for image quality comprises: dynamically selecting image data from the one or more cameras based on the evaluating the face image data, the eye region image data, and the head pose data for image quality (see paragraph [0058]).
Regarding Claim 10, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 9. Raphael further discloses
wherein the dynamically selecting image data from the one or more cameras (see paragraph [0044]) based on the evaluating the face image data, the eye region image data, and the head pose data for image quality is carried out in response to a change in at least one reflection (see paragraphs [0042] and [0052]).
Regarding Claim 11, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 9. Raphael further discloses
wherein the dynamically selecting one or more cameras based on the evaluating the face image data, the eye region image data, and the head pose data for image quality is carried out in response to at least one movement of at least one occupant (see paragraphs [0048] and [0057-0058]).
Regarding Claim 12, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Hempel further discloses
wherein at least one of the one or more cameras within the vehicle space is configured to capture within its field of view one or more surface reflections of at least one occupant of the vehicle space (see abstract and paragraph [0030]).
Regarding Claim 13, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 12. Hempel further discloses wherein at least one of the one or more cameras is positioned to capture within its field of view at least one reflection from at least one of a window surface, a dashboard surface, a side panel surface, a center console surface, a seat surface, a mirror surface, or a display surface (see paragraphs [0030 and 0042]).
Regarding Claim 14, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Hempel further discloses
wherein the one or more surfaces within the vehicle space does not include a windshield or a rear-facing mirror (see paragraph [0031]).
Regarding Claim 15, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 12. Hempel further discloses wherein at least one of the one or more surface reflections of at least one occupant of the vehicle space comprises: at least one surface reflection of at least one reflective surface (see paragraph [0031]).
Regarding Claim 16, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Hempel further discloses wherein the determining eye tracking information comprises: determining, using an artificial intelligence model (see paragraph [0107]).
Regarding Claim 17, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 16. Hempel further discloses wherein the artificial intelligence model comprises at least one of a convolutional neural network, a neural radiance field (NeRF), a neural radiance field to handle scenes with reflections (NeRFReN), or a generative pre-trained transformer network (see paragraph [0107]).
Regarding Claim 18, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 16. Hempel further discloses wherein the artificial intelligence model comprises: a deep learning network trained on face and eye images reflected from one or more surfaces within one or more vehicle spaces (see paragraph [0030]).
Regarding Claim 19, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Raphael further discloses
wherein the face image data and the eye region image data comprise: at least one digital intensity image, wherein the at least one digital intensity image includes at least one visible eye region (see fig.4).
Regarding Claim 31, the limitations are rejected as discussed with respect to the rejection of claim 1.
Regarding Claim 32, the claim is directed toward embodying the method of claim 1 in a “non-transitory computer-readable medium”. It would have been obvious to embody the procedures of Hempel in view of Raphael discussed with respect to claim 1 in a “non-transitory computer-readable medium” in order that the instructions could be automatically performed by a processor.
Claims 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over HEMPEL et al., Pub. No. US 2024/0290112 A1 (hereinafter “Hempel”) in view of Raphael et al., Pub. No. US 2018/0272978 A1 (hereinafter “Raphael”), further in view of KOISO et al., Pub. No. US 2023/0334326 A1 (hereinafter “Koiso”).
Regarding Claim 20, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1.
Hempel in view of Raphael fail to disclose:
wherein the obtaining face image data further comprises: associating at least one digital user identifier with each face in the face image data.
In analogous art, Koiso teaches:
wherein the obtaining face image data further comprises: associating at least one digital user identifier with each face in the face image data (see paragraph [0031]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Hempel in view of Raphael with the teachings of Koiso in order to identify biometric information of a passenger on board a vehicle from a video captured by a camera that captures the inside of the cabin of the vehicle and specifies the boarding position of the passenger.
Regarding Claim 21, Hempel in view of Raphael and Koiso discloses the computer-implemented method as discussed in the rejection of claim 20. Koiso further discloses wherein the at least one digital user identifier comprises at least one anonymized unique digital user identifier (see paragraph [0037]).
Claims 22-26, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over HEMPEL et al., Pub. No. US 2024/0290112 A1 (hereinafter “Hempel”) in view of Raphael et al., Pub. No. US 2018/0272978 A1 (hereinafter “Raphael”), further in view of HAIMOVITCH-YOGEV et al., Pub. No. US 2023/0017282 A1 (hereinafter “HAIMOVITCH-YOGEV”).
Regarding Claim 22, Hempel in view of Raphael discloses the computer-implemented method as discussed in the rejection of claim 1. Raphael further discloses
wherein the evaluating the face image data, the eye region image data, and the head pose data for image quality (see paragraph [0058]) comprises: receiving b) number of supported occupants data (see paragraph [0045]);
Hempel in view of Raphael fail to disclose:
receiving a) system calibration data; and c) extracted image data; and applying a rule set based on at least one of power optimization, camera location parameters, camera field-of-view (FOV) parameters, camera image quality, and eye tracking information quality for each face having a unique digital identifier.
In analogous art, HAIMOVITCH-YOGEV teaches:
receiving a) system calibration data (see paragraph [0025]); and c) extracted image data (see paragraph [0023]); and applying a rule set based on at least one of power optimization, camera location parameters, camera field-of-view (FOV) parameters, camera image quality, and eye tracking information quality for each face having a unique digital identifier (see paragraphs [0027-0029]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Hempel in view of Raphael with the teachings of HAIMOVITCH-YOGEV in order to accurately extract eye landmarks, which could enable gaze estimation.
Regarding Claim 23, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 22. HAIMOVITCH-YOGEV further discloses wherein the system calibration data comprises at least one of: camera setting data, resolution information, data processing and storage capability information, or system latency information (see paragraphs [0061 and 0064]).
Regarding Claim 24, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 22. HAIMOVITCH-YOGEV further discloses wherein the extracted image data comprises at least one of: digital unique identifier data, eye state data, head pose data, eye gaze data, Point-of-Regard (PoR) data, eye region intensity level data, or eye position data (see paragraphs [0021 and 0063]).
Regarding Claim 25, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 24. HAIMOVITCH-YOGEV further discloses wherein the eye state data comprises at least one of eye open, eye closed, eye partially closed, eye X percent closed, or eye X percent open (see figs. 2A-2D).
Regarding Claim 26, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 22. HAIMOVITCH-YOGEV further discloses wherein the rule set comprises at least one decision tree structure (see paragraph [0063]).
Regarding Claim 29, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 22. HAIMOVITCH-YOGEV further discloses wherein the camera image quality comprises at least one of: eye region presence or absence in an image, eye state, resolution of eye region (pixels-per-millimeter) in an image, illumination of eye region in an image, PoR information, gaze direction information, head position information, or head orientation information (see paragraphs [0023, 0029, 0032 and 0061]).
Regarding Claim 30, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 22. HAIMOVITCH-YOGEV discloses wherein the applying a rule set based on at least one of power optimization, camera location parameters, camera image quality, and eye tracking information quality for each face having a unique digital identifier (see paragraphs [0027-0029]) comprises: setting a threshold value for at least one of power optimization, camera location parameters, camera image quality, and eye tracking information quality for each face having a unique digital identifier (see paragraphs [0060-0061]).
Claim 27 is rejected under 35 U.S.C. 103 as being unpatentable over HEMPEL et al., Pub. No. US 2024/0290112 A1 (hereinafter “Hempel”) in view of Raphael et al., Pub. No. US 2018/0272978 A1 (hereinafter “Raphael”), in view of HAIMOVITCH-YOGEV et al., Pub. No. US 2023/0017282 A1 (hereinafter “HAIMOVITCH-YOGEV”), further in view of Hu et al., Patent No. US 11,144,754 B2 (hereinafter “Hu”).
Regarding Claim 27, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 22. Hempel in view of Raphael and HAIMOVITCH-YOGEV fail to disclose:
wherein the power optimization comprises: information about the number of cameras providing image data per digital unique identifier.
In analogous art, Hu teaches:
wherein the power optimization comprises: information about the number of cameras providing image data per digital unique identifier (see col.11, lines 39-48).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Hempel in view of Raphael and HAIMOVITCH-YOGEV with the teachings of Hu in order that a gaze of one or more occupants of a vehicle is determined independently of a location of one or more sensors used to detect those occupants.
Claim 28 is rejected under 35 U.S.C. 103 as being unpatentable over HEMPEL et al., Pub. No. US 2024/0290112 A1 (hereinafter “Hempel”) in view of Raphael et al., Pub. No. US 2018/0272978 A1 (hereinafter “Raphael”), in view of HAIMOVITCH-YOGEV et al., Pub. No. US 2023/0017282 A1 (hereinafter “HAIMOVITCH-YOGEV”), further in view of LEE et al., Pub. No. US 2023/0132473 A1 (hereinafter “Lee”).
Regarding Claim 28, Hempel in view of Raphael and HAIMOVITCH-YOGEV discloses the computer-implemented method as discussed in the rejection of claim 22. Hempel further discloses cameras to be used for gaze tracking and their respective reflective surfaces within the FOV of each camera (see paragraphs [0030 and 0042]).
Hempel in view of Raphael and HAIMOVITCH-YOGEV fail to disclose:
wherein the camera location parameters comprise: information about the number and 6DoF location of cameras to be used for gaze tracking.
In analogous art, Lee teaches:
wherein the camera location parameters comprise: information about the number and 6DoF location of cameras (see paragraph [0437]) to be used for gaze tracking (see paragraph [0283]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the computer-implemented method of Hempel in view of Raphael and HAIMOVITCH-YOGEV with the teachings of Lee in order to enable a user to consume more various sensory experiences by providing a 3DoF or 360-degree video newly formed according to a user movement.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Alazar Tilahun whose telephone number is (571) 270-5712. The examiner can normally be reached Monday-Friday, from 9:00 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benjamin Bruckart can be reached at 571-272-3982. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALAZAR TILAHUN/
Primary Examiner
Art Unit 2424