Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Cancelled Claims
Claims 20 and 21 have been cancelled.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13, 17-19 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Martin (2021/0248399) in view of Lewis (2019/0325746).
With respect to claim 1, Martin teaches a camera system 118 which inherently has an image sensor for receiving image information.
Martin teaches eye tracking systems 112/122 for detecting eye information of the driver while in the vehicle. (See para. 25).
Martin teaches determining a first direction gaze (regarding indirect situational awareness) and a second direction gaze (regarding direct subjective or objective measurements).
Martin teaches predicting the amount of time between gazes via ADAS module 124 (see para. 25). The ADAS module 124 predicts the awareness of the driver based on real-time measurements with respect to a period of time (based on a predetermined window of time). Martin teaches that the predictive relationship between the indirect eye glance measures and the direct measure of situational awareness may be associated with real-time information and historical information. See para. 34.
Martin teaches all of the subject matter upon which the claim depends but does not address detecting the time between the first and second gaze, as set forth by the sixth limitation of claim 1.
Lewis teaches a system and method for detecting the gaze of a driver driving vehicle 802. Lewis teaches determining the orientation of the gaze of the driver at a first angle and determining the gaze angle as the driver gazes at a second angle. See paras. 78 and 79.
Lewis also teaches implementation of the gaze detection system and method using a non-transitory computer readable medium for storing instructions implemented by one or more processors, see paras. 166, 168 and 169.
Since both references are directed to the detection of the gaze of a driver, including the time between gazes while driving, the concept of implementing a system or method by a computer readable medium would have been recognized by Martin as set forth by Lewis.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to implement a gaze system or method wherein a gaze is determined from a first position to a second position. One of ordinary skill in the art would have known to look to the teaching of Lewis for storing a program or instructions on a computer readable medium for implementing the method, as taught by Martin.
With respect to claim 2, Martin teaches all of the subject matter upon which the claim depends, as set forth above, except for the use of machine learning to identify the user. Lewis teaches using machine learning to identify the user, see para. 82, line 7, and para. 83.
Since Martin and Lewis are both directed to the detection of the eye gaze of the driver, the purpose of using machine learning to identify the user would have been recognized by Martin, as clearly set forth by Lewis.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use machine learning for identifying a driver in the Martin disclosure, as clearly taught by the Lewis disclosure.
With regard to claim 3, Martin teaches eye tracking sensors 122 which detect the presence of the user and capture an image of the environment. See paras. 29 and 30. Hence, the image capture of the environment includes the location of the driver in the seat and/or a description of the position of the driver.
With respect to claim 4, Martin teaches eye tracking sensors 122 for tracking the focus and attention of the driver, see para. 29, lines 1-5, both of which are physiological factors or features of consideration for the driver.
With respect to claim 5, Martin teaches characteristics of driver eye glance behavior (corresponding to a physiological or psychological feature) measured by fixation time variables (telapse and tdwell). This is in relation to the focus and attentiveness as claimed, supported at paras. 29 and 30.
With respect to claim 6, Martin teaches that prediction amounts are based on driving conditions. For example, at para. 44, lines 1-10, the collected indirect eye glance behavior relates to awareness of one or more road hazards, wherein the road hazard is the driving condition.
With respect to claim 7, Martin teaches predicting a driver's awareness when a pedestrian crosses in front of the camera, or of road hazards, which broadly may be automobiles configured on the road.
With respect to claim 8, Martin teaches the Advanced Drivers Assistance System, see for example, paras. 18, 19 and 23.
With respect to claim 9, Martin teaches measuring or predicting the driver’s awareness, see paras. 22, 24 and 25, to predict the time to gaze from one angle or point to a second angle or point. Martin teaches determining a first direction gaze (regarding indirect situational awareness) and a second direction gaze (regarding direct subjective or objective measurements).
With respect to claim 10, Martin teaches detecting a gesture, such as an eye gesture, and determining the level of awareness, see paras. 22, 24 and 25.
Martin teaches predicting the amount of time between gazes via ADAS module 124 (see para. 25). Martin also teaches determining the amount of awareness, see paras. 24 and 25.
With respect to claim 11, Martin addresses the measurement or prediction of the driver's preparedness or awareness. See paras. 22, 24 and 25.
With respect to claim 12, Martin teaches detecting a person outside a vehicle, such as pedestrians in a crosswalk, see paras. 36 and 81. Martin teaches an ADAS for determining the driver’s level of preparedness or awareness. See paras. 24 and 25.
With respect to claim 13, Martin teaches all of the subject matter upon which the claim depends except for the use of historical data of the driver for predicting the timing information.
Lewis teaches using machine learning to identify the user, see para. 82, line 7, and para. 83. It is understood, or a matter of common knowledge, that a machine learning platform uses historical data. This is clear from the Lewis teaching; see the last 13 lines of para. 12, where Lewis states that historical information associated with the user is obtained for determining gaze and position data.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use historical data of the driver in the system taught by Martin, as clearly suggested by Lewis, at least as it relates to para. 12.
With respect to claim 16, Martin teaches a system and method for detecting the gaze of a driver driving a vehicle.
Martin teaches eye tracking systems 112/122 for receiving first eye information of the driver while in the vehicle. (See para. 25).
Martin teaches ADAS modules 120 for processing, by means of process step 401, the first gaze information, see para. 76, lines 1-6. See also para. 77, beginning at line 12 and para. 82, lines 5-8 and 20-23.
Martin teaches extracting eye gazing features/characteristics. See para. 20, lines 6-10 and para. 24, lines 7-10.
Martin teaches determining a first direction gaze (regarding indirect situational awareness) and receiving second information, which is the second direction gaze (regarding direct subjective or objective measurements). See also para. 82 regarding multiple gaze angles at different times within a predetermined window of time.
Martin teaches predicting the amount of time between gazes via ADAS module 124 (see para. 25). The ADAS module 124 predicts the awareness of the driver based on real-time measurements with respect to a period of time (based on a predetermined window of time). Martin teaches that the predictive relationship between the indirect eye glance measures and the direct measure of situational awareness may be associated with real-time information and historical information. See para. 34.
Martin teaches sending a message, by means of a display, that the predicted amount of time did not meet a predetermined threshold. See para. 84, lines 7-21.
Martin teaches all of the subject matter upon which the claim depends but does not address detecting the time between the first and second gaze, as set forth by the claim.
Lewis teaches a system and method for detecting the gaze of a driver driving vehicle 802. Lewis teaches determining the orientation of the gaze of the driver at a first angle and determining the gaze angle as the driver gazes at a second angle. See paras. 78 and 79.
Lewis also teaches implementation of the gaze detection system and method using a non-transitory computer readable medium for storing instructions implemented by one or more processors, see paras. 166, 168 and 169.
Since both references are directed to the detection of the gaze of a driver, including the time between gazes while driving, the concept of implementing a system or method by a computer readable medium would have been recognized by Martin as set forth by Lewis.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to implement a gaze system or method wherein a gaze is determined from a first position to a second position. One of ordinary skill in the art would have known to look to the teaching of Lewis for storing a program or instructions on a computer readable medium for implementing the method, as taught by Martin.
With regard to claim 17, Martin teaches a sensor for detecting one or more first characteristics associated with the gaze of the driver with respect to a predetermined period of time.
With respect to claim 18, Martin teaches wherein the state of the vehicle is in motion or being driven. The amount of time during a second timing is taught at least at para. 86, lines 6-8. See also para. 77.
With respect to claim 19, Martin teaches assessing the road conditions, see para. 81, lines 1-10.
With respect to claim 22, Martin teaches a system and method for detecting the gaze of a driver driving a vehicle.
Martin teaches eye tracking systems 112/122 for receiving first eye information of the driver while in the vehicle. (See para. 25).
Martin teaches ADAS modules 120 for processing, by means of process step 401, the first gaze information, see para. 76, lines 1-6. See also para. 77, beginning at line 12 and para. 82, lines 5-8 and 20-23.
Martin teaches extracting eye gazing features/characteristics. See para. 20, lines 6-10 and para. 24, lines 7-10.
Martin teaches determining a first direction gaze (regarding indirect situational awareness) and receiving second information, which is the second direction gaze (regarding direct subjective or objective measurements). See also para. 82 regarding multiple gaze angles at different times within a predetermined window of time.
Martin teaches predicting the amount of time between gazes via ADAS module 124 (see para. 25). The ADAS module 124 predicts the awareness of the driver based on real-time measurements with respect to a period of time (based on a predetermined window of time). Martin teaches that the predictive relationship between the indirect eye glance measures and the direct measure of situational awareness may be associated with real-time information and historical information. See para. 34.
Martin teaches sending a message, by means of a display, that the predicted amount of time did not meet a predetermined threshold. See para. 84, lines 7-21.
Martin teaches all of the subject matter upon which the claim depends but does not address detecting the time between the first and second gaze, as set forth by the claim. Moreover, Martin does not teach implementing the system or steps using a computer readable medium for storing instructions that cause a processor to perform the claimed steps.
Lewis teaches a system and method for detecting the gaze of a driver driving vehicle 802. Lewis teaches determining the orientation of the gaze of the driver at a first angle and determining the gaze angle as the driver gazes at a second angle. See paras. 78 and 79.
Lewis also teaches implementation of the gaze detection system and method using a non-transitory computer readable medium for storing instructions implemented by one or more processors, see paras. 166, 168 and 169.
Since both references are directed to the detection of the gaze of a driver, including the time between gazes while driving, the concept of implementing a system or method by a computer readable medium would have been recognized by Martin as set forth by Lewis.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to implement a gaze system or method wherein a gaze is determined from a first position to a second position. One of ordinary skill in the art would have known to look to the teaching of Lewis for storing a program or instructions on a computer readable medium for implementing the method, as taught by Martin.
Allowable Subject Matter
Claims 14 and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEROME GRANT II whose telephone number is (571)272-7463. The examiner can normally be reached M-F 9:00 a.m. - 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEROME GRANT II/Primary Examiner, Art Unit 2664