DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
2. This Office action is in response to application number 18/141,604, filed on 05/01/2023, and the amendments and arguments filed on 12/10/2025.
Claims 1, 10, and 17 have been amended.
No claims have been added.
No claims have been cancelled.
Claims 1-20 are currently pending and have been examined.
Information Disclosure Statement
3. The information disclosure statements (IDS) submitted on 05/01/2023 and 09/04/2024 have been received and considered.
Response to Amendment
4. Applicant's amendments to the claims have overcome each and every rejection previously set forth in the Non-Final Office Action mailed 11/28/2025.
Applicant's arguments, see page 8, filed 12/10/2025, with respect to the rejection(s) of claim(s) 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.
However, upon further consideration, a new ground of rejection, as necessitated by amendment, is made under 35 U.S.C. 103 over Hanna (US 20180086339 A1) in view of Blayvas (US 20180012085 A1), further in view of Arar (US 20220121867 A1), and further in view of Nave (US 10106156 B1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
5. Claims 1, 3, 5-8, 10-11, 13-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Hanna (US 20180086339 A1) in view of US 20180012085 A1 to Blayvas et al. (hereinafter Blayvas).
Regarding claim 1, Hanna discloses A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors in a vehicle, cause the one or more processors to: (Hanna Paragraph 0027: “In these embodiments, the computing device 100 may store and execute firmware or other executable instructions that, when executed, direct the one or more processing units 121 to simultaneously execute instructions or to simultaneously execute instructions on a single piece of data.”) (Hanna Paragraph 0028: “The processing cores of the processing unit 121 may in some embodiments access available memory as a global address space, or in other embodiments, memory within the computing device 100 can be segmented and assigned to a particular core within the processing unit 121.”) receive a reading from a forward-facing sensor of the vehicle indicating a detected object forward of the vehicle; (Hanna Paragraph 0038: “FIG. 2A shows a top-view of a driver D sitting in a seat of a vehicle. In one embodiment of the system, a camera “A” (e.g., a first sensor) senses imagery from in front of the vehicle.”)
[Image: media_image1.png, 567 × 395, greyscale]
perform object recognition of the detected object by: (Hanna Paragraph 0047: “An activation engine in communication with the first sensor and the second sensor can determine a proximity of the gaze angle of the user to the determined position of the potential obstacle (205). The activation engine can control, in response to the determined proximity, an operation of the ADAS mechanism/system for responding to the potential obstacle, and/or may provide an alert to at least: the user regarding the potential obstacle, or the potential obstacle.”) (Hanna Paragraph 0048: “The first sensor may include a sensing, detecting and/or measurement device, that may be based on one or more of imaging (e.g., computer vision, infra-red, object recognition), LiDAR, radar, audio/ultrasound, sonar, etc. The potential obstacle may include a potential obstacle to the vehicle and/or a user of the vehicle. The potential obstacle may include or refer to a road user such as a person, animal and/or other vehicle. The potential obstacle may include any object, stationary or otherwise, and in some embodiments can include an object smaller than or of height below a certain threshold, and/or partially obscured from the first sensor due to reduced visibility or contrast from fog, smoke, or the like, and/or due to light foliage, low light and/or limited field of view of the first sensor.”) receiving a reading from another sensor of the vehicle indicating a detected driver alertness problem; (Hanna Paragraph 0050: “Referring now to 203, and in some embodiments, a second sensor can determine a gaze angle of a user of the vehicle. The second sensor may comprise any type or form of sensor or device described above in connection with the first sensor. 
The second sensor can determine a gaze angle of the user at the same time or substantially the same time as the determination of the location of the potential obstacle, e.g., so that the gaze angle and location can be registered or compared contemporaneously.”) (Note: Gaze can determine alertness of a driver) […] and in response to classifying the detected object as a hazardous target cause the collision avoidance system to automatically take action to attempt to avoid the collision. (Hanna Paragraph 0038: “FIG. 2A shows a top-view of a driver D sitting in a seat of a vehicle. In one embodiment of the system, a camera “A” (e.g., a first sensor) senses imagery from in front of the vehicle. Within the coordinate system of camera A, a vulnerable user V (e.g., a potential obstacle) can be detected by the ADAS system at a particular range and angle with respect to the ADAS camera. In ADAS systems, there may be uncertainty regarding the determination that an obstacle is a hazard or not. The likelihood or uncertainty can depend on many factors, including the resolution of the sensor, the movement of the obstacle and/or many other factors. This can be an issue particularly when potential obstacles are smaller and when there is less information for ADAS systems to determine its presence with certainty. In one embodiment, the present solution may relate to mitigating this uncertainty.”) (Hanna Paragraph 0041: “For example, if the ADAS system detected a potential obstacle with a particular probability, P_ADAS_DETECTION, and the driver was looking away from the potential obstacle such that DIFF is large, then the probability of actuation P_ACTUATION can be almost equal to the probability of ADAS detection, P_ADAS_DETECTION. In this case, because the driver has not seen the potential obstacle and therefore would not actuate the brakes himself/herself, then it may be safer for the system to actuate the brakes even if there is some limited uncertainty in the ADAS system. 
In addition or alternatively, the system may provide an alert to the user regarding the potential obstacle so that the user themselves can apply the brakes, and/or provide an alert to the potential obstacle (which can be a road user such as a person or animal) to be aware of the vehicle and can take action to distance itself from the vehicle or otherwise improve safety. On the other hand, for the same value of P_ADAS_DETECTION, if the user is gazing at the potential obstacle such that the angular distance between their gaze direction and the direction of the object provided by the ADAS system is small or close to zero, then from the formula in this particular embodiment, P_ACTUATION would be near to zero. In this case, because the driver has been detected to be looking at the potential obstacle, then it is more likely that the driver himself/herself would apply braking if it was a real potential obstacle, and therefore in the presence of the same uncertainty of the ADAS system, in this case the ADAS system would not apply the brakes automatically, and the system may not provide an alert to the user regarding the potential obstacle.”) (Hanna Paragraph 0056: “As another non-limiting example, the activation engine can determine that the determined proximity of the gaze angle of the user to the determined position is above a predefined threshold (e.g., the user may be unaware of or not fully aware of the potential obstacle), and may determine or decide to maintain or decrease a threshold for the ADAS to initiate collision avoidance, and/or may reduce the threshold for the activation engine to provide an alert to the user regarding the potential obstacle (and/or to the potential obstacle), making the alert more likely to be announced.”) (Note: The object is determined to be hazardous when the user's gaze does not match the location of the obstacle, thereby determining that the user is not aware and the obstacle is hazardous, and vice versa if the gaze does match the location of the obstacle.)
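For illustration only, the actuation behavior Hanna describes in Paragraph 0041 can be sketched in code. Hanna's actual formula is not reproduced in the quoted passages; the function and angle parameters below are hypothetical stand-ins that capture only the two described limits: P_ACTUATION approaches P_ADAS_DETECTION when the gaze-to-obstacle angular difference DIFF is large, and approaches zero when DIFF is small.

```python
def actuation_probability(p_adas_detection: float, diff_deg: float,
                          full_attention_deg: float = 5.0,
                          no_attention_deg: float = 45.0) -> float:
    """Hypothetical stand-in for Hanna's P_ACTUATION behavior.

    Models only the described limits: a small gaze-to-obstacle angular
    difference (driver looking at the obstacle) drives P_ACTUATION toward
    zero, while a large difference (driver looking away) drives it toward
    P_ADAS_DETECTION. The two angle cutoffs are illustrative assumptions.
    """
    if diff_deg <= full_attention_deg:
        weight = 0.0  # driver is looking at the obstacle; do not auto-actuate
    elif diff_deg >= no_attention_deg:
        weight = 1.0  # driver is looking away; rely fully on ADAS detection
    else:
        # linear ramp between the two illustrative cutoffs
        weight = (diff_deg - full_attention_deg) / (no_attention_deg - full_attention_deg)
    return p_adas_detection * weight
```

For example, with a detection probability of 0.7, a driver gazing 60 degrees away from the obstacle yields an actuation probability of 0.7, while a driver gazing directly at it (2 degrees) yields 0.0.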
Hanna does not disclose […] in response to the detected driver awareness problem, lowering a confidence level threshold used by a collision avoidance system of the vehicle for classifying the detected object as a hazardous target, wherein an amount by which the confidence level threshold is lowered depends on a type of the detected driver alertness problem; and classifying the detected objects as a hazardous target or an irrelevant target based on the lowered confidence threshold;
However, Blayvas does teach […] in response to the detected driver awareness problem, lowering a confidence level threshold used by a collision avoidance system of the vehicle for classifying the detected object as a hazardous target, (Blayvas Paragraph 0010: “For example, alert thresholds may be adjusted based on driver alertness.”) (Blayvas Paragraph 0028: “According to some embodiments, the one or more processors may be adapted to assess levels and/or natures/characteristics of potentially hazardous situations and react accordingly.”) (Blayvas Paragraph 0079: “Furthermore, a detector output may contain a measure of confidence level. A standalone detector can only compare this confidence level with a threshold”) (Blayvas Paragraph 0103: “For example, if the potential obstacle is detected, but the driver is fully alert and looking in the direction of the obstacle, the warning threshold level remains relatively high or even slightly elevated, and the alert is not triggered. In the opposite case, if driver inattentiveness is detected, or his glance is astray from the potentially dangerous obstacle, the alert threshold is lowered, and therefore an alert can be triggered even in a relatively low risk situation.”) (Blayvas Paragraph 0127: “(i) determine a level of risk for each identified hazardous situation,”) (Note: The alert threshold is lowered based on the attentiveness of the driver. 
Because determining a particular hazardous situation is based on the confidence level of classifying a target as hazardous, lowering the alert threshold amounts to lowering the confidence level threshold.) wherein an amount by which the confidence level threshold is lowered depends on a type of the detected driver alertness problem; and classifying the detected objects as a hazardous target or an irrelevant target based on the lowered confidence threshold; (Blayvas Paragraph 0083: “For example a certain low collision risk with hefty amount of time combined with the high alertness level of the driver and his glance in the direction of potential danger might not lead to issuance of the collision warning, while exactly the same road situation, but with the driver distracted by his phone is a completely different story.”) (Blayvas Paragraph 0131: “wherein the dynamic threshold is dependent upon a current alertness of the driver determined by the processing circuitry based upon the parameters sensed by said interior sensors relating to an alertness of a driver of the vehicle.”) (Note: The target is classified as a hazard when the collision warning is issued)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna to include […] in response to the detected driver awareness problem, lowering a confidence level threshold used by a collision avoidance system of the vehicle for classifying the detected object as a hazardous target, wherein an amount by which the confidence level threshold is lowered depends on a type of the detected driver alertness problem; and classifying the detected objects as a hazardous target or an irrelevant target based on the lowered confidence threshold; as taught by Blayvas. This would have been done for the benefit of decreasing the risks associated with vehicle driving by timely detecting dangerous situations and providing appropriate signals to the driver or to the vehicle controls. [Blayvas Paragraph 0005]
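For illustration only, the claimed threshold adjustment read onto the combined Hanna/Blayvas teaching can be sketched as follows. The alertness-problem types and reduction amounts below are hypothetical assumptions for the sketch; neither reference recites these specific values.

```python
# Hypothetical base confidence threshold for classifying a detection as hazardous.
BASE_THRESHOLD = 0.90

# Illustrative mapping: amount the threshold is lowered per type of detected
# driver alertness problem (assumed values, not taken from Hanna or Blayvas).
THRESHOLD_REDUCTION = {
    "gaze_away": 0.10,
    "drowsy": 0.20,
    "distracted_by_phone": 0.25,
}

def classify(detection_confidence, alertness_problem=None):
    """Classify a detected object as 'hazardous' or 'irrelevant' using a
    confidence threshold lowered by an amount that depends on the type of
    detected driver alertness problem (None means no problem detected)."""
    threshold = BASE_THRESHOLD - THRESHOLD_REDUCTION.get(alertness_problem, 0.0)
    return "hazardous" if detection_confidence >= threshold else "irrelevant"
```

Under these assumed values, a detection at 0.75 confidence is classified as irrelevant for an alert driver, but as hazardous once a "drowsy" alertness problem lowers the threshold to 0.70, mirroring Blayvas Paragraph 0103 where an alert can trigger "even in a relatively low risk situation" when inattentiveness is detected.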
Regarding claim 3, Hanna discloses The non-transitory computer-readable storage medium of Claim 1, wherein the collision avoidance system further comprises a collision alert system. (Hanna Paragraph 0055: “In some embodiments, the ADAS mechanism implemented in the present solution may include (vehicle user or road user) alerting functionality in the ADAS mechanism's collision avoidance operations. In this case, the system may accordingly adjust respective threshold(s) for triggering or sending alert(s) to the user and/or the road user (potential obstacle).”)
Regarding claim 5, Hanna discloses The non-transitory computer-readable storage medium of Claim 1, wherein the another sensor further comprises a driver-facing sensor. (Hanna Paragraph 0039: “At substantially the same time, an eye tracking camera E (e.g., a second sensor) mounted in the vehicle facing the driver can detect a gaze angle of the driver D.”)
Regarding claim 6, Hanna discloses The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises a driver-facing camera. (Hanna Paragraph 0039: “At substantially the same time, an eye tracking camera E (e.g., a second sensor) mounted in the vehicle facing the driver can detect a gaze angle of the driver D.”)
Regarding claim 10, Hanna discloses performing in one or more processors in a vehicle: (Hanna Paragraph 0004: “A method comprising: In some aspects, this disclosure is directed a method for operating or controlling an ADAS mechanism/system”) receiving a reading from a forward-facing sensor of the vehicle indicating a detected object forward of the vehicle; (Hanna Paragraph 0038: “FIG. 2A shows a top-view of a driver D sitting in a seat of a vehicle. In one embodiment of the system, a camera “A” (e.g., a first sensor) senses imagery from in front of the vehicle.”)
[Image: media_image1.png, 567 × 395, greyscale]
performing object recognition of the detected object by: (Hanna Paragraph 0047: “An activation engine in communication with the first sensor and the second sensor can determine a proximity of the gaze angle of the user to the determined position of the potential obstacle (205). The activation engine can control, in response to the determined proximity, an operation of the ADAS mechanism/system for responding to the potential obstacle, and/or may provide an alert to at least: the user regarding the potential obstacle, or the potential obstacle.”) (Hanna Paragraph 0048: “The first sensor may include a sensing, detecting and/or measurement device, that may be based on one or more of imaging (e.g., computer vision, infra-red, object recognition), LiDAR, radar, audio/ultrasound, sonar, etc. The potential obstacle may include a potential obstacle to the vehicle and/or a user of the vehicle. The potential obstacle may include or refer to a road user such as a person, animal and/or other vehicle. The potential obstacle may include any object, stationary or otherwise, and in some embodiments can include an object smaller than or of height below a certain threshold, and/or partially obscured from the first sensor due to reduced visibility or contrast from fog, smoke, or the like, and/or due to light foliage, low light and/or limited field of view of the first sensor.”) determining whether a driver of the vehicle is in a first awareness state or a second awareness state based on a reading from a driver-facing sensor and/or a reading from a vehicle sensor, wherein the driver is more aware in the first awareness state than in the second awareness state (Hanna Paragraph 0041: “For example, if the ADAS system detected a potential obstacle with a particular probability, P_ADAS_DETECTION, and the driver was looking away from the potential obstacle such that DIFF is large, then the probability of actuation P_ACTUATION can be almost equal to the probability of ADAS detection,
P_ADAS_DETECTION. In this case, because the driver has not seen the potential obstacle and therefore would not actuate the brakes himself/herself, then it may be safer for the system to actuate the brakes even if there is some limited uncertainty in the ADAS system. In addition or alternatively, the system may provide an alert to the user regarding the potential obstacle so that the user themselves can apply the brakes, and/or provide an alert to the potential obstacle (which can be a road user such as a person or animal) to be aware of the vehicle and can take action to distance itself from the vehicle or otherwise improve safety. On the other hand, for the same value of P_ADAS_DETECTION, if the user is gazing at the potential obstacle such that the angular distance between their gaze direction and the direction of the object provided by the ADAS system is small or close to zero, then from the formula in this particular embodiment, P_ACTUATION would be near to zero. In this case, because the driver has been detected to be looking at the potential obstacle, then it is more likely that the driver himself/herself would apply braking if it was a real potential obstacle, and therefore in the presence of the same uncertainty of the ADAS system, in this case the ADAS system would not apply the brakes automatically, and the system may not provide an alert to the user regarding the potential obstacle.”) (Hanna Paragraph 0050: “Referring now to 203, and in some embodiments, a second sensor can determine a gaze angle of a user of the vehicle. The second sensor may comprise any type or form of sensor or device described above in connection with the first sensor. 
The second sensor can determine a gaze angle of the user at the same time or substantially the same time as the determination of the location of the potential obstacle, e.g., so that the gaze angle and location can be registered or compared contemporaneously.”) (Note: Gaze can determine alertness of a driver) […] and in response to classifying the detected object as a hazardous target causing the driver assistance system to automatically take action to attempt to avoid the collision. (Hanna Paragraph 0038: “FIG. 2A shows a top-view of a driver D sitting in a seat of a vehicle. In one embodiment of the system, a camera “A” (e.g., a first sensor) senses imagery from in front of the vehicle. Within the coordinate system of camera A, a vulnerable user V (e.g., a potential obstacle) can be detected by the ADAS system at a particular range and angle with respect to the ADAS camera. In ADAS systems, there may be uncertainty regarding the determination that an obstacle is a hazard or not. The likelihood or uncertainty can depend on many factors, including the resolution of the sensor, the movement of the obstacle and/or many other factors. This can be an issue particularly when potential obstacles are smaller and when there is less information for ADAS systems to determine its presence with certainty. In one embodiment, the present solution may relate to mitigating this uncertainty.”) (Hanna Paragraph 0041: “For example, if the ADAS system detected a potential obstacle with a particular probability, P_ADAS_DETECTION, and the driver was looking away from the potential obstacle such that DIFF is large, then the probability of actuation P_ACTUATION can be almost equal to the probability of ADAS detection, P_ADAS_DETECTION. In this case, because the driver has not seen the potential obstacle and therefore would not actuate the brakes himself/herself, then it may be safer for the system to actuate the brakes even if there is some limited uncertainty in the ADAS system. 
In addition or alternatively, the system may provide an alert to the user regarding the potential obstacle so that the user themselves can apply the brakes, and/or provide an alert to the potential obstacle (which can be a road user such as a person or animal) to be aware of the vehicle and can take action to distance itself from the vehicle or otherwise improve safety. On the other hand, for the same value of P_ADAS_DETECTION, if the user is gazing at the potential obstacle such that the angular distance between their gaze direction and the direction of the object provided by the ADAS system is small or close to zero, then from the formula in this particular embodiment, P_ACTUATION would be near to zero. In this case, because the driver has been detected to be looking at the potential obstacle, then it is more likely that the driver himself/herself would apply braking if it was a real potential obstacle, and therefore in the presence of the same uncertainty of the ADAS system, in this case the ADAS system would not apply the brakes automatically, and the system may not provide an alert to the user regarding the potential obstacle.”) (Hanna Paragraph 0056: “As another non-limiting example, the activation engine can determine that the determined proximity of the gaze angle of the user to the determined position is above a predefined threshold (e.g., the user may be unaware of or not fully aware of the potential obstacle), and may determine or decide to maintain or decrease a threshold for the ADAS to initiate collision avoidance, and/or may reduce the threshold for the activation engine to provide an alert to the user regarding the potential obstacle (and/or to the potential obstacle), making the alert more likely to be announced.”) (Note: The object is determined to be hazardous when the user's gaze does not match the location of the obstacle, thereby determining that the user is not aware and the obstacle is hazardous, and vice versa if the gaze does match the location of the obstacle.)
Hanna does not disclose […] lowering a confidence level threshold used by a driver assistance system of the vehicle for classifying the detected object as a hazardous target, wherein an amount by which the confidence level threshold is lowered depends on whether the driver is in the first awareness state or the second awareness state; and classifying the detected object as a hazardous target or an irrelevant target based on the lowered confidence threshold;
However, Blayvas does teach […] lowering a confidence level threshold used by a driver assistance system of the vehicle for classifying the detected object as a hazardous target, (Blayvas Paragraph 0010: “For example, alert thresholds may be adjusted based on driver alertness.”) (Blayvas Paragraph 0028: “According to some embodiments, the one or more processors may be adapted to assess levels and/or natures/characteristics of potentially hazardous situations and react accordingly.”) (Blayvas Paragraph 0079: “Furthermore, a detector output may contain a measure of confidence level. A standalone detector can only compare this confidence level with a threshold”) (Blayvas Paragraph 0103: “For example, if the potential obstacle is detected, but the driver is fully alert and looking in the direction of the obstacle, the warning threshold level remains relatively high or even slightly elevated, and the alert is not triggered. In the opposite case, if driver inattentiveness is detected, or his glance is astray from the potentially dangerous obstacle, the alert threshold is lowered, and therefore an alert can be triggered even in a relatively low risk situation.”) (Blayvas Paragraph 0127: “(i) determine a level of risk for each identified hazardous situation,”) (Note: The alert threshold is lowered based on the attentiveness of the driver. 
Because determining a particular hazardous situation is based on the confidence level of classifying a target as hazardous, lowering the alert threshold amounts to lowering the confidence level threshold.) wherein an amount by which the confidence level threshold is lowered depends on whether the driver is in the first awareness state or the second awareness state; and classifying the detected object as a hazardous target or an irrelevant target based on the lowered confidence threshold; (Blayvas Paragraph 0083: “For example a certain low collision risk with hefty amount of time combined with the high alertness level of the driver and his glance in the direction of potential danger might not lead to issuance of the collision warning, while exactly the same road situation, but with the driver distracted by his phone is a completely different story.”) (Blayvas Paragraph 0131: “wherein the dynamic threshold is dependent upon a current alertness of the driver determined by the processing circuitry based upon the parameters sensed by said interior sensors relating to an alertness of a driver of the vehicle.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna to include […] lowering a confidence level threshold used by a driver assistance system of the vehicle for classifying the detected object as a hazardous target, wherein an amount by which the confidence level threshold is lowered depends on whether the driver is in the first awareness state or the second awareness state; and classifying the detected object as a hazardous target or an irrelevant target based on the lowered confidence threshold; as taught by Blayvas. This would have been done for the benefit of decreasing the risks associated with vehicle driving by timely detecting dangerous situations and providing appropriate signals to the driver or to the vehicle controls. [Blayvas Paragraph 0005]
Regarding claim 11, Hanna discloses The method of Claim 10, wherein the driver assistance system further comprises an autonomous braking system. (Hanna Paragraph 0041: “In this case, because the driver has not seen the potential obstacle and therefore would not actuate the brakes himself/herself, then it may be safer for the system to actuate the brakes even if there is some limited uncertainty in the ADAS system.”)
Regarding claim 13, Hanna discloses The method of Claim 10, wherein the driver assistance system further comprises a collision alert system. (Hanna Paragraph 0055: “In some embodiments, the ADAS mechanism implemented in the present solution may include (vehicle user or road user) alerting functionality in the ADAS mechanism's collision avoidance operations. In this case, the system may accordingly adjust respective threshold(s) for triggering or sending alert(s) to the user and/or the road user (potential obstacle).”)
Regarding claim 14, Hanna discloses The method of claim 10, wherein the driver-facing sensor further comprises a driver-facing camera. (Hanna Paragraph 0039: “At substantially the same time, an eye tracking camera E (e.g., a second sensor) mounted in the vehicle facing the driver can detect a gaze angle of the driver D.”)
Regarding claim 17, Hanna discloses A vehicle safety system comprising: (Hanna Paragraph 0008: “In some aspects, this disclosure is directed to a system for operating an advanced driver assistance system (ADAS) mechanism”) means for receiving a reading from a forward-facing sensor of the vehicle indicating a detected object forward of the vehicle; (Hanna Paragraph 0038: “FIG. 2A shows a top-view of a driver D sitting in a seat of a vehicle. In one embodiment of the system, a camera “A” (e.g., a first sensor) senses imagery from in front of the vehicle.”)
[Image: media_image1.png, 567 × 395, greyscale]
means for performing object recognition of the detected object by: (Hanna Paragraph 0047: “An activation engine in communication with the first sensor and the second sensor can determine a proximity of the gaze angle of the user to the determined position of the potential obstacle (205). The activation engine can control, in response to the determined proximity, an operation of the ADAS mechanism/system for responding to the potential obstacle, and/or may provide an alert to at least: the user regarding the potential obstacle, or the potential obstacle.”) (Hanna Paragraph 0048: “The first sensor may include a sensing, detecting and/or measurement device, that may be based on one or more of imaging (e.g., computer vision, infra-red, object recognition), LiDAR, radar, audio/ultrasound, sonar, etc. The potential obstacle may include a potential obstacle to the vehicle and/or a user of the vehicle. The potential obstacle may include or refer to a road user such as a person, animal and/or other vehicle. The potential obstacle may include any object, stationary or otherwise, and in some embodiments can include an object smaller than or of height below a certain threshold, and/or partially obscured from the first sensor due to reduced visibility or contrast from fog, smoke, or the like, and/or due to light foliage, low light and/or limited field of view of the first sensor.”) receiving a reading from another sensor of the vehicle indicating a detected driver alertness problem; (Hanna Paragraph 0050: “Referring now to 203, and in some embodiments, a second sensor can determine a gaze angle of a user of the vehicle. The second sensor may comprise any type or form of sensor or device described above in connection with the first sensor. 
The second sensor can determine a gaze angle of the user at the same time or substantially the same time as the determination of the location of the potential obstacle, e.g., so that the gaze angle and location can be registered or compared contemporaneously.”) (Note: Gaze can determine alertness of a driver) […] and means for, in response to classifying the detected object as a hazardous target causing the collision avoidance system to automatically take action to attempt to avoid the collision. (Hanna Paragraph 0038: “FIG. 2A shows a top-view of a driver D sitting in a seat of a vehicle. In one embodiment of the system, a camera “A” (e.g., a first sensor) senses imagery from in front of the vehicle. Within the coordinate system of camera A, a vulnerable user V (e.g., a potential obstacle) can be detected by the ADAS system at a particular range and angle with respect to the ADAS camera. In ADAS systems, there may be uncertainty regarding the determination that an obstacle is a hazard or not. The likelihood or uncertainty can depend on many factors, including the resolution of the sensor, the movement of the obstacle and/or many other factors. This can be an issue particularly when potential obstacles are smaller and when there is less information for ADAS systems to determine its presence with certainty. In one embodiment, the present solution may relate to mitigating this uncertainty.”) (Hanna Paragraph 0041: “For example, if the ADAS system detected a potential obstacle with a particular probability, P_ADAS_DETECTION, and the driver was looking away from the potential obstacle such that DIFF is large, then the probability of actuation P_ACTUATION can be almost equal to the probability of ADAS detection, P_ADAS_DETECTION. 
In this case, because the driver has not seen the potential obstacle and therefore would not actuate the brakes himself/herself, then it may be safer for the system to actuate the brakes even if there is some limited uncertainty in the ADAS system. In addition or alternatively, the system may provide an alert to the user regarding the potential obstacle so that the user themselves can apply the brakes, and/or provide an alert to the potential obstacle (which can be a road user such as a person or animal) to be aware of the vehicle and can take action to distance itself from the vehicle or otherwise improve safety. On the other hand, for the same value of P_ADAS_DETECTION, if the user is gazing at the potential obstacle such that the angular distance between their gaze direction and the direction of the object provided by the ADAS system is small or close to zero, then from the formula in this particular embodiment, P_ACTUATION would be near to zero. In this case, because the driver has been detected to be looking at the potential obstacle, then it is more likely that the driver himself/herself would apply braking if it was a real potential obstacle, and therefore in the presence of the same uncertainty of the ADAS system, in this case the ADAS system would not apply the brakes automatically, and the system may not provide an alert to the user regarding the potential obstacle.”) (Hanna Paragraph 0056: “As another non-limiting example, the activation engine can determine that the determined proximity of the gaze angle of the user to the determined position is above a predefined threshold (e.g., the user may be unaware of or not fully aware of the potential obstacle), and may determine or decide to maintain or decrease a threshold for the ADAS to initiate collision avoidance, and/or may reduce the threshold for the activation engine to provide an alert to the user regarding the potential obstacle (and/or to the potential obstacle), making the alert more likely to be 
announced.”) (Note: The object is determined to be hazardous when the user's gaze does not match the location of the obstacle, which indicates the user is not aware of the obstacle and the obstacle is therefore hazardous; conversely, the object is not determined to be hazardous when the gaze does match the location of the obstacle.)
Hanna does not disclose […] lowering a confidence level threshold used by a collision avoidance system of the vehicle for classifying the detected object as a hazardous target, wherein an amount by which the confidence level threshold is lowered depends on a type of the detected driver alertness problem; and classifying the detected objects as a hazardous target or an irrelevant target based on the lowered confidence threshold;
However, Blayvas does teach […] lowering a confidence level threshold used by a collision avoidance system of the vehicle for classifying the detected object as a hazardous target, (Blayvas Paragraph 0010: “For example, alert thresholds may be adjusted based on driver alertness.”) (Blayvas Paragraph 0028: “According to some embodiments, the one or more processors may be adapted to assess levels and/or natures/characteristics of potentially hazardous situations and react accordingly.”) (Blayvas Paragraph 0079: “Furthermore, a detector output may contain a measure of confidence level. A standalone detector can only compare this confidence level with a threshold”) (Blayvas Paragraph 0103: “For example, if the potential obstacle is detected, but the driver is fully alert and looking in the direction of the obstacle, the warning threshold level remains relatively high or even slightly elevated, and the alert is not triggered. In the opposite case, if driver inattentiveness is detected, or his glance is astray from the potentially dangerous obstacle, the alert threshold is lowered, and therefore an alert can be triggered even in a relatively low risk situation.”) (Blayvas Paragraph 0127: “(i) determine a level of risk for each identified hazardous situation,”) (Note: The alert threshold is lowered based on the attentiveness of the driver. 
The alert threshold for a particular hazardous situation can be lowered based on the confidence level of classifying a target as hazardous) wherein an amount by which the confidence level threshold is lowered depends on a type of the detected driver alertness problem; and classifying the detected objects as a hazardous target or an irrelevant target based on the lowered confidence threshold; (Blayvas Paragraph 0083: “For example a certain low collision risk with hefty amount of time combined with the high alertness level of the driver and his glance in the direction of potential danger might not lead to issuance of the collision warning, while exactly the same road situation, but with the driver distracted by his phone is a completely different story.”) (Blayvas Paragraph 0131: “wherein the dynamic threshold is dependent upon a current alertness of the driver determined by the processing circuitry based upon the parameters sensed by said interior sensors relating to an alertness of a driver of the vehicle.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective
filing date of the claimed invention to have modified Hanna to include […] in response to the detected driver alertness problem, lowering a confidence level threshold used by a collision avoidance system of the vehicle for classifying the detected object as a hazardous target, wherein an amount by which the confidence level threshold is lowered depends on a type of the detected driver alertness problem; and classifying the detected objects as a hazardous target or an irrelevant target based on the lowered confidence threshold; taught by Blayvas. This would have been for the benefit of decreasing the risks associated with vehicle driving by timely detection of dangerous situations and providing appropriate signals to the driver or to the vehicle controls. [Blayvas Paragraph 0005]
Regarding claim 18, Hanna discloses The vehicle safety system of Claim 17, wherein the collision avoidance system further comprises an automatic braking system. (Hanna Paragraph 0041: “In this case, because the driver has not seen the potential obstacle and therefore would not actuate the brakes himself/herself, then it may be safer for the system to actuate the brakes even if there is some limited uncertainty in the ADAS system.”)
Regarding claim 19, Hanna discloses The vehicle safety system of Claim 17, wherein the collision avoidance system further comprises a collision warning system. (Hanna Paragraph 0055: “In some embodiments, the ADAS mechanism implemented in the present solution may include (vehicle user or road user) alerting functionality in the ADAS mechanism's collision avoidance operations. In this case, the system may accordingly adjust respective threshold(s) for triggering or sending alert(s) to the user and/or the road user (potential obstacle).”)
6. Claim(s) 2, 4, 8-9, 12, 15-16, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hanna (US 20180086339 A1) in view of Blayvas (US 20180012085 A1) and further in view of Arar et al. (US 20220121867 A1) (hereinafter Arar).
Regarding claim 2, Hanna in view of Blayvas teaches claim 1, accordingly, the rejection of claim 1 is incorporated above.
Hanna in view of Blayvas does not teach The non-transitory computer-readable storage medium of Claim 1, wherein the collision avoidance system further comprises an advanced emergency braking system.
However, Arar does teach The non-transitory computer-readable storage medium
of Claim 1, wherein the collision avoidance system further comprises an advanced emergency braking system. (Arar Paragraph 0145: “The vehicle 500 may include an ADAS system 538. The ADAS system 538 may include a SoC, in some examples. The ADAS system 538 may
include autonomous/adaptive/automatic cruise control (ACC), cooperative adaptive cruise control (CACC), forward crash warning (FCW), automatic emergency braking (AEB),”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The non-transitory computer-readable storage medium of Claim 1, wherein the collision avoidance system further comprises an advanced emergency braking system taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
Regarding claim 4, Hanna in view of Blayvas teaches claim 1, accordingly, the rejection of claim 1 is incorporated above.
Hanna in view of Blayvas does not teach The non-transitory computer-readable storage medium of Claim 1, wherein the collision avoidance system further comprises an automated steering controller.
However, Arar does teach The non-transitory computer-readable storage medium of Claim 1, wherein the collision avoidance system further comprises an automated steering controller. (Arar Paragraph 0145: “The vehicle 500 may include an ADAS system 538. The ADAS system 538 may include a SoC, in some examples. The ADAS system 538 may include autonomous/adaptive/automatic cruise control (ACC), cooperative adaptive cruise control (CACC), forward crash warning (FCW), automatic emergency braking (AEB), lane departure warnings (LDW), lane keep assist (LKA), ”) (Arar Paragraph 0151: “LKA systems are a variation of LDW systems. LKA systems provide steering input or braking to correct the vehicle 500 if the vehicle 500 starts to exit the lane”.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The non-transitory computer-readable storage medium of Claim 1, wherein the collision avoidance system further comprises an automated steering controller taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
Regarding claim 8, Hanna in view of Blayvas teaches claim 5, accordingly, the rejection of claim 5 is incorporated above.
Hanna in view of Blayvas does not teach The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises an infrared sensor positioned to observe a pupil of the driver.
However, Arar does teach The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises an infrared sensor positioned to observe a pupil of the driver. (Arar Paragraph 0023: “In some embodiments, the sensor data 102A may correspond to sensor data generated using in-cabin sensors, such as one or more in- cabin cameras, in-cabin near-infrared (NIR) sensors, in-cabin microphones, and/or the like,”). (Arar Paragraph 0024: “The sensor data 102A may be used by a body tracker 104 and/or an eye tracker 106 to determine gestures, postures, activities, eye movements (e.g., saccade velocity, smooth pursuits, gaze locations, directions, or vectors, pupil size, blink rate, road scan range and distribution, etc.)”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises an infrared sensor positioned to observe a pupil of the driver taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
Regarding claim 9, Hanna in view of Blayvas teaches claim 5, accordingly, the rejection of claim 5 is incorporated above.
Hanna in view of Blayvas does not teach The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises a microphone configured to detect speech of the driver.
However, Arar does teach The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises a microphone configured to detect speech of the driver. (Arar Paragraph 0023: “In some embodiments, the sensor data 102A may correspond to sensor data generated using in-cabin sensors, such as one or more in-cabin cameras, in-cabin near-infrared (NIR) sensors, in-cabin microphones, and/or the like,”) (Arar Paragraph 0169: “computing device(s) 600 suitable for use in implementing some embodiments of the present disclosure.”) (Arar Paragraph 0180: “The I/O ports 612 may enable the computing
device 600 to be logically coupled to other devices including the I/O components 614, the
presentation component(s) 618, and/or other components, some of which may be built in to
(e.g., integrated in) the computing device 600. Illustrative I/O components 614 include a
microphone, mouse, keyboard, joystick, game pad, game controller, satellite dish, scanner,
printer, wireless device, etc. The I/O components 614 may provide a natural user interface
(NUI) that processes air gestures, voice, or other physiological inputs generated by a user.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises a microphone configured to detect speech of the driver taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
Regarding claim 12, Hanna in view of Blayvas teaches claim 10, accordingly, the rejection of claim 10 is incorporated above.
Hanna in view of Blayvas does not teach The method of Claim 10, wherein the driver assistance system further comprises an autonomous steering system.
However, Arar does teach The method of Claim 10, wherein the driver assistance system further comprises an autonomous steering system. (Arar Paragraph 0145: “The vehicle 500 may include an ADAS system 538. The ADAS system 538 may include a SoC, in some examples. The ADAS system 538 may include autonomous/adaptive/automatic cruise control (ACC), cooperative adaptive cruise control (CACC), forward crash warning (FCW), automatic emergency braking (AEB), lane departure warnings (LDW), lane keep assist (LKA), ”) (Arar Paragraph 0151: “LKA systems are a variation of LDW systems. LKA systems provide steering input or braking to correct the vehicle 500 if the vehicle 500 starts to exit the lane”.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The method of Claim 10, wherein the driver assistance system further comprises an autonomous steering system taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
Regarding claim 15, Hanna in view of Blayvas teaches claim 10, accordingly, the rejection of claim 10 is incorporated above.
Hanna in view of Blayvas does not teach The method of claim 10, wherein the driver-facing sensor further comprises lidar, an infrared sensor, or a microphone.
However, Arar does teach The method of claim 10, wherein the driver-facing sensor further comprises lidar, an infrared sensor, or a microphone. (Arar Paragraph 0023: “In some embodiments, the sensor data 102A may correspond to sensor data generated using in-cabin sensors, such as one or more in- cabin cameras, in-cabin near-infrared (NIR) sensors, in-cabin microphones, and/or the like,”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The method of claim 10, wherein the driver-facing sensor further comprises lidar, an infrared sensor, or a microphone taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
Regarding claim 16, Hanna in view of Blayvas teaches claim 10, accordingly, the rejection of claim 10 is incorporated above.
Hanna in view of Blayvas does not teach The method of Claim 10, wherein the vehicle sensor further comprises radar, lidar, a vehicle location sensor, a deceleration sensor, a steering angle sensor, a wheel speed sensor, and a brake pressure sensor.
However, Arar does teach The method of Claim 10, wherein the vehicle sensor further comprises radar, lidar, a vehicle location sensor, a deceleration sensor, (Arar Paragraph 0066: “The sensor data may be received from, for example and without limitation, global navigation satellite systems sensor(s) 558 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 560, ultrasonic sensor(s) 562, LiDAR sensor(s) 564, inertial measurement unit (IMU) sensor(s) 566 (e.g., accelerometer(s),”) a steering angle sensor, (Arar Paragraph 0066: “The controller(s) 536 may provide the signals for controlling one or more components and/or systems of the vehicle 500 in response to sensor data received from one or more sensors (e.g., sensor inputs). The sensor data may be received from, for example and without limitation, global navigation satellite systems sensor(s) 558 (e.g., Global Positioning System sensor(s)), RADAR sensor(s) 560, ultrasonic sensor(s) 562, LiDAR sensor(s) 564, inertial measurement unit (IMU) sensor(s) 566 (e.g., accelerometer(s), gyroscope(s), magnetic compass(es), magnetometer(s), etc.), microphone(s) 596, stereo camera(s) 568, wide-view camera(s) 570 (e.g., fisheye cameras), infrared camera(s) 572, surround camera(s) 574 (e.g., 360 degree cameras), long-range and/or mid-range camera(s) 598, speed sensor(s) 544 (e.g., for measuring the speed of the vehicle 500), vibration sensor(s) 542, steering sensor(s) 540, brake sensor(s) (e.g., as part of the brake sensor system 546), and/or other sensor types.”) (Arar Paragraph 0079: “The CAN bus may be read to find steering wheel angle”) (Arar Paragraph 0080: “In some examples, each SoC 504, each controller 536, and/or each computer within the vehicle may have access to the same input data (e.g., inputs from sensors of the vehicle 500), and may be connected to a common bus, such the CAN bus.”) a wheel speed sensor, (Arar Paragraph 0066: “speed sensor(s) 544 (e.g., for measuring the speed of the vehicle 500)”) and a 
brake pressure sensor. (Arar Paragraph 0066: “brake sensor(s) (e.g., as part of the brake sensor system 546)”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The method of Claim 10, wherein the vehicle sensor further comprises radar, lidar, a vehicle location sensor, a deceleration sensor, a steering angle sensor, a wheel speed sensor, and a brake pressure sensor taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
Regarding claim 20, Hanna in view of Blayvas teaches claim 17, accordingly, the rejection of Claim 17 is incorporated above.
Hanna in view of Blayvas does not teach The vehicle safety system of Claim 17, wherein the collision avoidance system further comprises an automatic steering system.
However, Arar does teach The vehicle safety system of Claim 17, wherein the collision avoidance system further comprises an automatic steering system. (Arar Paragraph 0145: “The vehicle 500 may include an ADAS system 538. The ADAS system 538 may include a SoC, in some examples. The ADAS system 538 may include autonomous/adaptive/automatic cruise control (ACC), cooperative adaptive cruise control (CACC), forward crash warning (FCW), automatic emergency braking (AEB), lane departure warnings (LDW), lane keep assist (LKA), ”) (Arar Paragraph 0151: “LKA systems are a variation of LDW systems. LKA systems provide steering input or braking to correct the vehicle 500 if the vehicle 500 starts to exit the lane”.)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The vehicle safety system of Claim 17, wherein the collision avoidance system further comprises an automatic steering system taught by Arar. This would have been for the benefit of providing more efficient systems and methods to compare estimated field of view or gaze information of a user to vehicle perception information corresponding to an environment outside of the vehicle. As a result, interior monitoring of a driver or occupant of the vehicle may be extended to an exterior of the vehicle to determine whether the driver or occupant has processed or seen certain object types, environmental conditions, or other information exterior to the vehicle—e.g., dynamic actors, static objects, vulnerable road users (VRUs), wait condition information, signs, potholes, bumps, debris, etc. [Arar Paragraph 0004]
7. Claim(s) 7 is/are rejected under 35 U.S.C. 103 as being unpatentable over Hanna (US 20180086339 A1) in view of Blayvas (US 20180012085 A1) and further in view of Nave et al. (US10106156 B1) (hereinafter Nave).
Regarding claim 7, Hanna in view of Blayvas teaches claim 5, accordingly, the rejection of claim 5 is incorporated above.
Hanna in view of Blayvas does not teach The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises a lidar sensor positioned to detect head movement of the driver.
However, Nave does teach The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises a lidar sensor positioned to detect head movement of the driver. (Nave Column 14, line number 52-55: “The position of the occupant may include the occupant's body orientation, the location of specific limbs, and/or other positional information. In one example, plurality of sensors 105 may include an in-cabin facing camera, LIDAR”) (Nave Column 28, line number 53-59: “The skeletal positioning may include positioning of the occupant's joints, spine, arms, legs, torso, neck
face, head, major bones, hands, and/or feet. In some embodiments, the internal sensors 105 constantly transmit sensor data to vehicle computer device 110, which constantly determines 920 the positional information of the occupants.”)
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hanna in view of Blayvas to include The non-transitory computer-readable storage medium of Claim 5, wherein the driver-facing sensor further comprises a lidar sensor positioned to detect head movement of the driver taught by Nave. This would have been for the benefit of providing reconstruction of a vehicular crash and, more particularly, a network-based system and method for reconstructing a vehicular crash or other collision based upon sensor data and determining a severity of the vehicular crash based upon the reconstruction. [Nave Column 1, line number 31-36]
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEVIN J HARVEY whose telephone number is 571-272-5327. The examiner can normally be reached 8:00AM-5:00PM M-Th, 8:00AM-4:00PM F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kito Robinson can be reached at 571-270-3921. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/K.J.H./Junior Patent Examiner, Art Unit 3664
/KITO R ROBINSON/Supervisory Patent Examiner, Art Unit 3664