Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to 35 U.S.C. 101 have been fully considered and are persuasive. The previous rejection of record has been withdrawn.
Applicant’s arguments with respect to 35 U.S.C. 102/103 have been considered but are moot (with one exception, discussed below) because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant’s arguments with respect to a portion of claim 11 are not fully persuasive. Specifically, applicant argues: “Through the system of amended claim 11, the inventors have determined that several technical advantages are achieved. Specifically, the time-dynamic functionality provides significant technical advantages by allowing the navigation system to present contextually relevant information that reflects current conditions. A location that may have high driver state event clustering during rush hour may not present the same risk at other times of day, and the dynamic map layer reflects this temporal variation.
None of the cited references, even if considered in combination, disclose or suggest the specific limitations of amended claim 11, particularly "wherein the map layer is dynamically updated based on a time of day to indicate time-specific clustering of the driver state events, and wherein the map layer changes depending on the time of day to coincide with the time-specific clustering of the driver state events." While Fouad describes "mood mapping" generally, Fouad does not teach dynamically updating a map layer based on time of day such that the map layer changes to show different clustering patterns at different times. Fouad's disclosure of mood mapping relates to tracking cognitive states over time during a journey, not to displaying different map layers that change based on the current time of day to reflect time-specific clustering patterns” (emphasis added).
Examiner respectfully disagrees. Although Fouad does not teach a map “layer”, Fouad nonetheless does teach mapping and clustering driver states along a route according to time. The cluster analysis occurs in time, as described in paragraphs [0042] (e.g. “time of day”), [0053] (“ratings of the various segments can vary over time due to changing traffic conditions, an accident, a change in vehicle occupant cognitive state, etc…. [T]he sensor data can include one or more of…time of day, level of daylight…. The renderings of the travel route segments can vary over time based on changing travel route segment rankings”), [0061], [0063], [0101] (e.g. “analysing facial expressions en masse in real time”), or [0123] (“in a real-time or near real-time embodiment”). Such information is also displayed to the user, see, e.g., paragraph [0051].
Claim Objections
Claim 11 is objected to because of the following informalities: at line 12, the claim recites “wherein the navigation guidance comprising a map layer…” (emphasis added). It appears that “comprising” should instead be “comprises”.
Appropriate correction is required.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over US20190162549 by Fouad et al. (hereinafter “Fouad”), further in view of GB2583960A by Saradindubasu (hereinafter “Saradindubasu”), and further in view of US20180075309 by Sathyanarayana et al. (hereinafter “Sathyanarayana”).1
Regarding claim 1, Fouad teaches A method of operation of a navigation system of a vehicle, comprising: detecting a plurality of driver state events via an advanced driver assistance system (ADAS), see for example paragraphs [0038]-[0047], where the vehicle detects drivers’ and passengers’ cognitive states; in particular, paragraph [0043] (as well as, e.g., paragraph [0050]) describes detecting driver cognitive states in other vehicles, including remote vehicles.
generating navigation guidance based on the plurality of driver state events detected via ADAS; see for example paragraphs [0043]-[0044], where the vehicle maps the mood map along a vehicle travel route.
and communicating the navigation guidance to a user of the vehicle via the navigation system, the navigation system comprising a display and the navigation guidance including at least one of a route recommendation and a map layer which are displayed via the display of the navigation system. See for example paragraph [0047], where the system displays a route along a map to a user.
determining, based on the navigation guidance, that the vehicle is approaching a location having high driver state event clustering; see for example paragraphs [0050]-[0051], where cognitive states from multiple vehicles are used to update a map along the vehicle travel route.
Fouad does not explicitly teach wherein detecting the plurality of driver state events comprises analyzing data from a combination of an in-vehicle camera, seat sensors providing cabin occupancy data, and at least one of pedal position sensors and steering wheel sensors; nor does Fouad explicitly teach in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjusting at least one vehicle operating parameter via the ADAS, wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level.
However, Saradindubasu teaches a system wherein detecting the plurality of driver state events comprises analyzing data from a combination of an in-vehicle camera, seat sensors providing cabin occupancy data, and at least one of pedal position sensors and steering wheel sensors. See for example paragraph [025], where the emotional state of vehicle occupants can be measured by interior cameras and pressure sensors on the steering wheel and levers, as well as electrodermal activity meters on the seats and headrest.
Saradindubasu also teaches a system where in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjusting at least one vehicle operating parameter. See for example paragraphs [017]-[020], where the system detects clusters of highly emotional areas and navigates or suggests navigation around them, or paragraphs [015]-[016] and [039] where the vehicle generates an alert. See also paragraphs [039]-[041].
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the clustering system of Fouad with the cluster avoidance system of Saradindubasu with a reasonable expectation of success. Doing so allows the system to detect emotional distress (which can indicate other issues on the route) and generate a safer and less anxiety-inducing route for the occupant.
Saradindubasu (in addition to Fouad) does not explicitly teach wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level. Although Saradindubasu describes various mitigation actions when approaching a high emotional area, such as navigating the vehicle around the area, outputting an alert, or controlling the emotional state of the occupant via soft music and fragrance (see paragraphs [039]-[041]), none of these read on a vehicle operating parameter via the ADAS, wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level.
However, Sathyanarayana suggests a system in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjusting at least one vehicle operating parameter via the ADAS, wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level. See for example paragraphs [0104]-[0105], where the detection of high-risk areas can be used to increase ADAS sensitivity and automatically slow the vehicle down.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the clustering system of Fouad, modified by the cluster avoidance system of Saradindubasu, with the ADAS adjustment system of Sathyanarayana with a reasonable expectation of success. Doing so allows the system to enact safety measures for the occupants’ sake, greatly lowering any upcoming danger.
Regarding claim 11, Fouad teaches A method for navigation, comprising: receiving, at a cloud computing system, externally provided data comprising a plurality of driver state events from a plurality of vehicles, each driver state event tagged with a location of occurrence and a time of occurrence; see for example paragraphs [0038]-[0047], where the vehicle detects drivers’ and passengers’ cognitive states; in particular, paragraph [0043] (as well as, e.g., paragraph [0050]) describes detecting driver cognitive states in other vehicles, including remote vehicles.
performing, via the cloud computing system, a cluster analysis on the plurality of driver state events to identify statistically significant clusters of driver state events; see paragraphs [0092]-[0097], where the system clusters information from large numbers of people and relates it to a navigation route.
generating, via the cloud computing system, a navigation route for a vehicle based on navigation guidance determined from at least one of internally provided data collected within the vehicle and the externally provided data from the cloud computing system, wherein the internally provided data comprises data of a driver profile received from a plurality of images of a driver of the vehicle obtained via an in-vehicle camera, wherein the navigation guidance comprising a map…; see for example paragraphs [0043], [0053], or [0057], where the system generates a route based on the cognitive states of the ego vehicle and other vehicles based on cluster analysis. The cluster analysis occurs in time, as described in paragraphs [0042] (e.g. “time of day”), [0053], [0061], [0063], and [0101] (e.g. “analysing facial expressions en masse in real time”).
communicating the navigation route to a user of the vehicle via a display of a navigation system housed inside the vehicle; see for example paragraph [0047], where the system displays a route along a map to a user.
determining, based on the navigation route, that the vehicle is approaching a location having high driver state event clustering; see for example paragraphs [0050]-[0051], where cognitive states from multiple vehicles are used to update a map along the vehicle travel route.
Fouad does not explicitly teach a map layer of clustered information; nor does Fouad explicitly teach in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjusting at least one vehicle operating parameter via an advanced driver assistance system (ADAS) of the vehicle by adjusting the at least one vehicle operating parameter.
However, Saradindubasu teaches a map layer of clustered information. See for example paragraph [032], where the emotional values can be displayed in a map layer.
Saradindubasu also teaches a system where in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjusting at least one vehicle operating parameter. See for example paragraphs [017]-[020], where the system detects clusters of highly emotional areas and navigates or suggests navigation around them, or paragraphs [015]-[016] and [039] where the vehicle generates an alert. See also paragraphs [039]-[041].
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the clustering system of Fouad with the cluster avoidance system of Saradindubasu with a reasonable expectation of success. Doing so allows the system to detect emotional distress (which can indicate other issues on the route) and generate a safer and less anxiety-inducing route for the occupant.
Saradindubasu (in addition to Fouad) does not explicitly teach adjusting at least one vehicle operating parameter. Although Saradindubasu describes various mitigation actions when approaching a high emotional area, such as navigating the vehicle around the area, outputting an alert, or controlling the emotional state of the occupant via soft music and fragrance (see paragraphs [039]-[041]), none of these read on a vehicle operating parameter.
However, Sathyanarayana suggests a system in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjusting at least one vehicle operating parameter via an advanced driver assistance system (ADAS) of the vehicle by adjusting the at least one vehicle operating parameter. See for example paragraphs [0104]-[0105], where the detection of high-risk areas can be used to increase ADAS sensitivity and automatically slow the vehicle down.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the clustering system of Fouad, modified by the cluster avoidance system of Saradindubasu, with the ADAS adjustment system of Sathyanarayana with a reasonable expectation of success. Doing so allows the system to enact safety measures for the occupants’ sake, greatly lowering any upcoming danger.
Regarding claim 18, Fouad teaches A vehicle system, comprising: one or more processors; an in-vehicle camera housed within a cabin of the vehicle; an advanced driver assistance system (ADAS); a navigation system comprising a display; and a non-transitory memory including instructions that, when executed, cause the one or more processors to: see for example paragraphs [0054]-[0056] describing the components of a vehicle.
detect driver state events in the vehicle based on driver images acquired by the in-vehicle camera and analysis of the driver images performed by the ADAS, see for example paragraphs [0038]-[0047], where the vehicle detects drivers’ and passengers’ cognitive states; in particular, paragraph [0043] (as well as, e.g., paragraph [0050]) describes detecting driver cognitive states in other vehicles, including remote vehicles.
report the driver state events to a cloud computing platform in an anonymized manner and to a driver profile specific to a driver imaged by the in-vehicle camera; see for example paragraph [0119], where the data is transmitted to a server for analysis. See also paragraph [0040], where the system learns driver profiles.
receive navigation guidance from the cloud computing platform, the navigation guidance determined by one of the cloud computing platform based on the driver state events reported by a plurality of vehicles and the driver profile specific to the driver of the vehicle, wherein the navigation guidance comprises a map…; see for example paragraphs [0043], [0053], or [0057], where the system generates a route based on the cognitive states of the ego vehicle and other vehicles based on cluster analysis; see also paragraph [0047], where the system displays a route along a map to a user.
determine, based on the received navigation guidance, that the vehicle is approaching a location having high driver state event clustering; see for example paragraphs [0050]-[0051], where cognitive states from multiple vehicles are used to update a map along the vehicle travel route.
Fouad does not explicitly teach: 1) wherein detecting the driver state events comprises analyzing data from a combination of the in-vehicle camera, seat sensors, and at least one of pedal position sensors and steering wheel sensors; nor does Fouad explicitly teach 2) a map layer of clustered information; nor does Fouad explicitly teach a system that will 3) provide, via the display of the navigation system, an interactive user interface comprising: a first toggle control selectable by a user to switch between displaying and not displaying the map layer; and a second toggle control selectable by the user to switch between receiving and not receiving the route recommendation; wherein the map layer and the route recommendation are displayed via the navigation system based on user input received via the first toggle control and the second toggle control, respectively; nor does Fouad explicitly teach 4) in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjust at least one vehicle operating parameter via the ADAS, wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level.
However, Saradindubasu teaches a system wherein detecting the driver state events comprises analyzing data from a combination of the in-vehicle camera, seat sensors, and at least one of pedal position sensors and steering wheel sensors. See for example paragraph [025], where the emotional state of vehicle occupants can be measured by interior cameras and pressure sensors on the steering wheel and levers, as well as electrodermal activity meters on the seats and headrest.
Saradindubasu also teaches a system that will provide, via the display of the navigation system, an interactive user interface comprising: a first toggle control selectable by a user to switch between displaying and not displaying the map layer; and a second toggle control selectable by the user to switch between receiving and not receiving the route recommendation; wherein the map layer and the route recommendation are displayed via the navigation system based on user input received via the first toggle control and the second toggle control, respectively. See for example paragraph [032], where the layers (including the driver state layer) are activatable based on user selection, and the user can also turn the layer off.
Saradindubasu also teaches a map layer of clustered information. See for example paragraph [032], where the emotional values can be displayed in a map layer.
Saradindubasu also teaches a system where in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjust at least one vehicle operating parameter. See for example paragraphs [017]-[020], where the system detects clusters of highly emotional areas and navigates or suggests navigation around them, or paragraphs [015]-[016] and [039] where the vehicle generates an alert. See also paragraphs [039]-[041].
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the clustering system of Fouad with the cluster avoidance system of Saradindubasu with a reasonable expectation of success. Doing so allows the system to detect emotional distress (which can indicate other issues on the route) and generate a safer and less anxiety-inducing route for the occupant.
Saradindubasu (in addition to Fouad) does not explicitly teach wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level. Although Saradindubasu describes various mitigation actions when approaching a high emotional area, such as navigating the vehicle around the area, outputting an alert, or controlling the emotional state of the occupant via soft music and fragrance (see paragraphs [039]-[041]), none of these read on a vehicle operating parameter via the ADAS, wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level.
However, Sathyanarayana suggests a system in response to determining that the vehicle is approaching the location having high driver state event clustering, automatically adjust at least one vehicle operating parameter via the ADAS, wherein the at least one vehicle operating parameter comprises at least one of lane-keeping assistance level, vehicle speed, and driver alert output level. See for example paragraphs [0104]-[0105], where the detection of high-risk areas can be used to increase ADAS sensitivity and automatically slow the vehicle down.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the clustering system of Fouad, modified by the cluster avoidance system of Saradindubasu, with the ADAS adjustment system of Sathyanarayana with a reasonable expectation of success. Doing so allows the system to enact safety measures for the occupants’ sake, greatly lowering any upcoming danger.
Regarding claim 2, Fouad teaches wherein the plurality of driver state events is detected via one of a plurality of different ADAS for a plurality of drivers and an individual driver of the vehicle, wherein the individual driver of the vehicle is the user of the vehicle, wherein the method further comprises: outputting driver state events for the plurality of drivers to a cloud computing system and outputting driver state events for the individual driver to a driver profile. See again paragraphs [0043] and [0050], where the system detects cognitive state data from occupants of other vehicles and other vehicle drivers, reading at least on a plurality of different ADAS for a plurality of drivers. See also paragraphs [0040]-[0044], where the system maps cognitive state data from a driver along the multiple route segments, reading at least on an individual driver of the vehicle. In particular, paragraph [0040] describes using the data to create user profiles, paragraph [0061] describes collecting the cognitive state data in cloud storage, and paragraph [0070] states that the data can be analyzed on a remote server.
Regarding claim 3, Fouad teaches wherein generating the navigation guidance based on the plurality of driver state events detected via the ADAS comprises: detecting each driver state event of the plurality of driver state events via the ADAS; and tagging each driver state event of the plurality of driver state events with a location of occurrence. In addition to paragraph [0043], see also paragraph [0057], where cognitive states of the driver are mapped to location data.
Regarding claim 4, Fouad teaches wherein generating the navigation guidance based on the plurality of driver state events detected via the ADAS further comprises tagging each driver state event of the plurality of driver state events with a time of occurrence. See again paragraph [0057], where the cognitive states can be mapped to a timeline.
Claim 13 has similar limitations to claims 3-4 above, and is therefore rejected using a similar rationale.
Regarding claim 5, Fouad teaches wherein generating the navigation guidance based on the plurality of driver state events detected via the ADAS further comprises statistically grouping the plurality of driver state events with respect to location of occurrence. See for example paragraphs [0096]-[0097], where the cognitive state data is clustered based on location, among other things.
Claim 14 has similar limitations to claim 5 above, and is therefore rejected using a similar rationale.
Regarding claim 6, Fouad teaches wherein generating the navigation guidance based on the plurality of driver state events detected via the ADAS further comprises statistically grouping the plurality of driver state events with respect to time of occurrence within the location of occurrence. See again paragraphs [0096]-[0097], where the system clusters the cognitive state data based on location and time, and the travel route information is updated based on that data.
Regarding claim 7, Fouad teaches wherein statistically grouping the plurality of driver state events comprises performing a cluster analysis. See again paragraphs [0096]-[0097], where the system clusters the cognitive state data based on location and time.
Claim 15 has similar limitations to claims 6 and 7 above, and is therefore rejected using a similar rationale.
Regarding claim 8, Fouad teaches wherein detecting each driver state event of the plurality of driver state events via the ADAS comprises: receiving images of the plurality of drivers of a plurality of vehicles at the ADAS; analyzing facial structures in the received images of the plurality of drivers to determine a state of each of the plurality of drivers; and outputting a driver state event indication in response to the state being one or more of asleep, tired, and distracted. See again paragraphs [0043] and [0050], where the system detects cognitive state data from occupants of other vehicles and other vehicle drivers. See also paragraph [0061], which describes collecting the cognitive state data in cloud storage, and paragraph [0070], which states that the data can be analyzed on a remote server. See also paragraphs [0041]-[0042], where the system analyzes the facial expressions. See also paragraph [0037], where the driver states can include drowsiness, fatigue, distraction, and impairment, where impairment, fatigue, and drowsiness can all read on asleep.
Claims 12 and 17 have similar limitations to different parts of claim 8 above, and are therefore rejected using a similar rationale.
Regarding claim 9, Fouad teaches wherein the map layer comprises a…. See again, for example, paragraphs [0044] and [0050], where the system creates a mood map of driver states along route segments.
Fouad does not explicitly teach a heat map.
However, Sathyanarayana teaches a system wherein risk data can be displayed as a heat map. See for example paragraph [061], where risk data is displayed in a heatmap according to its location.
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the clustering system of Fouad, modified by the cluster avoidance system of Saradindubasu, with the ADAS adjustment system of Sathyanarayana with a reasonable expectation of success. Doing so allows the system to enact safety measures for the occupants’ sake, greatly lowering any upcoming danger.
Claim 20 has similar limitations to claims 8, 9, and 10 above, and is therefore rejected using a similar rationale.
Regarding claim 10, Fouad teaches wherein the route recommendation reduces vehicle travel through locations and travel times having high driver state event clustering. See again paragraphs [0044]-[0048], where the system updates a route based on the mood map. In particular, paragraph [0044] describes that the route should avoid stressful cognitive state locations along the way.
Regarding claim 16, Fouad teaches wherein the route recommendation reduces an extent of travel through locations having statistically significant clusters of driver state events based on at least one of internally provided data collected within the vehicle and externally provided data, wherein the route recommendation generated based on the internally provided data is individualized to a driver of the vehicle and the route recommendation generated based on the externally provided data is generalized based on data of a plurality of drivers uploaded to a cloud computing system. See again paragraphs [0044]-[0048], where the system updates a route based on the mood map. In particular, paragraph [0044] describes that the route should avoid stressful cognitive state locations along the way.
Regarding claim 19, Fouad teaches wherein the non-transitory memory further includes further instructions that, when executed, cause the one or more processors to: output the navigation guidance to the display of the navigation system within the vehicle, the navigation guidance comprising at least one of a map layer and a route recommendation. See for example paragraph [0047], where the system displays a recommended route along a map to a user.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORDAN THOMAS SMITH whose telephone number is (571)272-0522. The examiner can normally be reached Monday - Friday, 9am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Anne Antonucci, can be reached at (313) 446-6519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JORDAN T SMITH/Examiner, Art Unit 3666
/ANNE MARIE ANTONUCCI/Supervisory Patent Examiner, Art Unit 3666
1 Examiner is first rejecting the independent claims (1, 11, and 18) because their limitations are sufficiently dissimilar to each other, then rejecting the dependent claims, many of which are similar to each other.