Response to Amendment
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
In independent claim 20, the limitation “the communications unit” has been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because it uses a generic placeholder “unit” coupled with the functional language “configured to send the vehicle based location data” without reciting sufficient structure to achieve the function.
Since the claim limitation invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, claim 20 has been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof.
A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph limitation: the communications unit (see Para. 33) is disclosed in Applicant’s specification as MQ Telemetry Transport (MQTT). MQTT is a lightweight, open messaging protocol that provides a simple way to wirelessly communicate telemetry information in a bandwidth- and/or resource-limited environment.
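By way of illustration only, the following sketch shows how telemetry such as vehicle-based location data might be framed for publication over a lightweight protocol like MQTT. The topic scheme and field names here are hypothetical and are not drawn from Applicant’s specification:

```python
import json

def build_flood_report(vehicle_id: str, lat: float, lon: float, water_level_cm: float):
    """Build a hypothetical MQTT-style (topic, payload) pair for a flood report."""
    topic = f"fleet/{vehicle_id}/flood"  # hypothetical topic scheme
    payload = json.dumps({"lat": lat, "lon": lon, "water_level_cm": water_level_cm})
    return topic, payload

topic, payload = build_flood_report("veh42", 29.76, -95.37, 18.0)
# A real MQTT client (e.g., paho-mqtt) would then publish with client.publish(topic, payload)
```

The small JSON payload and hierarchical topic string reflect MQTT's suitability for bandwidth- and resource-limited environments.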
If applicant wishes to provide further explanation or dispute the examiner’s interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action.
If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim recites sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 5, 8, 10-12, 14, 16, 18, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over Faizan et al. (US 20210272440 A1) in view of Kapil (US 20240230370 A1) and further in view of Ghannam et al. (US 10255782 B1).
In regard to claim 1, Faizan teaches a flood detection and alert method for detecting flooding and sending alerts to one or more vehicle(s) (Faizan, Fig. 7, Para. 48, provides a kind of flash flood or any puddles detecting system that may be mounted on vehicle 25; The CDAU may include a speaker and a display screen to provide early warning of prompt information to the user device. In some examples, the control Display and Alarm Unit (CDAU) 24 may be connected to an external network 26. In such examples, vehicle 25 may communicate with the external network 26), the method comprising the steps of: obtaining sensor readings with one or more water level sensor(s) and using the obtained sensor readings to provide sensor-based data, the water level sensor(s) are mounted on the host vehicle (Faizan, Para. 48, the flash flood or any puddles detecting System includes water level sensor, CDAU, Sonar flowmeter, Extendable retractable system, and Extendable retractable shaft. The sensor may be mounted on the predetermined position of the vehicle.); obtaining location readings with one or more vehicle position sensor(s) and using the obtained location readings to provide vehicle-based location data, the vehicle position sensor(s) are mounted on the host vehicle (Faizan, Para. 53, the control Display and Alarm Unit (CDAU) 24 may include a global positioning (GPS) receiver that collects coordinates of the vehicle along with information of the water depth and water speed and send it using GSM module); using real-time flooding information to determine if there is a flooding event nearby the host vehicle, the real-time flooding information includes at least one of the image-based data or the sensor-based data (Faizan, Para. 50, the processor from the water level sensor 23 obtains the water level value and then detects the water level value and the first preset threshold. 
When detecting water level value less than the first preset threshold, it is preset that the control output CDAU may be used to indicate current first information, and when detecting that water level value is more than or equal to the first preset threshold then the water level sensor sends a signal to a Control, Display and Alert Unit (CDAU) screen 22); sending the vehicle-based location data from the host vehicle to a backend portion of a cloud-based system when a flooding event nearby the host vehicle is determined (Faizan, Para. 53, the control Display and Alarm Unit (CDAU) 24 may include a global positioning (GPS) receiver that collects coordinates of the vehicle along with information of the water depth and water speed and send it using GSM module, to be stored in a computer server); using the vehicle-based location data to identify one or more affected flooding area(s) (Faizan, Para. 53, the control Display and Alarm Unit (CDAU) 24 may include a global positioning (GPS) receiver that collects coordinates of the vehicle along with information of the water depth and water speed and send it using GSM module, to be stored in a computer server); monitoring locations of a plurality of vehicles with the backend portion of the cloud-based system to determine if any of the vehicles enter or are expected to enter the affected flooding area(s) (Faizan, Para. 56, a plurality of users may be driving along a seaway, whereby there is already an alert of the flash flood by the authorities. Out of the plurality of users, a user operates the disclosed system and may be alarmed by the system of the possibility of getting stuck in the water); and sending a real-time flooding notification from the backend portion of the cloud-based system to one or more vehicle(s) when the vehicle(s) enter or are expected to enter the affected flooding area(s) (Faizan, Para.
56, At this instance, the other users communicatively attached to the present system may get an alert of a possible drowning in the area where the user's car has been stuck. Hence, this may safeguard others from entering into the dangerous area).
Faizan does not teach obtaining images with one or more camera(s) and using the obtained images to provide image-based data, the camera(s) are mounted on a host vehicle; wherein the using vehicle-based location data step further comprises using vehicle-based location data to establish a geofence around the host vehicle, and wherein the geofence is an adjusted geofence that has a size, shape and/or location that is at least partially based on the severity of flooding, and the severity of flooding is at least partially based on the image-based data and/or the sensor-based data.
However, Kapil teaches obtaining images with one or more camera(s) and using the obtained images to provide image-based data, the camera(s) are mounted on a host vehicle (Kapil, Fig. 3; Para. 38, using a different number of sensors that are mounted on AV 102 as well as different types and/or different positions of sensors that may be configured to detect a precipitation condition and/or measure a level of precipitation on a road in real-time; Para. 42, data center 310 is configured to collect and store data from various data sources that can help predict and determine a potential, imminent, or occurring flood event/condition. In some cases, data center 310 can receive and/or store sensor data that is captured by the sensor(s) of AV 102 (e.g., sensor systems 104-108 such as a camera, a LiDAR, a RADAR, a rain gauge, a rain sensor, etc.)); wherein the using vehicle-based location data step further comprises using vehicle-based location data to establish a geofence around the host vehicle (Kapil, Para. 47, system 300 can leverage a combination of different types of data (e.g., sensor data, fleet data, map data, absorbability data, historical data, and/or third-party weather data as described above) stored in data center 310 to predict and detect a flood condition on road; Para. 49, fleet management system 320 and/or AV 102 can generate and/or provide an updated map based on the combination of available data associated with a potential, imminent, or occurring flood condition. A map can be updated continuously as the data in data center 310 can be collected and monitored in real time. the updated map can include visualized markings of safe and/or dangerous zones or spots based on the predictions and detection of the flood condition), and wherein the geofence is an adjusted geofence that has a size, shape and/or location that is at least partially based on the severity of flooding (Kapil, Fig. 2, Para. 
39, if a level or amount of precipitation on the road (e.g., ground surface 202), on street gutter 210, and/or potholes 220 exceeds a predetermined precipitation threshold, the systems and techniques described herein can identify street gutter 210 and potholes 220 as a location affected by or in a flood condition; Para. 49, the updated map can include visualized markings of safe and/or dangerous zones or spots based on the predictions and detection of the flood condition; the updated map can show a progression rate of precipitation on road and/or a flood probability or flood risk, which can be calculated based on the combination of data available in data center 310), and the severity of flooding is at least partially based on the image-based data and/or the sensor-based data (Kapil, Para. 44, when coupled with the measurements associated with a level of precipitation based on the sensor data, the map data can provide the probability of flood (or flood risk) or the progression rate of precipitation at certain locations on a map).
Faizan and Kapil are analogous art because they both pertain to flood monitoring and providing guidance to vehicles.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to generate a map with visualized markings of safe and/or dangerous zones based on vehicle location and environmental data (as taught by Kapil) in order to provide a visualization of the flood condition at the one or more locations on a map (Kapil, Para. 76).
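By way of illustration only, an adjusted geofence whose size depends on flooding severity could be sketched as follows. The linear scaling formula is hypothetical and is not disclosed by Faizan or Kapil:

```python
def adjusted_geofence_radius(base_radius_m: float, severity: float) -> float:
    """Scale a circular geofence radius by a flood-severity score in [0.0, 1.0].

    The severity score is assumed to be derived from image-based and/or
    sensor-based data upstream; here it simply enlarges the fence.
    """
    severity = max(0.0, min(1.0, severity))  # clamp the score to [0, 1]
    return base_radius_m * (1.0 + 2.0 * severity)  # up to 3x the base radius
```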
The combination of Faizan and Kapil does not teach wherein the using real-time flooding information step further comprises using one of the image-based data or the sensor-based data to determine if there is a potential flooding event nearby the host vehicle, and using the other of the image-based data or the sensor-based data to determine if there is a confirmed flooding event nearby the host vehicle.
However, Ghannam teaches wherein the using real-time flooding information step further comprises using one of the image-based data or the sensor-based data to determine if there is a potential flooding event nearby the host vehicle (Ghannam, Fig. 4, step 404; Col. 12, lines 4-18, the control module 126 determines whether a flood characteristic is detected at and/or near the vehicle 100. the control module 126 detects a flood characteristic based upon (ii) image(s) and/or video collected by one or more of the cameras 114); and using the other of the image-based data or the sensor-based data to determine if there is a confirmed flooding event nearby the host vehicle (Ghannam, Fig. 4, steps 406-412; Col. 12, lines 19-41, the control module 126 collects the humidity measurement from the humidity sensor 120 of the engine 104. At block 410, the control module 126 compares the humidity measurement to the humidity level. At block 412, the control module 126 determines whether there is a flooding event).
Faizan, Kapil, and Ghannam are analogous art because they all pertain to flood detection systems.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to collect a second sensor measurement in response to detecting a flood characteristic from a first sensor measurement (as taught by Ghannam), resulting in the predictable result of detecting a flooding event at the vehicle.
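By way of illustration only, the two-stage potential/confirmed determination mapped to Ghannam above can be sketched as follows (hypothetical code illustrating the claimed sequence, not code from the reference):

```python
def classify_flood_event(first_modality_flood: bool, second_modality_flood: bool) -> str:
    """Two-stage check: one data modality (image-based or sensor-based) flags a
    potential flooding event, and the other modality confirms it."""
    if not first_modality_flood:
        return "none"
    return "confirmed" if second_modality_flood else "potential"
```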
In regard to claim 5, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the obtaining location readings step further comprises obtaining location readings with a global positioning system (GPS) unit located within the vehicle position sensor (Faizan, Para. 53, the control Display and Alarm Unit (CDAU) 24 may include a global positioning (GPS) receiver that collects coordinates of the vehicle along with information of the water depth and water speed and send it using GSM module).
In regard to claim 8, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the using real-time flooding information step further comprises using the sensor-based data and a water level evaluation technique to determine if there is a flooding event nearby the host vehicle (Faizan, Para. 50, the processor from the water level sensor 23 obtains the water level value and then detects the water level value and the first preset threshold. When detecting water level value less than the first preset threshold, it is preset that the control output CDAU may be used to indicate current first information, and when detecting that water level value is more than or equal to the first preset threshold then the water level sensor sends a signal to a Control, Display and Alert Unit (CDAU) screen 22).
In regard to claim 10, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the using real-time flooding information step further comprises using the image-based data from the camera(s) to determine if there is a potential flooding event nearby the host vehicle (Ghannam, Fig. 4, step 404; Col. 12, lines 4-18, the control module 126 determines whether a flood characteristic is detected at and/or near the vehicle 100. the control module 126 detects a flood characteristic based upon (ii) image(s) and/or video collected by one or more of the cameras 114), and then using the sensor-based data from the water level sensor(s) to determine if there is a confirmed flooding event nearby the host vehicle, wherein the confirmed flooding event is determined after the potential flooding event (Ghannam, Fig. 4, steps 406-412; Col. 12, lines 19-41, the control module 126 collects the humidity measurement from the humidity sensor 120 of the engine 104. At block 410, the control module 126 compares the humidity measurement to the humidity level. At block 412, the control module 126 determines whether there is a flooding event; Col. 11, lines 54-67, While the example program is described with reference to the flowchart illustrated in FIG. 4, many other methods of implementing the example control module 126 may alternatively be used. For example, the order of execution of the blocks may be rearranged, changed, eliminated, and/or combined to perform the method 400; Therefore, it would be obvious to combine data collected by the water level sensor 118 with the humidity sensor to confirm a flooding event).
In regard to claim 11, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the using real-time flooding information step further comprises using the sensor-based data from the water level sensor(s) to determine if there is a potential flooding event nearby the host vehicle (Ghannam, Fig. 4, step 404; Col. 12, lines 4-18, the control module 126 determines whether a flood characteristic is detected at and/or near the vehicle 100. the control module 126 detects a flood characteristic based upon (iv) data collected by the water level sensor 118), and then using the image-based data from the camera(s) to determine if there is a confirmed flooding event nearby the host vehicle, wherein the confirmed flooding event is determined after the potential flooding event (Ghannam, Fig. 4, steps 406-412; Col. 12, lines 19-41, the control module 126 collects the humidity measurement from the humidity sensor 120 of the engine 104. At block 410, the control module 126 compares the humidity measurement to the humidity level. At block 412, the control module 126 determines whether there is a flooding event; Col. 11, lines 54-67, While the example program is described with reference to the flowchart illustrated in FIG. 4, many other methods of implementing the example control module 126 may alternatively be used. For example, the order of execution of the blocks may be rearranged, changed, eliminated, and/or combined to perform the method 400; Therefore, it would be obvious to combine image(s) and/or video collected by one or more of the cameras 114 with the humidity sensor to confirm a flooding event).
In regard to claim 12, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the sending vehicle-based location data step further comprises wirelessly sending the vehicle-based location data from a telematics unit on the host vehicle to the backend portion of the cloud-based system via a lightweight messaging protocol, the vehicle-based location data indicates the location of the flooding event (Faizan, Para. 53, the control Display and Alarm Unit (CDAU) 24 may include a global positioning (GPS) receiver that collects coordinates of the vehicle along with information of the water depth and water speed and send it using GSM module, to be stored in a computer server).
In regard to claim 14, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the geofence is either a distance-based geofence that is generally centered on the location of the host vehicle or a geography-based geofence that is generally based on a geographic or topographical feature located near the host vehicle (Kapil, Fig. 2; Para. 43, fleet management system 320 can provide sensor data that is captured by one or more AVs in a fleet that are navigating on road. The fleet sensor data can enable real-time data collection associated with a flood condition that covers a geographic area on the road).
In regard to claim 16, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the geofence is a combined geofence that has a size, shape and/or location that is at least partially based on real-time flooding information reported from a plurality of vehicles (Kapil, Para. 57, process 400 can include collecting sensor data from one or more AVs. In some examples, the sensor data can include measurements associated with a level of precipitation in one or more locations. For example, while AV 102 is navigating on a road, various sensors of AV 102 (e.g., sensor systems 104-108) can capture and collect sensor data that includes measurements associated with a level of precipitation (e.g., a water level or an amount of precipitation on the surface of the ground)), the combined geofence encompasses a plurality of individual geofenced areas each associated with one of the plurality of vehicles, where the individual geofenced areas have been combined or merged into a larger geofenced area of the combined geofence (Kapil, Fig. 2, Para. 61, process 400 can include determining a progression rate of precipitation based on a combination of the sensor data from the one or more AVs, the elevation of ground at the one or more locations, the absorbability data, and the historical data; Para. 49, the updated map can include visualized markings of safe and/or dangerous zones or spots based on the predictions and detection of the flood condition; the updated map can show a progression rate of precipitation on road and/or a flood probability or flood risk, which can be calculated based on the combination of data available in data center 310).
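By way of illustration only, merging individual per-vehicle geofenced areas into a larger combined geofence can be sketched as follows. The use of circular fences and one bounding box per merged group is a hypothetical simplification, not an implementation drawn from Kapil:

```python
import math

def circles_overlap(a, b):
    """True if two circular geofences (x, y, radius) touch or overlap."""
    (ax, ay, ar), (bx, by, br) = a, b
    return math.hypot(ax - bx, ay - by) <= ar + br

def merge_geofences(circles):
    """Group overlapping circular geofences; return one bounding box per group."""
    groups = []
    for c in circles:
        touching = [g for g in groups if any(circles_overlap(c, m) for m in g)]
        for g in touching:                      # pull in every group this circle touches
            groups.remove(g)
        groups.append([c] + [m for g in touching for m in g])
    return [(
        min(x - r for x, y, r in g), min(y - r for x, y, r in g),
        max(x + r for x, y, r in g), max(y + r for x, y, r in g),
    ) for g in groups]
```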
In regard to claim 18, the combination of Faizan, Kapil, and Ghannam teaches the flood detection and alert method of claim 1, wherein the sending a real-time flooding notification step further comprises sending a real-time flooding notification from the backend portion of the cloud-based system to the vehicle(s) that includes at least one piece of data selected from the list consisting of: a warning of flooding in the affected flooding area, location data of the affected flooding area, an alternative navigation route to avoid the affected flooding area, real-time flooding information previously provided by the host vehicle, or weather or traffic reports (Kapil, Para. 62, process 400 can include transmitting an alert signal to the one or more AVs to notify a flood condition at the one or more locations. For example, fleet management system 320 can transmit an alert signal to AV 102 to notify a flood condition in an environment near AV 102 so AV 102 can route/re-route to avoid the locations or spots that are in a flood condition).
In regard to claims 20 and 21, the claims are interpreted and rejected for the same reasons as stated in the rejection of claim 1 as stated above.
In regard to claim 22, the claim is interpreted and rejected for the same reasons as stated in the rejection of claim 16 as stated above.
Claims 2, 6, and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Faizan et al. (US 20210272440 A1) in view of Kapil (US 20240230370 A1) and Ghannam et al. (US 10255782 B1), and further in view of Avadhanam et al. (US 20220012988 A1).
In regard to claim 2, the combination of Faizan, Kapil, and Ghannam does not teach the flood detection and alert method of claim 1, wherein the obtaining images step further comprises obtaining images with a 360˚ camera that is mounted on a roof of the host vehicle.
However, the concept of using a 360-degree camera mounted on the roof of a vehicle to capture images around the vehicle is well known in the art, as also taught by Avadhanam. Avadhanam teaches wherein the obtaining images step further comprises obtaining images with a 360˚ camera that is mounted on a roof of the host vehicle (Avadhanam, Fig. 5A, surround camera 574; Para. 183, the processing circuitry receives, from one or more sensors, data indicative of a trajectory of an object external to the vehicle. the processing circuitry receives data from at least one of surround camera(s) 574 (e.g., 360 degree cameras)). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to use a 360-degree camera (as taught by Avadhanam), resulting in the predictable result of detecting a flooded road (Avadhanam, Para. 161, the updates to the map information 594 may include updates for the HD map 522, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions).
In regard to claim 6, the combination of Faizan, Kapil, Ghannam, and Avadhanam teaches the flood detection and alert method of claim 1, wherein the using real-time flooding information step further comprises using the image-based data and an object detection technique to determine if there is a flooding event nearby the host vehicle (Avadhanam, Para. 161, the updates to the map information 594 may include updates for the HD map 522, such as information regarding construction sites, potholes, detours, flooding, and/or other obstructions; Para. 184, the processing circuitry determines one or more attributes of the object external to the vehicle. In some embodiments, the processing circuitry is communicatively coupled via a network 590 to one or more neural networks 592 to determine the one or more attributes of the object. In some embodiments, the processing circuitry determines one additional attribute including at least one of a location attribute, a weather attribute, or a driving condition attribute).
In regard to claim 7, the combination of Faizan, Kapil, Ghannam, and Avadhanam teaches the flood detection and alert method of claim 6, wherein the object detection technique is carried out by vehicle resources mounted on the vehicle (Avadhanam, Para. 191, The processing circuitry uses a machine learning model to determine the attributes. In some embodiments, the machine learning model uses inference and/or training logic 515 to perform the training and interference modeling of the neural networks. The training of the neural networks uses training dataset 602 of a myriad of object types and different orientations and/or attributes (e.g., people of all ages, body types, and in different postures and different clothing). The training framework 604 facilitates the learning of the neural network using the training dataset 602) and is selected from at least one of the following families of techniques: regions with convolutional neural networks (R-CNN) family of techniques or you-only-look-once (YOLO) family of techniques (Avadhanam, Para. 96, The term “CNN,” as used herein, may include all types of CNNs, including region-based or regional convolutional neural networks (RCNNs) and Fast RCNNs (e.g., as used for object detection)).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Faizan et al. (US 20210272440 A1) in view of Kapil (US 20240230370 A1) and Ghannam et al. (US 10255782 B1), and further in view of Chen et al. (US 20030222768 A1).
In regard to claim 3, the combination of Faizan, Kapil, and Ghannam does not specifically teach the flood detection and alert method of claim 1, wherein the obtaining sensor readings step further comprises obtaining sensor readings with four water level sensors, each water level sensor is mounted at or near a different wheel or corner of the host vehicle.
However, the concept of using multiple water level sensors is well known in the art, as also taught by Chen. Chen teaches that two water level sensors are used and respectively installed in the bottom side of the front bumper and the bottom side of the rear bumper. Alternatively, more than three water level sensors may be used and equally spaced in the bottom side of the floor of the motor vehicle. When multiple water level sensors are used, the microprocessor calculates the data obtained from the water level sensors through a Boolean operator and then makes a logic judgment (Chen, Para. 10-11). Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to use four water level sensors (as taught by Chen), resulting in the predictable result of detecting the water level on the road.
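By way of illustration only, the Boolean combination of multiple water level sensor readings described in Chen can be sketched as follows (hypothetical code, not drawn from the reference):

```python
def flood_judgment(sensor_wet_flags, require_all: bool = False) -> bool:
    """Combine per-sensor water-detected flags with a Boolean operator.

    With require_all=False, any single wet sensor triggers the judgment (OR);
    with require_all=True, every sensor must be wet (AND).
    """
    flags = list(sensor_wet_flags)
    return all(flags) if require_all else any(flags)
```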
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Faizan et al. (US 20210272440 A1) in view of Kapil (US 20240230370 A1) and Ghannam et al. (US 10255782 B1), and further in view of Chen et al. (US 20030222768 A1) and Salter et al. (US 20210213976 A1).
In regard to claim 4, the combination of Faizan, Kapil, Ghannam, and Chen does not teach the flood detection and alert method of claim 3, wherein each water level sensor is mounted in a different wheel well of the host vehicle and indicates if a water level at that particular wheel well is below a bottom of the sensor, is between the bottom and a top of the sensor, or is above the top of the sensor.
However, Salter teaches wherein each water level sensor is mounted in a different wheel well of the host vehicle and indicates if a water level at that particular wheel well is below a bottom of the sensor, is between the bottom and a top of the sensor, or is above the top of the sensor (Salter, Para. 64, vehicle 1103 may receive capacitive signals from capacitive sensors in the vehicle wheel well of that vehicle, and determine that the first capacitive signal exceeds a threshold value, due to the possible presence of standing or moving floodwater that covers that respective sensor (not shown in FIG. 12). The capacitive water depth sensor system 1135 may receive a second capacitive signal from a second capacitive sensor configured at a higher elevation in the wheel well respective to the first sensor, and determine that the water level is above the second, higher sensor. The water depth sensor system 1135 may further determine a rate in which the water level is rising (that is, a change in water level covering the capacitive sensors with respect to time). Responsive to determining that the water 1205 is higher than the second sensor, (e.g., the second sensor is submerged in the water 1205), the capacitive water depth sensor system 1135 may send the AV controller 1100 first water depth information that includes the depth of the water 1205).
Faizan, Kapil, Ghannam, Chen, and Salter are analogous art because they all pertain to water level monitoring sensors.
Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have a water level sensor in the vehicle wheel well (as taught by Salter), resulting in the predictable result of detecting water depth information.
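By way of illustration only, Salter's arrangement of two sensors stacked at different elevations in a wheel well maps onto the three-state reading recited in claim 4; a hypothetical sketch (not code from Salter):

```python
def water_level_state(bottom_sensor_wet: bool, top_sensor_wet: bool) -> str:
    """Classify the water level against two sensors stacked in a wheel well."""
    if top_sensor_wet:
        return "above top of sensor"
    if bottom_sensor_wet:
        return "between bottom and top of sensor"
    return "below bottom of sensor"
```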
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Faizan et al. (US 20210272440 A1) in view of Kapil (US 20240230370 A1) and Ghannam et al. (US 10255782 B1), and further in view of Hughes et al. (US 20240349012 A1).
In regard to claim 17, the combination of Faizan, Kapil, and Ghannam does not specifically teach the flood detection and alert method of claim 1, wherein the monitoring step further comprises monitoring the locations of the plurality of vehicles by comparing each vehicle location to a geofenced area corresponding to the affected flooding area(s), by comparing each expected navigational route or driving pattern, when one exists, to the geofenced area corresponding to the affected flooding area(s), or by comparing both.
However, Hughes teaches the flood detection and alert method of claim 1, wherein the monitoring step further comprises monitoring the locations of the plurality of vehicles by comparing each vehicle location to a geofenced area corresponding to the affected flooding area(s), by comparing each expected navigational route or driving pattern, when one exists, to the geofenced area corresponding to the affected flooding area(s), or by comparing both (Hughes, Para. 64, the telematics service provider may repeatedly perform operations 312-322 for each vehicle in each dynamic geofence area. In operation 312, the telematics service provider may determine whether the current vehicle is in a dynamic geofence state relative to the current dynamic geofence area; Para. 99, a vehicle may be determined to be entering a dynamic geofence area if the vehicle location is currently outside the dynamic geofence area, and the vehicle location is within range (e.g., 10 km) of a dynamic geofence associated with the dynamic geofence area, and the vehicle is travelling towards the dynamic geofence associated with the dynamic geofence area (e.g., based upon the vehicle velocity received in operation 308 with reference to FIG. 3). In response to determining that the vehicle is entering the dynamic geofence area (i.e., determination block 606=“Yes”), in block 608 the processing system may set the dynamic geofence state of the vehicle to be that it is entering the dynamic geofence area).
Faizan, Kapil, Ghannam, and Hughes are analogous art because they all pertain to flood monitoring and providing guidance to vehicles.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to determine whether a vehicle is entering a dynamic geofence area (as taught by Hughes) in order to proactively inform renters of potential risks and hazards associated with environmental events.
Claim(s) 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Faizan et al. (US 20210272440 A1) in view of Kapil (US 20240230370 A1) and Ghannam et al. (US 10255782 B1), and further in view of Starsinic et al. (US 20240007878 A1).
In regard to claim 19, the combination of Faizan, Kapil, and Ghannam does not specifically teach the flood detection and alert method of claim 1, wherein the sending a real-time flooding notification step further comprises sending a real-time flooding notification as an over-the-air (OTA) alert via a lightweight messaging protocol.
However, the concept of using an over-the-air (OTA) alert is well known in the art, as also taught by Starsinic. Starsinic teaches that specific apps may be provided for OTA messaging while the SMS infrastructure is down, which may be used seamlessly during the disaster if off-band connectivity (e.g., WiFi) is established. This would allow users to send and receive messages using the same app or contact information as for SMS, and not have to move to alternative OTA messaging solutions (Starsinic, Para. 123). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to use an OTA alert (as taught by Starsinic), which may be used seamlessly during a disaster.
Response to Arguments
Applicant's arguments filed on 11/25/2025 have been fully considered but they are not persuasive. In the remarks, applicant argues in substance:
Applicant argues: "In claim 1, the sensor-based data relates to water level sensors whereas Ghannam's disclosure regarding block 412 and determining whether there is a flooding event uses only humidity sensors. A humidity sensor is not a water level sensor as set forth in claim 1, a humidity sensor is an air-based sensors.
Beyond that significant difference, and as further evidence that the humidity sensor used in Ghannam is not relevant to the claimed water level sensors, the method in Ghannam requires ambient humidity level information from a remote weather service. The information received from the remote weather service is required by Ghannam for comparison to a "humidity measurement from within the engine 104". Thus, Ghannam is not using a first on-vehicle source to determine a potential for flooding and then a second and different on-vehicle source to confirm an actual flood has occurred. Instead, Ghannam teaches using on-vehicle sources to determine a potential for flooding and then requires a vehicle-independent source of humidity information to determine "whether this is a flooding event.
Next, Applicant notes that in the rejections of both claims 10 and 11, the examiner asserted that the generic statement in Ghannam that execution of the blocks may be rearranged or combined supports the rejection of claims 10 and 11 even though Ghannam does not disclose the methods set forth in claims 9, 10 or 11. As an initial matter, this generic statement does not provide any specific teaching by which the examiner could arrive at the specific method set forth in amended claim 1, or claims 10 or 11. Hindsight reconstruction is required to reach the assertions made against claims 10 and 11, and such hindsight reconstruction is improper as it uses Applicant's own teachings against the Applicant.
Beyond that failing, the substitution of a water level sensor and water level readings for the humidity sensor and humidity readings in Ghannam does not make sense because there is no teaching or suggestion in Ghannam of a remote weather service providing water level readings that could be compared to an on-vehicle water level sensor measurement. That is, Ghannam requires a vehicle-independent source of information that is used to determine if a flooding event has occurred and this does not make sense with regard to a water level sensor. Nor does this make sense with regard to images/camera data. Claim 1 recites use of two on-vehicle devices, with a first one of the on-vehicle devices used to determine a potential for flooding and a second one of the on-vehicle devices used to confirm an actual flooding event”.
Examiner’s Response: In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). Also, Ghannam teaches that the order of execution of the blocks may be rearranged, changed, eliminated, and/or combined to perform the method 400 (Col. 11, lines 54-67). The Ghannam reference is used to show that it is obvious to use multiple sensors to detect and confirm flooding events. Therefore, it would have been obvious to combine the data collected by the water level sensor 118 with the humidity sensor data to confirm a flooding event.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHARMIN AKHTER whose telephone number is (571)272-9365. The examiner can normally be reached on Monday - Thursday 8:00am-5:00pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Davetta W Goins, can be reached on (571) 272-2957. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHARMIN AKHTER/
Examiner, Art Unit 2689
/DAVETTA W GOINS/Supervisory Patent Examiner, Art Unit 2689