Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office Action is in response to the Applicant's Amendment and Arguments filed on 11/25/2025.
Claim 18 is canceled.
Claims 12-17 and 19-23 are pending and examined herein.
This Action is made FINAL.
Response to Arguments
With regard to claims 12-17 and 19-23, previously rejected under 35 U.S.C. 103, Applicant's arguments have been fully considered but are not persuasive.
Applicant argues “More specifically, the Examiner cites Figure 1b and paragraphs [0100] and [0147] of Yu as teaching a similar feature with a system including sensors for detecting living things and environment conditions. However, the Applicant notes that there is no teaching in the cited portions of Yu of a camera that is configured to detect both living beings and at least some of the at least one environmental condition in the interior of the vehicle as recited in amended claim 12. For at least this reason, amended independent claim 12, and any claim that depends on claim 12, are patentable over the proposed combinations of Yu, Gross, Kothari, Ireri and Dulin as contemplated by the Examiner.”
First, Yu does teach “wherein the camera is configured to detect both living beings and at least some of the at least one environmental condition in the interior of the vehicle.” As discussed in para [0100], the camera is able to detect occupants. Additionally, para [0147] discusses that “other vehicle sensors” (sensors other than the weight sensor) can detect environmental conditions. As the camera is not a weight sensor, it can be considered part of the “other vehicle sensors” group. This is further evidenced by para [0207]: “Still further, the control algorithm can read a light sensor, camera, or other optical device to obtain a representation of the light conditions, and convert the light reading into a scalar to modify the risk and urgency level assessment table.”
Furthermore, Kothari, as previously cited in the rejection of the independent claims, also clearly teaches “wherein the camera is configured to detect both living beings and at least some of the at least one environmental condition in the interior of the vehicle.” Kothari recites in para [0061]: “Vehicle sensor 218 also may include cameras and/or proximity sensors capable of recording additional conditions inside or outside of the vehicle 217. For example, internal cameras may detect conditions such as the number of the passengers and the types of passengers (e.g. adults, children, teenagers, pets, etc.) in the vehicles, and potential sources of driver distraction within the vehicle (e.g., pets, phone usage, and unsecured objects in the vehicle).” Both phone usage and unsecured objects can be considered environmental conditions, as they fall within “the surroundings or conditions in which a person, animal, or plant lives or operates.”
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12-17, 20, and 22-23 are rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 20200198439 A1, hereinafter known as Yu) in view of Gross (US 20180108369 A1) and Kothari et al. (US 20180302484 A1; hereinafter known as Kothari).
Regarding claim 12, Yu teaches A system for monitoring an interior of a vehicle, comprising: a camera configured to detect living beings in the interior of the vehicle
{Abstract “A computer-implemented process for controlling a vehicle interior includes detecting a previously defined situation that relates to an undesirable environmental condition of the vehicle interior, and assessing both a risk level and an urgency level, based on a vehicle sensor input. The process also includes generating a vehicle command based upon the detected previously defined situation, the assessed risk level, and assessed urgency level, and executing the generated vehicle command to control at least one of an engine, a window, and a heating, ventilation and air conditioning (HVAC) unit to modify an environmental condition of the vehicle interior.”
Fig. 1b labels 106, 108, 110, 112, 114, 116 and Para [0100] “The process determines at 228 whether the vehicle is occupied, e.g., based upon the obtained sensor information at 222, and optionally based upon other information available to the process.”
Para [0100] “The process determines at 228 whether the vehicle is occupied, e.g., based upon the obtained sensor information at 222, and optionally based upon other information available to the process. If the vehicle is determined to not be occupied (No), then the process loops back to the beginning, e.g., via the sleep/wait state at 224. Vehicle occupancy can be determined using any of the techniques set out herein. For instance, the process 220 can obtain weight data from vehicle weight sensors 114—FIG. 1B, and infer occupancy therefrom. Occupancy can also be detected using motion sensors, cameras and feature extraction, etc.”
}
and at least one processor,
{Fig. 1b label 104 and Para [0060-0066] Which describe how the controller 104 interacts with and controls the vehicles various sub components.
}
wherein the processor is configured to actuate at least one vehicle function in order to influence at least one environmental condition in the interior of the vehicle
{Fig. 2B label 250 which are responsive to 228, 230, 234, 244 in the flowchart
Para [0109] “The process 220 then performs the action(s) at 250. In an example embodiment, the process 220 uses action parameters, as well as command and control instructions to issue operational commands for one or more actions. Here, actions can be the same or different for each cyclical loop through this process 220 responsive to the PDS.”
Para [0241] “As noted in greater detail herein, based upon a measured risk type and urgency level (or probability of a PDS), an action is taken. The action herein can correspond for instance, to the action 208, 218 (FIG. 2A); 240, 242, 250 (FIG. 2B); 286 (FIG. 2C); 332 (FIG. 3D); 408 (FIG. 4), etc.”
Para [0243-0244] “In an example embodiment, two general classes of action can be defined, including preservation and rescue.
Preservation actions are measures that involve activating a vehicles' available native resources, such as turning off an engine, turning on/off air conditioning or a heater, open/close windows, etc., for the purpose of maintaining suitable vehicle interior environment for occupants, human and otherwise, to remain in the vehicle without being physically and/or mentally harmed by adverse environmental conditions, which may lead to heatstroke, hypothermia, high level carbon monoxide poisoning, low-level-oxygen-induced asphyxia, etc. Preservation actions are performed while the system is waiting for eventual rescue, one way or another.”
}
(i) the camera detecting a living being in the interior of the vehicle, and
{Fig. 2b label 228
Para [0100] “The process determines at 228 whether the vehicle is occupied, e.g., based upon the obtained sensor information at 222, and optionally based upon other information available to the process. If the vehicle is determined to not be occupied (No), then the process loops back to the beginning, e.g., via the sleep/wait state at 224. Vehicle occupancy can be determined using any of the techniques set out herein. For instance, the process 220 can obtain weight data from vehicle weight sensors 114—FIG. 1B, and infer occupancy therefrom. Occupancy can also be detected using motion sensors, cameras and feature extraction, etc.”
}
(ii) the at least one environmental condition in the interior of the vehicle exceeding or falling below at least one predetermined threshold value; and/or
{fig. 2b label 230
Para [0101] “If an occupant is detected at 228 (Yes), the process determines at 230 whether a PDS is detected. The determination of PDS can be accomplished using any of the techniques set out in greater detail herein. If a PDS is not detected (No), then the process loops back to the beginning, e.g., via the sleep/wait state at 226. The above, thus correspond to Phase I as described herein.”
Para [0041] “In an example implementation, each PDS (representing a corresponding adverse situation type) is associated with an undesirable (including unintended) environmental condition of the vehicle interior, such as an excessively hot temperature, an excessively cold temperature, poor air quality (e.g., poor oxygen level, excessive carbon monoxide level, etc.), or combination thereof.”
}
wherein the processor is configured to actuate at least one vehicle function in order to influence at least one environmental condition in the interior of the vehicle in response to:
{Fig. 2B label 250 which are responsive to 228, 230, 234, 244 in the flowchart
Para [0109] “The process 220 then performs the action(s) at 250. In an example embodiment, the process 220 uses action parameters, as well as command and control instructions to issue operational commands for one or more actions. Here, actions can be the same or different for each cyclical loop through this process 220 responsive to the PDS.”
Para [0241] “As noted in greater detail herein, based upon a measured risk type and urgency level (or probability of a PDS), an action is taken. The action herein can correspond for instance, to the action 208, 218 (FIG. 2A); 240, 242, 250 (FIG. 2B); 286 (FIG. 2C); 332 (FIG. 3D); 408 (FIG. 4), etc.”
Para [0243-0244] “In an example embodiment, two general classes of action can be defined, including preservation and rescue.
Preservation actions are measures that involve activating a vehicles' available native resources, such as turning off an engine, turning on/off air conditioning or a heater, open/close windows, etc., for the purpose of maintaining suitable vehicle interior environment for occupants, human and otherwise, to remain in the vehicle without being physically and/or mentally harmed by adverse environmental conditions, which may lead to heatstroke, hypothermia, high level carbon monoxide poisoning, low-level-oxygen-induced asphyxia, etc. Preservation actions are performed while the system is waiting for eventual rescue, one way or another.”
}
(i) the camera detecting a living being in the interior of the vehicle, and
{ Fig. 2b label 228
Para [0100] “The process determines at 228 whether the vehicle is occupied, e.g., based upon the obtained sensor information at 222, and optionally based upon other information available to the process. If the vehicle is determined to not be occupied (No), then the process loops back to the beginning, e.g., via the sleep/wait state at 224. Vehicle occupancy can be determined using any of the techniques set out herein. For instance, the process 220 can obtain weight data from vehicle weight sensors 114—FIG. 1B, and infer occupancy therefrom. Occupancy can also be detected using motion sensors, cameras and feature extraction, etc.”
}
(iii) the at least one environmental condition in the interior of the vehicle exhibiting a rise or fall.
{Para [0148] “If the situation changes, such as a significant increase in temperature, then the increased temperature will act as a triggering event for the calculation. FIG. 3A is a visual representation of that calculation. The process will dynamically establish the time zones based on a variety of factors such as the current temperature, the rate of increase of the temperature, the known physical characteristics of the vehicle, vehicle occupant, etc. As a working example, assume that the first time zone (between t0 and t1) is 15 minutes. After 15 minutes has elapsed, assume that the process determines that the vehicle occupant is now within the second probability zone 306, which has a higher degree of risk to the vehicle occupant. At this point, the process may determine that a predetermined action 310 is necessary. As a result, the process may activate the engine via the engine controller 124 and HVAC system via the HVAC controller 126 to cool down the cabin temperature. The process may also and/or alternatively decide to lower the window(s), e.g., 4 inches (10.16 cm). Alternatively, the process can wait until the third probability zone 308 to take action.”
}
wherein the camera is configured to detect both living beings and at least some of the at least one environmental condition in the interior of the vehicle.
{Fig. 1b, where it can be seen that the sensors (which include a camera) are for detecting living beings and environmental conditions
Para [0100] “The process determines at 228 whether the vehicle is occupied, e.g., based upon the obtained sensor information at 222, and optionally based upon other information available to the process. If the vehicle is determined to not be occupied (No), then the process loops back to the beginning, e.g., via the sleep/wait state at 224. Vehicle occupancy can be determined using any of the techniques set out herein. For instance, the process 220 can obtain weight data from vehicle weight sensors 114—FIG. 1B, and infer occupancy therefrom. Occupancy can also be detected using motion sensors, cameras and feature extraction, etc.”
Para [0147] “The following is a working example of an embodiment corresponding to FIG. 3D. In this example, assume that the vehicle 102 is parked, and the weight sensor 114 has detected an occupant within the vehicle 102. Other vehicle sensors such as the motion sensor 108 or infrared sensor 110 can confirm, corroborate, substantiate, verify, etc., the detection of the occupant. The other vehicle sensors such as the temperature 106 and air quality sensors 112 may continuously or periodically take readings from the environment, which are considered by the algorithm. Based on the sensor readings, the process determines that the vehicle occupant is within the first probability zone 304, and that no predetermined action 332 is necessary. For instance, assume that the temperature is within a predetermined range, e.g., 70 degrees Fahrenheit (about 21.1 degrees Celsius), and the air quality is acceptable. This results in a relatively low probability of an adverse event.”
Para [0207] “Still further, the control algorithm can read a light sensor, camera, or other optical device to obtain a representation of the light conditions, and convert the light reading into a scalar to modify the risk and urgency level assessment table.”
}
Yu does not teach “and to determine and to distinguish between different species of living beings in the interior of the vehicle from captured images;
wherein the processor is configured to actuate at least one vehicle function in order to influence at least one environmental condition in the interior of the vehicle based in part on the determined species of the living beings in response to:
However, Gross teaches to determine and to distinguish between different species of living beings in the interior of the vehicle
{Para [0021] “In some embodiments, audio recognition system 300 may include a processor 310 communicatively coupled to process sound data received from the one or more microphones 320(1)-320(N). Processor 310 may also be coupled to memory device 340 to access data stored therein and to execute any firmware and/or software programs stored therein. Processor 310 may be configured to store in memory 340 as raw data 342 of sounds the sound data received from the one or more microphones 320(1)-320(N). In some embodiments, Audacity™ may be used in connection with Ford Sync™ microphones of Ford Motor Company to record the sounds into audio files. The audio files may be processed by processor 310 to calculate Mel-frequency cepstrum coefficients. Processor 310 may determine the origin of each sound as being originated from inside or outside of a vehicle and classify the sounds into different categories based on neural network learning. The categories may include, for example and without limitation, at least adult, child, and animal sounds.”
Where the system is at least able to distinguish between a human species and a non-human species based on sound.
}
wherein the processor is configured to actuate at least one vehicle function in order to influence at least one environmental condition
{Para [0042-0043] “At 630, process 600 may involve processor 310 determining whether an occupant of vehicle 220 is at risk of danger or whether vehicle 220 is at risk of theft. A combination of conditions of vehicle 220 and information from first and second neural networks 350 and 360 may be sufficient for determining whether an occupant inside vehicle 220 is at risk of danger. For example, an occupant crying (inside sounds determined by first neural network 350 and crying identified as child sounds by second neural network 360) inside vehicle 220 with a temperature higher than 100 degree Fahrenheit may be a situation that places the occupant in a status of being in danger. As another example, a loud bang from inside of vehicle 220 may indicate a battery explosion of an electronic device and a fire hazard for vehicle 220. Process 600 may proceed from 630 to 640.
At 640, process 600 may involve processor 310 triggering one or more actions upon determining that the occupant or vehicle 220 is at risk. In some embodiments, processor 310 may, via communication device 330, provide instructions to one or more components of vehicle 220 to execute a series of responsive actions. For example, a series of alert message may be sent to owner(s) of vehicle 220 via a wirelessly transmittable component once an occupant is identified to be at risk of danger. Whether to call 911 may be decided by owner(s) of vehicle 220, depending on his/her proximity relative to the location of vehicle 220. Flashing headlights and honking horns may be deployed to catch attentions of bystanders. Processor 310 may further issue instructions to unlock doors of the vehicle for increasing chance of survival of the occupant.”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu to incorporate the teachings of Gross to identify the type and species of occupant because as discussed in para [0002] of Gross “The safety of occupant(s) inside a moving automotive vehicle is an important consideration in the decision of purchasing an automotive vehicle. There are many technological advances and innovations towards protection for occupant(s) inside a moving automotive vehicle in the event of an accident or in the forms of preventive measures. The safety of occupant(s) inside a parked vehicle imparts a distinctive challenge to the intelligence of a safety system, especially when a young occupant is left alone inside. Parents may be unaware of the danger of leaving an infant, a toddler, or a kindergarten child in an unattended vehicle. The temperature inside a vehicle may rise significantly in a short amount of time in a hot day and the body temperature of a child may rise much faster than an adult. The safety for young occupant(s) may be improved if a safety system in a vehicle has the intelligence to recognize that the occupant is young and the vehicle is in a state that could cause harm to the young occupant, and the safety system proactively issues warning signals or notifies the owner of the vehicle when such a situation occurs.”
Yu in view of Gross does not teach “and to determine and to distinguish between different species of living beings in the interior of the vehicle from captured images;”
However, Kothari teaches to determine and to distinguish between different species of living beings in the interior of the vehicle from captured images; …
wherein the camera is configured to detect both living beings and at least some of the at least one environmental condition in the interior of the vehicle.
{Para [0061] “Vehicle sensor 218 also may include cameras and/or proximity sensors capable of recording additional conditions inside or outside of the vehicle 217. For example, internal cameras may detect conditions such as the number of the passengers and the types of passengers (e.g. adults, children, teenagers, pets, etc.) in the vehicles, and potential sources of driver distraction within the vehicle (e.g., pets, phone usage, and unsecured objects in the vehicle).”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu in view of Gross to incorporate the teachings of Kothari to identify the type and species of occupant using a camera, because it would have been obvious to try: there are only a finite number of sensor types capable of identifying living beings. Additionally, it is well known that cameras can be used for object detection.
Regarding claim 13, Yu in view of Gross and Kothari teaches The system according to claim 12. Yu further teaches wherein: the camera is further configured to detect living beings in the interior of the vehicle when the vehicle is in a parked state, and the processor is further configured to actuate the at least one vehicle function in order to influence at least one environmental condition in the interior of the vehicle when conditions (i) and (ii) and/or (i) and (iii) are present when: (iv) the vehicle is in the parked state.
{Para [0098] “The process 220 comprises determining a vehicle state. For instance, the process 220 can read one or more sensor(s) at 222 and determine at 224, based upon the sensor reading(s), whether the vehicle is in use. If Yes (the vehicle is actively being used, e.g., traveling), then the process 220 loops back to the beginning, or the process 220 can sleep at 226, e.g., for a predetermined amount of time before re-checking the vehicle state. In alternative embodiments, the process 220 need not sleep or perform the determination at 224, e.g., such as where a supervisory controller initiates the process 220 responsive to the vehicle state indicating that the vehicle is not in use.”
Not in use in this case means not traveling, which can be interpreted as parked.
As seen in fig. 2B, checking the environment (230, 234, 244) and actuating vehicle functions (250) occur after step 224.
}
Regarding claim 14, Yu in view of Gross and Kothari teaches The system according to claim 12. Yu further teaches wherein: the at least one environmental condition is selected from a group comprising: a temperature, an incidence of light, an oxygen content, and a carbon dioxide content.
{Para [0041] “In an example implementation, each PDS (representing a corresponding adverse situation type) is associated with an undesirable (including unintended) environmental condition of the vehicle interior, such as an excessively hot temperature, an excessively cold temperature, poor air quality (e.g., poor oxygen level, excessive carbon monoxide level, etc.), or combination thereof.”
Para [0207] “Still further, the control algorithm can read a light sensor, camera, or other optical device to obtain a representation of the light conditions, and convert the light reading into a scalar to modify the risk and urgency level assessment table.”
As it is not explicitly claimed that the vehicle system itself performs the selection from the group (there is no indication that the entire group is being programmed into the vehicle system), and given the language “at least one environmental condition,” under BRI only one of, but not all of, “a temperature, an incidence of light, an oxygen content, and a carbon dioxide content” must be taught by the prior art.
}
Regarding claim 15, Yu in view of Gross and Kothari teaches The system according to claim 14. Yu further teaches wherein: the at least one vehicle function is selected from a group comprising: a cooling function, a heating function, a sunroof function, a window function, a door function, and a shading function.
{Para [0244] “Preservation actions are measures that involve activating a vehicles' available native resources, such as turning off an engine, turning on/off air conditioning or a heater, open/close windows, etc., for the purpose of maintaining suitable vehicle interior environment for occupants, human and otherwise, to remain in the vehicle without being physically and/or mentally harmed by adverse environmental conditions, which may lead to heatstroke, hypothermia, high level carbon monoxide poisoning, low-level-oxygen-induced asphyxia, etc. Preservation actions are performed while the system is waiting for eventual rescue, one way or another.”
As it is not explicitly claimed that the vehicle system itself performs the selection from the group (there is no indication that the entire group is being programmed into the vehicle system), and given the language “at least one vehicle function,” under BRI only one of, but not all of, “a cooling function, a heating function, a sunroof function, a window function, a door function, and a shading function” must be taught by the prior art.
}
Regarding claim 16, Yu in view of Gross and Kothari teaches The system according to claim 12. Yu further teaches wherein: the at least one vehicle function is selected from a group comprising: a cooling function, a heating function, a sunroof function, a window function, a door function, and a shading function.
{ Para [0244] “Preservation actions are measures that involve activating a vehicles' available native resources, such as turning off an engine, turning on/off air conditioning or a heater, open/close windows, etc., for the purpose of maintaining suitable vehicle interior environment for occupants, human and otherwise, to remain in the vehicle without being physically and/or mentally harmed by adverse environmental conditions, which may lead to heatstroke, hypothermia, high level carbon monoxide poisoning, low-level-oxygen-induced asphyxia, etc. Preservation actions are performed while the system is waiting for eventual rescue, one way or another.”
As it is not explicitly claimed that the vehicle system itself performs the selection from the group (there is no indication that the entire group is being programmed into the vehicle system), and given the language “at least one vehicle function,” under BRI only one of, but not all of, “a cooling function, a heating function, a sunroof function, a window function, a door function, and a shading function” must be taught by the prior art.
}
Regarding claim 17, Yu in view of Gross and Kothari teaches The system according to claim 12. Yu further teaches wherein: the processor is configured to monitor the rise or fall of the at least one environmental condition and to determine a time at which the at least one environmental condition exceeds or falls below, or will exceed or fall below, the at least one predetermined threshold value.
{Para [0044] “Yet further, detection can be based upon a prediction (e.g., near future occurrence such as within a few minutes, within 30 minutes, within an hour, etc.) of reaching a pre-defined vehicle interior environmental condition, or a predicted likelihood of reaching a pre-defined vehicle interior environmental condition (e.g., likelihood exceeding a preset probability within a preset time).”
}
Regarding claim 20, Yu in view of Gross and Kothari teaches The system according to claim 12. Yu further teaches wherein: the camera comprises at least one of: a thermal imaging camera, a microphone, or a seat occupancy sensor.
{Para [0061] “Now referring to FIG. 1B, a block diagram illustrates a Monitor, Command, and Control (MCC) system, which can utilize several of the components described with reference to FIG. 1A. The controller 104 collects readings from one or more sensors, e.g., temperature sensor(s) 106, motion sensor(s) 108, infrared sensor(s) 110, air quality sensor(s) 112, weight sensor(s) 114 (e.g., in one or more front seats, back seats, or combination thereof), miscellaneous sensor(s) 116, etc. The controller 104 also interacts with select electronic components, such as a communication device 120 (e.g., cellular, Wi-Fi, Bluetooth, combination thereof, etc.), a siren 122, an engine controller 124, HVAC controller 126, window controller 128, etc.”
}
Regarding claim 22, it recites A method having limitations similar to those of claim 12 and is therefore rejected on the same basis.
As with claim 12 Gross teaches actuating, based in part on the determined species of each at least one living being,
{Para [0042-0043] “At 630, process 600 may involve processor 310 determining whether an occupant of vehicle 220 is at risk of danger or whether vehicle 220 is at risk of theft. A combination of conditions of vehicle 220 and information from first and second neural networks 350 and 360 may be sufficient for determining whether an occupant inside vehicle 220 is at risk of danger. For example, an occupant crying (inside sounds determined by first neural network 350 and crying identified as child sounds by second neural network 360) inside vehicle 220 with a temperature higher than 100 degree Fahrenheit may be a situation that places the occupant in a status of being in danger. As another example, a loud bang from inside of vehicle 220 may indicate a battery explosion of an electronic device and a fire hazard for vehicle 220. Process 600 may proceed from 630 to 640.
At 640, process 600 may involve processor 310 triggering one or more actions upon determining that the occupant or vehicle 220 is at risk. In some embodiments, processor 310 may, via communication device 330, provide instructions to one or more components of vehicle 220 to execute a series of responsive actions. For example, a series of alert message may be sent to owner(s) of vehicle 220 via a wirelessly transmittable component once an occupant is identified to be at risk of danger. Whether to call 911 may be decided by owner(s) of vehicle 220, depending on his/her proximity relative to the location of vehicle 220. Flashing headlights and honking horns may be deployed to catch attentions of bystanders. Processor 310 may further issue instructions to unlock doors of the vehicle for increasing chance of survival of the occupant.”
}
Regarding claim 23, it recites A computer product comprising a non-transitory computer-readable medium having limitations similar to those of claim 12 and is therefore rejected on the same basis.
Additionally Yu teaches A computer product comprising a non-transitory computer-readable medium having stored thereon program code which, when executed on one or more processors, carries out the acts of:
{Para [0066] “Yet further, the controller 104 includes (or is coupled to) program code 150 (e.g., e.g., which embodies the control algorithm that implements the three-phase process described herein), which is stored in memory 152 (e.g., read only memory, random access memory, non-volatile storage media such as a hard drive or solid state drive, combinations thereof, etc.). When the program code is read out and executed, the vehicle carries out the algorithms and approaches described more fully herein. In this regard, the controller 104 can read sensor data, store sensor data in the memory, access event logs 134, cause the vehicle to write event logs 134, store information in the memory 152, access metadata stored in the memory 152, etc., as described more fully herein.”
}
As with claim 12, Gross teaches wherein the at least one vehicle function is actuated based in part on the determined species of the living being.
{Para [0042-0043] “At 630, process 600 may involve processor 310 determining whether an occupant of vehicle 220 is at risk of danger or whether vehicle 220 is at risk of theft. A combination of conditions of vehicle 220 and information from first and second neural networks 350 and 360 may be sufficient for determining whether an occupant inside vehicle 220 is at risk of danger. For example, an occupant crying (inside sounds determined by first neural network 350 and crying identified as child sounds by second neural network 360) inside vehicle 220 with a temperature higher than 100 degree Fahrenheit may be a situation that places the occupant in a status of being in danger. As another example, a loud bang from inside of vehicle 220 may indicate a battery explosion of an electronic device and a fire hazard for vehicle 220. Process 600 may proceed from 630 to 640.
At 640, process 600 may involve processor 310 triggering one or more actions upon determining that the occupant or vehicle 220 is at risk. In some embodiments, processor 310 may, via communication device 330, provide instructions to one or more components of vehicle 220 to execute a series of responsive actions. For example, a series of alert message may be sent to owner(s) of vehicle 220 via a wirelessly transmittable component once an occupant is identified to be at risk of danger. Whether to call 911 may be decided by owner(s) of vehicle 220, depending on his/her proximity relative to the location of vehicle 220. Flashing headlights and honking horns may be deployed to catch attentions of bystanders. Processor 310 may further issue instructions to unlock doors of the vehicle for increasing chance of survival of the occupant.”
}
Claim(s) 19 is rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 20200198439 A1, hereinafter known as Yu) in view of Gross (US 20180108369 A1), Kothari et al. (US 20180302484 A1; hereinafter known as Kothari), and Ireri (US 20170240022 A1).
Regarding Claim 19, Yu in view of Gross and Kothari teaches the system according to claim 12.
Yu in view of Gross and Kothari does not teach, and the camera comprises an integrated temperature sensor.
However, Ireri teaches the camera comprises an integrated temperature sensor.
{Para [0019] “In the present embodiment, the temperature detection sensor is an infrared camera. In this embodiment, the sensor is a digital infrared camera configured to take an image of an occupant. From that image, the processor can 1) verify if the object is a child and/or animal and 2) determine the temperature of the child. The infrared photo will determine, based on the elements in the array, the temperature of the child in the car seat. It is noted that the temperature sensor may be any camera or other sensor configured to 1) verify if the object is a child and/or animal and 2) determine the temperature of the child. In this specification, the terms camera and sensor may be used interchangeable when referring to determining and detecting the temperature of a child occupant.”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu in view of Gross and Kothari to incorporate the teachings of Ireri to use a camera with temperature sensing capability because, as discussed in para [0024], “Temperature sensing, specifically infrared camera image processing such as used in the present specification, is accomplished safely, wirelessly, and requires no human intervention, or physical attachment to a car seat to ensure reliable operation. The system design has the added benefit of preventing infant heatstroke related fatalities that have hereto lacked a concrete engineering solution, by alerting parent/caregiver, and or turning on the cars and its air-conditioning system on.”
Claim(s) 21 is rejected under 35 U.S.C. 103 as being unpatentable over Yu et al. (US 20200198439 A1, hereinafter known as Yu) in view of Gross (US 20180108369 A1), Kothari et al. (US 20180302484 A1; hereinafter known as Kothari), and Dulin et al. (US 20020161501 A1, hereinafter known as Dulin).
Regarding Claim 21, Yu in view of Gross and Kothari teaches the system according to claim 12. Yu further teaches wherein: the camera is configured to determine a position of a living being in the interior of the vehicle,
{Para [0077] “Referring to FIG. 1D and FIG. 1E generally, a few example weight sensor configurations are illustrated. Generally, weight sensors may be placed under seat cushions, under each designated seating area, in both front and back rows, and on the left and/or right sides of the vehicle. Moreover, the weight sensors may also detect and measure a fluctuation of weight displacement caused by an occupant in a corresponding vehicle seat.”
Para [0083] “Weight shift detection enables weight shift pattern identification, that can be used to distinguish occupants from dead weight on vehicle seats. Weight shift detection and total weight on each seat, combined with the sensor input from all seats may allow for complete information on seat occupation.”
Thus, the vehicle can determine which seat an occupant is in.
}
Yu in view of Gross and Kothari does not teach, and the processor is configured to actuate the at least one vehicle function based on a detected position.
However, Dulin teaches wherein the camera is configured to determine a position of a living being in the interior of the vehicle, and the processor is configured to actuate the at least one vehicle function based on a detected position.
{Para [0052] “The control unit employs an occupancy state algorithm to determine the nature and location of the occupant(s) of the vehicle.”
Para [0060] “Thus, depending on temperature and temperature rise rate, the controller can activate one or more relief activities 30, such as progressively rolling down one or more windows, starting the outside air intake fan; turning on the air conditioning unit; darkening windows where electro-active windshields or windows are employed, and deploying other shading devices, such as retractable/unfurlable screening or reflective elements. In the case of electro-active windows, the windows could be controlled to change to a silver or white color, rather than darken, to increase solar reflectivity. In addition to the GPS locator can become active, sending out a distress signal to assist rescuers to locate the vehicle. The windows can be selectively rolled down, first the passenger side window closest to the passenger, and then other windows if conditions continue to worsen.”
Para [0069] “In the event driver 96 is not present, which is the normal condition where the vehicle motion 88 is sensed negative (vehicle stopped), the operating cycle proceeds to Interior Warning II, 106. These warnings may be one or more of Warning set I, but directed to the passenger(s) and may be a higher-level warning. For example, the Warning set II may comprise a different synthesized voice announcement, or an insistent buzzer associated with a back-lit instruction panel, which could read for example: "Push Button To Reduce Interior Temperature". This "emergency" button can be a separate activator for window lowering or fan operation, by way of example, and can be associated with each passenger seat, which results in partial lowering of the window adjacent that passenger seat. Each successive push of the flashing, back-lit button can increment the window to open wider, until fully opened, upon which it can start the fan, or initiate external warnings.”
}
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yu in view of Gross and Kothari to incorporate the teachings of Dulin to actuate vehicle functions based on passenger position because lowering only the window closest to the passenger has the greatest impact on passenger cooling while reducing the possible negative effects of opening a window, such as unwanted attempts by others to enter the vehicle.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Avrahami et al. (US 20160249191 A1) teaches in para [0021] “Additionally, it should be appreciated that although the one or more cameras 124 of the illustrative embodiment capture images and/or video of the interior cabin space of the vehicle 102, one or more of the cameras 124 may be embodied as one of the sensors 126. For example, in some embodiments, one or more of the cameras 124 may be embodied as a thermal imaging camera configured to capture infrared radiation information sensed within the vehicle 102 as discussed. In such embodiments, the in-vehicle warning system 110 may utilize the one or more cameras 124 as a sensor 126. In doing so, the in-vehicle warning system 110 may use data, images, and/or video captured by the one or more cameras 124 to determine the occupancy of the vehicle 102 and/or the environmental conditions within the vehicle 102. Of course, in some embodiments, one or more of the cameras 124 may serve multiple purposes (e.g., the generated images may be used to sense occupants of the vehicle 102 and to sense characteristics of the occupant and/or environment of the interior cabin space of the vehicle 102).”
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDER MATTA whose telephone number is (571)272-4296. The examiner can normally be reached Mon - Fri 10:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, James Lee can be reached on (571) 270-5965. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.G.M./Examiner, Art Unit 3668
/JUSTIN S LEE/Primary Examiner, Art Unit 3668