DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
This Office Action is in response to communications filed on 9/019/2024. Claims 1-14 are pending for examination.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 11-12 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to include all the limitations of the claim upon which they depend.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-6 and 10-14 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ono et al. (DE-112005003266).
Regarding claims 1, 11, 13 and 14, Ono teaches a warning device/system/method/non-transitory computer-readable record medium storing a warning program (¶166; "a risk of a passage time ... driver's vehicle runs through a lane on which a prevention action cannot be performed ... a downshift control can be performed to increase the number of revolutions of the engine of the driver's vehicle .... by automatically downshifting, the number of revolutions of an engine is increased, and a pedestrian can be informed of the presence of the driver's vehicle" - hence warning device for the pedestrian), comprising:
processing circuitry (Fig 1; control device 24)
to estimate movement information including a position, a moving direction and a moving speed of an object person (¶166; "future risk based on the passage time risk map at the time the pedestrian passes") based on a first detection signal outputted from a vicinity sensor that performs sensing in regard to a vicinity of the object person (Fig. 1, ref. no. 18; ¶068; "Camera 18 includes a front camera ... preferably a front infrared camera ... When the infrared camera is used, a pedestrian can be reliably detected"; ¶096; "Obstacle extracted by image processing of the image area ... a pedestrian crossing in front of the driver's vehicle and a two-wheeler crossing in front of the driver's vehicle are extracted as obstacles ... a deviation of the position of the pedestrian and the like is predicted, and consequently it is possible to predict the movement of the pedestrian and the will of the pedestrian, for example in which direction the pedestrian intends to move or the like");
to estimate a movement anticipation area, as an area through which the object person is anticipated to move, based on the movement information (Fig. 1, ref. nos. 24, 26; ¶074; "action prediction model indicating the probabilities of the actions (i) to (iv) is stored in advance in the action prediction database 26 as the pedestrian action prediction model, and when a pedestrian is detected as an obstacle, a future action of the pedestrian is predicted by the control device based on the pedestrian action prediction model stored in the action prediction database"; ¶096; "a pedestrian crossing in front of the driver's vehicle ... a deviation of the pedestrian's position and the like is predicted, and consequently it is possible to predict the pedestrian's movement and the pedestrian's will, for example in which direction the pedestrian intends to move or the like");
to provide vicinal situation information including a position of an obstacle (vicinal situation provision unit; ¶065; "a group of sensors provided on the vehicle as an external environment detecting means for detecting a state of an external environment"), based on at least one of the first detection signal (¶067; "Detecting the state of an external environment are provided: a camera 18 for capturing the front area, the side area and the rear area of the driver's vehicle, a laser radar 20 for detecting an obstacle in front of the driver's vehicle") and previously acquired map information (see Fig. 1, ref. no. 34; ¶093; "detected by the group of sensors detecting the surrounding condition, a surrounding map including a road shape and obstacles is generated by using map information stored in the map database 34");
to estimate condition of the object person based on a second detection signal outputted from a person sensor that senses at least one of the condition and voice of the object person (¶ 074-¶075; "When a pedestrian is sleeping or lying on a road, when a pedestrian is heavily intoxicated, or when a pedestrian is wandering around, it is predicted that the present state will be maintained in the future, and consequently, a prediction model indicating that such a present state will be maintained is also stored in the action prediction database 26."); and
to make a judgment (risk estimation unit; ¶013; "a risk map generating unit for generating a position and type of an obstacle based on the external environment detected by the external environment detecting unit and generating a risk map to predict a current risk level based on the position and type of the obstacle, an action prediction database in which data including data indicating a probability of the presence of an obstacle for predicting a behavior of an obstacle") on whether the obstacle is a warning object or not (¶166; "if an obstacle is a pedestrian" - thus judgment), based on the vicinal situation information, the movement anticipation area, and the condition of the object person (¶166; "When an obstacle is a pedestrian, a passage time risk map indicating a risk of a passage time at which the driver's vehicle passes is generated based on the passage time at which the driver's vehicle passes through a lane on which a prevention action cannot be performed and based on the future risk map, and if the future risk based on the passage time risk map at the time at which the pedestrian passes" in conjunction with ¶075; "if a pedestrian is heavily intoxicated, [...] the current condition is predicted to be maintained in the future"), and to output a warning signal when the obstacle is a warning object (¶166; "a downshift control ... to increase the number of revolutions of the engine of the driver's vehicle ... by automatically downshifting, the number of revolutions of an engine is increased, and a pedestrian can be informed of the presence of the driver's vehicle. The control to increase the engine noise is stopped when the future risk becomes smaller than the predetermined value, or when the driver's vehicle passes the pedestrian" - thus issuing a warning signal to the pedestrian; ¶162; "an automatic steering operation, an automatic braking operation and an automatic driving operation are fully controlled based on the estimated risk, and an obstacle is avoided. A selection range of a driving prevention path of the driver's vehicle to avoid an obstacle can be enlarged" in conjunction with ¶165; "During automatic steering operation, in order to prevent surrounding vehicles from being secondarily damaged when automatic steering operation prevention and automatic brake prevention (direct braking) are performed, an alarm light may illuminate (for a vehicle following behind), a high beam light may illuminate (for ahead), and a car horn may sound simultaneously when the driver is informed of the above fact, or independently when the driver is informed of the above fact" - to both driver and surroundings), wherein
the processing circuitry adjusts the movement anticipation area based on the condition of the object person (¶074; "only one model is used as the pedestrian action prediction model in which a pedestrian continues the state of motion immediately" in conjunction with ¶075; "If a pedestrian is sleeping or lying on a road, if a pedestrian is heavily intoxicated, or if a pedestrian is wandering around, the current state is predicted to be maintained in the future, and consequently, a prediction model indicating that such a current state will be maintained is also stored in the action prediction database 26."),
the processing circuitry estimates a wobble level representing magnitude of a wobble of the object person when walking, based on the second detection signal (¶075; "when a pedestrian is heavily intoxicated or when a pedestrian is wandering, the present state is predicted to be maintained in the future, and consequently, a prediction model indicating that such a present state will be maintained"), and
the processing circuitry widens the movement anticipation area with an increase in the wobble level (¶162; "a direction of line of sight are used as attributes of a pedestrian in the above explanation, but at least one of a direction of a face, a contribution, and an action can be added to the attributes ... probability model of an accident prevention action of a pedestrian is added to the pedestrian action prediction model, and the risk can be corrected ... A selection range of a travel prevention path of the driver's vehicle to avoid an obstacle can be increased, and a disharmony of an interposition into an operation of the driver can be reduced.").
Regarding claim 2, Ono teaches the warning device according to claim 1, and Ono further
teaches wherein the processing circuitry estimates a foot elevation level of the object person when walking based on the second detection signal, and the processing circuitry makes the judgment based on the vicinal situation information, the movement anticipation area, and the foot elevation level of the object person (¶075; "a pedestrian is sleeping or lying on a road, if a pedestrian is heavily intoxicated, or if a pedestrian is wandering, it is predicted that the current state will be maintained in the future" - which conveys information on a foot elevation level).
Regarding claim 4, Ono teaches a warning device (¶166; "a risk of a passage time ... driver's vehicle runs through a lane on which a prevention action cannot be performed ... a downshift control can be performed to increase the number of revolutions of the engine of the driver's vehicle .... by automatically downshifting, the number of revolutions of an engine is increased, and a pedestrian can be informed of the presence of the driver's vehicle" - hence warning device for the pedestrian), comprising:
processing circuitry (Fig 1; control device 24)
to estimate movement information including a position, a moving direction and a moving speed of an object person (¶166; "future risk based on the passage time risk map at the time the pedestrian passes") based on a first detection signal outputted from a vicinity sensor that performs sensing in regard to a vicinity of the object person (Fig. 1, ref. no. 18; ¶068; "Camera 18 includes a front camera ... preferably a front infrared camera ... When the infrared camera is used, a pedestrian can be reliably detected"; ¶096; "Obstacle extracted by image processing of the image area ... a pedestrian crossing in front of the driver's vehicle and a two-wheeler crossing in front of the driver's vehicle are extracted as obstacles ... a deviation of the position of the pedestrian and the like is predicted, and consequently it is possible to predict the movement of the pedestrian and the will of the pedestrian, for example in which direction the pedestrian intends to move or the like");
to estimate a movement anticipation area, as an area through which the object person is anticipated to move, based on the movement information (Fig. 1, ref. nos. 24, 26; ¶074; "action prediction model indicating the probabilities of the actions (i) to (iv) is stored in advance in the action prediction database 26 as the pedestrian action prediction model, and when a pedestrian is detected as an obstacle, a future action of the pedestrian is predicted by the control device based on the pedestrian action prediction model stored in the action prediction database"; ¶096; "a pedestrian crossing in front of the driver's vehicle ... a deviation of the pedestrian's position and the like is predicted, and consequently it is possible to predict the pedestrian's movement and the pedestrian's will, for example in which direction the pedestrian intends to move or the like");
to provide vicinal situation information including a position of an obstacle (vicinal situation provision unit; ¶065; "a group of sensors provided on the vehicle as an external environment detecting means for detecting a state of an external environment"), based on at least one of the first detection signal (see ¶067; "Detecting the state of an external environment are provided: a camera 18 for capturing the front area, the side area and the rear area of the driver's vehicle, a laser radar 20 for detecting an obstacle in front of the driver's vehicle") and previously acquired map information (see Fig. 1, ref. no. 34; ¶093; "detected by the group of sensors detecting the surrounding condition, a surrounding map including a road shape and obstacles is generated by using map information stored in the map database 34");
to estimate condition of the object person based on a second detection signal outputted from a person sensor that senses at least one of the condition and voice of the object person (¶ 074-¶075; "When a pedestrian is sleeping or lying on a road, when a pedestrian is heavily intoxicated, or when a pedestrian is wandering around, it is predicted that the present state will be maintained in the future, and consequently, a prediction model indicating that such a present state will be maintained is also stored in the action prediction database 26."); and
to make a judgment (risk estimation unit; ¶013; "a risk map generating unit for generating a position and type of an obstacle based on the external environment detected by the external environment detecting unit and generating a risk map to predict a current risk level based on the position and type of the obstacle, an action prediction database in which data including data indicating a probability of the presence of an obstacle for predicting a behavior of an obstacle") on whether the obstacle is a warning object or not (¶166; "if an obstacle is a pedestrian" - thus judgment), based on the vicinal situation information, the movement anticipation area, and the condition of the object person (¶166; "When an obstacle is a pedestrian, a passage time risk map indicating a risk of a passage time at which the driver's vehicle passes is generated based on the passage time at which the driver's vehicle passes through a lane on which a prevention action cannot be performed and based on the future risk map, and if the future risk based on the passage time risk map at the time at which the pedestrian passes" in conjunction with ¶075; "if a pedestrian is heavily intoxicated, [...] the current condition is predicted to be maintained in the future"), and to output a warning signal when the obstacle is a warning object (¶166; "a downshift control ... to increase the number of revolutions of the engine of the driver's vehicle ... by automatically downshifting, the number of revolutions of an engine is increased, and a pedestrian can be informed of the presence of the driver's vehicle. The control to increase the engine noise is stopped when the future risk becomes smaller than the predetermined value, or when the driver's vehicle passes the pedestrian" - thus issuing a warning signal to the pedestrian; ¶162; "an automatic steering operation, an automatic braking operation and an automatic driving operation are fully controlled based on the estimated risk, and an obstacle is avoided. A selection range of a driving prevention path of the driver's vehicle to avoid an obstacle can be enlarged" in conjunction with ¶165; "During automatic steering operation, in order to prevent surrounding vehicles from being secondarily damaged when automatic steering operation prevention and automatic brake prevention (direct braking) are performed, an alarm light may illuminate (for a vehicle following behind), a high beam light may illuminate (for ahead), and a car horn may sound simultaneously when the driver is informed of the above fact, or independently when the driver is informed of the above fact" - to both driver and surroundings), wherein
the processing circuitry adjusts the movement anticipation area based on the condition of the object person,
the processing circuitry estimates a fixation point indicating a position at which the object person is gazing, based on the second detection signal (¶116; “the direction of the line of sight is checked, and the probability of the presence of the obstacle after the predetermined time is corrected based on the result of the check”; & ¶162, "a direction of the line of sight is used as attributes of a pedestrian”), and
the processing circuitry narrows the movement anticipation area with a decrease in a degree of spreading of distribution of the fixation point (¶162; "a direction of line of sight are used as attributes of a pedestrian ... at least one of a direction of a face, a contribution and an action can be added to the attributes ... a probability model of a pedestrian's accident prevention action is added to the pedestrian action prediction model, and the risk can be corrected" in conjunction with ¶168; "If the expectation accuracy of the risk is improved by detecting a line of sight of the pedestrian and correcting the future risk map, unnecessary motor noise increase control can be avoided." - consequently a risk reduction, as the direction of movement corresponds to the direction of view, which corresponds to a decrease in the degree of spreading of the distribution).
Regarding claim 5, Ono teaches the warning device according to claim 4, and Ono further teaches wherein the processing circuitry regards the obstacle overlapping with the fixation point as not being a warning object when making the judgment (¶116; "the direction of the line of sight is checked, and the probability of the presence of the obstacle after the predetermined time is corrected based on the result of the check").
Regarding claim 6, Ono teaches the warning device according to claim 1, and Ono further teaches wherein the processing circuitry estimates a fixation point indicating a position at which the object person is gazing, based on the second detection signal, and the processing circuitry regards the obstacle overlapping with the fixation point as not being a warning object when making the judgment (¶116; "the direction of the line of sight is checked, and the probability of the presence of the obstacle after the predetermined time is corrected based on the result of the check"; ¶162; "a direction of the line of sight is used as attributes of a pedestrian ... at least one of a direction of a face, a contribution and an action can be added to the attributes. In this example, a probability model of a pedestrian's accident prevention action is added to the pedestrian action prediction model, and the risk can be corrected" in conjunction with ¶168; "If the expectation accuracy of the risk is improved by detecting a line of sight of the pedestrian and correcting the future risk map, unnecessary motor noise increase control can be avoided." - which corresponds to an assessment as not a warning object).
Regarding claim 10, Ono teaches the warning device according to claim 1, and Ono further teaches wherein the processing circuitry estimates a free space representing a region in which the object person can move, based on the first detection signal, and adjusts the movement anticipation area based on the free space (¶168; "If the expectation accuracy of the risk is improved by detecting a line of sight of the pedestrian and correcting the future risk map, unnecessary motor noise increase control can be avoided." - consequently a free space in which the person can move without a warning being issued ... referring back to at least claims 1 and 4, respectively).
Regarding claim 12, Ono teaches the warning system according to claim 11, and Ono further teaches wherein the person sensor includes a wearable sensor attached to the object person (see claim 1), and the vicinity sensor includes a camera (also see claim 1).
Claim 3 is rejected under 35 U.S.C. 102(a)(1) as being anticipated by Song (KR 20160015463).
Regarding claim 3, Song teaches a warning device (¶001; "fall prevention device") comprising:
processing circuitry (¶007; control unit determines whether the wearer walks)
to estimate movement information including a position, a moving direction and a moving speed of an object person based on a first detection signal outputted from a vicinity sensor that performs sensing in regard to a vicinity of the object person (Fig.1,2, ref. no. 20; ¶030; "an angular velocity measurement unit (20) which is disposed on the waist and measures an angular velocity”, ¶048 "control unit (60) determines ... abnormal movement ... a normal motion");
to estimate a movement anticipation area, as an area through which the object person is anticipated to move, based on the movement information (¶012; "the control unit .. determines whether the foot is caught by the obstacle from the distance to the obstacle and the foot height”);
to provide vicinal situation information including a position of an obstacle, based on at least one of the first detection signal and previously acquired map information (Fig. 1, 2, ref. no. 30; ¶030; "a distance measurement unit (30) which is disposed on the foot (5) of the wearer (1)"; ¶036; "distance measuring unit (30) is a measuring device for measuring a distance to a specific target ... the distance from the foot (5) to the obstacle (7) in front and the height of the obstacle (7) can be determined");
to estimate condition of the object person based on a second detection signal outputted from a person sensor that senses at least one of the condition and voice of the object person (¶048; "the control unit (60) determines that a fall has occurred when the waist height is lowered due to abnormal movement, rather than in a normal motion. In this case, when the abnormal movement is determined through the data, the abnormal movement means a case in which the abnormal movement is moved at a fast operation speed. In such abnormal movement, changes in acceleration and slope are large"); and
to make a judgment on whether the obstacle is a warning object or not based on the vicinal situation information, the movement anticipation area, and the condition of the object person (Fig.1, ref. no. 60; ¶ 040; "control unit (60) compares the measured acceleration, the angular velocity, the distance to the obstacle (7), the foot height, the data value obtained from the pressure, and a preset reference value to predict the likelihood of fall"), and to output a warning signal when the obstacle is a warning object (Fig.1, Bzz.70; ¶041; "Stimulation signal unit (70) may include a vibration motor to give vibration stimulation or apply an electrical stimulus including an electrode. Here, the stimulation signal is applied to a direction in which the fall is expected. Therefore, the stimulation signal unit (70) notifies the wearer (1) of a fall possibility and allows the wearer (1) to act autonomously in response to the fall”),
wherein the processing circuitry adjusts the movement anticipation area based on the condition of the object person (¶044; "the control unit (60) compares the preset reference value with the data value, and predicts that there is a fall possibility when the data value exceeds the reference value. In this case, the stimulation signal unit (70) applies a stimulation signal to the wearer (1) in a direction in which the fall is expected" in conjunction with ¶041, as before),
the processing circuitry estimates a foot elevation level (walking estimation unit that estimates a wobble level representing magnitude of a wobble) of the object person when walking based on the second detection signal (¶041; "Stimulation signal unit (70) may include a vibration motor to give vibration stimulation or apply an electrical stimulus including an electrode. Here, the stimulation signal is applied to a direction in which the fall is expected. Therefore, the stimulation signal unit (70) notifies the wearer (1) of a fall possibility and allows the wearer (1) to act autonomously in response to the fall.”), and
the processing circuitry makes the judgment based on the vicinal situation information, the movement anticipation area, and the foot elevation level of the object person (¶040; in particular "control unit (60) compares the measured acceleration, the angular velocity, the distance to the obstacle (7), the foot height, the data value obtained from the pressure, and a preset reference value to predict the likelihood of fall. Here, the data value includes an acceleration, an angular velocity, a distance to the obstacle (7), a foot height, a pressure value as well as a slope of the wearer (1) obtained by this value, a slope of the floor surface, a height of the obstacle (7), an area in which the foot (5) comes in contact with the foot (5), and a vertical drag force").
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Ono et al. (DE-112005003266) in view of Boccuccia (WO 2018017060).
Regarding claim 7, Ono teaches a warning device (see claim 1) comprising:
processing circuitry (see claim 1)
to estimate movement information including a position, a moving direction and a moving speed of an object person based on a first detection signal outputted from a vicinity sensor that performs sensing in regard to a vicinity of the object person (see claim 1);
to estimate a movement anticipation area, as an area through which the object person is anticipated to move, based on the movement information (see claim 1);
to provide vicinal situation information including a position of an obstacle, based on at least one of the first detection signal and previously acquired map information (see claim 1);
to estimate condition of the object person based on a second detection signal outputted from a person sensor that senses at least one of the condition and voice of the object person (see claim 1); and
to make a judgment on whether the obstacle is a warning object or not based on the vicinal situation information, the movement anticipation area, and the condition of the object person, and to output a warning signal when the obstacle is a warning object (see claim 1), wherein
the processing circuitry adjusts the movement anticipation area based on the condition of the object person (see claim 1),
the second detection signal includes a voice signal based on the voice of the object person (see claim 1).
Ono mentions the recognition of risk situations by/for persons in the vicinity of a vehicle, but is silent on the remainder of the claim. However, one of ordinary skill in the art would be aware of other risks such as threats, robbery, pursuit, etc., as further taught by Boccuccia from an analogous art. Boccuccia teaches a warning device (¶033; "autonomous vehicle 202 may sound the horn or activate another external alarm to alert bystanders to potential criminal activity or a crime in progress") and sensors (¶021; "detect situations of distress and/or danger involving one or more individuals 208"), a person position detection unit (¶025; in particular "Camera sensors 212d may be equipped to gather images of human facial expressions"), a movement anticipation unit (¶027; in particular "Motion sensors 212f may gather motion data to detect the presence of one or more individuals 208 in need of assistance, or other persons or objects in the surrounding environment"; ¶030; "The microphone sensors 212e may detect yelling, crying, and the sound of a gunshot, while the motion sensors 212f may detect continuous motion consistent with the data gathered by the other sensors 212."), a vicinal situation provision unit (¶025; in particular "Camera sensors 212d may be equipped to gather ... changes in the surrounding environment to detect unusual circumstances or behavior"), and the concept wherein processing circuitry estimates (¶025; in particular "the vehicle computing system 216 to detect human emotions and behavior indicating potential danger or vulnerability"),
the second detection signal includes a voice signal based on the voice of the object person (¶026; "Microphone sensors 212e may be configured to collect audio data of environmental sounds including, for example, human voices and sounds of distress like yelling, screaming, or crying."),
an emotion level indicating a degree of excitement of the object person, based on the voice signal (¶025; "detect human emotions and behavior indicating potential danger or vulnerability such as fear ... camera sensors 212d may also detect objective evidence of danger, such as personal injury, the presence of a weapon" in conjunction with ¶026; "human voices and sounds of distress like yelling, screaming, or crying"; see also ¶026; "yelling, screaming, or crying" as to the emotion level exceeding a previously set threshold level), and
when the emotion level exceeds a previously set threshold level, the processing circuitry regards all obstacles in the movement anticipation area as warning objects (¶029; "sensors 212 may gather data indicating a victim 300 being pursued by one or more aggressors 302. For example, the radar/lidar sensors 212a may detect the presence of the victim 300 and aggressors 302"). Therefore, it would have been obvious to one having ordinary skill in the art at the time of filing the invention to combine Ono's warning device with the concepts wherein the second detection signal includes a voice signal based on the voice of the object person, the processing circuitry estimates an emotion level indicating a degree of excitement of the object person, based on the voice signal, and
when the emotion level exceeds a previously set threshold level, the processing circuitry regards all obstacles in the movement anticipation area as warning objects, as taught by Boccuccia, in order to enhance the device with detection of human voices and sounds of distress for the benefit of users.
Regarding claim 8, Ono teaches the warning device according to claim 5, but Ono is silent on the remainder of claim 8. Boccuccia teaches wherein the second detection signal includes a voice signal based on the voice of the object person, the processing circuitry estimates an emotion level indicating a degree of excitement of the object person, based on the voice signal, and when the emotion level exceeds a previously set threshold level, the processing circuitry regards all obstacles in the movement anticipation area, including the obstacle overlapping with the fixation point, as warning objects (¶025, ¶026, ¶029; "detect human emotions and behavior indicating potential danger or vulnerability such as fear"; "Microphone sensors 212e may be configured to collect audio data of environmental sounds including, for example, human voices and sounds of distress like yelling, screaming, or crying." - thus voice and emotion; "sensors 212 may gather data indicating a victim 300 being pursued by one or more aggressors 302 ... sensors 212a may further gather data indicating that the victim 300 has proportions and height characteristics of a woman, while the aggressors 302 have proportions and height characteristics consistent with men. In other embodiments, the aggressors 302 may be one or more non-human predators" - hence consideration as warning objects).
Therefore, it would have been obvious to one having ordinary skill in the art at the time of filing the invention to combine Ono's warning device with the concepts wherein the second detection signal includes a voice signal based on the voice of the object person, the processing circuitry estimates an emotion level indicating a degree of excitement of the object person, based on the voice signal, and when the emotion level exceeds a previously set threshold level, the processing circuitry regards all obstacles in the movement anticipation area, including the obstacle overlapping with the fixation point, as warning objects, as taught by Boccuccia, in order to enhance the device with detection of human voices and sounds of distress for the benefit of users.
Regarding claim 9, Ono teaches the warning device according to claim 1, but Ono is silent on the remainder of claim 9. Boccuccia teaches wherein the processing circuitry detects a noise signal in the second detection signal, and the processing circuitry determines the warning objects based on a noise level represented by the noise signal, the vicinal situation information, the movement anticipation area, and the condition of the object person (Fig. 3; ¶029-¶030; "The thermal sensors 212c may indicate or confirm the presence of persons in the surrounding environment, and may gather additional data showing that the victim 300 and aggressors 302 are approaching the autonomous vehicle 202. Vibration sensors 212b may detect vibrations consistent with a gunshot. Camera sensors 212d may detect images consistent with the data gathered by the other sensors 212. The camera sensors 212d may also show a weapon carried by at least one of the aggressors 302. The microphone sensors 212e may detect yelling, crying, and the sound of a gunshot, while the motion sensors 212f may detect continuous motion consistent with the data gathered by the other sensors 212." in conjunction with ¶029; "sensors 212 may gather data indicating a victim 300 being pursued by one or more aggressors 302"). Therefore, it would have been obvious to one having ordinary skill in the art at the time of filing the invention to combine Ono's warning device with the concepts wherein the processing circuitry detects a noise signal in the second detection signal, and the processing circuitry determines the warning objects based on a noise level represented by the noise signal, the vicinal situation information, the movement anticipation area, and the condition of the object person, as taught by Boccuccia, in order to enhance the device with detection of human voices and sounds of distress for the benefit of users.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MANCIL H LITTLEJOHN JR whose telephone number is (571)270-3718. The examiner can normally be reached M-F 8:30-5 (CST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Quan-Zhen Wang can be reached at (571) 272-3114. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MANCIL LITTLEJOHN JR/Examiner, Art Unit 2685
/QUAN ZHEN WANG/ Supervisory Patent Examiner, Art Unit 2685