DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Status
Claims 1-11 are pending in the application filed 11/08/2023 and have been examined.
Priority
Acknowledgement is made of Applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent application JP2021-079992, filed on 05/10/2021. Acknowledgement is additionally made that the present application is a national stage entry of PCT/JP2022/019602, with an international filing date of 05/06/2022.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/08/2023 and 06/02/2025 have been considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier, as explained in MPEP §2181, subsection I (note that the list of generic placeholders below is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph):
A. The Claim Limitation Uses the Term “Means” or “Step” or a Generic Placeholder (A Term That Is Simply A Substitute for “Means”)
With respect to the first prong of this analysis, a claim element that does not include the term “means” or “step” triggers a rebuttable presumption that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, does not apply. When the claim limitation does not use the term “means,” examiners should determine whether the presumption that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6, does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term “means”). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6: “mechanism for,” “module for,” “device for,” “unit for,” “component for,” “element for,” “member for,” “apparatus for,” “machine for,” or “system for.” Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Massachusetts Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886-87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). This list is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6.
Such claim limitations are:
An observation device comprising “an observation unit configured to observe a predetermined area” (see [0010]: “The observation device 10 is a roadside unit or a monitoring camera device”) in independent claim 1 and, by dependency, in claims 2-11.
Because this claim limitation is being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it is being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this limitation interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation recites sufficient structure to perform the claimed function so as to avoid it being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 3-6 and 8-11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
Claims 3-6, 8-9, and 11 recite limitations regarding the “categories”. Claim 3 introduces the “categories”, reciting “identify the observation target by classifying the observation target into one of a plurality of types, and the storage is configured to store the visibility distance associated with a respective one of categories based on the types”. Claims 4-5 then describe “classifying” targets based on speed or temperature categories; however, claims 6, 8-9, and 11 describe multiple categories corresponding to a single target. It is unclear what the relationship or difference between the “categories” and the “types” is, and how targets are being classified. Clarification is required.
Claim 10 recites the limitation “when the observation target is not present in an observation result, the controller is configured to create and include information pertaining to the visibility distance in the notification information on a basis of the type of the observation target and the visibility distance associated with the relevant type”. Claim 10 depends from claim 2, which recites “wherein the controller is configured to identify a type of the observation target on a basis of an observation result from the observation unit”. It is unclear how a type of the observation target can be identified and utilized in the notification information if the observation target is not present in the observation result. Clarification is required.
Claims 8-9 recite the limitation “the first condition”. There is insufficient antecedent basis for this limitation in these claims.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-4, 6, and 10 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Mantyjarvi (US20210142526A1).
Regarding claim 1, Mantyjarvi teaches an observation device comprising: an observation unit configured to observe a predetermined area ([0019] Some embodiments of the apparatus further may include: a set of sensors; a blind area prediction module configured to identify potential blind areas; a driving mode selection module configured to select a driving mode responsive to a comparison of the potential blind areas with a visibility area threshold; a communication module configured to receive vehicle-to-vehicle (V2V) messages; and an augmented reality (AR) display device. [0028] FIG. 2 is a picture illustration depicting an example illustration of an autonomous vehicle sensing a surrounding environment according to some embodiments. [0077] Modern self-driving cars may use Bayesian simultaneous localization and mapping (SLAM) algorithms, which fuse data from multiple sensors and an off-line map into current location estimates and map updates. SLAM may be combined with detection and tracking of other moving objects (DATMO), which handles the recognition of objects such as cars and pedestrians. Simpler systems may use roadside real-time locating system (RTLS) beacon systems to aid localization. [0071] The peripherals 138 may include one or more sensors, the sensors may be one or more of a camera);
storage configured to store a visibility distance, the visibility distance being a maximum distance at which an observation target is identifiable within the predetermined area ([0088] Typical detection ranges, for example, may be 150 m for vehicles or up to 50 m for pedestrians, although other ranges and distances may be used. Detection ranges may vary according to the size and reflectivity of the object and current environmental factors (e.g., humidity, rain, fog, snow, or hail, as just some examples). [0104] the limited sensor visibility prediction area module 502 computes a level of reduction of effective sensor range due to e.g., road geometry, sensor placement, and other visibility factors like the weather and lighting conditions. [0134] A visibility requirements determination may use the information about oncoming traffic and other information related to ability to maneuver (which may have been gathered previously). In some embodiments, determining minimum visibility requirements may include receiving minimum visibility requirements (or minimum sensor visibility requirements) from another networked entity and retrieving minimum visibility requirements from a storage device (e.g., a database). In some embodiments, determining may include, e.g., calculating locally minimum visibility requirements prior to travel and/or automatically calculating (and updating) during travel);
and a controller configured to identify the distance to the position of the observation target on a basis of an observation result from the observation unit, and use the identified distance as a basis for updating the visibility distance stored in the storage ([0077] Autonomous vehicles (which may include cars) generally may have control systems that are capable of analyzing sensory data to distinguish between different cars, motorcycles, bikes, and pedestrians on the road, which is very useful in planning a safe path to a desired destination. Modern self-driving cars may use Bayesian simultaneous localization and mapping (SLAM) algorithms, which fuse data from multiple sensors and an off-line map into current location estimates and map updates. SLAM may be combined with detection and tracking of other moving objects (DATMO), which handles the recognition of objects such as cars and pedestrians. Simpler systems may use roadside real-time locating system (RTLS) beacon systems to aid localization. [0103] The sensor and communication module 508 may send 510 data comprising a present sensor field-of-view, effective range, vehicle location, vehicle speed, and any local dynamic map updates to the limited sensor visibility area module 502. [0104] Furthermore, the limited sensor visibility prediction area module 502 computes a level of reduction of effective sensor range due to e.g., road geometry, sensor placement, and other visibility factors like the weather and lighting conditions. [0110] The limited sensor visibility area prediction module 602 may continually evaluate 620 blind areas along the route. The evaluation may take into account a real-time stream of sensor data from the sensor information module 608 and the evaluation may be carried out continually in order to keep the blind area prediction up to date).
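For illustration only, the controller logic mapped to claim 1 above may be sketched as follows. This is a minimal sketch; all names and values are hypothetical and are drawn neither from Mantyjarvi nor from the claims as filed.

```python
# Minimal illustrative sketch (hypothetical names): a controller that uses the
# distance at which a target was actually identified as the basis for
# updating a stored visibility distance, per the claim 1 mapping above.

class VisibilityStore:
    """Holds the current visibility distance (the maximum distance at which
    an observation target is identifiable within the predetermined area)."""
    def __init__(self, initial_distance_m: float):
        self.visibility_distance_m = initial_distance_m

def update_visibility(store: VisibilityStore, identified_distance_m: float) -> None:
    # The distance identified from the latest observation result becomes the
    # new stored visibility distance.
    store.visibility_distance_m = identified_distance_m

store = VisibilityStore(initial_distance_m=150.0)      # hypothetical nominal range
update_visibility(store, identified_distance_m=120.0)  # e.g., fog shortens the range
print(store.visibility_distance_m)                     # 120.0
```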
Regarding claim 2, Mantyjarvi teaches the device of claim 1. Mantyjarvi further teaches wherein the controller is configured to identify a type of the observation target on a basis of an observation result from the observation unit ([0077] Autonomous vehicles (which may include cars) generally may have control systems that are capable of analyzing sensory data to distinguish between different cars, motorcycles, bikes, and pedestrians on the road, which is very useful in planning a safe path to a desired destination. Modern self-driving cars may use Bayesian simultaneous localization and mapping (SLAM) algorithms, which fuse data from multiple sensors and an off-line map into current location estimates and map updates. SLAM may be combined with detection and tracking of other moving objects (DATMO), which handles the recognition of objects such as cars and pedestrians. Simpler systems may use roadside real-time locating system (RTLS) beacon systems to aid localization).
Regarding claim 3, Mantyjarvi teaches the device of claim 2. Mantyjarvi further teaches wherein the controller is configured to identify the observation target by classifying the observation target into one of a plurality of types ([0077] Autonomous vehicles (which may include cars) generally may have control systems that are capable of analyzing sensory data to distinguish between different cars, motorcycles, bikes, and pedestrians on the road, which is very useful in planning a safe path to a desired destination), and the storage is configured to store the visibility distance associated with a respective one of categories based on the types ([0088] Typical detection ranges, for example, may be 150 m for vehicles or up to 50 m for pedestrians, although other ranges and distances may be used. Detection ranges may vary according to the size and reflectivity of the object and current environmental factors (e.g., humidity, rain, fog, snow, or hail, as just some examples). [0104] the limited sensor visibility prediction area module 502 computes a level of reduction of effective sensor range due to e.g., road geometry, sensor placement, and other visibility factors like the weather and lighting conditions. [0134] A visibility requirements determination may use the information about oncoming traffic and other information related to ability to maneuver (which may have been gathered previously). In some embodiments, determining minimum visibility requirements may include receiving minimum visibility requirements (or minimum sensor visibility requirements) from another networked entity and retrieving minimum visibility requirements from a storage device (e.g., a database). In some embodiments, determining may include, e.g., calculating locally minimum visibility requirements prior to travel and/or automatically calculating (and updating) during travel).
Regarding claim 4, Mantyjarvi teaches the device of claim 3. Mantyjarvi further teaches wherein the categories include speed categories classified on a basis of a speed anticipated for the observation target ([0092] A driver of an AV may benefit from, e.g., knowing if a vehicle's sensors are not able to properly observe a foreseeable traffic situation. In dangerous situations, for example, the driver may not know that the range or field-of-view may be reduced in one or several directions—or that range or FoV is reduced only for specific types of objects, such as (fast) approaching vehicles (which may include cars, trucks, and motorcycles, for example). Slower-approaching pedestrians may be detected with shorter detection distances).
Regarding claim 6, Mantyjarvi teaches the device of claim 3. Mantyjarvi further teaches wherein when the visibility distance updated for a first category of an identified type of the observation target is less than a visibility distance associated with the first category stored in the storage, the controller is configured to execute a common update process to also update a visibility distance associated with a second category different from the first category, the visibility distance associated with the second category being reduced to the visibility distance updated for the first category ([0123] For some embodiments, at each step, an analysis may calculate areas that are viewable by a vehicle's sensors at that location and orientation, using an HD 3D map, and recorded locations, orientations, FoVs, and ranges of the vehicle's sensors. FIG. 10 shows one example of a visualization of a single step of such an analysis. For some embodiments, at each step, viewable areas may be calculated for one or more object types that the sensors are able to recognize. The calculation may take into account the current lighting and weather conditions, e.g., by referencing a manufacturer specified look-up table indicating lab-measured sensor performance in various conditions. [0148] FIG. 14 depicts an example of a style of AR visualization of sensor range and field-of-view on a decline, in accordance with at least one embodiment. FIG. 14 depicts an environment with a decline, followed by a flat area, and followed by another decline that may impact object detection by a vehicle's sensors. The sensors may be oriented along the decline, and the sensors' FoVs may max out in the vertical direction. In the flat section, the sensors may be unable to detect objects past the intersection because the road section following the intersection declines steeply and the sensors may overshoot the road. In FIG. 14's augmented view 1400 a blind area AR visualization 1404 and a region of trusted sensor coverage 1402 are shown. The AVs current sensor coverage is indicated by the horizontal slashes 1402 (which may be shown in green in some embodiments). Based on high-definition map data, the AV may determine that due to the steep decline, the AV's sensors are unable to detect objects past the flat section in the forthcoming intersection. The potential blind area may be indicated (or highlighted) to the driver by overlaying (or projecting) vertical slashes (in some embodiments, red may be used instead) over the blind areas of the AR visualization of a map).
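For illustration only, the “common update process” recited in claim 6 may be sketched as follows. This is a minimal sketch; the category names and distances are hypothetical and do not reproduce Mantyjarvi’s disclosed computation.

```python
# Minimal illustrative sketch (hypothetical categories and values): claim 6's
# common update process, in which a visibility distance reduced for a first
# category is also propagated to the other stored categories.

visibility_m = {"vehicle": 150.0, "pedestrian": 50.0}

def common_update(store: dict, first_category: str, updated_m: float) -> None:
    if updated_m < store[first_category]:
        for category in store:
            # Reduce any category stored above the newly updated distance.
            store[category] = min(store[category], updated_m)

common_update(visibility_m, "vehicle", 40.0)  # fog: vehicles identifiable only to 40 m
print(visibility_m)  # {'vehicle': 40.0, 'pedestrian': 40.0}
```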
Regarding claim 10, Mantyjarvi teaches the device of claim 2. Mantyjarvi further teaches wherein the controller is configured to generate notification information including an indication of the presence or absence of the observation target ([0135] A warning may be displayed to the driver in response to the AV detecting a nearby vehicle in a potential blind area), and when the observation target is not present in an observation result, the controller is configured to create and include information pertaining to the visibility distance in the notification information on a basis of the type of the observation target and the visibility distance associated with the relevant type ([0082] In a real driving environment, the driver may see areas or situations in which sensors may be unable to detect other traffic users/objects or may produce unreliable measurements to warrant automated driving, and an advanced driver-assistance systems (ADAS) may be engaged to assist the driver. For some embodiments, if a situation occurs in which vehicle sensors are measuring degraded measurements (which, e.g., may affect an ADAS), this situation may be communicated to the driver (such as by displaying a warning message and/or playing a sound, for example)).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Mantyjarvi in view of Xue (US20220044023A1).
Regarding claim 5, Mantyjarvi teaches the device of claim 3. Mantyjarvi further teaches wherein the observation unit is configured to detect a temperature of the target ([0071] The peripherals 138 may include one or more sensors, the sensors may be one or more of a camera, a RADAR, a LIDAR, a gyroscope, an accelerometer, a hall effect sensor, a magnetometer, an orientation sensor, a proximity sensor, a temperature sensor, a time sensor; a geolocation sensor; an altimeter, a light sensor, a touch sensor, a magnetometer, a barometer, a gesture sensor, a biometric sensor, and/or a humidity sensor).
Mantyjarvi does not explicitly teach that the categories include temperature categories classified on a basis of a temperature anticipated for the observation target.
Xue, in the same field of endeavor of vehicle image analysis, teaches the categories include temperature categories classified on a basis of a temperature anticipated for the observation target (See Fig. 4. [0006] The invention concerns an apparatus comprising an interface and a processor. The interface may be configured to receive pixel data generated by a capture device and a temperature measurement generated by a thermal sensor. The processor may be configured to receive the pixel data and the temperature measurement from the interface, generate video frames in response to the pixel data, perform computer vision operations on the video frames to detect objects, perform a classification of the objects detected based on characteristics of the objects, detect a temperature anomaly in response to the temperature measurement and the classification, and generate a control signal in response to the temperature anomaly. The control signal may provide a warning based on the temperature anomaly. The classification may provide a normal temperature range for the objects detected).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Mantyjarvi with the teachings of Xue to include temperature categories classified on a basis of a temperature anticipated for the observation target because “each of the object types 250a-250n may have similar but different normal operating temperatures” [Xue 0133].
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Mantyjarvi in view of Fujita (WO2020012212A1).
Regarding claim 7, Mantyjarvi teaches the device of claim 6. Fujita, in the same field of endeavor of vehicle object detection, teaches wherein the controller is configured to execute the common update process upon meeting a first condition regarding a decrease in a number of observation targets passing through the predetermined area ([pg. 2 para. 9] The radar 12 is a distance measuring sensor that realizes, as functions required for automatic driving, a function of detecting the presence of an object around the vehicle and a function of detecting the distance to an object around the vehicle. [pg. 22 para. 3] The predetermined distance may be changed based on the surrounding traffic volume from the own vehicle surrounding information. For example, the predetermined distance is set shorter as the traffic volume increases, and the predetermined distance is set longer as the traffic volume decreases).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Mantyjarvi with the teachings of Fujita to execute the common update process upon a decrease in the number of observation targets because “As a result, when the own vehicle V enters the roundabout RA (annular road CR), it is possible to provide a driving support method for transmitting an action plan of the own vehicle V to other vehicles in the vicinity of the own vehicle according to a direction instruction” [Fujita pg. 19 para. 4].
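For illustration only, Fujita’s teaching that the predetermined distance is set shorter as traffic volume increases and longer as it decreases may be sketched as follows. The specific scaling function below is a hypothetical assumption for illustration, not Fujita’s disclosed computation.

```python
# Minimal illustrative sketch: a predetermined distance that shrinks as the
# surrounding traffic volume grows, per Fujita [pg. 22 para. 3]. The scaling
# form is hypothetical.

def predetermined_distance_m(base_m: float, traffic_volume: int) -> float:
    # Monotonically decreasing in traffic volume; hypothetical form.
    return base_m / (1.0 + 0.1 * traffic_volume)

print(predetermined_distance_m(100.0, 0))   # 100.0 m when traffic is absent
print(predetermined_distance_m(100.0, 10))  # 50.0 m in heavier traffic
```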
Claims 8-9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Mantyjarvi in view of Murayama (US20150310313A1).
Regarding claim 8, Mantyjarvi teaches the device of claim 6. Murayama, in the same field of endeavor of vehicle visibility processing, teaches wherein when the first condition is met (within judgement threshold) after the common update process and the visibility distance updated for the category of the type of the observation target identified in a new observation result (calculated detection distance of a specific landmark) is greater than the visibility distance associated with the relevant category stored in the storage, the controller is configured to update the visibility distance associated with the relevant category, and does not update the visibility distance associated with a category other than the relevant category ([0053] In the detection distance record unit 22 in the information storage unit 2b, a distance from a vehicle position where a landmark is detected for the first time to the landmark is recorded as a detection history used for visibility estimation. The distance is used as a reference detection distance (detection distance in the past) being a comparison target for a detection distance in the following driving. The reference detection distance is calculated as follows. When obtaining an image recognition result of a landmark from the image recognition unit 1 for the first time, the detection distance record unit 22 obtains vehicle position information as well as a position where the detected landmark is actually situated from the landmark position record unit 21 and comparers the information with the position so as to calculate a distance from the vehicle position to the landmark. For example, if the image recognition unit 1 detects a road sign situated in a vehicle traveling direction and outputs an image analysis result of “speed-limit sign” and “40 km/h”, the detection distance record unit 22 obtains position information of the road sign from the landmark position record unit 21. By comparing the obtained position of the road sign with the current vehicle position, the detection distance record unit 22 calculates a distance, e.g. “25 m”. That is, the fact that the vehicle can detect the road sign 25 m before the sign is recorded. [0055] On receiving the image analysis result from the image recognition unit 1, the visibility judgment unit 3b receives the vehicle position information at that time. The visibility judgment unit 3b calculates a distance from the vehicle to the landmark by using the inputted vehicle position information and the landmark position information. That is, a detection distance showing how short a distance is which is needed to detect the landmark this time is calculated. By comparing the calculated detection distance with the reference detection distance obtained from the information storage unit 2b, it is determined whether the former is shorter than a reference detection distance recorded in the past, i.e. whether or not the detection is made at a closer distance from the landmark. When making the comparison, a judgment threshold is used similar to the case in Embodiment 1. For example, if the reference detection distance is “25 m”, the detection distance calculated this time is “20 m”, and the threshold is “3 m”, the difference of 5 m between the reference detection distance and the detection distance calculated this time, i.e. a moving distance toward the landmark, exceeds the threshold, and thus it is determined as “visibility decreased”. 
On the other hand, if the detection distance of this time is “23 m”, a moving distance to the landmark of 2 m does not exceed the threshold, and thus the visibility judgment result is determined as “visibility normal”).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Mantyjarvi with the teachings of Murayama to update only the visibility distance associated with the relevant category because “for each type of various landmarks, the detection distance record unit 22 records a distance detected for the first time for each type of the landmarks as the reference detection distance” [Murayama 0059].
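For illustration only, the judgment-threshold comparison of Murayama [0055] may be restated as follows, using the distances given in that paragraph (reference detection distance 25 m, judgment threshold 3 m). This is a minimal sketch of the cited arithmetic only.

```python
# Minimal illustrative sketch restating the arithmetic of Murayama [0055]:
# compare this trip's detection distance against the recorded reference.

def judge_visibility(reference_m: float, detected_m: float, threshold_m: float) -> str:
    # "Visibility decreased" when the landmark is detected sufficiently closer
    # than the recorded reference detection distance.
    if reference_m - detected_m > threshold_m:
        return "visibility decreased"
    return "visibility normal"

print(judge_visibility(25.0, 20.0, 3.0))  # 5 m difference -> "visibility decreased"
print(judge_visibility(25.0, 23.0, 3.0))  # 2 m difference -> "visibility normal"
```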
Regarding claim 9, Mantyjarvi and Murayama teach the device of claim 8. Murayama further teaches wherein when the first condition is not met (the difference exceeds the judgement threshold), the controller is configured to update the visibility distance only for the category of the identified type of observation target (the calculated detection distance of a specific landmark) (see Murayama [0053] and [0055], quoted in the rejection of claim 8 above).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Mantyjarvi with the teachings of Murayama to update only the visibility distance associated with the relevant category because “for each type of various landmarks, the detection distance record unit 22 records a distance detected for the first time for each type of the landmarks as the reference detection distance” [Murayama 0059].
Regarding claim 11, Mantyjarvi teaches the device of claim 10. Murayama teaches wherein when the visibility distance for a certain category is less than or equal to a predetermined value, the controller is configured to not include visibility distance corresponding to the certain category in the notification information ([0057] Note that, while the detection distance when a landmark is detected for the first time is recorded in the detection distance record unit 22 as a reference value in the above-described explanation, the reference detection distance recorded in the detection distance record unit 22 may be updated every time when a landmark is detected. By employing such a configuration, determination whether visibility is better or worse than that at the previous time can be made. Also, the reference detection position may be obtained by averaging a plurality of detection distances. In addition, while a detection distance when visibility is good is recorded, update may not be made when visibility is estimated to be poor. [0036] On the other hand, when visibility is poor, for example, due to fog, etc., detection of a road sign should be made from a closer position than usual).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the device of Mantyjarvi with the teachings of Murayama to not include a visibility distance less than or equal to a threshold in the notification information because the reference detection position information (detection position in the past) “is used as a determination criterion when estimating visibility” [Murayama 0031].
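For illustration only, the filtering step recited in claim 11 may be sketched as follows. This is a minimal sketch; the predetermined value and category names are hypothetical and are not drawn from the cited references.

```python
# Minimal illustrative sketch (hypothetical names and values): claim 11's
# filtering step, omitting from the notification information any category
# whose visibility distance is at or below a predetermined value.

PREDETERMINED_VALUE_M = 10.0  # hypothetical predetermined value

visibility_m = {"vehicle": 40.0, "pedestrian": 8.0}

notification = {c: d for c, d in visibility_m.items() if d > PREDETERMINED_VALUE_M}
print(notification)  # {'vehicle': 40.0}; the pedestrian entry is suppressed
```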
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Yamakawa (US20210316723A1) teaches a device to detect targets and determine observation ranges based on categorized threat levels.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571)272-4077. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Emily Terrell can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JACQUELINE R ZAK/Examiner, Art Unit 2666
/EMILY C TERRELL/Supervisory Patent Examiner, Art Unit 2666