Prosecution Insights
Last updated: April 19, 2026
Application No. 18/709,173

ENVIRONMENT RECOGNITION DEVICE AND PROGRAM

Non-Final OA: §102, §103
Filed
May 10, 2024
Examiner
AZARIAN, SEYED H
Art Unit
2675
Tech Center
2600 — Communications
Assignee
Aisin Corporation
OA Round
1 (Non-Final)
Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 3m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (807 granted / 901 resolved; +27.6% vs TC avg; above average)
Interview Lift: +11.7% (moderate), measured across resolved cases with interview
Avg Prosecution: 2y 3m (typical timeline)
Total Applications: 910 across all art units (9 currently pending)
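The headline figures above can be reproduced from the raw counts on the card. A minimal sketch, assuming the dashboard simply divides grants by resolved cases and excludes pending cases from the denominator (the interview-lift derivation is not shown on the card, so it is not computed here):

```python
# Career allow rate from the counts shown on the card:
granted = 807
resolved = 901
allow_rate = granted / resolved  # fraction of resolved cases that granted

print(f"Career allow rate: {allow_rate:.1%}")  # about 89.6%, displayed as 90%

# Pending cases are excluded from the resolved-case denominator:
total_applications = 910
pending = total_applications - resolved
print(f"Currently pending: {pending}")  # 9, matching the card
```

The 89.6% raw figure rounds to the 90% shown; the small gap is worth remembering when the displayed probability is quoted to clients.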

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 21.5% (-18.5% vs TC avg)
§102: 31.4% (-8.6% vs TC avg)
§112: 13.9% (-26.1% vs TC avg)
Based on career data from 901 resolved cases; Tech Center averages are estimates.
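The "vs TC avg" deltas imply the Tech Center baseline each statute is measured against. A quick sketch, assuming each delta is simply the examiner's rate minus the TC average (the card does not define the underlying metric, so the numbers are treated as opaque rates):

```python
# statute: (examiner_rate_pct, delta_vs_tc_pct), copied from the card above
stats = {
    "101": (17.0, -23.0),
    "103": (21.5, -18.5),
    "102": (31.4, -8.6),
    "112": (13.9, -26.1),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average for this statute
    print(f"\u00a7{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
```

Notably, all four deltas imply the same 40.0% baseline, which is consistent with the note that the Tech Center average is a single estimate rather than a per-statute figure.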

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-2 and 4-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Takemura et al. (U.S. Pub. No. 2015/0334385 A1).

Regarding claim 1, Takemura discloses an environment recognition device comprising: a three-dimensional location estimating part that estimates a three-dimensional location of a lighting device in an environment from an image photographed by a photographing device that is mounted on a mobile unit to photograph an environment around the mobile unit (see abstract; also page 1, paragraph [0001]: the present invention relates to a device for recognizing vehicle-mounted environment of a subject vehicle using a vehicle-mounted camera. Also, page 4, paragraphs [0060-0061]: road surface reflections of late-afternoon sunlight, a car behind, street lamps and the like tend to create scenes where erroneous detection or a lack of detection is caused by multiple applications for lane recognition, vehicle detection, pedestrian detection and the like. Thus, reliability and reflection strength scores of the surrounding environment as to the likelihood of road surface reflection are estimated. Also, concerning whether reflected light is instantaneously entering the processing region, a reflection region is estimated by extracting a high brightness region. 
Particularly, a general three-dimensional position of the headlight of a car behind can be estimated from the direction and height of the headlight. Thus, a reflection countermeasure is implemented by utilizing the three-dimensional position. Whether the road surface is wet and in an easily reflecting status based on the weather status is also considered for masking positions on the map such as shown in FIG. 6 that are not be utilized for image recognition, for example. A weather detection unit 210 detects the weather status, such as rainfall, snowfall, mist, cloudy, fine, etc., and their levels. Based on the detection, parameter adjustment and the like for preventing erroneous detection by applications is implemented at separate timings for each application, or, fail determination is automatically implemented to stop the operation of the application so as to prevent erroneous detection because performance may not be ensured depending on the application); a location information obtaining part that obtains location information of the mobile unit (see above, also page 2, paragraph, [0026] FIG. 1 is a block diagram of a vehicle-mounted environment recognition device 10 according to an embodiment of the present invention. An imaging unit 100 acquires an image from a vehicle-mounted camera (not shown). The image is utilized in an image self-diagnosis unit 200 to detect lens water droplets, lens cloudiness, lens reflection, a low visibility region, contamination, road surface reflection, road surface water film, road surface sludge, light source environment, weather and the like. The result of detection is utilized for determining a status where erroneous detection, lack of detection and the like tends to be caused during multi-application execution, and to then determine a subsequent response method. 
The image self-diagnosis unit 200 may acquire subject vehicle travel information (such as vehicle speed, steering angle, yaw rate, windshield wiper, and outer air temperature) obtained through CAN communication, or information about subject vehicle's GPS position or a road map and the like obtained from a car navigation system, and use them as material for making a decision. Weather information or atmospheric temperature information may be acquired from a server and the like, or a millimeter-wave radar, vehicle-to-vehicle communication, road-to-vehicle communication and the like may be utilized so as to provide information utilized for increasing the accuracy of image diagnosis); a road surface reflection region estimating part that estimates a road surface reflection region in an image photographed by the photographing device at a second time point of the mobile unit, based on a three-dimensional location of a lighting device estimated from an image photographed by the photographing device at a first time point of the mobile unit and based on the location information obtained at the second time point, the road surface reflection region being a region where illuminating light radiated from the lighting device is reflected (see page2, paragraph, [0037] and [0039], not all of the applications are operating in a certain 100 msec period within the 500 msec. In order to utilize the remaining time in the multi-application execution unit 400 and to allow the multi-application execution unit to operate at 100 msec intervals, the detection units in the self-diagnosis unit that can be started in the remaining time are started. 
For example, in the initial 100 msec period (first time), the reflection detection unit, the road surface water film detection unit, the lens water droplets detection unit, and the lens cloudiness detection unit are called; in the next 100 msec period (second time), the low visibility detection unit is called in addition to the reflection detection unit and the road surface water film detection unit, and so on. In this way, image self-diagnosis is implemented properly within the remaining time. According to a water droplets detection technique, if a location is found that stays at substantially the same position for longer than a certain time and where brightness is higher than in surrounding regions even though the background flows farther away because the camera is installed in the vehicle, that location is extracted as having high probability of being water droplets attached to the lens and appearing as white light. Also, page 5, paragraphs, [0069-0070] other than the application-by-application determination, a final determination is implemented in the system control unit 350 based on contamination removal hardware control information or information acquired from a vehicle information unit 360. For example, it is assumed that, when information requesting the implementation of a first suppression mode in the lane recognition fail determination unit 301 is input to the system control unit 350, implementation of contamination removal hardware is determined upon request from the pedestrian detection fail determination unit 303. In this case, the system control unit 350 determines the suppression mode of the operation mode of an application at the time of starting of the hardware or in a certain period before and after the starting. For example, during the starting of the contamination removal hardware, lane recognition is set to a second suppression mode so as to suppress erroneous detection. When lens contamination is being removed, the lens state is changed. 
Thus, if the image from which contamination is being removed is considered in the same way as the object of recognition during normal time, the probability of erroneous detection will be increased. Accordingly, during the starting of the contamination removal hardware, it may be preferable to utilize the second suppression mode. If there is a request from another application during the starting of hardware, such adjustment is implemented in the fail computing unit 310. the score may be computed in terms of the ratio of what percentage is occupied by the contamination in the world coordinates in the vehicle detection processing region. In this way, rather than being an index of the contamination ratio on the screen, the score may be expressed as an index of invisibility of a vehicle located at a certain position on the road surface on the world coordinates. Thus, a threshold value can be obtained that facilitates the consideration of an appropriate timing for contamination removal or fail. Finally, page 8, paragraphs, [0098] and [0104], In the case of reflection detection, when the computation of the reflection region of FIG. 13 is implemented, the scores on the map indicate higher brightness than the surrounding areas. Thus, in vehicle detection, it is desirable to include even a little brightness as the object of processing so that a vehicle body feature amount around the headlight can be acquired. Thus, scores of values not more than a certain constant value are eliminated from the object of counting when the region computation is implemented. In this way, the influence of reflection on vehicle detection can be stably determined, whereby transition to the suppression mode and the like can be made at appropriate timing. The fail determination table may be dynamically modified in accordance with the scene so that the fail determination table can be utilized for the suppression mode, contamination removal, or fail determination at even more appropriate timing. 
With regard to water droplets detection, based on the assumption that water droplets do not become attached to the camera lens surface immediately after the start of rain, for example, reference to the water droplets detection value may be made after the rain is determined by weather detection. In a method, in order to prevent erroneous operation of the suppression mode, the starting of contamination removal hardware, or fail determination, a rain determination result by weather detection may be required and used along with detection logic. However, under the condition of the result of rain determination detection lasting for a certain time or longer, at the end of a windshield wiper operation that has cleared the condition, the result of water droplets detection may be utilized for a certain period of time, considering that the probability of water splashing from the road surface is high because the road surface is still wet right after the rain. Instead of the rain determination by weather detection, the operation of the windshield wiper may be substituted); and an environment recognizing part that recognizes an environment from an image region excluding the road surface reflection region where the illuminating light is reflected (see page 2, paragraphs, [0027-0028] An application-by-application fail determination unit 300 determines how the system should respond on an application-by-application basis on the basis of the information detected by the image self-diagnosis unit 200, such as lens water droplets, lens cloudiness, lens reflection, low visibility region, contamination detect, road surface reflection, road surface water film, road surface sludge, light source environment, or the weather and the like. For example, when a lens has water droplets, the response method is modified depending on in which region on the image the water droplets are present and how much. 
First, between multi-application lane recognition and vehicle detection, whether or not there is an influence varies depending on where the water droplets are because the processing region is different. By considering the extent of influence on an application-by-application basis, it can be determined to stop the recognition process only of an application having a large influence, for example. Thus, the need for necessarily stopping the operation of an application with small influence can be eliminated. Also, page 2, paragraphs, [0032-0033] FIG. 2 is an explanation drawing of an example of the overall configuration of the image self-diagnosis unit 200 according to the embodiment of the present invention and detection units provided therein. Here, it is determined whether the lens status, travelling road environment, light source environment, weather and the like are suitable for lane recognition, vehicle detection and the like through image recognition executed in the multi-application execution unit 400. Here, initially, a preliminary process is performed, regardless of individual applications, to detect, for example, in what contamination status the state of the lens as a whole is in and in what environment. Depending on the type of contamination and the type of environment of the lens, the subsequent system response will be changed and the application durability will also be changed. Thus, various detection units are present for the lens status and environment. For example, ruts of snow formed by accumulation of snow on the road surface have low durability because the status is subject to erroneous detection for lane recognition due to a number of white noise factors on the road. The ruts, however, have high durability for vehicle detection because the status is not such that the recognition performance is greatly decreased). 
Regarding claim 2, Takemura discloses the environment recognition device according to claim 1, wherein the three-dimensional location estimating part estimates three-dimensional location of a lighting device, based on a location in an image of specular reflection region on a road surface, the specular reflection region being a region where the illumination light is specularly reflected off a road surface (see claim 1, also page 2, paragraph, [0027] an application-by-application fail determination unit 300 determines how the system should respond on an application-by-application basis on the basis of the information detected by the image self-diagnosis unit 200, such as lens water droplets, lens cloudiness, lens reflection, low visibility region, contamination detect, road surface reflection, road surface water film, road surface sludge, light source environment, or the weather and the like. For example, when a lens has water droplets, the response method is modified depending on in which region on the image the water droplets are present and how much. Also, page 4, paragraphs, [0052] and [0059], depending on the road surface reflection region and its brightness and reflection direction, a countermeasure is implemented by, for example, stopping a subsequent-stage image process using multiple applications. For a region that does not have high brightness but from which reflection is estimated to be extended, an erroneous detection may readily occur. Thus, depending on the application, erroneous detection suppression is implemented by removing the region from the object of image processing, for example. Depending on the size or strength of the reflection region, suppression mode or fail determination is implemented. Because road surface reflection is not indicative of lens contamination, contamination removal hardware is not started. 
A light source environment detection unit 209 detects the ambient light source environment, such as the morning, daytime, evening, night, a dark night, ambient illumination and the like, based on the camera exposure, shutter speed, gain value, time, a high brightness region on the image and the like. Particularly, backlight in the morning or that of late-afternoon sunlight creates a status prone to performance degradation in image recognition. Thus, such backlight is adequately detected, and modification of a processing region, transition to suppression mode for implementing parameter modification, or fail determination is implemented by the application).

Regarding claim 4, Takemura discloses the environment recognition device according to claim 1, wherein the road surface reflection region is a region extending a predetermined distance from a specular reflection region on a road surface to a mobile unit side and to a lighting device side, the specular reflection region being obtained at the second time point and being a region where the illuminating light is specularly reflected off a road surface (see claim 1; also page 11, paragraphs [0134-0135]: in the first suppression mode, the time before a vehicle is finally recognized is extended compared with a normal time so that detection is made only when the certainty of being a vehicle is high. Erroneous detection suppression is implemented at the expense of the maximum detection distance to some degree. An adjustment is implemented to make it easier for water film reflection or headlight road surface reflection countermeasure logic determination, and vehicle erroneous detection is suppressed based on the result of the determination. In the second suppression mode, it is very effective, in suppressing erroneous detection, to shorten the detection distance by bringing the processing region nearer so that road surface reflection or a distant noise factor will not be erroneously recognized. 
Further, the vehicle as the object of detection is narrowed to one which is highly dangerous to subject vehicle, and the other vehicles are eliminated from the object of recognition, thus reducing erroneous detection. For example, in the case of a front camera, only a vehicle running in front in the travel lane of subject vehicle is selected as the object. In the case of a rearward camera, only a vehicle approaching subject vehicle may be selected as the object. By thus narrowing the object of detection, erroneous detection reduction is implemented). Regarding claim 5, Takemura discloses the environment recognition device according to claim 1, wherein the road surface reflection region is calculated by changing a size of a specular reflection region on a road surface by multiplying the specular reflection region by a predetermined coefficient, the specular reflection region being obtained at the second time point and being a region where the illuminating light is specularly reflected off a road surface (see claim 1, also page 2, paragraphs, [0035-0036] for the detection units in the image self-diagnosis unit 200, a processing period corresponding to their respective properties is set. For example, in consideration of the headlight and the like of adjacent vehicles that changes from moment to moment, a time-delayed reflection position and the like would be provided in the case of a reflection detection unit 206 unless the processing period is the same as the processing period of an application of the multi-application execution unit 400, or equivalent to the processing period of the application with the highest processing period in the application execution unit. Such time-delayed position is not readily usable for erroneous detection suppression. Thus, the reflection detection unit 206 has the same period as the application execution unit. 
Similarly, the same period of 100 msec is set for a road surface water film detection unit 207 as for the application execution unit. Meanwhile, for a lens water droplets detection unit 201, a lens cloudiness detection unit 202, a low visibility detection unit 204, a travelling road environment detection unit 208, a light source environment detection unit 209, and a weather detection unit 210, the state does not change quickly, and therefore they do not require processing in every period. Thus, in order to reduce processing load, the processing period is set for 500 msec, and past determination results are utilized during un-processing periods so as to enable efficient monitoring of lens status. Also, page 4, paragraphs, [0052-0053] depending on the road surface reflection region and its brightness and reflection direction, a countermeasure is implemented by, for example, stopping a subsequent-stage image process using multiple applications. For a region that does not have high brightness but from which reflection is estimated to be extended, an erroneous detection may readily occur. Thus, depending on the application, erroneous detection suppression is implemented by removing the region from the object of image processing, for example. Depending on the size or strength of the reflection region, suppression mode or fail determination is implemented. Because road surface reflection is not indicative of lens contamination, contamination removal hardware is not started. The image self-diagnosis unit 200, as well as extracting the high brightness region, estimates a road surface reflection region thereof and expresses the region as a score on the map. In water droplets detection, cloudiness detection, and low visibility detection, the length of time of presence corresponds to the magnitude of the score, whereas in road surface reflection detection, a region predicted to reflect with higher brightness is given a higher score. 
In this way, the response method is determined on an application-by-application basis in a subsequent-stage process).

With regard to claims 6 and 7, arguments analogous to those presented above for claims 1, 2, 4 and 5 are respectively applicable to claims 6-7.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103(a) which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 3 is rejected under 35 U.S.C. 103(a) as being unpatentable over Takemura et al. (U.S. Pub. No. 2015/0334385 A1) in view of Zavodny et al. (U.S. Pub. No. 2019/0107400 A1).

Regarding claim 3, Takemura discloses the environment recognition device according to claim 1, wherein the location information obtaining part obtains the location information based on odometry information (see claim 1; also page 9, paragraph [0110]: the second suppression mode is an operation mode that is tuned so as to further suppress erroneous detection. In this mode, the processing region is decreased to the nearby half so as to reduce erroneous detection by road surface reflection of headlight that enters the processing region in order to prevent reflection from cars behind, even at the expense of accuracy in the lane "recognition position, yaw angle, curvature and the like"). But Takemura does not explicitly state that the location information is based on "odometry information". 
On the other hand, Zavodny, in the same field of a "method for localizing a vehicle using image recognition of road surfaces using at least one camera", teaches (see page 4, paragraphs [0056-0057]): The search region for the registration of a sampled image during comparison with a reference mosaic may be further narrowed by measurement means. Narrowing the region of search for registration of the sampled image within the reference mosaic may speed the process of finding the registration of the sampled image in the mosaic. Measurement means for narrowing the field of search may include, but are not limited to, odometers, GPS, inertial measurement units, or other methods that provide an estimate of the distance and direction traveled by the vehicle since its last known location or the location of its most previous registered sampled image. As an illustrative example, a vehicle may be equipped with the camera system of the present invention, a GPS unit, and an odometer. At a first time ti the camera system may take a first image of the road surface.

Therefore, it would have been obvious to one having ordinary skill in the art at the time the invention was made to modify the Takemura invention according to the teaching of Zavodny: combining the Takemura reference, which uses GPS, yaw angle, and a vehicle-mounted camera to determine the location of the vehicle and its environment, with the Zavodny invention, which teaches using a vehicle-mounted camera and odometry, yields an improved method for locating a vehicle and its environment.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Seyed Azarian, whose telephone number is (571) 272-7443. The examiner can normally be reached on Monday through Thursday from 6:00 a.m. to 7:30 p.m. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Moyer Andrew, can be reached at (571) 272-9523. 
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/SEYED H AZARIAN/
Primary Examiner, Art Unit 2667
November 12, 2025
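The claim language at issue turns on predicting where a light source's specular reflection will land on the road so that region can be excluded from recognition. For intuition only, here is a hypothetical sketch (not the applicant's or Takemura's actual implementation) of the flat-road mirror geometry: mirror the light below the road plane and intersect the camera-to-mirror-image ray with the road surface.

```python
def specular_point_on_road(cam_h, light_dist, light_h):
    """Ground distance from the camera to the specular reflection
    of a light source, assuming a flat, mirror-like road surface.

    Camera at (0, cam_h), light at (light_dist, light_h); the
    light's mirror image sits at (light_dist, -light_h), and the
    camera-to-mirror-image ray crosses the road (height 0) at
    light_dist * cam_h / (cam_h + light_h)."""
    return light_dist * cam_h / (cam_h + light_h)

# Camera 1.5 m up; headlight of a car 10 m away, 0.5 m up:
d = specular_point_on_road(cam_h=1.5, light_dist=10.0, light_h=0.5)
print(d)  # 7.5 -> mask the image region around 7.5 m ahead
```

In the claimed arrangement, the 3D light position is estimated at a first time point, the mask is recomputed at a second time point from the updated location information (e.g. odometry), and environment recognition runs only on the unmasked image region.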

Prosecution Timeline

May 10, 2024
Application Filed
Mar 03, 2026
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602783: SYSTEM AND METHODS FOR AUTOMATIC IMAGE ALIGNMENT OF THREE-DIMENSIONAL IMAGE VOLUMES (granted Apr 14, 2026; 2y 5m to grant)
Patent 12597134: IMAGE PROCESSING DEVICE, METHOD, AND PROGRAM (granted Apr 07, 2026; 2y 5m to grant)
Patent 12598264: Color Correction for Electronic Device with Immersive Viewing (granted Apr 07, 2026; 2y 5m to grant)
Patent 12586206: METHOD FOR IDENTIFYING A MATERIAL BOUNDARY IN VOLUMETRIC IMAGE DATA (granted Mar 24, 2026; 2y 5m to grant)
Patent 12573039: IMAGING SYSTEMS AND METHODS USEFUL FOR PATTERNED STRUCTURES (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 90%
With Interview: 99% (+11.7%)
Median Time to Grant: 2y 3m
PTA Risk: Low
Based on 901 resolved cases by this examiner. Grant probability derived from career allow rate.
