DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Applicant Response to Official Action
The response filed on 8/14/2025 has been entered and made of record.
Acknowledgment
The cancellation of claim 2 in the response filed on 8/14/2025 is acknowledged by the examiner.
The amendment of claims 1, 3-6, and 8-14 in the response filed on 8/14/2025 is acknowledged by the examiner.
Response to Arguments
Applicant’s arguments with respect to claims 1, 13, 14, and their dependent claims have been considered but are moot in view of the new grounds of rejection necessitated by Applicant’s amendments. The examiner addresses Applicant’s main arguments below.
Regarding the drawing objection, the amendment filed on 8/14/2025 addresses the issue. As a result, the drawing objection is withdrawn.
Regarding the 35 U.S.C. 112(f) interpretation, the amendment filed on 8/14/2025 addresses the issue. As a result, the 35 U.S.C. 112(f) interpretation is withdrawn.
Regarding the 35 U.S.C. 112(a) rejection related to the obtaining unit, the setting unit, the evaluating unit, and the identifying unit, the amendment filed on 8/14/2025 addresses the issue. As a result, the 35 U.S.C. 112(a) rejection is withdrawn.
Regarding the 35 U.S.C. 112(b) rejection related to the obtaining unit, the setting unit, the evaluating unit, and the identifying unit, the amendment filed on 8/14/2025 addresses the issue. As a result, the 35 U.S.C. 112(b) rejection is withdrawn.
Regarding the 35 U.S.C. 112(b) rejection for claim 5, the amendment filed on 8/14/2025 addresses the issue. As a result, the 35 U.S.C. 112(b) rejection is withdrawn.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same and shall set forth the best mode contemplated by the inventor of carrying out his invention.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of pre-AIA 35 U.S.C. 112, second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 6 is rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph, as failing to comply with the written description requirement. The claim contains subject matter which was not described in the specification in such a way as to reasonably enable a person skilled in the art to make and use the invention commensurate in scope with the claims. To satisfy the written description requirement, the specification must describe the claimed invention in sufficient detail that one skilled in the art can reasonably conclude that the inventors had possession of the claimed invention. Original claims fail to satisfy the written description requirement when the invention is claimed and described in functional language but the specification does not sufficiently identify how the invention achieves the claimed function. In this application, amended claim 6 recites, “wherein an area where the y-coordinate is equal to or smaller than a plurality of predetermined values at all positions in the image”. There are a few issues with this limitation. First, an area in an image is two-dimensional data comprising a plurality of pixels, and each pixel has its own y-coordinate. It is not clear, either from the specification or from the claim, the y-coordinate of which pixel, among the plurality of pixels, is to be compared with the plurality of predetermined values. Second, the claim recites “a plurality of predetermined values at all positions in the image”, and it is not clear whether “all positions in the image” refers only to y-positions or also includes x-positions in the image. Third, according to paragraphs [0045] and [0061] of the specification, the image used by this invention is 960x960 pixels, which amounts to 921,600 different positions and hence 921,600 different predetermined values; it is not clear how long it would take to compare these 921,600 predetermined values with all possible y-coordinates in order to determine that the brake light of the preceding vehicle is on and to actuate the brakes of the following vehicle. Accordingly, these limitations do not satisfy the written description requirement. The specification does not provide enough information for one skilled in the art to write a program or implement an apparatus that achieves the claimed function, because the specification must explain how the inventors achieve the claimed function to satisfy the written description requirement. For the reasons discussed above, claim 6 is rejected under 35 U.S.C. 112(a) or pre-AIA 35 U.S.C. 112, first paragraph.
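For illustration only, the ambiguity can be seen in the following minimal Python sketch of two of the possible readings of the limitation; every name, array shape, and threshold value in the sketch is an assumption, as the specification defines none of them:

```python
import numpy as np

# Illustrative only: two possible readings of amended claim 6.
# All names and values are assumptions; the specification defines none of them.

H, W = 960, 960                       # image size per spec paragraphs [0045], [0061]
# "a plurality of predetermined values at all positions in the image":
# read literally, one value per position, i.e. 960 * 960 = 921,600 values.
thresholds = np.full((H, W), 480.0)   # placeholder values, not taught by the spec

# Reading A: each pixel belongs to the area if its own y-coordinate is equal
# to or smaller than the predetermined value stored at that pixel's position.
ys = np.arange(H, dtype=float)[:, None]             # y-coordinate of every row
mask_a = np.broadcast_to(ys, (H, W)) <= thresholds  # 921,600 comparisons per frame

# Reading B: a single representative y-coordinate of the area (e.g., of its
# topmost pixel) must be <= the predetermined values at *all* positions, which
# collapses to one comparison against the minimum of the 921,600 values.
area_top_y = 300.0                                  # hypothetical area coordinate
in_area_b = area_top_y <= thresholds.min()
```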
Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Amended claim 6 recites, “wherein an area where the y-coordinate is equal to or smaller than a plurality of predetermined values at all positions in the image”. First, it is not clear, either from the specification or from the claim, the y-coordinate of which pixel, among the plurality of pixels of the area, is to be compared with the plurality of predetermined values. Second, the claim recites “a plurality of predetermined values at all positions in the image”, and it is not clear whether “all positions in the image” refers only to y-positions or also includes x-positions in the image. Therefore, the claim is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Claim 6 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA the applicant, regards as the invention. Claim 6 recites the limitation "the y-coordinate". There is insufficient antecedent basis for this limitation in the claim. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).
Claims 1 and 3-14 are rejected under 35 U.S.C. 103 as being unpatentable over Nagaoka (US Patent Application Publication 2013/0235202 A1) (“Nagaoka”) in view of Tan et al. (US Patent 10,613,650 B2) (“Tan”), and further in view of Yamashita (US Patent 11,299,148 B2) (“Yamashita”).
Regarding claim 1, Nagaoka meets the claim limitations as follows:
An information processing apparatus (a vehicle periphery monitoring apparatus) [Nagaoka: para. 0003] comprising:
one or more processor (a CPU) [Nagaoka: para. 0041]; and one or more memory storing instructions which (a storage unit 14m including a RAM (Random Access Memory) for storing data being processed by the CPU 14c and a ROM (Read Only Memory) for storing a program executed by the CPU 14c, tables, maps, and templates) [Nagaoka: para. 0041], when executed by the one or more processor (The CPU 14c of the image processing unit 14 reads the supplied digital signals and executes the program while referring to the tables, the maps, and the templates, thereby functioning as various functioning means (also referred to as "functioning sections"), described below, to send the drive signal (e.g., sound signal, display signal) to the speaker 24 and the display signal to the image display unit 26. The functioning means may alternatively be performed by pieces of hardware) [Nagaoka: para. 0042], cause the information processing apparatus (a CPU (Central Processing Unit) 14c for performing various processing operations) [Nagaoka: para. 0041] to:
obtain a first image in which an advancing direction of a vehicle is captured and a second image in which the advancing direction of the vehicle is captured at a time different from a time when the first image is captured ((a single vehicle-mounted infrared camera which captures at least two images (two frames) of an object in the periphery of a vehicle at a given interval of time) [Nagaoka: para. 0008 – Note: Nagaoka implicitly indicates that two images are captured at different times]; (When the infrared camera of the vehicle periphery monitoring apparatus of the related art captures the front end of another vehicle, e.g., an oncoming vehicle, at night, it can easily identify the headlights thereof that are positioned at respective ends in the transverse directions of the other vehicle. When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012]);
set a partial area including a light source provided in a preceding vehicle that precedes the vehicle in an image (the image of another vehicle Car shown in FIG. 5 is processed as follows: Lights 70a, 70b on laterally spaced left and right end portions of the other vehicle Car, such as headlights (oncoming car) or taillights (preceding car), a front grill (oncoming car) or an exhaust pipe 72 (preceding car) on a lower central portion of the other vehicle Car, and left and right tires 74a, 74b of the other vehicle Car are indicated as hatched regions because of their higher luminance level. Other portions of the vehicle body of the other vehicle are indicated depending on the ambient temperature.) [Nagaoka: para. 0065-0066; Fig. 5] in which the advancing direction of the vehicle is captured (When the infrared camera of the vehicle periphery monitoring apparatus of the related art captures the front end of another vehicle, e.g., an oncoming vehicle, at night, it can easily identify the headlights thereof that are positioned at respective ends in the transverse directions of the other vehicle. When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012], based on the first image and the second image (a single vehicle-mounted infrared camera which captures at least two images (two frames) of an object in the periphery of a vehicle at a given interval of time. As the relative speed between the object and the vehicle incorporating the vehicle periphery monitoring apparatus is higher, the size of an image of the object in the image captured later changes more greatly from the size of an image of the object in the image captured earlier. As the relative speed between the object and the vehicle is higher, the object that is present ahead of the vehicle reaches the vehicle in a shorter period of time. Consequently, even a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008] ; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]);
evaluate periodicity of light emission of the light source (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]), based on the partial areas in a plurality of images in which the advancing direction of the vehicle is captured (Consequently, even a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time) [Nagaoka: para. 0008]; and
identify a type of the light source (When the infrared camera of the vehicle periphery monitoring apparatus of the related art captures the front end of another vehicle, e.g., an oncoming vehicle, at night, it can easily identify the headlights thereof that are positioned at respective ends in the transverse directions of the other vehicle. When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012 – Note: Based on TTC, the system can determine whether the light source is from a headlight of a vehicle moving in an opposite direction, or a taillight of a vehicle moving in a same direction], based on the periodicity ((a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008 – Note: Based on TTC, the system can determine whether the light source is from a headlight of a vehicle moving in an opposite direction, or a taillight of a vehicle moving in a same direction]; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]), wherein whether the light source is blinking at a predetermined cycle, as the periodicity is evaluated (to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time) [Nagaoka: para. 0008]; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]), and
type in accordance with evaluation of whether the light source (When the infrared camera of the vehicle periphery monitoring apparatus of the related art captures the front end of another vehicle, e.g., an oncoming vehicle, at night, it can easily identify the headlights thereof that are positioned at respective ends in the transverse directions of the other vehicle. When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012 – Note: Based on TTC, the system can determine whether the light source is from a headlight of a vehicle moving in an opposite direction, or a taillight of a vehicle moving in a same direction] is blinking at the predetermined cycle is identified (a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008]; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]).
Nagaoka does not explicitly disclose the following claim limitations (Emphasis added).
periodicity of light emission; the light source is blinking.
However, in the same field of endeavor Tan further discloses the claim limitations and the deficient claim limitations, as follows:
obtain a first image in which an advancing direction of a vehicle is captured and a second image in which the advancing direction of the vehicle is captured at a time different from a time when the first image is captured (The image sensor successively captures a second previous image frame, a previous image frame and a current image frame) [Tan: col. 2, line 28-30].
evaluate periodicity of light emission of the light source (calculate a current displacement and a current speed according to the previous image frame and the current image frame, determine an emission period of the light source corresponding to the calculated current speed, determine an acceleration according to the current displacement and the previous displacement, set a next emission period as the determined emission period when the acceleration is smaller than an acceleration threshold, and set the next emission period to be shorter than the determined emission period when the acceleration is larger than the acceleration threshold) [Tan: col. 2, line 11-30].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nagaoka with Tan to program the system to implement Tan’s method.
Therefore, the combination of Nagaoka with Tan will enable the system to provide a navigation device and an operation method that can upshift the image frame rate of an image sensor [Tan: col. 1, line 44-48].
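For illustration only, the period-adjustment logic quoted from Tan may be sketched as follows; the function names, units, and the shortening factor are assumptions and are not taken from Tan:

```python
# Sketch of the emission-period logic quoted from Tan above.
# Names, units, and the shortening factor are assumptions.

def next_emission_period(prev_displacement, curr_displacement,
                         frame_interval, accel_threshold, period_for_speed):
    """Set the next emission period per Tan col. 2: keep the period matched to
    the current speed unless the acceleration exceeds the threshold, in which
    case the next period is set shorter than the determined period."""
    curr_speed = curr_displacement / frame_interval   # from two successive frames
    period = period_for_speed(curr_speed)             # period for the current speed
    accel = (curr_displacement - prev_displacement) / frame_interval
    if accel < accel_threshold:
        return period                 # acceleration small: keep the determined period
    return 0.5 * period               # acceleration large: shorten (factor assumed)
```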
Nagaoka and Tan do not explicitly disclose the following claim limitations (Emphasis added).
the light source is blinking.
However, in the same field of endeavor Yamashita further discloses the deficient claim limitations as follows:
the light source is blinking (the ECU 20 may analyze at least one of blinking of a direction indicator of the other vehicle OV on the front side, lighting of a brake lamp, and the orientation and position of the other vehicle OV from images obtained by the cameras 41 using a known image analysis method, and detect the operation state of the other vehicle OV based on the analysis result. More specifically, as shown in FIG. 4, if it is detected, in an image 41a of the other vehicle OV obtained by the cameras 41, that left and right direction indicators DI of the other vehicle OV are blinking (hazard lamp blinking state), and part of the other vehicle OV is located on the road shoulder, the ECU 20 may determine that the operation state of the other vehicle OV is the stop state. Furthermore, the ECU 20 may detect the open/close state of a side mirror SM from the image 41a of the other vehicle OV obtained by the cameras 41, and upon detecting that the side mirror SM of the other vehicle OV is closed, may determine that the operation state of the other vehicle OV is the stop state) [Yamashita: col. 6, line 3-22; Figs. 4, 7-8].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nagaoka and Tan with Yamashita to program the system to implement Yamashita’s method.
Therefore, the combination of Nagaoka and Tan with Yamashita will enable the system to improve the reliability of the detection [Yamashita: col. 3, line 18-43].
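For illustration only, the hazard-lamp blinking determination quoted from Yamashita may be sketched as follows; Yamashita relies on an unspecified “known image analysis method,” so the per-frame on/off series and the transition count below are assumptions:

```python
# Sketch of the hazard-blinking check described with respect to Yamashita
# Fig. 4. The on/off series and the transition count are assumptions.

def hazard_blinking(left_on: list[bool], right_on: list[bool]) -> bool:
    """True if both direction indicators toggle on/off several times across
    the frame window, i.e. the left and right indicators are blinking."""
    def transitions(states: list[bool]) -> int:
        return sum(a != b for a, b in zip(states, states[1:]))
    return transitions(left_on) >= 2 and transitions(right_on) >= 2
```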
Regarding claim 3, Nagaoka meets the claim limitations as set forth in claim 1.
Nagaoka further meets the claim limitations as follows.
in a case where the light source is evaluated (When the infrared camera of the vehicle periphery monitoring apparatus of the related art captures the front end of another vehicle, e.g., an oncoming vehicle, at night, it can easily identify the headlights thereof that are positioned at respective ends in the transverse directions of the other vehicle. When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012 – Note: Based on TTC, the system can determine whether the light source is from a headlight of a vehicle moving in an opposite direction, or a taillight of a vehicle moving in a same direction] to blink at the predetermined cycle (a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008] ; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]), the light source is identified (it can easily identify the taillights thereof that are) [Nagaoka: para. 0012] as a blinker.
Nagaoka and Tan do not explicitly disclose the following claim limitations (Emphasis added).
to blink;
the light source is identified as a blinker.
However, in the same field of endeavor Yamashita further discloses the deficient claim limitations as follows:
to blink (the ECU 20 may analyze at least one of blinking of a direction indicator of the other vehicle OV on the front side, lighting of a brake lamp, and the orientation and position of the other vehicle OV from images obtained by the cameras 41 using a known image analysis method, and detect the operation state of the other vehicle OV based on the analysis result. More specifically, as shown in FIG. 4, if it is detected, in an image 41a of the other vehicle OV obtained by the cameras 41, that left and right direction indicators DI of the other vehicle OV are blinking (hazard lamp blinking state), and part of the other vehicle OV is located on the road shoulder, the ECU 20 may determine that the operation state of the other vehicle OV is the stop state. Furthermore, the ECU 20 may detect the open/close state of a side mirror SM from the image 41a of the other vehicle OV obtained by the cameras 41, and upon detecting that the side mirror SM of the other vehicle OV is closed, may determine that the operation state of the other vehicle OV is the stop state) [Yamashita: col. 6, line 3-22; Figs. 4, 7-8].
the light source is identified as a blinker ((the ECU 20 may analyze at least one of blinking of a direction indicator of the other vehicle OV on the front side, lighting of a brake lamp, and the orientation and position of the other vehicle OV from images obtained by the cameras 41 using a known image analysis method, and detect the operation state of the other vehicle OV based on the analysis result. More specifically, as shown in FIG. 4, if it is detected, in an image 41a of the other vehicle OV obtained by the cameras 41, that left and right direction indicators DI of the other vehicle OV are blinking (hazard lamp blinking state), and part of the other vehicle OV is located on the road shoulder, the ECU 20 may determine that the operation state of the other vehicle OV is the stop state. Furthermore, the ECU 20 may detect the open/close state of a side mirror SM from the image 41a of the other vehicle OV obtained by the cameras 41, and upon detecting that the side mirror SM of the other vehicle OV is closed, may determine that the operation state of the other vehicle OV is the stop state) [Yamashita: col. 6, line 3-22; Figs. 4, 7-8]; (lighting devices (headlights, taillights, and the like) including direction indicators 8 (blinkers)) [Yamashita: col. 8, line 9-10; Figs. 4, 7-8]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nagaoka and Tan with Yamashita to program the system to implement Yamashita’s method.
Therefore, the combination of Nagaoka and Tan with Yamashita will enable the system to improve the reliability of the detection [Yamashita: col. 3, line 18-43].
Regarding claim 4, Nagaoka meets the claim limitations as set forth in claim 1.
Nagaoka further meets the claim limitations as follows.
in a case where the light source is evaluated not to be blinking at the predetermined cycle (When the infrared camera of the vehicle periphery monitoring apparatus of the related art captures the front end of another vehicle, e.g., an oncoming vehicle, at night, it can easily identify the headlights thereof that are positioned at respective ends in the transverse directions of the other vehicle. When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012; Fig. 5], whether the light source is turned on is further evaluated ((decide that there are two headlights or taillights on other vehicles on account of heat emitted) [Nagaoka: para. 0008]; (In the binarizing process in step S2, the image of another vehicle Car shown in FIG. 5 is processed as follows: Lights 70a, 70b on laterally spaced left and right end portions of the other vehicle Car, such as headlights (oncoming car) or taillights (preceding car), a front grill (oncoming car) or an exhaust pipe 72 (preceding car) on a lower central portion of the other vehicle Car, and left and right tires 74a, 74b of the other vehicle Car are indicated as hatched regions because of their higher luminance level. Other portions of the vehicle body of the other vehicle are indicated depending on the ambient temperature. If the ambient temperature is lower than another portion of the vehicle body of the other vehicle Car, the other portion is indicated as blank) [Nagaoka: para. 0065-0066; Figs. 5-8B – Note: The heat emitted and the ambient temperature can indicate whether or not the light is on], and
in a case where the light source is evaluated to be turned on ((decide that there are two headlights or taillights on other vehicles on account of heat emitted) [Nagaoka: para. 0008]; (In the binarizing process in step S2, the image of another vehicle Car shown in FIG. 5 is processed as follows: Lights 70a, 70b on laterally spaced left and right end portions of the other vehicle Car, such as headlights (oncoming car) or taillights (preceding car), a front grill (oncoming car) or an exhaust pipe 72 (preceding car) on a lower central portion of the other vehicle Car, and left and right tires 74a, 74b of the other vehicle Car are indicated as hatched regions because of their higher luminance level. Other portions of the vehicle body of the other vehicle are indicated depending on the ambient temperature. If the ambient temperature is lower than another portion of the vehicle body of the other vehicle Car, the other portion is indicated as blank) [Nagaoka: para. 0065-0066; Figs. 5-8B – Note: The heat emitted and the ambient temperature can indicate whether or not the light is on], the light source is identified (When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012] as a brake lamp.
Nagaoka and Tan do not explicitly disclose the following claim limitations (Emphasis added).
the light source is identified as a brake lamp.
However, in the same field of endeavor Yamashita further discloses the deficient claim limitations as follows:
the light source is identified as a brake lamp (the ECU 20 may analyze at least one of blinking of a direction indicator of the other vehicle OV on the front side, lighting of a brake lamp) [Yamashita: col. 6, line 3-5; Figs. 4, 7-8].
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Nagaoka and Tan with Yamashita to program the system to implement Yamashita’s method.
Therefore, the combination of Nagaoka and Tan with Yamashita will enable the system to improve the reliability of the detection [Yamashita: col. 3, line 18-43].
Regarding claim 5, Nagaoka meets the claim limitations as set forth in claim 4.
Nagaoka further meets the claim limitations as follows.
wherein whether the light source (When the infrared camera of the vehicle periphery monitoring apparatus of the related art captures the front end of another vehicle, e.g., an oncoming vehicle, at night, it can easily identify the headlights thereof that are positioned at respective ends in the transverse directions of the other vehicle. When the infrared camera of the vehicle periphery monitoring apparatus captures the rear end of another vehicle, e.g., a preceding vehicle running ahead in the same direction, at night, it can easily identify the taillights thereof that are positioned at respective ends in the transverse directions of the other vehicle) [Nagaoka: para. 0012; Fig. 5] is turned on is evaluated ((decide that there are two headlights or taillights on other vehicles on account of heat emitted) [Nagaoka: para. 0008]; (In the binarizing process in step S2, the image of another vehicle Car shown in FIG. 5 is processed as follows: Lights 70a, 70b on laterally spaced left and right end portions of the other vehicle Car, such as headlights (oncoming car) or taillights (preceding car), a front grill (oncoming car) or an exhaust pipe 72 (preceding car) on a lower central portion of the other vehicle Car, and left and right tires 74a, 74b of the other vehicle Car are indicated as hatched regions because of their higher luminance level. Other portions of the vehicle body of the other vehicle are indicated depending on the ambient temperature. If the ambient temperature is lower than another portion of the vehicle body of the other vehicle Car, the other portion is indicated as blank) [Nagaoka: para. 0065-0066; Figs. 5-8B – Note: The heat emitted and the ambient temperature can indicate whether or not the light is on], by using a pixel value of a part of an area obtained by dividing the partial area ((When the horizontally spaced lights 70a, 70b of a higher luminance level are detected in the binarizing process, a quadrangular mask having a prescribed area and extending horizontally, which, for example, has a horizontal width greater than the horizontal width of the other vehicle Car, generally covering a distance from the left end of the light 70a to the right end of the light 70b, and a vertical width slightly greater than the vertical width of the lights 70a, 70b, is applied to the image of the other vehicle Car and vertically moved above the lights 70a, 70b, and an area having a succession of identical pixel values within the grayscale image in the mask can be detected (extracted) as a roof (and a roof edge). Another quadrangular mask extending vertically, which, for example, has a horizontal width comparable to the horizontal width of the lights 70a, 70b and a vertical width which is 1 to 2 times the vertical width of the lights 70a, 70b, is applied laterally of the lights 70a, 70b, and an area having a succession of identical pixel values within the grayscale image in the mask can be detected ( extracted) as a pillar (and a pillar edge) or a fender ( and a fender edge).) [Nagaoka: para. 0068; Figs. 5-8B]).
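For illustration only, evaluating whether the light is on from a pixel value of a part of the divided partial area may be sketched as follows; the left/right split and the luminance threshold are assumptions and do not reproduce Nagaoka’s quadrangular masks:

```python
import numpy as np

# Sketch of claim 5's mapping: use the pixel values of a part of the divided
# partial area. The halving and the threshold value are assumptions.

def light_is_on(partial_area: np.ndarray, threshold: float = 200.0) -> bool:
    """Split the partial area into left and right halves and call the light
    'on' if either half's mean luminance exceeds the assumed threshold."""
    h, w = partial_area.shape
    left, right = partial_area[:, : w // 2], partial_area[:, w // 2:]
    return left.mean() > threshold or right.mean() > threshold
```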
Regarding claim 6, Nagaoka meets the claim limitations as set forth in claim 5.
Nagaoka further meets the claim limitations as follows.
wherein an area where the y-coordinate is equal to or smaller than a plurality of predetermined values at all positions in the image in which the advancing direction of the vehicle is captured, is used as the part of the area obtained by dividing the partial area (detects a high-luminance area 76 that is greater in area than the light 70a and has a horizontal length equal to a horizontal width (lateral width) Hwb that is smaller than the horizontal width (lateral width) Hwa of a region interconnecting the lights 70a, 70b) [Nagaoka: para. 0081].
Regarding claim 7, Nagaoka meets the claim limitations as set forth in claim 1.
Nagaoka further meets the claim limitations as follows.
wherein a time difference in capturing images between the first image and the second image is set ((a single vehicle-mounted infrared camera which captures at least two images (two frames) of an object in the periphery of a vehicle at a given interval of time) [Nagaoka: para. 0008], based on the periodicity of the light emission of the light source provided in the preceding vehicle (a single vehicle-mounted infrared camera which captures at least two images (two frames) of an object in the periphery of a vehicle at a given interval of time. As the relative speed between the object and the vehicle incorporating the vehicle periphery monitoring apparatus is higher, the size of an image of the object in the image captured later changes more greatly from the size of an image of the object in the image captured earlier. As the relative speed between the object and the vehicle is higher, the object that is present ahead of the vehicle reaches the vehicle in a shorter period of time. Consequently, even a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008] ; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]).
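For illustration only, setting the capture time difference from an expected blink period is a sampling-rate consideration that may be sketched as follows; the oversampling margin is an assumption:

```python
# Sketch of choosing the capture time difference between the first and second
# images from an expected blink period of the preceding vehicle's light
# source. The oversampling margin is an assumption.

def capture_interval(blink_period: float, oversample: int = 4) -> float:
    """Interval between the first and second images; sampling theory requires
    at least two samples per blink cycle (interval < blink_period / 2)."""
    if oversample < 2:
        raise ValueError("need at least two samples per blink cycle")
    return blink_period / oversample
```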
Regarding claim 8, Nagaoka meets the claim limitations as set forth in claim 1.
Nagaoka further meets the claim limitations as follows.
wherein the periodicity is evaluated (a single vehicle-mounted infrared camera which captures at least two images (two frames) of an object in the periphery of a vehicle at a given interval of time. As the relative speed between the object and the vehicle incorporating the vehicle periphery monitoring apparatus is higher, the size of an image of the object in the image captured later changes more greatly from the size of an image of the object in the image captured earlier. As the relative speed between the object and the vehicle is higher, the object that is present ahead of the vehicle reaches the vehicle in a shorter period of time. Consequently, even a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008] ; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]), based on a change in a pixel value of the partial areas in the plurality of images ((The rate of change Rate is determined as a ratio between the width or length W0 (which may be stored as a number of pixels) of the target object to be monitored in an image captured earlier and the width or length W1 (which may be stored as a number of pixels) of the target object to be monitored in an image captured later (Rate=W0/W1)) [Nagaoka: para. 0053]; (When the horizontally spaced lights 70a, 70b of a higher luminance level are detected in the binarizing process, a quadrangular mask having a prescribed area and extending horizontally, which, for example, has a horizontal width greater than the horizontal width of the other vehicle Car, generally covering a distance from the left end of the light 70a to the right end of the light 70b, and a vertical width slightly greater than the vertical width of the lights 70a, 70b, is applied to the image of the other vehicle Car and vertically moved above the lights 70a, 70b, and an area having a succession of identical pixel values within the grayscale image in the mask can be detected (extracted) as a roof (and a roof edge). Another quadrangular mask extending vertically, which, for example, has a horizontal width comparable to the horizontal width of the lights 70a, 70b and a vertical width which is 1 to 2 times the vertical width of the lights 70a, 70b, is applied laterally of the lights 70a, 70b, and an area having a succession of identical pixel values within the grayscale image in the mask can be detected (extracted) as a pillar (and a pillar edge) or a fender (and a fender edge)) [Nagaoka: para. 0068; Figs. 5-8B]).
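For illustration only, the two quantities discussed above may be sketched as follows; only the formula Rate = W0/W1 is taken from Nagaoka para. 0053, while the brightness series over the partial areas is an assumption:

```python
import numpy as np

# Sketch of claim 8's mapping. Only Rate = W0/W1 comes from Nagaoka para.
# 0053; the brightness series over the partial areas is an assumption.

def brightness_series(partial_areas: list) -> np.ndarray:
    """Mean pixel value of the partial area in each captured image; a blinking
    source appears as an alternating pattern in this series."""
    return np.array([np.asarray(a, dtype=float).mean() for a in partial_areas])

def rate_of_change(w0: float, w1: float) -> float:
    """Nagaoka's rate of change between the earlier width W0 and later W1."""
    return w0 / w1
```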
Regarding claim 9, Nagaoka meets the claim limitations as set forth in claim 1.
Nagaoka further meets the claim limitations as follows.
wherein the periodicity is evaluated (a single vehicle-mounted infrared camera which captures at least two images (two frames) of an object in the periphery of a vehicle at a given interval of time. As the relative speed between the object and the vehicle incorporating the vehicle periphery monitoring apparatus is higher, the size of an image of the object in the image captured later changes more greatly from the size of an image of the object in the image captured earlier. As the relative speed between the object and the vehicle is higher, the object that is present ahead of the vehicle reaches the vehicle in a shorter period of time. Consequently, even a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008]; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056], by converting a pixel value of the partial areas in the plurality of images into a frequency domain (The rate of change Rate is determined as a ratio between the width or length W0 (which may be stored as a number of pixels) of the target object to be monitored in an image captured earlier and the width or length W1 (which may be stored as a number of pixels) of the target object to be monitored in an image captured later (Rate=W0/W1)) [Nagaoka: para. 0053 – Note: The rate of change is frequency].
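For illustration only, converting the pixel values of the partial areas into the frequency domain may be sketched as follows; the FFT approach is an assumption, as the quoted passage of Nagaoka computes a size-change rate rather than a spectrum:

```python
import numpy as np

# Sketch of a frequency-domain periodicity test. The FFT approach is an
# assumption; Nagaoka's quoted passage computes a size-change rate instead.

def dominant_frequency(brightness: np.ndarray, frame_rate: float) -> float:
    """Return the strongest non-DC frequency (Hz) in the partial area's
    mean-brightness series; a blinker peaks near its blink rate."""
    centered = brightness - brightness.mean()       # remove the DC component
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / frame_rate)
    return float(freqs[spectrum.argmax()]) if len(freqs) > 1 else 0.0
```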
Regarding claim 10, Nagaoka meets the claim limitations as set forth in claim 1.
Nagaoka further meets the claim limitations as follows.
wherein the partial area is set ((When the horizontally spaced lights 70a, 70b of a higher luminance level are detected in the binarizing process, a quadrangular mask having a prescribed area and extending horizontally, which, for example, has a horizontal width greater than the horizontal width of the other vehicle Car, generally covering a distance from the left end of the light 70a to the right end of the light 70b, and a vertical width slightly greater than the vertical width of the lights 70a, 70b, is applied to the image of the other vehicle Car and vertically moved above the lights 70a, 70b, and an area having a succession of identical pixel values within the grayscale image in the mask can be detected (extracted) as a roof (and a roof edge). Another quadrangular mask extending vertically, which, for example, has a horizontal width comparable to the horizontal width of the lights 70a, 70b and a vertical width which is 1 to 2 times the vertical width of the lights 70a, 70b, is applied laterally of the lights 70a, 70b, and an area having a succession of identical pixel values within the grayscale image in the mask can be detected ( extracted) as a pillar (and a pillar edge) or a fender ( and a fender edge).) [Nagaoka: para. 0068; Figs. 5-8B]), based on a difference between the first image and the second image (a single vehicle-mounted infrared camera which captures at least two images (two frames) of an object in the periphery of a vehicle at a given interval of time. As the relative speed between the object and the vehicle incorporating the vehicle periphery monitoring apparatus is higher, the size of an image of the object in the image captured later changes more greatly from the size of an image of the object in the image captured earlier. As the relative speed between the object and the vehicle is higher, the object that is present ahead of the vehicle reaches the vehicle in a shorter period of time. Consequently, even a single infrared camera is able to monitor the periphery of a vehicle by estimating a period of time which an object takes to reach the vehicle, so-called TTC (Time To Contact or Time to Collision), from a rate of change of the size of images of an object which are captured at a given interval of time (see paragraphs [0019], [0020] of JP4521642B2)) [Nagaoka: para. 0008] ; (The attention seeking output generation determiner 108 calculates a positional change x (horizontal) and a positional change y (vertical) of the image portion of the target object to be monitored between the images that are captured at the prescribed time intervals, and determines a contact possibility that the target object to be monitored and the vehicle 12 will contact each other, based on the determined period of time TTC and the calculated positional changes (motion vector) x, y) [Nagaoka: para. 0056]).
Regarding claim 11, Nagaoka meets the claim limitations as set forth in claim 10.
Nagaoka further meets the claim limitations as follows.
wherein the partial area is set to be included in an area having a predetermined width from either a right end or a left end of the image ((When the horizontally spaced lights 70a, 70b of a higher luminance level are detected in the binarizing process, a quadrangular mask having a prescribed area and extending horizontally, which, for example, has a horizontal width greater than the horizontal width of the other vehicle Car, generally covering a distance from the left end of the light 70a to the right end of the light 70b, and a vertical width slightly greater than the vertical width of the lights 70a, 70b, is applied to the image of the other vehicle Car and vertically moved above the lights 70a, 70b, and an area having a succession of identical pixel values within the grayscale image in the mask can be detected (extracted) as a roof (and a roof edge). Another quadrangular mask extending vertically, which, for example, has a horizontal width comparable to the horizontal width of the lights 70a, 70b and a vertical width which is 1 to 2 times the vertical width of the lights 70a, 70b, is applied laterally of the lights 70a, 70b, and an area having a succession