DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
Imaging unit in claim 1 – At least at Par. [0016].
Object recognition unit in claims 1-3 – At least at Par. [0026-0029, 0033].
Light distribution control unit in claim 1 – At least at Par. [0025, 0028-0029, 0031, 0039].
Area specifying unit in claim 1 – At least at Par. [0029].
Collision avoidance control unit in claim 3 – At least at Par. [0011, 0030].
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over JP 7241772 in view of JP 2015148887 to Yoshida et al.
Regarding claim 1, JP 7241772 discloses an object recognition apparatus (At least where JP 7241772 discloses an object identification unit) comprising: an imaging unit (At least where JP 7241772 discloses an image acquisition unit that acquires a captured image captured by an imaging device; Fig. 1 and its description, “a vehicle control system (also a light distribution control system) 100 is installed in a vehicle (sometimes referred to as the own vehicle) and includes a camera (imaging device) 11 that is installed in front of the vehicle and periodically captures images of a predetermined area in front of the vehicle in time series, and an image processing device 1 that processes the images (captured images) taken by the camera 11”) for capturing an image of a predetermined area in front of a vehicle; an object recognition unit configured to recognize the presence of an object in a captured image by executing a predetermined image process on the captured image captured by the imaging unit (At least where JP 7241772 discloses, “The image processing apparatus 1 is a computer that executes external world recognition processing including object detection based on an image (color image) captured by the camera 11” and “The vehicle control device 2 is a computer that executes vehicle control processing based on the detection result of the object identification unit 16 of the image processing device 1 and includes a light distribution control unit 18”); a light distribution control unit (18) for controlling an irradiation range of an irradiation light emitted from a headlamp provided in the vehicle; and an area specifying unit for specifying a light control area in which an irradiation light is shielded or dimmed and an irradiation area in which the irradiation light is not shielded or dimmed, by a light distribution control unit (At least where JP 7241772 discloses, “The light distribution control system 100 of the present embodiment includes an illumination direction information acquisition unit 17 that acquires information on the illumination direction of the headlights of the vehicle; an imaging device 11 and an image acquisition unit 12 that acquires an image captured by the imaging device 11; a luminance calculator 13 that divides the image acquired by the image acquisition unit 12 into a plurality of pixel regions 41 and calculates the luminance of each pixel region 41; a light source region setting unit 14 that sets, as a light source region 42, a region including a plurality of mutually adjacent pixel regions 41; a threshold determination unit 15 that determines a threshold for each light source region 42 based on the illumination direction of the light source region 42 and the area of each of the plurality of light source regions 42; an object identification unit 16 that, based on the luminance distribution of each pixel region 41 included in the light source region 42 set by the setting unit 14, identifies the light source positioned in the light source region 42 using the threshold determined by the threshold determination unit 15; and a light distribution control unit 18 that performs light distribution control based on the light shielding area of the headlamp determined based on the identification result of the object identification unit 16.
The light distribution control unit 18 in the vehicle control device 2 executes light distribution control (light distribution control application) of the vehicle headlights based on the identification result (vehicle detection result) of the object identification unit 16 of the image processing device 1. Light distribution control includes, for example, setting the vehicle's headlights to high beam when there is no preceding or oncoming vehicle in front of the vehicle, and setting the vehicle's headlights to low beam when there is a preceding or oncoming vehicle.”), but does not expressly disclose that the object recognition unit performs a first image process and a second image process to increase image luminance.
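For illustration only, the quoted decision logic (object identification unit 16 feeding light distribution control unit 18) may be sketched in Python as follows; all names, types, and values are illustrative assumptions for readability, not JP 7241772's actual implementation.

from dataclasses import dataclass

@dataclass
class DetectedLightSource:
    kind: str            # "preceding", "oncoming", "streetlight", ...
    azimuth_deg: float   # horizontal position of the source in the image

def select_beam_mode(sources: list[DetectedLightSource]) -> tuple[str, list[float]]:
    """Return a beam mode and the azimuths to shield, per the quoted logic:
    high beam when no preceding or oncoming vehicle is detected, otherwise
    low beam with the detected vehicles' angular positions shielded."""
    vehicles = [s for s in sources if s.kind in ("preceding", "oncoming")]
    if not vehicles:
        return "high_beam", []
    # Shield only the angular sectors occupied by detected vehicles.
    return "low_beam", [v.azimuth_deg for v in vehicles]

# Example: an oncoming vehicle at -5 degrees forces low beam / shielding there.
mode, shielded = select_beam_mode([DetectedLightSource("oncoming", -5.0)])
print(mode, shielded)  # low_beam [-5.0]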
Nevertheless, Yoshida discloses an object recognition unit configured to execute a first image process in the irradiation area (At least where Yoshida discloses, “For example, while driving on a night road with few street lamps, as shown in FIG. 16, the road surface portion (about 1/3 of the lower part of the captured image) near the host vehicle illuminated by the headlamps is bright, but from the host vehicle …”), and execute a second image process for increasing the luminance of the captured image in the light control area as compared with the first image process (At least where Yoshida discloses, “On the other hand, when it is determined that the average luminance value is less than the predetermined first threshold value (No in S3), it is next determined whether or not the calculated average luminance value is equal to or less than the predetermined second threshold value (S5). In this determination, when it is determined that the average luminance value is equal to or less than the predetermined second threshold (Yes in S5), the specific image area is determined to be an area that is too dark, and the second correction table, which increases the contrast of the too-dark area, is selected (S6).” and where Yoshida discloses, “As the second correction table selected for the specific image area whose average luminance value is equal to or smaller than the predetermined second threshold, a correction table in which the luminance value before correction is increased, as in pattern 1 or pattern 2 shown in FIG., is used,” and also where Yoshida discloses, “In the present embodiment, luminance correction processing is not performed for image regions other than the specific image region (non-specific image region), but if necessary, luminance correction processing different from that of the specific image region may be executed with respect to the non-specific image region. In this case, the object recognition accuracy can be increased for an identification target existing in the non-specific image region, as in the case of the specific image region. Even when the object recognition process is performed using the brightness value without using the parallax value, the object recognition accuracy can be improved by performing the brightness correction process of the present embodiment.”).
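For clarity, the two-threshold table-selection flow quoted above (steps S3/S5/S6) may be sketched as follows; the threshold values and the gamma-style tables are illustrative assumptions only, and Yoshida's actual correction tables (patterns 1 and 2) are not reproduced here.

import numpy as np

FIRST_THRESHOLD = 180   # "too bright" bound (assumed value)
SECOND_THRESHOLD = 60   # "too dark" bound (assumed value)

def build_table(gamma: float) -> np.ndarray:
    """8-bit lookup table; gamma < 1 raises dark luminance values."""
    x = np.arange(256) / 255.0
    return np.clip(255.0 * x ** gamma, 0, 255).astype(np.uint8)

def correct_region(region: np.ndarray) -> np.ndarray:
    mean = region.mean()
    if mean >= FIRST_THRESHOLD:          # S3 yes: too bright -> first table
        table = build_table(gamma=1.5)
    elif mean <= SECOND_THRESHOLD:       # S5 yes: too dark -> second table
        table = build_table(gamma=0.5)   # raises pre-correction luminance
    else:                                # neither: leave the region alone
        return region
    return table[region]                 # S6: apply the selected table

dark_region = np.full((4, 4), 40, dtype=np.uint8)
print(correct_region(dark_region)[0, 0])  # 101: boosted well above 40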
Thus, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the object recognition unit of JP 7241772 to include image luminance correction, as taught by Yoshida, in order to enhance images collected in varied lighting scenarios (especially where low lighting contributes to poor contrast), thereby providing more detail about the imaged environment for improved object detection and increased safety for the driver and other drivers.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over JP 7241772 in view of JP 2015148887 to Yoshida et al. and further in view of JP 2019185639.
Regarding claim 3, the previous combination of JP 7241772/Yoshida discloses the claimed invention except for the object recognition unit causing a collision avoidance control unit to recognize an object.
Nevertheless, JP 2019185639 discloses a vehicle comprising a collision avoidance control unit configured to execute a collision avoidance control for avoiding a collision between the vehicle and the object, or reducing the damage of the collision, when the object recognized in front of the vehicle satisfies a predetermined collision condition; and wherein the object recognition unit causes the collision avoidance control unit to recognize the object by transmitting information of the recognized object to the collision avoidance control unit (At least where JP 2019185639 discloses, “Provided is a technique capable of determining the possibility of collision in a vehicle with higher accuracy. A collision determination device mounted on a host vehicle includes an object detection sensor that detects an object in front of the host vehicle; an estimating unit 110 that estimates a collision possibility between the host vehicle and an object detected by the object detection sensor; a recognizing unit 111 that recognizes the result of the object judging the possibility of collision between the object and the vehicle; and an operation changing unit that changes an operation mode for preventing the vehicle from colliding with the object when the possibility of collision estimated by the estimating unit and the possibility of collision recognized by the recognizing unit satisfy a predetermined condition.”).
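As a minimal sketch of the quoted arrangement (an estimating unit, a recognizing unit, and an operation changing unit that intervenes when a predetermined condition is satisfied), the following assumes a time-to-collision formulation; the formulation and threshold are illustrative assumptions, not JP 2019185639's disclosed condition.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Estimating unit: seconds until impact at the current closing speed."""
    return float("inf") if closing_speed_mps <= 0 else distance_m / closing_speed_mps

def should_change_operation(estimated_ttc: float,
                            recognized_risk: bool,
                            ttc_threshold_s: float = 2.0) -> bool:
    """Operation changing unit: intervene only when the estimated collision
    possibility (TTC below threshold) and the recognized possibility both
    indicate a collision, i.e. the predetermined condition is satisfied."""
    return recognized_risk and estimated_ttc < ttc_threshold_s

ttc = time_to_collision(distance_m=15.0, closing_speed_mps=10.0)  # 1.5 s
print(should_change_operation(ttc, recognized_risk=True))  # True -> brake/steer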
Thus, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the system of JP 7241772/Yoshida to include a collision avoidance system which is controlled by an object recognition unit’s recognition of a satisfied collision condition, as taught by JP 2019185639, in order to avoid collision events and thereby increase the safety of the driver and occupants of the vehicle.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over JP 2015148887 to Yoshida et al. in view of U.S. PG Pub. 2023/0371155 to Baker et al.
Regarding claim 1, Yoshida discloses an object recognition apparatus comprising: an imaging unit for capturing an image of a predetermined area in front of a vehicle (At least where Yoshida discloses an “imaging unit 101, which picks up an image of a front area in the traveling direction of the traveling vehicle 100.”); an object recognition unit configured to recognize the presence of an object in a captured image by executing a predetermined image process on the captured image captured by the imaging unit (At least where Yoshida discloses, “The CPU is responsible for the control of the image sensor controller of each of the imaging units 110A and 110B and the overall control of the processing hardware unit 120, and executes a recognition process for other vehicles, guardrails, and other various objects (recognition objects)” and where Yoshida further discloses, “the object recognition device 200 is configured by the processing hardware unit 120 and the image analysis unit 102”); a headlamp provided in the vehicle (At least where Yoshida discloses, “For example, while driving on a night road with few street lamps, as shown in FIG. 16, the road surface portion (about 1/3 of the lower part of the captured image) near the host vehicle illuminated by the headlamps is bright, but from the host vehicle …”); and wherein the object recognition unit is configured to execute a first image process in the irradiation area (At least where Yoshida discloses, “The present invention relates to an image processing device, an object recognition device, a mobile device control system, and an image processing program that perform image processing for recognizing an object existing in an imaging region based on a captured image captured by one or more imaging means. The present invention also relates to an object recognition program.” and where Yoshida further discloses, “For example, while driving on a night road with few street lamps, as shown in FIG. 16, the road surface portion (about 1/3 of the lower part of the captured image) near the host vehicle illuminated by the headlamps is bright, but from the host vehicle …”), and
execute a second image process for increasing the luminance of the captured image in the light control area as compared with the first image process (At least where Yoshida discloses, “On the other hand, when it is determined that the average luminance value is less than the predetermined first threshold value (No in S3), it is next determined whether or not the calculated average luminance value is equal to or less than the predetermined second threshold value (S5). In this determination, when it is determined that the average luminance value is equal to or less than the predetermined second threshold (Yes in S5), the specific image area is determined to be an area that is too dark, and the second correction table, which increases the contrast of the too-dark area, is selected (S6).” and where Yoshida discloses, “As the second correction table selected for the specific image area whose average luminance value is equal to or smaller than the predetermined second threshold, a correction table in which the luminance value before correction is increased, as in pattern 1 or pattern 2 shown in FIG., is used,” and also where Yoshida discloses, “In the present embodiment, luminance correction processing is not performed for image regions other than the specific image region (non-specific image region), but if necessary, luminance correction processing different from that of the specific image region may be executed with respect to the non-specific image region. In this case, the object recognition accuracy can be increased for an identification target existing in the non-specific image region, as in the case of the specific image region. Even when the object recognition process is performed using the brightness value without using the parallax value, the object recognition accuracy can be improved by performing the brightness correction process of the present embodiment.”).
While Yoshida contemplates various lighting scenarios (e.g., dark roads vs. well-lit scenes), Yoshida does not expressly disclose a unit for controlling the irradiation range or a unit for specifying the light control area.
Nevertheless, Baker teaches a light distribution control unit for controlling an irradiation range of an irradiation light emitted from a headlamp provided in the vehicle (At least at Par. [0036, 0052, 0065]; differentiating between differing road illumination covering different distances) and an area specifying unit for specifying a light control area in which an irradiation light is shielded or dimmed and an irradiation area in which the irradiation light is not shielded or dimmed by a light distribution control unit (At least at Par. [0049-0050]; where headlamp illumination is dimmed to avoid blinding an object of interest or areas where streetlights are less likely to exist).
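A minimal sketch of such an area specifying function follows, assuming the headlamp field is divided into angular sectors; the sector geometry and the dimming factor are illustrative assumptions, not Baker's disclosed values.

import numpy as np

def specify_areas(num_sectors: int, object_sectors: set[int]) -> np.ndarray:
    """Return a per-sector intensity map: 1.0 marks the irradiation area
    (not shielded or dimmed); 0.2 marks the light control area, dimmed to
    avoid blinding a detected object of interest."""
    intensity = np.ones(num_sectors)
    for s in object_sectors:
        intensity[s] = 0.2  # dim only the sectors containing objects
    return intensity

# Example: a 12-sector beam with an oncoming driver in sectors 4-5.
print(specify_areas(12, {4, 5}))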
Thus, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the system of Yoshida to include a light distribution control unit with an area specifying unit, as taught by Baker, in order to adjust the range of illumination and recognize areas with differing illumination needs, promoting the safety of both the driver and other drivers in varied situations. Further, though not expressly disclosed in Yoshida, it is likely that Yoshida uses a light distribution control unit to switch between high beam and low beam headlights, since this is a typical feature for providing different light levels while driving.
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over JP 2015148887 to Yoshida et al. in view of U.S. PG Pub. 2023/0371155 to Baker et al. and further in view of JP 5281023.
Regarding claim 2, the primary reference, Yoshida, discloses that the object recognition unit is configured to execute the second image process for increasing a luminance of the captured image and sharpening a contour edge of the object image in the light control area (At least where Yoshida discloses, “When the luminance image data is input, the image processing unit 131 executes the distortion correction processing by the distortion correction processing unit 125 after executing the luminance correction processing by the luminance correction processing unit 124. This distortion correction processing converts the luminance image data (comparison image and reference image) output from each of the imaging units 110A and 110B into an ideal parallel stereo image obtained when two pinhole cameras are mounted in parallel, based on the distortion of the optical systems in the imaging units 110A and 110B and the relative positional relationship between the left and right imaging units 110A and 110B” and “If the characteristics of the luminance distribution of the block are small (contrast is low), the corresponding block showing the same point cannot be found. In this case, the parallax value cannot be calculated for the pixel corresponding to the block. Therefore, in order to increase the number of pixels from which parallax data can be obtained in an image area where a recognition target object is projected, it is necessary to increase the contrast in the image area as much as possible” and “On the other hand, when it is determined that the average luminance value is less than the predetermined first threshold value (No in S3), it is next determined whether or not the calculated average luminance value is equal to or less than the predetermined second threshold value (S5). In this determination, when it is determined that the average luminance value is equal to or less than the predetermined second threshold (Yes in S5), the specific image area is determined to be an area that is too dark, and the second correction table, which increases the contrast of the too-dark area, is selected (S6)” and “On the contrary, as the second correction table selected for the specific image area whose average luminance value is equal to or smaller than the predetermined second threshold, a correction table in which the luminance value before correction is increased, as in pattern 1 or pattern 2 shown in FIG., is used”).
While the words “sharpening a contour edge” are not explicitly used by Yoshida, a PHOSITA would have readily recognized that the increase in contrast described by Yoshida results in a harder edge at the contours of an object within an image. Thus, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have recognized that the object detection system increases both the luminance and the sharpness of a contour edge.
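This reasoning can be checked numerically: applying a contrast-increasing correction table to a dim luminance profile enlarges the luminance step across an object contour, i.e., the contour edge becomes harder. The one-dimensional profile and the stretch table below are illustrative assumptions in the spirit of Yoshida's second correction table, not values from the reference.

import numpy as np

profile = np.array([40, 40, 40, 60, 60, 60], dtype=np.uint8)  # dim contour, step of 20
x = np.arange(256, dtype=float)
table = np.clip((x - 30.0) * 4.0, 0, 255).astype(np.uint8)    # contrast stretch for dark regions

before = int(np.diff(profile.astype(int)).max())              # 20: edge strength pre-correction
after = int(np.diff(table[profile].astype(int)).max())        # 80: edge strength post-correction
print(before, after)  # the luminance step across the contour grows fourfold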
Yoshida does not, however, expressly disclose that the increase in luminance and the sharpening of a contour edge of the object image result from applying a filter.
Nevertheless, JP 5281023 teaches an object recognition unit that is configured to execute the second image process by applying a filter for increasing a luminance of the captured image and sharpening a contour edge of the object image in the light control area (At least where JP 5281023 discloses, “According to the present invention, the filtering process is performed only when the luminance variance of the original image is equal to or smaller than a predetermined value and the overall luminance difference of the original image is small, and detection of an image of the object to be monitored can be facilitated by edge enhancement” and “Next, with reference to FIG. 2A and FIG. 2B, the effect of performing filter processing using a differential filter by the filter processing unit 13 will be described. FIG. 2A shows a luminance profile of the pixels whose vertical coordinate is y1 in an original image Im1 including image portions A and B having different luminances. In FIG. 2A, in the original image Im1, the luminance difference between the image portions A and B is Lw1. FIG. 2B shows the luminance profile of the pixels whose vertical coordinate is y1 in the filter-processed image Im2, obtained by the filter processing unit 13 performing the filtering process on the original image Im1 of FIG. 2A using a differential filter. As shown in FIG. 2B, by performing the filtering process using a differential filter, the boundary edge between the image portion B of the original image Im1 and the surrounding portion (background portion) A is enhanced, and the luminance difference between the image portion B′ and the surrounding portion A is enlarged to Lw2. Thus, by enlarging the luminance difference between the image portion B and the surrounding portion A (Lw1 → Lw2), the extraction of the image portion B can be facilitated.”).
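The quoted Lw1 → Lw2 enlargement can be illustrated with a short sketch; the sharpening kernel below is an assumed example of a derivative-based filter, not the kernel actually used by JP 5281023.

import numpy as np

profile = np.array([50, 50, 50, 90, 90, 90], dtype=float)  # background A | portion B, Lw1 = 40
kernel = np.array([-0.5, 2.0, -0.5])                       # unsharp kernel built from a second-derivative term
enhanced = np.convolve(profile, kernel, mode="valid")      # [50, 30, 110, 90]

lw1 = profile.max() - profile.min()                        # 40
lw2 = enhanced.max() - enhanced.min()                      # 80: boundary difference enlarged
print(lw1, lw2)  # the luminance difference across the boundary grows (Lw1 -> Lw2)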
Thus, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the system of Yoshida/Baker to apply a filter that increases luminance and sharpens a contour edge of the object image, as taught by JP 5281023, in order to provide greater detail by enhancing the image in situations where lighting is poor, for improved image quality.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over JP 2015148887 to Yoshida et al. in view of U.S. PG Pub. 2023/0371155 to Baker et al. and further in view of JP 2019185639.
Regarding claim 3, the previous combination of Yoshida and Baker discloses the claimed invention except for the object recognition unit causing a collision avoidance control unit to recognize an object.
Nevertheless, JP 2019185639 discloses a vehicle comprising a collision avoidance control unit configured to execute a collision avoidance control for avoiding a collision between the vehicle and the object, or reducing the damage of the collision, when the object recognized in front of the vehicle satisfies a predetermined collision condition; and wherein the object recognition unit causes the collision avoidance control unit to recognize the object by transmitting information of the recognized object to the collision avoidance control unit (At least where JP 2019185639 discloses, “Provided is a technique capable of determining the possibility of collision in a vehicle with higher accuracy. A collision determination device mounted on a host vehicle includes an object detection sensor that detects an object in front of the host vehicle; an estimating unit 110 that estimates a collision possibility between the host vehicle and an object detected by the object detection sensor; a recognizing unit 111 that recognizes the result of the object judging the possibility of collision between the object and the vehicle; and an operation changing unit that changes an operation mode for preventing the vehicle from colliding with the object when the possibility of collision estimated by the estimating unit and the possibility of collision recognized by the recognizing unit satisfy a predetermined condition.”).
Thus, it would have been obvious to a PHOSITA before the effective filing date of the claimed invention to have modified the system of Yoshida/Baker to include a collision avoidance system which is controlled by an object recognition unit’s recognition of a satisfied collision condition, as taught by JP 2019185639, in order to avoid collision events and thereby increase the safety of the driver and occupants of the vehicle.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Brodie Follman whose telephone number is (571)270-1169. The examiner can normally be reached 8am-4:30pm EST M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Erin Piateski, can be reached at (571)270-7429. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BRODIE J FOLLMAN/Primary Patent Examiner, Art Unit 3669