DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1 and 5-23 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 5-23 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida et al. (US 2017/0140230) in view of Imura et al. (US 10,266,114).
Regarding claim 1, Yoshida teaches:
1. An imaging device for performing display control of a plurality of captured images obtained by imaging an environment around a vehicle, wherein a processor mounted in the imaging device is configured to perform the following processing: recognizing a monitoring target included in the plurality of captured images;
[0077] First, the first recognition part 316 which constitutes the object recognition part 306 recognizes the detection range of the first object in the captured image based on the information from the detection range determination part 304 (step S41). The information from the detection range determination part 304 is, for example, position information X1′, X2′, Y1′, and Y2′ in the captured image illustrated in FIG. 8.
Yoshida, 0077, 0079, emphasis added.
generating a plurality of processed captured images obtained by adding highlighting to the plurality of captured images to enhance the visibility of the monitoring target;
[0083] The second recognition part 326 performs matching processing on the image in the region being set by the first recognition part 316, with the edge image (step S45). Then, the second recognition part 326 only needs to perform the matching processing on the image within the region being set, instead of the entire captured image. Therefore, the processing time can be shortened and the processing load of the driving assist apparatus 300 can be decreased.
Yoshida, 0083, 0088, emphasis added.
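For illustration only (this sketch appears in neither reference), the following minimal Python code shows region-restricted matching of the kind paragraphs 0077 and 0083 describe: the search region comes from the first recognition stage (e.g., the position information X1', X2', Y1', Y2'), and the edge-image matching runs only inside that region rather than over the entire captured image. The OpenCV-based function and variable names are assumptions, not Yoshida's disclosure.

    import cv2

    def match_within_region(captured, edge_template, region):
        # region = (x1, y1, x2, y2), set by the first recognition stage,
        # e.g. derived from the position information X1', X2', Y1', Y2'.
        x1, y1, x2, y2 = region
        roi = captured[y1:y2, x1:x2]              # search only the set region
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)         # edge image of the region only
        result = cv2.matchTemplate(edges, edge_template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        # offset the best match back into full-image coordinates
        return max_val, (max_loc[0] + x1, max_loc[1] + y1)

Because the matching runs over the region alone, the per-frame processing cost scales with the region size rather than the full image size, which is the time saving Yoshida attributes to this arrangement.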
displaying a first image of the plurality of processed captured images on a display unit;
[0062] The control part 308 performs a control operation to display on a display device such as a navigation screen or HUD (Head Up Display), the model of the object existing around one's vehicle which is generated by the model generation part 307. As the model is displayed on the display device, it is possible to make the driver aware of the existence of the object existing around his or her vehicle visually. The control operation of the control part 308 is not limited to displaying, and the control part 308 may perform a control operation to notify the driver of the existence of the object existing around his or her vehicle by sound or vibration. In this case, a model generated by the model generation part 307 is not needed, and it suffices if the position information and distance information of the object existing around his or her vehicle is obtained. The control part 308 may transmit a signal to the outside so as to control the driving (for example, braking) of the vehicle based on the position information and distance information of the object existing around one's vehicle.
Yoshida, 0062 and Fig. 10, emphasis added.
tracking the monitoring target from the first image to a second image of the plurality of processed captured images, wherein the highlighting is added on the first image and second image;
[0088] Furthermore, the first recognition part 316 of the object recognition part 306 performs image processing on the image existing in the range other than the detection range of the first object within the captured image, that is, performs a process of calculating a pixel group having a color density equal to or higher than the threshold. The second recognition part 326 of the object recognition part 306 performs image processing only on an image within the region based on the calculated pixel group, that is, performs matching processing between the image within the region and the edge image. As a result, the processing time can be shortened, and the processing load of the driving assist apparatus 300 can be decreased. Especially, the matching processing between the captured image and the edge image is very time-consuming if it is performed on the entire captured image. Hence, if the matching processing is performed only on the image within the region being set, it largely contributes to shortening of time.
Yoshida, 0088, emphasis added.
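Similarly, a hypothetical sketch of the pixel-group step of paragraph 0088: outside the first object's detection range, collect the pixels whose color density is at or above a threshold and take their bounding box as the region handed to the second-stage matching. All names and the threshold value are assumptions:

    import cv2
    import numpy as np

    def region_from_pixel_group(captured, detection_range, density_thresh=128):
        # Compute the pixel group at or above the color-density threshold,
        # excluding the range where the first object was already detected.
        gray = cv2.cvtColor(captured, cv2.COLOR_BGR2GRAY)
        mask = gray >= density_thresh
        x1, y1, x2, y2 = detection_range
        mask[y1:y2, x1:x2] = False                # skip the detected range
        ys, xs = np.nonzero(mask)
        if xs.size == 0:
            return None                           # no qualifying pixel group
        # bounding box of the pixel group becomes the matching region
        return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1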
and causing the display unit to display and enlarge the monitoring target based on an input operation in which a user selects the monitoring target.
[0062] The control part 308 performs a control operation to display on a display device such as a navigation screen or HUD (Head Up Display), the model of the object existing around one's vehicle which is generated by the model generation part 307. As the model is displayed on the display device, it is possible to make the driver aware of the existence of the object existing around his or her vehicle visually. The control operation of the control part 308 is not limited to displaying, and the control part 308 may perform a control operation to notify the driver of the existence of the object existing around his or her vehicle by sound or vibration. In this case, a model generated by the model generation part 307 is not needed, and it suffices if the position information and distance information of the object existing around his or her vehicle is obtained. The control part 308 may transmit a signal to the outside so as to control the driving (for example, braking) of the vehicle based on the position information and distance information of the object existing around one's vehicle.
Yoshida, 0062, emphasis added.
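For illustration only, a hypothetical sketch of the claimed enlarge-on-selection behavior: when the user selects (e.g., taps) the recognized monitoring target, its bounding box is cropped from the displayed frame and scaled up for display. The names and scale factor are assumptions, not a disclosure of either reference:

    import cv2

    def enlarge_on_selection(frame, target_box, scale=2.0):
        # Crop the selected target's bounding box and enlarge it for display.
        x1, y1, x2, y2 = target_box
        crop = frame[y1:y2, x1:x2]
        return cv2.resize(crop, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_LINEAR)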
Yoshida teaches the monitoring target and the processed captured images obtained by adding highlighting to the captured images to enhance the visibility of the monitoring target. Yoshida, however, fails to explicitly teach tracking the monitoring target from the first image to a second image of the plurality of processed captured images, and displaying and enlarging the monitoring target. Imura teaches these limitations:
(12) Referring to FIG. 1 again, the detection sensor 30 detects an object (e.g., a moving object such as a pedestrian or a vehicle, an obstacle on the road, a guardrail set up along the road, a traffic light, a utility pole, or the like) that is present in the imaging region 110 in the forward direction of the own vehicle 100. The detection sensor 30 includes a radar, a sonar, or the like, for example, and transmits a search wave and detects a reflected wave of the search wave to thereby detect a location, size, type, and the like of the object present ahead of the own vehicle 100. The detection sensor 30 then outputs a detection signal indicating detection results to the ECU 10.
(19) An operation of the imaging system 1 will be described. In the following description, the right side and the left side relative to the traveling direction of the own vehicle 100 (hereinafter referred to as forward direction of the camera 20) are simply referred to as right and left, respectively.
(25) On the other hand, in a right image region 151 and a left image region 153 of the normal image 150, the images of the right and left border areas 111 and 113 are displayed not being reduced (non-reduced state), or being reduced at a reduction ratio lower than that applied to the center area 112. FIG. 3 shows a display example in which images are not reduced (corrected) (i.e., in a state close to the vision obtained through the driver's direct visual recognition), to achieve higher visibility (i.e., to achieve easier visual recognition) for the driver than the display image of the center area 112 displayed in a reduced state. In other words, the display example shows a state in which images are corrected and reduced at a reduction ratio lower than that of the reduced state of the center area 112. Specifically, in the example shown in FIG. 3, the images of the right and left border areas 111 and 113 are displayed being enlarged relative to the image of the center area 112 where the object 120 is not present (i.e., displayed being relatively enlarged). In this way, in the present embodiment, the reduction degrees of the respective images of the imaging areas are adjusted such that the image of the imaging area where the object 120 is present is displayed in a more easy-to-see state than the images of other imaging areas, thereby correcting the image data. Thus, in the present embodiment, the display image data for entirely displaying the imaging region 110 is generated. In the following description, the displayed state achieving high visibility to the driver is referred to as normal state, for the sake of convenience.
Imura, col. 2 lines 51-62, col. 3 lines 30-34, col. 4 lines 27-64, col. 8 lines 4-12 and Figs. 3-4, emphasis added.
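For illustration only, a hypothetical Python sketch of the per-area reduction Imura describes: each imaging area is reduced for display, and the area where the object is present is reduced at a lower ratio, so it appears relatively enlarged next to the more strongly reduced areas. The ratios and names are assumptions, not Imura's values:

    import cv2
    import numpy as np

    def compose_display(left, center, right, object_area, out_h=240):
        # Reduce each imaging area for display; the area where the object is
        # present gets a lower reduction ratio (here, no reduction), so it is
        # displayed relatively enlarged.
        ratios = {"left": 0.5, "center": 0.5, "right": 0.5}
        ratios[object_area] = 1.0                 # less-reduced area
        panels = []
        for name, img in (("left", left), ("center", center), ("right", right)):
            w = max(1, int(img.shape[1] * ratios[name] * out_h / img.shape[0]))
            panels.append(cv2.resize(img, (w, out_h)))
        return np.hstack(panels)                  # one display image for the region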
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of Imura with the system of Yoshida in order to track the detected object across multiple images and to display and enlarge the monitoring target; such a combination serves Imura's stated "object of enabling a driver to easily recognize an object present around the own vehicle" (Imura, col. 1, lines 43-44).
Note: The motivation applied to claim 1 above applies equally to claims 5-23 as presented below.
Claim 5 lists all the elements of claim 1, but in non-transitory storage medium form rather than device form. Therefore, the supporting rationale of the rejection of claim 1 applies equally to claim 5. Furthermore, Yoshida teaches a non-transitory storage medium.
Yoshida, Fig. 1 item 305.
Regarding claim 6, Yoshida and Imura teach:
6. (New) The imaging device according to claim 1, furthermore, Imura teaches: wherein the displaying unit enlarges the monitoring target during the tracking of the monitoring target from the first image to the second image.
Imura, col. 2 lines 51-62, col. 3 lines 30-34, col. 4 lines 27-64, col. 8 lines 4-12 and Figs. 3-4.
Regarding claim 7, Yoshida and Imura teach:
7. (New) The imaging device according to claim 6, furthermore, Imura teaches: wherein the displaying unit displays the enlarged monitoring target in a processed captured image of the plurality of processed captured images in which the enlarged monitoring target is located.
Imura, col. 2 lines 51-62, col. 3 lines 30-34, col. 4 lines 27-64, col. 8 lines 4-12 and Figs. 3-4.
Regarding claim 8, Yoshida and Imura teach:
8. (New) The imaging device according to claim 1, furthermore, Yoshida teaches: wherein the plurality of captured images comprise at least: a front-side image of the vehicle, a left-side image of the vehicle, a right-side image of the vehicle, and a back-side image of the vehicle.
Yoshida, 0046, 0048
Regarding claim 9, Yoshida and Imura teach:
9. (New) The imaging device according to claim 8, furthermore, Yoshida teaches: wherein the imaging device comprises a first camera to capture the front-side image, a second camera to capture the left-side image, a third camera to capture the right-side image, and a fourth camera to capture the back- side image.
Yoshida, 0046, 0048
Regarding claim 10, Yoshida and Imura teach:
10. (New) The imaging device according to claim 9, furthermore, Yoshida teaches: wherein the imaging device captures the plurality of captured images when the vehicle stops.
Yoshida, 0046, 0048
Regarding claim 11, Yoshida and Imura teach:
11. (New) The imaging device according to claim 10, furthermore, Yoshida teaches: wherein the imaging device comprises a storage unit configured to store the plurality of the captured images for a predetermined time.
Yoshida, 0064
Regarding claim 13, Yoshida and Imura teach:
13. (New) The imaging device according to claim 11, wherein the storage unit stores the plurality of the captured images retroactively for the predetermined time when an event occurs in the environment around the vehicle.
Yoshida, 0055
Regarding claim 14, Yoshida and Imura teach:
14. (New) The imaging device according to claim 1, furthermore, Yoshida teaches: wherein the imaging device includes a control unit configured to (i) recognize the monitoring target and (ii) determine if an event has occurred in the environment around the vehicle.
Yoshida, 0077, 0079
Regarding claim 15, Yoshida and Imura teach:
15. (New) The imaging device according to claim 11, furthermore, Imura teaches: wherein the user performs the input operation during displaying the plurality of processed captured images from the storage unit.
Imura, Figs. 3-4
Claims 16-22 list all the elements of claims 6-11, but in method form rather than device form. Therefore, the supporting rationale of the rejections of claims 6-11 applies equally to claims 16-22.
Claim Rejections - 35 USC § 103
Claims 12 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida et al. (US 2017/0140230) in view of Imura et al. (US 10,266,114), and further in view of Official Notice.
Regarding claim 12:
12. (New) The imaging device according to claim 11; however, Yoshida and Imura fail to explicitly teach: wherein deletion of the plurality of the captured images occurs chronologically when no event occurs in the environment around the vehicle.
Official Notice is taken that both the concept and the advantage of deleting the plurality of the captured images chronologically when no event occurs in the environment around the vehicle are well known and expected in the art. Thus, it would have been obvious to one skilled in the art, before the effective filing date of the claimed invention, to utilize said feature within the system taught by Yoshida and Imura, because such incorporation would minimize storage capacity.
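For illustration only, a hypothetical sketch of the noticed feature: a time-bounded buffer that stores captured frames for a predetermined time, deletes them chronologically (oldest first) when no event occurs, and retains the buffered window retroactively when an event occurs. All names and the retention value are assumptions:

    import time
    from collections import deque

    class FrameBuffer:
        # Frames are kept for a predetermined time; absent an event they are
        # deleted oldest first, minimizing storage capacity.
        def __init__(self, retention_s=30.0):
            self.retention_s = retention_s
            self.frames = deque()             # (timestamp, frame), oldest first

        def add(self, frame, now=None):
            now = time.time() if now is None else now
            self.frames.append((now, frame))
            # chronological deletion: drop frames older than the window
            while self.frames and now - self.frames[0][0] > self.retention_s:
                self.frames.popleft()

        def on_event(self):
            # retroactively retain the frames covering the preceding window
            return list(self.frames)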
Claim 23 lists all the elements of claim 12, but in method form rather than device form. Therefore, the supporting rationale of the rejection of claim 12 applies equally to claim 23.
Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
Li et al. US 2025/0022143
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL T TEKLE whose telephone number is (571)270-1117. The examiner can normally be reached Monday-Friday 8:00-4:30 ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Vaughn can be reached at 571-272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL T TEKLE/Primary Examiner, Art Unit 2481