DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant's arguments filed 10/20/2025 have been fully considered but they are not persuasive.
Applicant argues that the combination of Park and Kumon does not explicitly teach "generate a surround-view image having a blind area by synthesizing a plurality of images obtained from the plurality of cameras."
In response, the examiner respectfully disagrees. Park teaches that the second image controller 200 may be a control module in which Surround View Monitoring (SVM) and Advanced Driver Assistance System (ADAS) are integrated. The SVM may synthesize a plurality of first image data captured by the plurality of cameras 10 provided at the front, rear, and side of the vehicle 1, and then may output a surround view generated through the AVN 50. The ADAS may output the image processing or a blind spot image necessary for parking through the image of the camera 10 to assist the driver. [0059].
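For context only, the type of SVM synthesis described above can be illustrated by the following non-limiting sketch, in which each camera frame is warped into a common top-down plane with pre-calibrated homographies and composited; the camera names, homography matrices, and canvas size are assumptions for illustration and are not taken from Park. Pixels covered by no camera, such as the ground beneath the vehicle, remain unfilled and constitute the blind area of the surround-view image.

```python
import cv2
import numpy as np

def synthesize_surround_view(frames, homographies, canvas_size=(600, 600)):
    """Warp each camera frame into a common top-down plane and composite.

    frames:       dict of camera name -> BGR image (e.g., front/rear/left/right)
    homographies: dict of camera name -> 3x3 homography mapping that camera's
                  image plane onto the top-down canvas (assumed pre-calibrated)
    Pixels covered by no camera (e.g., the ground beneath the vehicle) remain
    black; these uncovered pixels form the blind area of the surround view.
    """
    h, w = canvas_size
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    covered = np.zeros((h, w), dtype=bool)
    for name, frame in frames.items():
        warped = cv2.warpPerspective(frame, homographies[name], (w, h))
        valid = cv2.warpPerspective(
            np.full(frame.shape[:2], 255, np.uint8), homographies[name], (w, h)) > 0
        take = valid & ~covered          # fill only pixels not yet covered
        canvas[take] = warped[take]
        covered |= valid
    return canvas, ~covered              # surround-view image and blind-area mask
```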
Applicant argues that the combination of Park and Kumon does not explicitly teach "generate a corrected surround-view image by correcting the blind area using image data of surroundings of the blind area in the surround-view image."
In response, the examiner respectfully disagrees. Kumon teaches first, the image generation section 36 generates a usual image as illustrated in FIG. 5A on the basis of an image imaged by the rear camera 22. As illustrated in FIG. 4A, this usual image is the unaltered image imaged by the rear camera 22 or is an ordinary electronic mirror display trimmed to an imaging range that is suitable during running. As illustrated in FIG. 6A, the usual image has a virtual viewpoint V1 at the rear end of the vehicle body 14 and a virtual viewing angle W1 with a range spanning the rear of the vehicle 12. In the example in FIG. 5A, a vehicle C is displayed running along an adjacent lane at the rear of the vehicle (the vehicle 12).
The image generation section 36 also generates a blind spot completion image as illustrated in FIG. 5B on the basis of images imaged by the rear camera 22 and the rear side cameras 24. As illustrated in FIG. 4B, the blind spot completion image uses images from the images imaged by the right rear side camera 24R and the left rear side camera 24L in addition to the image imaged by the rear camera 22, transforming and compositing the three images into a continuously joined wide-angle image. In the descriptions below, a transformed and composited image of images imaged by the rear camera 22 and the left and right rear side cameras 24 is referred to as a “rear composite image”. As illustrated in FIG. 6B, the blind spot completion image has a virtual viewpoint V2 close to the vehicle front-and-rear middle of the vehicle 12, and a virtual viewing angle W2 with a range spanning the rear and the sides of the vehicle 12. In the blind spot completion image, the virtual viewpoint is moved forward and the virtual viewing angle is widened relative to the usual image. In the example in FIG. 4B, the vehicle C running along the adjacent lane is displayed at the rear of the vehicle (the vehicle 12), and sides of the rear of the vehicle 12, which are not displayed in the usual image, are displayed. In the blind spot completion image, a portion of the vehicle 12 that would be present in the range viewed from the virtual viewpoint V2 is displayed as a masked area S. This masked area S is formed by a range of the vehicle 12 projected onto a road surface being shaded out to appear like a shadow. The range of the masked area S is actually a region that cannot be seen, corresponding to beneath the vehicle 12, and does not enter any imaging angles A1 and A2 of each of the three cameras 20. The purpose of the appearance in which the masked area S is shaded out is to correct for a region that cannot be seen and make the images consistent with each other. In addition, because a range of the road surface occupied by the vehicle is displayed on the screen, there is an effect in that relationships with surrounding vehicles can be easily understood.
Further, the image generation section 36 generates a cabin interior view image as illustrated in FIG. 5C on the basis of images imaged by the rear camera 22, the rear side cameras 24 and the cabin interior camera 26. As illustrated in FIG. 4C, this cabin interior view image composites the image imaged by the cabin interior camera 26 in addition to the rear composite image. As illustrated in FIG. 6C, the cabin interior view image has a virtual viewpoint V3 close to the location of disposition of the display 40, and a virtual viewing angle W3 with a range of viewing across the passenger P on the rear seats 16C and the rear and sides of the vehicle 12. In the cabin interior view image, the virtual viewpoint is moved further toward the front and the virtual viewing angle is widened relative to the blind spot completion image. Therefore, in the cabin interior view image, the display range of the rear composite image based on the rear camera 22, the right rear side camera 24R and the left rear side camera 24L is expanded compared to the rear composite image in the blind spot completion image. Accordingly, the masked area S in the cabin interior view image is larger than in the blind spot completion image, and images of the surroundings are moved further to the outer sides of the image.
The passenger P on the rear seats 16C and the vehicle structures 16 appear in the image imaged by the cabin interior camera 26. However, as described above, the passenger P on the rear seats 16C and the vehicle structures 16 are processed to be separated out from surrounding images by the cabin interior image processing section 32 in advance. The image generation section 36 composites images of the vehicle structures 16 and an image of the passenger P on the rear seats 16C, which have been separated out by the cabin interior image processing section 32, with the rear composite image based on the images imaged by the rear camera 22, the right rear side camera 24R and the left rear side camera 24L. Specifically, the image generation section 36 composites the masked area S that is enlarged compared to the blind spot completion image with the images of the vehicle structures 16 in regions surrounding the masked area S, and then composites the image of the passenger P on the rear seats 16C, who is at the near side relative to the vehicle structures 16.
For the cabin interior view image, the image generation section 36 generates a viewing image in which the images of the vehicle structures 16 are made semi-transparent. This is done with a view to not completely masking regions that coincide with the vehicle structures 16 in the rear composite image based on the rear camera 22, the right rear side camera 24R and the left rear side camera 24L. The level of transparency of the vehicle structures 16 is increased toward the vehicle upper side. The closer to the vehicle upper side, the greater the transparency, which is to say, the more clearly the rear of the vehicle 12 is displayed. The style of transparency of the vehicle structures 16 is not limited thus. The transparency may be uniform, and the level of transparency may be arbitrarily adjusted from complete masking to complete transparency.
Moreover, the image generation section 36 generates a viewing image in which the passenger P on the rear seats 16C is composited. This image of the passenger P on the rear seats 16C is composited in a non-transparent state. In the example in FIG. 5C, the rear seats 16C and the passenger P seated on the rear seats 16C are displayed to be non-transparent, portions of the pillars 16A, which are the vehicle structures 16, are made transparent, and the roof 16B is displayed in a completely transparent state. Therefore, the vehicle C running along the adjacent lane at the rear of the vehicle (the vehicle 12) is visibly displayed even when it overlaps with the pillars 16A. [0050] – [0055].
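Purely for illustration, the semi-transparent compositing of the vehicle structures described above, with transparency increasing toward the vehicle upper side, might be sketched as follows; the linear vertical alpha ramp and the specific alpha values are assumptions of the examiner and do not reflect Kumon's actual implementation.

```python
import numpy as np

def composite_semitransparent(background, structures, structure_mask,
                              alpha_bottom=0.7, alpha_top=0.0):
    """Blend an image of vehicle structures over the rear composite image.

    background:     HxWx3 rear composite image
    structures:     HxWx3 image of the separated-out vehicle structures
    structure_mask: HxW boolean mask marking where the structures are present
    Opacity ramps from alpha_bottom at the bottom row to alpha_top at the top
    row, i.e., the structures become more transparent toward the upper side.
    """
    h, w = structure_mask.shape
    alpha_rows = np.linspace(alpha_top, alpha_bottom, h).reshape(h, 1)
    alpha = np.where(structure_mask, alpha_rows, 0.0)[..., None]  # HxWx1
    out = (1.0 - alpha) * background.astype(np.float32) + alpha * structures.astype(np.float32)
    return out.astype(np.uint8)
```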
Applicant argues that the combination of Park and Kumon does not explicitly teach "set a plurality of regions of interest respectively adjacent to front, rear, left, and right outer sides of the blind area in the generated surround-view image; obtain respective virtual images by cropping and resizing images corresponding to the respective regions of interest; and replace respective target sub-areas in the blind area, which correspond to and are adjacent to the respective regions of interest, with the respective virtual images."
In response, the examiner respectfully disagrees. As reproduced above in response to the preceding argument, Kumon at [0050] – [0055] teaches generating the blind spot completion image and the cabin interior view image, in which the masked area S, corresponding to the region beneath the vehicle 12 that cannot be seen by any of the cameras, is composited and made consistent with the surrounding images obtained from the rear camera 22 and the left and right rear side cameras 24, and in which the vehicle structures 16 are rendered semi-transparent so that the surroundings remain visible.
Kumon further teaches that, as a variant example of the cameras 20 of the exemplary embodiments, the three cameras other than the cabin interior camera 26—the rear camera 22 and the left and right rear side cameras 24—may be replaced with a 360° camera (for example, a camera that generates a composite image from two fisheye lenses at front and rear) disposed in a protrusion provided on the roof 16B of the vehicle 12. In this case, the imaging angle from the 360° camera may be cropped to imaging angles appropriate to the respective viewing images and the viewing images may be generated. [0114].
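As an illustration of cropping imaging angles from a 360° camera as described in [0114] (Kumon provides no implementation details; the equirectangular panorama format and the angle convention below are assumptions), such a crop might be sketched as:

```python
import numpy as np

def crop_viewing_image(equirect, center_deg, fov_deg):
    """Crop a horizontal field of view from an equirectangular 360-degree image.

    equirect:   HxWx3 panorama covering 360 degrees horizontally
    center_deg: viewing direction in degrees (0 = left edge of the panorama)
    fov_deg:    horizontal field of view of the desired viewing image
    Returns the column range corresponding to the requested imaging angle,
    wrapping around the panorama seam where necessary.
    """
    h, w, _ = equirect.shape
    px_per_deg = w / 360.0
    center_px = int((center_deg % 360.0) * px_per_deg)
    half = int(fov_deg * px_per_deg / 2)
    cols = np.arange(center_px - half, center_px + half) % w
    return equirect[:, cols]
```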
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4-6, 8, and 11-13 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2021/0120175 A1) in view of Kumon et al. (US 2018/0272937 A1).
Consider claim 1, Park teaches an apparatus for surround-view monitoring for a vehicle, the apparatus comprising: a plurality of cameras ([0059]); a display ([0098] – [0099]); and a controller configured to: generate a surround-view image having a blind area by synthesizing a plurality of images obtained from the plurality of cameras ([0059]).
However, Park does not explicitly teach: generate a corrected surround-view image by correcting the blind area using image data of surroundings of the blind area in the surround-view image; set a plurality of regions of interest respectively adjacent to front, rear, left, and right outer sides of the blind area in the generated surround-view image; obtain respective virtual images by cropping and resizing images corresponding to the respective regions of interest; replace respective target sub-areas in the blind area, which correspond to and are adjacent to the respective regions of interest, with the respective virtual images; and output the corrected surround-view image to the display.
Kumon teaches generating a corrected surround-view image by correcting the blind area using image data of surroundings of the blind area in the surround-view image ([0050] – [0055]), wherein the controller is configured to: set a plurality of regions of interest respectively adjacent to front, rear, left, and right outer sides of the blind area in the generated surround-view image ([0050] – [0055] and [0114]); obtain respective virtual images by cropping and resizing images corresponding to the respective regions of interest ([0050] – [0055] and [0114]); replace respective target sub-areas in the blind area, which correspond to and are adjacent to the respective regions of interest, with the respective virtual images ([0050] – [0055] and [0114]); and output the corrected surround-view image to the display ([0050] – [0055]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating a corrected surround-view image by correcting the blind area using image data of surroundings of the blind area in the surround-view image, because such incorporation would moderate a feeling of strangeness during viewing while enabling understanding of conditions at the vehicle rear and inside the vehicle cabin. [0008].
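For clarity of the record, the examiner provides the following non-limiting sketch of a crop-and-resize style blind-area correction of the general kind recited in claim 1; the rectangular blind area, the region-of-interest thickness, and the fill depth are arbitrary assumptions and are not drawn from either reference.

```python
import cv2

def correct_blind_area(surround, blind_rect, roi_depth=40, fill_depth=30):
    """Fill the outer margins of a rectangular blind area using adjacent image data.

    surround:   HxWx3 surround-view image containing a blind area
    blind_rect: (x, y, w, h) rectangle of the blind area, assumed not to touch
                the image border
    roi_depth:  thickness (px) of each region of interest outside the blind area
    fill_depth: thickness (px) of each target sub-area inside the blind area
    For each of the front, rear, left, and right sides, a region of interest
    adjacent to the outer side of the blind area is cropped, resized, and
    written into the target sub-area adjacent to that region of interest.
    """
    x, y, w, h = blind_rect
    out = surround.copy()
    # (region-of-interest slices outside, target sub-area slices inside)
    sides = [
        ((slice(y - roi_depth, y), slice(x, x + w)),
         (slice(y, y + fill_depth), slice(x, x + w))),                 # front
        ((slice(y + h, y + h + roi_depth), slice(x, x + w)),
         (slice(y + h - fill_depth, y + h), slice(x, x + w))),         # rear
        ((slice(y, y + h), slice(x - roi_depth, x)),
         (slice(y, y + h), slice(x, x + fill_depth))),                 # left
        ((slice(y, y + h), slice(x + w, x + w + roi_depth)),
         (slice(y, y + h), slice(x + w - fill_depth, x + w))),         # right
    ]
    for roi_sl, target_sl in sides:
        roi = surround[roi_sl]                        # crop the region of interest
        th, tw = out[target_sl].shape[:2]
        out[target_sl] = cv2.resize(roi, (tw, th))    # resize and replace the sub-area
    return out
```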
Consider claim 4, Kumon teaches the controller is configured to resize the respective virtual images to fit the sizes of the respective target sub-areas ([0050] – [0059], [0061] – [0074]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature for the same reasons as set forth above with respect to claim 1. [0008].
Consider claim 5, Kumon teaches the controller is configured to perform filtering on the target sub-areas for which the respective virtual images are substituted ([0050] – [0055], [0076] – [0081], [0111]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature for the same reasons as set forth above with respect to claim 1. [0008].
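Purely as an illustration of the kind of filtering referred to above (the Gaussian filter and kernel size are assumptions, not drawn from Kumon), filtering of the substituted target sub-areas might resemble:

```python
import cv2

def filter_target_subareas(image, target_rects, ksize=9):
    """Apply a low-pass filter inside the sub-areas that received virtual images.

    target_rects: list of (x, y, w, h) rectangles that were substituted
    A Gaussian blur inside each rectangle reduces visible seams between the
    substituted data and the surrounding surround-view image.
    """
    out = image.copy()
    for (x, y, w, h) in target_rects:
        out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (ksize, ksize), 0)
    return out
```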
Consider claim 6, Kumon teaches the controller is configured to synthesize the surround-view image and a vehicle image such that the vehicle image is disposed in the blind area ([0050] – [0059], [0061] – [0074]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate this feature for the same reasons as set forth above with respect to claim 1. [0008].
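For illustration only (the pre-rendered vehicle image and rectangle parameters are assumptions), disposing a vehicle image in the blind area might be sketched as:

```python
import cv2

def overlay_vehicle_image(surround, vehicle_img, blind_rect):
    """Dispose a stored top-down vehicle image in the blind area of the surround view.

    vehicle_img: pre-rendered image of the host vehicle
    blind_rect:  (x, y, w, h) rectangle of the blind area
    """
    x, y, w, h = blind_rect
    out = surround.copy()
    out[y:y + h, x:x + w] = cv2.resize(vehicle_img, (w, h))
    return out
```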
Consider claim 8, claim 8 recites the method implemented by the apparatus recited in claim 1. Thus, it is rejected for the same reasons.
Consider claim 11, claim 11 recites the method implemented by the apparatus recited in claim 4. Thus, it is rejected for the same reasons.
Consider claim 12, claim 12 recites the method implemented by the apparatus recited in claim 5. Thus, it is rejected for the same reasons.
Consider claim 13, claim 13 recites the method implemented by the apparatus recited in claim 6. Thus, it is rejected for the same reasons.
Claims 2 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2021/0120175 A1) in view of Kumon et al. (US 2018/0272937 A1) and Watanabe (US 2017/0092234 A1).
Consider claim 2, the combination of Park and Kumon teaches all the limitations in claim 1 but does not explicitly teach the controller is configured to generate the surround-view image by performing one or both of masking the blind area in black or applying a gradation effect to the blind area.
Watanabe teaches the controller is configured to generate the surround-view image by performing one or both of masking the blind area in black or applying a gradation effect to the blind area ([0045], [0047], [0069], [0071], [0091]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of generating the surround-view image by masking the blind area in black because such incorporation would indicate where the blind area is. [0091].
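For illustration only (not Watanabe's implementation; the mask representation and the simple vertical gradation are assumptions), masking the blind area in black or applying a gradation effect might be sketched as:

```python
import numpy as np

def mask_blind_area(surround, blind_mask, gradation=False):
    """Mask the blind area in black, or apply a simple vertical gradation to it.

    blind_mask: HxW boolean mask of the blind area
    With gradation=True the blind-area pixels fade from black at the top of
    the image to mid-gray at the bottom instead of being uniformly black.
    """
    out = surround.copy()
    if not gradation:
        out[blind_mask] = 0
        return out
    h, w = blind_mask.shape
    ramp = np.linspace(0, 128, h).astype(np.uint8)          # per-row gray level
    gray = np.repeat(ramp[:, None], w, axis=1)              # HxW gradation field
    out[blind_mask] = np.stack([gray] * 3, axis=-1)[blind_mask]
    return out
```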
Consider claim 9, claim 9 recites the method implemented by the apparatus recited in claim 2. Thus, it is rejected for the same reasons.
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Park et al. (US 2021/0120175 A1) in view of Kumon et al. (US 2018/0272937 A1) and Donishi et al. (US 2014/0085473 A1).
Consider claim 7, the combination of Park and Kumon teaches all the limitations in claim 3 but does not explicitly teach the controller is configured to, when the vehicle is in parking mode, adjust a brightness of the blind area based on a brightness of the region of interest.
Donishi teaches the controller is configured to, when the vehicle is in parking mode, adjust a brightness of the blind area based on brightness values of the respective regions of interest ([0003], [0021] – [0022], [0025], [0029] – [0030], [0033] – [0040]).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the known technique of adjusting a brightness of the blind area based on a brightness of the region of interest because such incorporation would facilitate visual recognition of an object present in the shadow and provide better visibility of the photographic subject. [0022].
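For illustration only (not Donishi's algorithm; the HSV brightness measure and the ratio-based scaling are assumptions), adjusting the brightness of the blind area based on the brightness of the regions of interest might resemble:

```python
import cv2
import numpy as np

def match_blind_area_brightness(surround, blind_mask, roi_mask):
    """Scale the brightness of the blind area toward that of the regions of interest.

    blind_mask, roi_mask: HxW boolean masks of the blind area and of the
    surrounding regions of interest, respectively.
    Brightness is compared on the V channel of the HSV representation, and the
    blind-area pixels are scaled by the ratio of the mean brightnesses.
    """
    hsv = cv2.cvtColor(surround, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2]
    roi_mean = v[roi_mask].mean()
    blind_mean = v[blind_mask].mean()
    if blind_mean > 0:
        v[blind_mask] = np.clip(v[blind_mask] * (roi_mean / blind_mean), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```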
Consider claim 14, claim 14 recites the method implemented by the apparatus recited in claim 7. Thus, it is rejected for the same reasons.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TAT CHI CHIO whose telephone number is (571)272-9563. The examiner can normally be reached Monday-Thursday 10am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JAMIE J ATALA can be reached at 571-272-7384. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TAT C CHIO/ Primary Examiner, Art Unit 2486