Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is in response to the application filed on 02/14/2025, in which claims 16-31 are pending.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 02/14/2025 and 12/16/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 16-31 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1- of U.S. Patent No. US 12256178 B2 in view of Aihara et al. (WO 2019123840 A1) (IDS provided 02/14/2025). Although the claims at issue are not identical, they are not patentably distinct from each other because each examined application claim is obvious over the corresponding conflicting patent claim.
The difference between the instant claims and the conflicting patent claims is the added limitation of narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system. See the table below. However, Aihara discloses narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system (para [0040]: according to the side camera 10, as shown in FIG. 14A, an enlarged image of the subject (here, the motorcycle 104) can be obtained in the region R1 used for image analysis; therefore, when this region R1 is cut out, an image having a high resolution can be obtained as shown in FIG. 14B, so that the motorcycle 104 to be detected can be accurately recognized in the image analysis).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the limitation "narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system" into the method of the conflicting patent claims, because cutting out a region in which the subject is magnified yields an image having high resolution, so that the detection target can be precisely recognized in image analysis.
Instant application:19/053,552
Patent No.: US 12256178 B2
16. An image processing device comprising: at least one processor or circuit configured to function as: an imaging unit
configured to image a side behind a first moving device to generate image data;
a detection unit configured to detect a second moving device that satisfies a predetermined condition from the image data;
and a display control unit configured to display narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system, in a case when the detection unit does not detect the second moving device, and to display wide angle image data corresponding to a wider region than the narrow angle image, in a case when the detection unit detects the second moving device.
1. An image processing device comprising: at least one processor or circuit configured to function as: an imaging unit including an optical system that forms an optical image including a low-distortion region and a high-distortion region on a light receiving surface and configured to image a side behind a first moving device; an image processing unit configured to generate image data by performing distortion correction on imaged data generated by the imaging unit; a detection unit configured to detect a second moving device that satisfies a predetermined condition from the image data corresponding to a region including the high-distortion region; and a display control unit configured to display narrow angle image data corresponding to the low-distortion region in the image data
in a case when the detection unit does not detect the second moving device, and to display wide angle image data corresponding to the low-distortion region and the high-distortion region, in a case when the detection unit detects the second moving device, so that the wide angle image data including the second moving device is displayed.
17. The image processing device according to claim 16, wherein, in a case when a focal distance of the optical system is defined as f, a half angle of view is defined as θ, an image height on an image plane is defined as y, and a projection property representing a relationship between the image height y and the half angle of view θ is defined as y(θ), y(θ) in a low-distortion region is greater than f × θ and is different from the projection property in a high-distortion region.
2. The image processing device according to claim 1, wherein, in a case when a focal distance of the optical system is defined as f, a half angle of view is defined as θ, an image height on an image plane is defined as y, and a projection property representing a relationship between the image height y and the half angle of view θ is defined as y(θ), y(θ) in the low-distortion region is greater than f×θ and is different from the projection property in the high-distortion region.
18. The image processing device according to claim 17, wherein the low-distortion region is configured to have a projection property that is approximate to a central projection method, wherein y = f × tan θ, or an equidistance projection method, wherein y = y × θ.
3. The image processing device according to claim 2, wherein the low-distortion region is configured to have a projection property that is approximate to a central projection method, wherein y=f×tan θ, or an equidistance projection method, wherein y=y×θ.
19. The image processing device according to claim 17, wherein, in a case when θ max is defined as a maximum half angle of view that the optical system has, the image processing device is configured to satisfy: 1 < f × sin(θ max)/y(θ max) ≤ 1.9.
4. The image processing device according to claim 2, wherein, in a case when θ max is defined as a maximum half angle of view that the optical system has, the image processing device is configured to satisfy:1<f×sin(θ max)/y(θ max)≤1.9.
20. The image processing device according to claim 16, wherein the display control unit causes third image data, including a high-distortion region on a side behind the first moving device, to be displayed, when the first moving device moves backward.
5. The image processing device according to claim 1, wherein the display control unit causes third image data, including the high-distortion region on a side behind the first moving device, to be displayed when the first moving device moves backward.
21. The image processing device according to claim 16, wherein the detection unit detects, as the second moving device, a moving device that is present at a distance that is less than a predetermined distance from the first moving device in the image data other than the narrow angle image data, the image data being the image data in the region including a high-distortion region.
6. The image processing device according to claim 1, wherein the detection unit detects, as the second moving device, a moving device that is present at a distance that is less than a predetermined distance from the first moving device in the image data other than the narrow angle image data, the image data being the image data in the region including the high-distortion region.
22. The image processing device according to claim 16, wherein the detection unit detects, as the second moving device, a moving device that is blinking its blinker under a predetermined condition.
7. The image processing device according to claim 1, wherein the detection unit detects, as the second moving device, a moving device that is blinking its blinker under a predetermined condition.
23. The image processing device according to claim 16, wherein the detection unit detects, as the second moving device, a moving device that repeats a motion of crossing a boundary of passing sections.
8. The image processing device according to claim 1, wherein the detection unit detects, as the second moving device, a moving device that repeats a motion of crossing a boundary of passing sections.
24. The image processing device according to claim 16, wherein the detection unit detects, as the second moving device, a moving device that is performing a headlight flashing motion.
9. The image processing device according to claim 1, wherein the detection unit detects, as the second moving device, a moving device that is performing a headlight flashing motion.
25. The image processing device according to claim 16, wherein the display control unit causes a predetermined transition time to be spent when display of the narrow angle image data and the wide angle image data is switched.
10. The image processing device according to claim 1, wherein the display control unit causes a predetermined transition time to be spent when display of the narrow angle image data and the wide angle image data is switched.
26. The image processing device according to claim 25, wherein the transition time when display is switched from the narrow angle image data to the wide angle image data, and the transition time when display is switched from the wide angle image data to the narrow angle image data, are different from each other.
11. The image processing device according to claim 10, wherein the transition time when display is switched from the narrow angle image data to the wide angle image data, and the transition time when display is switched from the wide angle image data to the narrow angle image data, are different from each other.
27. The image processing device according to claim 16, wherein the detection unit is able to detect a type of the second moving device, and the display control unit causes a condition under which the wide angle image data is displayed to be different, depending on the type.
12. The image processing device according to claim 1, wherein the detection unit is able to detect a type of the second moving device, and the display control unit causes a condition under which the wide angle image data is displayed to be different, depending on the type.
29. A moving device comprising: at least one processor or circuit configured to function as: an imaging unit
configured to image a side behind a first moving device to generate image data;
a detection unit configured to detect a second moving device that satisfies a predetermined condition from the image data;
a display control unit configured to display narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system, in a case when the detection unit does not detect the second moving device, and to display wide angle image data corresponding to a wider region than the narrow angle image, in a case when the detection unit detects the second moving device;
a display unit configured to have display controlled by the display control unit; and a drive unit configured to cause the first moving device to move.
13. A moving device comprising: at least one processor or circuit configured to function as: an imaging unit including an optical system that forms an optical image including a low-distortion region and a high-distortion region on a light receiving surface and configured to image a side behind a first moving device; an image processing unit configured to generate image data by performing distortion correction on imaged data generated by the imaging unit; a detection unit configured to detect a second moving device that satisfies a predetermined condition from the image data corresponding to a region including the high-distortion region;
a display control unit configured to display narrow angle image data corresponding to the low-distortion region in the image data
in a case when the detection unit does not detect the second moving device, and to display wide angle image data corresponding to the low-distortion region and the high-distortion region, in a case when the detection unit detects the second moving device, so that the wide angle image data including the second moving device is displayed;
a display unit configured to have display controlled by the display control unit; and a drive unit configured to cause the first moving device to move.
30. An image processing method comprising: an imaging step of imaging a side behind a first moving device using an imaging unit to generate image data;
a detecting step of detecting a second moving device that satisfies a predetermined condition from the image data;
and a display control step of causing narrow angle image data, cut from the image data, to be displayed, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system, in a case when the detection step does not detect the second moving device,
and causing wide angle image data corresponding to a wider region than the narrow angle image to be displayed, in a case when the detection step detects the second moving device.
14. An image processing method comprising: an imaging step of imaging a side behind a first moving device using an imaging unit that includes an optical system configured to form an optical image including a low-distortion region and a high-distortion region on a light receiving surface; an image processing step of generating image data by performing distortion correction on imaged data acquired in the imaging step; a detecting step of detecting a second moving device that satisfies a predetermined condition from the image data corresponding to a region including the high-distortion region; and a display control step of causing narrow angle image data corresponding to the low-distortion region in the image data to be displayed
in a case when the detection unit does not detect the second moving device, and causing wide angle image data corresponding to the low-distortion region and the high-distortion region to be displayed, in a case when the detection unit detects the second moving device, so that the wide angle image data including the second moving device is displayed.
31. A non-transitory computer-readable storage medium configured to store a computer program comprising instructions for executing the following processes: an imaging step of imaging a side behind a first moving device using an imaging unit to generate image data;
a detecting step of detecting a second moving device that satisfies a predetermined condition from the image data;
and a display control step of causing narrow angle image data, cut from the image data, to be displayed, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system, in a case when the detection step does not detect the second moving device, and causing wide angle image data corresponding to a wider region than the narrow angle image to be displayed, in a case when the detection step detects the second moving device.
15. A non-transitory computer-readable storage medium configured to store a computer program comprising instructions for executing the following processes: an imaging step of imaging a side behind a first moving device using an imaging unit that includes an optical system configured to form an optical image including a low-distortion region and a high-distortion region on a light receiving surface; an image processing step of generating image data by performing distortion correction on imaged data acquired in the imaging step; a detecting step of detecting a second moving device that satisfies a predetermined condition from the image data corresponding to a region including the high-distortion region; and a display control step of causing narrow angle image data corresponding to the low-distortion region in the image data to be displayed
in a case when the detection unit does not detect the second moving device, and causing wide angle image data corresponding to the low-distortion region and the high-distortion region to be displayed, in a case when the detection unit detects the second moving device, so that the wide angle image data including the second moving device is displayed.
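The projection properties recited in claims 17-19 (and patent claims 2-4) can be made concrete with a short numerical sketch. The code below is illustrative only and is not part of the record; the focal distance and maximum half angle of view are assumed values chosen purely for demonstration.

```python
# Illustrative sketch (not from the record) of the projection properties in
# claims 17-19: central projection y = f*tan(theta), equidistance projection
# y = f*theta, and the ratio f*sin(theta_max)/y(theta_max) from claim 19.
import math

def y_central(f, theta):
    """Central projection: y = f * tan(theta), theta in radians."""
    return f * math.tan(theta)

def y_equidistant(f, theta):
    """Equidistance projection: y = f * theta, theta in radians."""
    return f * theta

def distortion_ratio(f, theta_max, y_of_theta):
    """The quantity f*sin(theta_max)/y(theta_max) bounded by claim 19."""
    return f * math.sin(theta_max) / y_of_theta(f, theta_max)

f = 1.0                        # assumed focal distance
theta_max = math.radians(80)   # assumed maximum half angle of view

# Claim 19 requires 1 < f*sin(theta_max)/y(theta_max) <= 1.9.  For the two
# standard models the ratio stays at or below 1 (equidistance gives roughly
# sin(theta)/theta, central gives cos(theta)), so a lens meeting claim 19
# compresses the periphery more strongly than either model at theta_max.
r_equidistant = distortion_ratio(f, theta_max, y_equidistant)
r_central = distortion_ratio(f, theta_max, y_central)
```

This makes explicit that the claimed bound describes image heights at the maximum half angle that are smaller than both standard projections predict.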
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 16, 20-21, and 27-31 are rejected under 35 U.S.C. 103 as being unpatentable over Yasuhara et al. (JP 2014235639 A) (IDS provided 02/14/2025) in view of Aihara et al. (WO 2019123840 A1) (IDS provided 02/14/2025).
Regarding claim 16, Yasuhara discloses an image processing device (para [0011], [0016], [0018] & Figs. 1, 2, 7, and 8 teach a vehicle display device) comprising: at least one processor or circuit configured to function as: an imaging unit configured to image a side behind a first moving device to generate image data (para [0014] teaches the rear camera 1c is installed to image the rear of the vehicle, and is provided to image between a portion projected by the right side camera 1a and a portion projected by the left side camera 1b; the rear camera 1c may be provided outside the back door (outside the vehicle) or inside the back door (in the vehicle)); a detection unit configured to detect a second moving device (para [0036] teaches a vehicle detection unit 12 for detecting another vehicle) that satisfies a predetermined condition from the image data (para [0024] teaches, with reference to FIG. 2, the rear camera 1c is provided on the back door; for example, when another vehicle approaches a position near the rear of the host vehicle, the distance from the rear camera 1c to the other vehicle is shorter than the distance from the rearview mirror to the other vehicle, so the size of the other vehicle imaged by the rear camera 1c is larger than the size of the other vehicle reflected in the rearview mirror); and a display control unit (para [0022]-[0023] teach image control unit 11) configured to display narrow angle image data in a case when the detection unit does not detect the second moving device (para [0030] teaches at least two types of cameras are used to capture both the distant part and the vicinity of the host vehicle on the display; as shown in FIGS. 5 and 6, the rear camera 1c is assumed to capture a distant place, so a camera with a narrow angle of view is used; para [0034] teaches, on the other hand, when another vehicle approaches the host vehicle and enters a dead angle range (a range between the imaging range of the rear camera 1c and the ground surface) outside the imaging range of the rear camera 1c, the rear camera 1c cannot capture at least a part of the other vehicle; FIG. 7 shows the display screen when a display image is generated only from the captured image of the rear camera 1c in this state; para [0035] teaches, as shown in FIG. 7, since a part of the other vehicle is included in the dead angle range of the rear camera 1c, that part cannot be displayed, and when the other vehicle approaches the host vehicle further, such as in a traffic jam, most of the other vehicle cannot be displayed), and to display wide angle image data corresponding to a wider region than the narrow angle image, in a case when the detection unit detects the second moving device (para [0030] teaches a camera with a wide angle of view is used for the back camera 1d because it is assumed to capture the surroundings of the host vehicle widely; para [0035] teaches, therefore, when an object such as another vehicle is included in the blind spot range of the rear camera 1c, the display image is generated using the image captured by the back camera 1d; para [0039] teaches the image control unit 11 assigns the captured image of the back camera 1d to the lower portion of the display screen, the assigned image being an image of the portion overlapping the blind spot range of the rear camera 1c and showing the other vehicle; para [0048] teaches when the vehicle speed of the host vehicle is low, such as in a traffic jam, there is a high possibility that another vehicle approaches the host vehicle, so the ratio of the captured image of the back camera 1d, which captures the area near the rear of the host vehicle, in the display image is increased).
Yasuhara does not explicitly disclose narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system. However, Aihara discloses narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system (para [0040] teaches, according to the side camera 10 of the embodiment, as shown in FIG. 14A, an enlarged image of the subject (here, the motorcycle 104) can be obtained in the region R1 used for image analysis; therefore, when this region R1 is cut out, an image having a high resolution can be obtained as shown in FIG. 14B, so that the motorcycle 104 to be detected can be accurately recognized in the image analysis). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine Yasuhara's method of detecting an object positioned behind the own vehicle using a narrow angle of view camera and a wide angle of view camera with Aihara's method of cutting out a region in which the subject is magnified to obtain a high-resolution image, in order to provide a system in which the detection target can be precisely recognized in image analysis.
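For context, the "cut out a central region" operation relied on from Aihara can be sketched as a centered crop around the image center, taken here as the assumed optical-axis position. This is a hypothetical illustration, not Aihara's actual implementation; the array shapes and pixel values are arbitrary.

```python
# Hypothetical sketch: narrow angle image data as a centered crop of a 2-D
# pixel array, where the image center stands in for the optical axis.

def cut_central_region(image, crop_h, crop_w):
    """Return the crop_h x crop_w block centered on the image center."""
    h, w = len(image), len(image[0])
    top = (h - crop_h) // 2
    left = (w - crop_w) // 2
    return [row[left:left + crop_w] for row in image[top:top + crop_h]]

# 6x8 toy "wide angle image data"; each pixel records its (row, col) index
# so the crop position is easy to verify.
wide = [[(r, c) for c in range(8)] for r in range(6)]
narrow = cut_central_region(wide, 2, 4)  # central 2x4 region
# The crop starts 2 rows down and 2 columns in, so narrow[0][0] == (2, 2).
```

Because the cropped region keeps the sensor's native resolution over a smaller field, the cut-out image is effectively magnified relative to displaying the full frame at the same output size, which is the property the combination relies on.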
Regarding claim 20, Yasuhara discloses the image processing device according to claim 16, wherein the display control unit causes third image data, including a high-distortion region on a side behind the first moving device, to be displayed, when the first moving device moves backward (para[0026] & Fig. 2 teaches the image control unit 11 generates a display image of the central display 2c using the captured image of the back camera 1d, and outputs the generated display image to the central display 2c. For example, when the user's operation or the shift lever is operated to the back position, the controller 10 switches the central display 2c from an image captured by the rear camera 1c to an image captured by the back camera 1d. At this time, the image control unit 11 generates a display image indicating at least the vicinity of the rear of the vehicle among the surroundings of the host vehicle from the image captured by the back camera 1d, para[0029] teaches and when the angle of view is wide, distortion becomes large at the outer edge portion of the imaging range. para[0030] teaches on the other hand, a camera with a wide angle of view is used for the back camera 1d because it is assumed that the surroundings of the host vehicle are taken widely).
Regarding claim 21, Yasuhara discloses the image processing device according to claim 16, wherein the detection unit detects, as the second moving device, a moving device that is present at a distance that is less than a predetermined distance from the first moving device in the image data other than the narrow angle image data, the image data being the image data in the region including a high-distortion region (para [0024] teaches, for example, when another vehicle approaches a position near the rear of the host vehicle, the distance from the rear camera 1c to the other vehicle is shorter than the distance from the rearview mirror to the other vehicle; therefore, the rear camera 1c captures the other vehicle at a position closer than the position of the rearview mirror, so the size of the other vehicle imaged by the rear camera 1c is larger than the size of the other vehicle reflected in the rearview mirror).
Regarding claim 27, Yasuhara discloses the image processing device according to claim 16, wherein the detection unit is able to detect a type of the second moving device, and the display control unit causes a condition under which the wide angle image data is displayed to be different, depending on the type (para [0042]-[0043] teach, as shown in FIG. 8, a captured image of the back camera 1d that captures a part of another vehicle is allocated to the portion corresponding to the blind spot range of the rear camera 1c, so that the entire other vehicle is displayed on the central display 2c; when the image control unit 11 combines the captured image of the rear camera 1c and the captured image of the back camera 1d to generate the display screen, it sets the ratio of the images to be combined in accordance with the position of the other vehicle detected by the vehicle detection unit 12, generating a display image such that the ratio of the image captured by the back camera 1d is larger as the range of the other vehicle included in the blind spot range of the rear camera 1c is larger; see also para [0047]-[0048]).
Regarding claim 28, Aihara discloses the image processing device according to claim 16, wherein the at least one processor or circuit is further configured to function as an image processing unit configured to generate image data by performing distortion correction on imaged data generated by the imaging unit (para[0070] teaches the signal processing circuit 13 of the side camera 10 performs gamma correction, distortion correction, and the like on the image, but these image processing may be performed by the control device 20). Motivation to combine as indicated in claim 16.
Regarding claim 29, Yasuhara discloses a moving device comprising: at least one processor or circuit configured to function as: an imaging unit configured to image a side behind a first moving device to generate image data (para[0014] teaches the rear camera 1c is installed to image the rear of the vehicle, and is provided to image between a portion projected by the right side camera 1a and a portion projected by the left side camera 1b. The rear camera 1c may be provided outside the back door (outside the vehicle) or may be provided inside the back door (in the vehicle);a detection unit configured to detect a second moving device that satisfies a predetermined condition from the image data (Para[0036] teaches and a vehicle detection unit 12 to for detecting another vehicle, (Para[0024] teaches FIG. 2, the rear camera 1c is provided on the back door. For example, when another vehicle approaches a position near the rear of the host vehicle, the distance from the rear camera 1c to the other vehicle is shorter than the distance from the rearview mirror to the other vehicle. Therefore, the rear camera 1c captures another vehicle at a position closer than the position of the rearview mirror, so the size of the other vehicle imaged by the rear camera 1c is larger than the size of the other vehicle reflected in the rearview mirror)); a display control unit configured to display narrow angle image data, wherein the narrow angle image data, in a case when the detection unit does not detect the second moving device (para[0030] teaches at least two types of cameras are used to capture both the distant part of the host vehicle and the vicinity part of the host vehicle on the display. As shown in FIGS. 
5 and 6, it is assumed that the rear camera 1c captures a distant place, so a camera with a narrow angle of view is used, Para[0034] On the other hand, when another vehicle approaches the host vehicle and the other vehicle enters a dead angle range (a range between the imaging range of rear camera 1c and the ground surface) which is outside the imaging range of rear camera 1c, the rear camera 1c can not take at least a part of other vehicles. The display screen of the central display 1c at this time is shown in FIG. 7. FIG. 7 is a view showing a display screen of the central display 1c when a display image is generated only with a captured image of the rear camera 1c in a state where the position of another vehicle is included in the blind spot range of the rear camera 1c. [0035] As shown in FIG. 7, since a part of the other vehicle is included in the dead angle range of the rear camera 1c, the part of the other vehicle can not be displayed on the central display 1c. Then, for example, when another vehicle approaches the host vehicle from the state shown in FIG. 7 such as in a traffic jam, most of the other vehicles can not be displayed on the central display 1c.), and to display wide angle image data corresponding to a wider region than the narrow angle image, in a case when the detection unit detects the second moving device (para[0030] teaches on the other hand, a camera with a wide angle of view is used for the back camera 1d because it is assumed that the surroundings of the host vehicle are taken widely. Para[0035] teaches Therefore, in the vehicle display device according to the present embodiment, as described below, when an object such as another vehicle is included in the blind spot range of the rear camera 1c, the central display is performed using the image captured by the back camera 1d. The display image of 1c is generated.
Para[0039] In addition, the image control unit 11 assigns the captured image of the back camera 1d to the central display 1c at the position below the display screen. The image to be assigned is an image of a portion overlapping the blind spot range of the rear camera 1c in the captured image of the rear camera 1c, and is an image showing another vehicle. para[0048] teaches vehicle speed of the host vehicle is low, such as in a traffic jam, there is a high possibility that another vehicle approaches the host vehicle. Therefore, when the vehicle speed of the host vehicle is low, the occupant can check the other vehicle by increasing the ratio of the captured image of the back camera 1d that captures the area near the rear of the host vehicle in the display image of the central display 1c); a display unit configured to have display controlled by the display control unit (para[0049] teaches the image control unit 11 displays the display image generated from the captured image of the rear camera 1c); and a drive unit configured to cause the first moving device to move (para[0033] teaches traveling vehicle).
Yasuhara does not explicitly disclose display narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system. However, Aihara discloses display narrow angle image data cut from the image data, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system (para[0040] teaches according to the side camera 10 of the present embodiment, as shown in FIG. 14A, an enlarged image of the subject (here, the motorcycle 104) can be obtained in the region R1 used for image analysis. Therefore, when this region R1 is cut out, an image having a high resolution can be obtained as shown in FIG. 14B, so that the bike 104 to be detected can be accurately recognized in the image analysis). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view, with the method of Aihara, in which an image with a magnified subject region is obtained so that, when this region is cut out, an image having high resolution results, in order to provide a system in which the detection target in image analysis can be precisely recognized.
Regarding claim 30, Yasuhara discloses an image processing method comprising: an imaging step of imaging a side behind a first moving device using an imaging unit to generate image data (para[0014] teaches the rear camera 1c is installed to image the rear of the vehicle, and is provided to image between a portion projected by the right side camera 1a and a portion projected by the left side camera 1b. The rear camera 1c may be provided outside the back door (outside the vehicle) or may be provided inside the back door (in the vehicle)); a detecting step of detecting a second moving device that (Para[0036] teaches a vehicle detection unit 12 for detecting another vehicle) satisfies a predetermined condition from the image data (Para[0024] teaches FIG. 2, the rear camera 1c is provided on the back door. For example, when another vehicle approaches a position near the rear of the host vehicle, the distance from the rear camera 1c to the other vehicle is shorter than the distance from the rearview mirror to the other vehicle. Therefore, the rear camera 1c captures another vehicle at a position closer than the position of the rearview mirror, so the size of the other vehicle imaged by the rear camera 1c is larger than the size of the other vehicle reflected in the rearview mirror); and a display control step of causing narrow angle image data to be displayed, in a case when the detection step does not detect the second moving device (para[0030] teaches at least two types of cameras are used to capture both the distant part of the host vehicle and the vicinity part of the host vehicle on the display. As shown in FIGS.
5 and 6, it is assumed that the rear camera 1c captures a distant place, so a camera with a narrow angle of view is used, Para[0034] On the other hand, when another vehicle approaches the host vehicle and the other vehicle enters a dead angle range (a range between the imaging range of rear camera 1c and the ground surface) which is outside the imaging range of rear camera 1c, the rear camera 1c can not take at least a part of other vehicles. The display screen of the central display 1c at this time is shown in FIG. 7. FIG. 7 is a view showing a display screen of the central display 1c when a display image is generated only with a captured image of the rear camera 1c in a state where the position of another vehicle is included in the blind spot range of the rear camera 1c. [0035] As shown in FIG. 7, since a part of the other vehicle is included in the dead angle range of the rear camera 1c, the part of the other vehicle can not be displayed on the central display 1c. Then, for example, when another vehicle approaches the host vehicle from the state shown in FIG. 7 such as in a traffic jam, most of the other vehicles can not be displayed on the central display 1c), and causing wide angle image data corresponding to a wider region than the narrow angle image to be displayed, in a case when the detection step detects the second moving device (para[0030] teaches on the other hand, a camera with a wide angle of view is used for the back camera 1d because it is assumed that the surroundings of the host vehicle are taken widely. Para[0035] teaches Therefore, in the vehicle display device according to the present embodiment, as described below, when an object such as another vehicle is included in the blind spot range of the rear camera 1c, the central display is performed using the image captured by the back camera 1d. The display image of 1c is generated.
Para[0039] In addition, the image control unit 11 assigns the captured image of the back camera 1d to the central display 1c at the position below the display screen. The image to be assigned is an image of a portion overlapping the blind spot range of the rear camera 1c in the captured image of the rear camera 1c, and is an image showing another vehicle. para[0048] teaches vehicle speed of the host vehicle is low, such as in a traffic jam, there is a high possibility that another vehicle approaches the host vehicle. Therefore, when the vehicle speed of the host vehicle is low, the occupant can check the other vehicle by increasing the ratio of the captured image of the back camera 1d that captures the area near the rear of the host vehicle in the display image of the central display 1c).
Yasuhara does not explicitly disclose narrow angle image data, cut from the image data, to be displayed, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system. However, Aihara discloses narrow angle image data, cut from the image data, to be displayed, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system (para[0040] teaches according to the side camera 10 of the present embodiment, as shown in FIG. 14A, an enlarged image of the subject (here, the motorcycle 104) can be obtained in the region R1 used for image analysis. Therefore, when this region R1 is cut out, an image having a high resolution can be obtained as shown in FIG. 14B, so that the bike 104 to be detected can be accurately recognized in the image analysis). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view, with the method of Aihara, in which an image with a magnified subject region is obtained so that, when this region is cut out, an image having high resolution results, in order to provide a system in which the detection target in image analysis can be precisely recognized.
Regarding claim 31, Yasuhara discloses a non-transitory computer-readable storage medium configured to store a computer program comprising instructions for executing the following processes (Para[0018] teaches the controller 10 includes a ROM (Read Only Memory) in which various programs are stored, a CPU (Central Processing Unit) as an operation circuit that executes the programs stored in the ROM, and a RAM (Random Access Memory) that functions as an accessible storage device): an imaging step of imaging a side behind a first moving device using an imaging unit to generate image data (para[0014] teaches the rear camera 1c is installed to image the rear of the vehicle, and is provided to image between a portion projected by the right side camera 1a and a portion projected by the left side camera 1b. The rear camera 1c may be provided outside the back door (outside the vehicle) or may be provided inside the back door (in the vehicle)); a detecting step of detecting a second moving device (Para[0036] teaches a vehicle detection unit 12 for detecting another vehicle) that satisfies a predetermined condition from the image data (Para[0024] teaches FIG. 2, the rear camera 1c is provided on the back door. For example, when another vehicle approaches a position near the rear of the host vehicle, the distance from the rear camera 1c to the other vehicle is shorter than the distance from the rearview mirror to the other vehicle.
Therefore, the rear camera 1c captures another vehicle at a position closer than the position of the rearview mirror, so the size of the other vehicle imaged by the rear camera 1c is larger than the size of the other vehicle reflected in the rearview mirror); and a display control (image control unit 11) step of causing narrow angle image data to be displayed, in a case when the detection step does not detect the second moving device (para[0030] teaches at least two types of cameras are used to capture both the distant part of the host vehicle and the vicinity part of the host vehicle on the display. As shown in FIGS. 5 and 6, it is assumed that the rear camera 1c captures a distant place, so a camera with a narrow angle of view is used, Para[0034] On the other hand, when another vehicle approaches the host vehicle and the other vehicle enters a dead angle range (a range between the imaging range of rear camera 1c and the ground surface) which is outside the imaging range of rear camera 1c, the rear camera 1c can not take at least a part of other vehicles. The display screen of the central display 1c at this time is shown in FIG. 7. FIG. 7 is a view showing a display screen of the central display 1c when a display image is generated only with a captured image of the rear camera 1c in a state where the position of another vehicle is included in the blind spot range of the rear camera 1c. [0035] As shown in FIG. 7, since a part of the other vehicle is included in the dead angle range of the rear camera 1c, the part of the other vehicle can not be displayed on the central display 1c. Then, for example, when another vehicle approaches the host vehicle from the state shown in FIG.
7 such as in a traffic jam, most of the other vehicles can not be displayed on the central display 1c), and causing wide angle image data corresponding to a wider region than the narrow angle image to be displayed, in a case when the detection step detects the second moving device (para[0030] teaches on the other hand, a camera with a wide angle of view is used for the back camera 1d because it is assumed that the surroundings of the host vehicle are taken widely. Para[0035] teaches Therefore, in the vehicle display device according to the present embodiment, as described below, when an object such as another vehicle is included in the blind spot range of the rear camera 1c, the central display is performed using the image captured by the back camera 1d. The display image of 1c is generated. Para[0039] In addition, the image control unit 11 assigns the captured image of the back camera 1d to the central display 1c at the position below the display screen. The image to be assigned is an image of a portion overlapping the blind spot range of the rear camera 1c in the captured image of the rear camera 1c, and is an image showing another vehicle. para[0048] teaches vehicle speed of the host vehicle is low, such as in a traffic jam, there is a high possibility that another vehicle approaches the host vehicle. Therefore, when the vehicle speed of the host vehicle is low, the occupant can check the other vehicle by increasing the ratio of the captured image of the back camera 1d that captures the area near the rear of the host vehicle in the display image of the central display 1c).
Yasuhara does not explicitly disclose narrow angle image data, cut from the image data, to be displayed, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system. However, Aihara discloses narrow angle image data, cut from the image data, to be displayed, wherein the narrow angle image data corresponds to a central region including an optical axis of an optical system (para[0040] teaches according to the side camera 10 of the present embodiment, as shown in FIG. 14A, an enlarged image of the subject (here, the motorcycle 104) can be obtained in the region R1 used for image analysis. Therefore, when this region R1 is cut out, an image having a high resolution can be obtained as shown in FIG. 14B, so that the bike 104 to be detected can be accurately recognized in the image analysis). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view, with the method of Aihara, in which an image with a magnified subject region is obtained so that, when this region is cut out, an image having high resolution results, in order to provide a system in which the detection target in image analysis can be precisely recognized.
10. Claims 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yasuhara et al. (JP2014235639 A) (IDS provided 02/14/2025) in view of Aihara et al. (WO2019123840 A1) (IDS provided 02/14/2025) and Kishiwada et al. (JP 2019178871A) (IDS provided 02/14/2025).
Regarding claim 17, Yasuhara in view of Aihara discloses the image processing device according to claim 16. Yasuhara in view of Aihara does not explicitly disclose wherein, in a case when a focal distance of the optical system is defined as f, a half angle of view is defined as θ, an image height on an image plane is defined as y, and a projection property representing a relationship between the image height y and the half angle of view θ is defined as y(θ), y(θ) in a low-distortion region is greater than f x θ and is different from the projection property in a high-distortion region. However, Kishiwada discloses wherein, in a case when a focal distance of the optical system is defined as f, a half angle of view is defined as θ, an image height on an image plane is defined as y, and a projection property representing a relationship between the image height y and the half angle of view θ is defined as y(θ), y(θ) in a low-distortion region is greater than f x θ and is different from the projection property in a high-distortion region (para[0019] teaches angle of view of a stereo camera system is 100 degrees or more in all angles due to the recent trend of wide angle. In the central projection, the relationship between the image height and the angle of view is defined by the following equation (2), so that the required image height increases as the angle becomes wider. (2) y = f·tan(θ), where y: image height, f: focal length, θ: angle of view).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara in view of Aihara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view and a magnified subject region is cut out, with the projection property of Kishiwada in order to provide a system in which no correction amount or correction error occurs during image correction.
Regarding claim 18, Kishiwada further discloses the image processing device according to claim 17, wherein the low-distortion region is configured to have a projection property that is approximate to a central projection method, wherein y = f x tan θ, or an equidistant projection method, wherein y = f x θ (para[0014] teaches FIG. 2A is an example for explaining central projection. The central projection is a method of projecting a subject existing in a direction away from the camera optical axis by an angle θ to a position separated by f·tan(θ) from the center of the imaging surface (intersection with the optical axis). Here, f is the focal length of the optical system. Para[0016] teaches FIG. 2B is an example of a diagram illustrating equidistant projection. The equidistant projection is a method of projecting a subject existing in a direction away from the camera optical axis by an angle θ to a position separated by f·θ from the center of the imaging surface. Further para[0032] teaches the image height characteristics can be expressed by the following formulas (3) and (4). (3) y = f·tan θ (y ≦ a)). Motivation to combine as indicated in claim 17.
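For illustration only (not part of the record): the two projection properties quoted above from Kishiwada, y = f·tan(θ) for central projection and y = f·θ for equidistant projection, can be sketched numerically. The focal length below is a hypothetical value chosen purely for the sketch.

```python
import math

f = 1.0  # focal length in arbitrary units (assumed value, for illustration only)

def y_central(theta):
    """Central projection: y = f * tan(theta)."""
    return f * math.tan(theta)

def y_equidistant(theta):
    """Equidistant projection: y = f * theta (theta in radians)."""
    return f * theta

# For any half angle of view 0 < theta < 90 degrees, tan(theta) > theta,
# so the central projection yields y(theta) > f * theta, consistent with
# the low-distortion-region property recited in claim 17.
for deg in (10, 30, 50):
    th = math.radians(deg)
    assert y_central(th) > f * th
    print(f"{deg:2d} deg: central={y_central(th):.4f}, equidistant={y_equidistant(th):.4f}")
```

The gap between the two curves widens with the half angle of view, which is why a central-projection lens needs a rapidly growing image height at wide angles, as the quoted para[0019] of Kishiwada observes.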
11. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Yasuhara et al. (JP2014235639 A) (IDS provided 02/14/2025) in view of Aihara et al. (WO2019123840 A1) (IDS provided 02/14/2025) and Nakahara et al. (US 2023/0101459 A1).
Regarding claim 19, Yasuhara in view of Aihara discloses the image processing device according to claim 16. Yasuhara in view of Aihara does not explicitly disclose wherein, in a case when θmax is defined as a maximum half angle of view that the optical system has, the image processing device is configured to satisfy: 1 < f x sin(θmax)/y(θmax) ≤ 1.9. However, Nakahara discloses wherein, in a case when θmax is defined as a maximum half angle of view that the optical system has, the image processing device is configured to satisfy: 1 < f x sin(θmax)/y(θmax) ≤ 1.9 (Para[0039] & Formula (1): 1 < f×sin(θmax)/y(θmax) < 1.9). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara in view of Aihara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view and a magnified subject region is cut out, with the method of Nakahara, which takes into consideration the balance between the resolution in the high-resolution region and the resolution in the low-resolution region, in order to provide a system in which the processing load for image recognition can be reduced and image recognition at high speed can be achieved.
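For illustration only (not part of the record): the claim 19 quantity f × sin(θmax)/y(θmax) can be evaluated for hypothetical projection properties to see what the claimed range implies. The focal length, maximum half angle of view, and the 0.7 compression factor below are all assumed values chosen for the sketch.

```python
import math

f = 1.0                        # hypothetical focal length
theta_max = math.radians(90)   # hypothetical maximum half angle of view

def ratio(y_at_theta_max):
    """The quantity recited in claim 19: f * sin(theta_max) / y(theta_max)."""
    return f * math.sin(theta_max) / y_at_theta_max

# For an equidistant projection, y(theta_max) = f * theta_max, so the
# ratio is sin(x)/x, which is less than 1 for any x > 0; a pure
# equidistant lens therefore does not satisfy 1 < ratio.
r_equidistant = ratio(f * theta_max)
assert r_equidistant < 1

# A projection that compresses the periphery more strongly, e.g. a
# hypothetical y(theta_max) = 0.7 * f * sin(theta_max), yields a ratio
# of about 1.43, which falls inside the claimed range (1, 1.9].
r_compressed = ratio(0.7 * f * math.sin(theta_max))
assert 1 < r_compressed <= 1.9
```

The sketch illustrates that satisfying the condition requires the peripheral image height y(θmax) to be compressed below f × sin(θmax), consistent with Nakahara's balance between high-resolution and low-resolution regions.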
12. Claims 22 and 24 are rejected under 35 U.S.C. 103 as being unpatentable over Yasuhara et al. (JP2014235639 A) (IDS provided 02/14/2025) in view of Aihara et al. (WO2019123840 A1) (IDS provided 02/14/2025) and Ohwada et al. US 2022/0024452A1 (corresponding to WO 2020110611) (IDS provided 02/14/2025).
Regarding claim 22, Yasuhara in view of Aihara discloses the image processing device according to claim 16. Yasuhara in view of Aihara does not explicitly disclose wherein the detection unit detects, as the second moving device, a moving device that is blinking its blinker under a predetermined condition. However Ohwada discloses wherein the detection unit detects, as the second moving device, a moving device that is blinking its blinker under a predetermined condition (para[0084] teaches based on the surrounding image captured by the imaging unit 11 , the image processing apparatus 10 detects that the other moveable body 2 is blinking its headlights behind the moveable body 1 . Para[0191] Fig. 13 & When not judging that the vehicle staying time is 3 seconds or more and the other moveable body 2 is flashing its headlights (step S 44 : No), the processor 13 proceeds to the process of step S 50 . For example, the processor 13 judges whether the other moveable body 2 is zig-zagging based on whether the other moveable body 2 is blinking its headlights behind the moveable body 1 , whether the other moveable body 2 has its headlights on the high beam setting behind the moveable body 1 , or the like.).
It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara in view of Aihara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view and a magnified subject region is cut out, with the method of Ohwada, which detects a state of another moving body from a peripheral image and determines the behavior of the moving body based on the detected state of the other moving body, in order to provide a system in which smooth traffic is achieved.
Regarding claim 24, Yasuhara in view of Aihara discloses the image processing device according to claim 16. Yasuhara in view of Aihara does not explicitly disclose wherein the detection unit detects, as the second moving device, a moving device that is performing a headlight flashing motion. However, Ohwada discloses wherein the detection unit detects, as the second moving device, a moving device that is performing a headlight flashing motion (para[0084] teaches based on the surrounding image captured by the imaging unit 11, the image processing apparatus 10 detects that the other moveable body 2 is blinking its headlights behind the moveable body 1, or that the other moveable body 2 has its headlights on the high beam setting behind the moveable body 1). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara in view of Aihara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view and a magnified subject region is cut out, with the method of Ohwada, which detects a state of another moving body from a peripheral image and determines the behavior of the moving body based on the detected state of the other moving body, in order to provide a system in which smooth traffic is achieved.
13. Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Yasuhara et al. (JP2014235639 A) (IDS provided 02/14/2025) in view of Aihara et al. (WO2019123840 A1) (IDS provided 02/14/2025) and Shalev-Shwartz et al. (US 2021/0179096 A1) (IDS provided 02/14/2025).
Regarding claim 23, Yasuhara in view of Aihara discloses the image processing device according to claim 16. Yasuhara in view of Aihara does not explicitly disclose wherein the detection unit detects, as the second moving device, a moving device that repeats a motion of crossing a boundary of passing sections. However, Shalev-Shwartz discloses wherein the detection unit detects, as the second moving device, a moving device that repeats a motion of crossing a boundary of passing sections (Para[0274] teaches when there are observed lane structures, it may be assumed that next-lane vehicles will respect lane boundaries. That is, the host vehicle may assume that a next-lane vehicle will stay in its lane unless there is observed evidence (e.g., a signal light, strong lateral movement, movement across a lane boundary) indicating that the next-lane vehicle will cut into the lane of the host vehicle). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara in view of Aihara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view and a magnified subject region is cut out, with the safety constraint of Shalev-Shwartz, which includes a safety distance constraint based on a determined speed of the host vehicle and a determined speed of a detected target object, in order to provide a system that ensures the safety of the policy and enables adaptivity to many scenarios, a human-like balance between defensive/aggressive behavior, and human-like negotiation with other drivers.
14. Claims 25-26 are rejected under 35 U.S.C. 103 as being unpatentable over Yasuhara et al. (JP2014235639 A) (IDS provided 02/14/2025) in view of Aihara et al. (WO2019123840 A1) (IDS provided 02/14/2025) and Park et al. (US 2018/0376122 A1) (IDS provided 02/14/2025).
Regarding claim 25, Yasuhara in view of Aihara discloses the image processing device according to claim 16. Yasuhara in view of Aihara does not explicitly disclose wherein the display control unit causes a predetermined transition time to be spent when display of the narrow angle image data and the wide angle image data is switched. However, Park discloses wherein the display control unit causes a predetermined transition time to be spent when display of the narrow angle image data and the wide angle image data is switched (Para[0106] teaches the digital photographing device 100 continuously generates and outputs the virtual viewpoint image V, consuming more power. When the zoom input of the user is included in the switchover region for a certain time or more, the digital photographing device 100 may be set so that any one of the wide-angle camera and the telephoto camera may be selected. At this time, according to a setting, when the input of ×2.3 lasts for five seconds or more, the digital photographing device 100 may automatically switch to the telephoto camera even without a change in the input zoom factor). It would have been obvious to one having ordinary skill in the art before the effective filing date of the invention to combine the method of Yasuhara in view of Aihara, in which an object positioned behind the own vehicle is detected using a camera with a narrow angle of view and a camera with a wide angle of view and a magnified subject region is cut out, with the method of Park, in which the camera is switched automatically when the user's zoom input is maintained in the switchover region for a relatively long time, in order to provide a system that prevents excessive power consumption.
Regarding claim 26, Park discloses the image processing device according to claim 25, wherein the transition time when display is switched from the narrow angle image data to the wide angle image data, and the transition time when display is switched from the wide angle image data to the narrow angle image data, are different from each other (Para[0107] & Fig. 11 teaches when a camera before switchover is the wide-angle camera, the virtual viewpoint image V is set to be generated at the input of ×2.3 which is less than ×2.5. At this time, according to a setting, when the input of ×2.3 lasts for five seconds or more, the digital photographing device 100 may automatically switch to the telephoto camera even without a change in the input zoom factor. On the other hand, when a camera before switchover is the telephoto camera and an input of ×2.4 lasts for five seconds or more, the digital photographing device 100 may automatically switch to the wide-angle camera). Motivation to combine as indicated in claim 25.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROWINA J CATTUNGAL whose telephone number is (571)270-5922. The examiner can normally be reached Monday-Thursday 7:30am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at (571) 272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ROWINA J CATTUNGAL/Primary Examiner, Art Unit 2425