DETAILED ACTION
This Office action is in response to the amendment filed 11/19/2025, wherein claims 1-19 are pending and being examined. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claims 1-19 have been considered but are directed to the newly amended language, which is addressed below.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-19 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-14 of U.S. Patent No. 12,088,952. Although the claims at issue are not identical, they are not patentably distinct from each other because of the following.
In regard to claim 1, claim 1 of the instant application is a broader version of claim 1 of U.S. Patent No. 12,088,952. Every claim limitation within claim 1 of the instant application is rendered obvious or anticipated by a corresponding limitation within claim 1 of U.S. Patent No. 12,088,952. Therefore, the two claims are not patentably distinct, and claim 1 is rejected on the ground of nonstatutory double patenting.
In regard to claims 2-17, these claims are rejected as being dependent upon a previously rejected claim and/or as claiming subject matter not patentably distinct from the various dependent claims of U.S. Patent No. 12,088,952.
In regard to claim 18, claim 18 of the instant application is a broader version of claim 13 of U.S. Patent No. 12,088,952. Every claim limitation within claim 18 of the instant application is rendered obvious or anticipated by a corresponding limitation within claim 13 of U.S. Patent No. 12,088,952. Therefore, the two claims are not patentably distinct, and claim 18 is rejected on the ground of nonstatutory double patenting.
In regard to claim 19, claim 19 of the instant application is a broader version of claim 14 of U.S. Patent No. 12,088,952. Every claim limitation within claim 19 of the instant application is rendered obvious or anticipated by a corresponding limitation within claim 14 of U.S. Patent No. 12,088,952. Therefore, the two claims are not patentably distinct, and claim 19 is rejected on the ground of nonstatutory double patenting.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3, 6, 8, 9, 13, 14, and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida (US 2020/0369207) in view of Kosugi et al. (US 2018/0056870) (hereinafter Kosugi).
In regard to claim 1, Yoshida discloses an image processing apparatus [¶0024; image processing device] comprising:
circuitry [¶0044] configured to
set a reference visual field corresponding to a display mode [¶0062; image processor 103 generates combined image 50 by combining images 51 to 53. ¶0066],
obtain a virtual viewpoint position that changes in accordance with motion of a viewpoint of a driver of a vehicle [¶0061-¶0063; position detector 102 detects that the face position of the driver has moved in one of the left and right directions of vehicle 1, image processor 103 moves the range in which the image is to be clipped out of combined image 50 to a position located in the other of the left and right directions of vehicle 1 relative to the range before the movement of the face position, and moves the position at which position image 80 is to be superimposed to a position located in the one of the left and right directions of vehicle 1 relative to the position before the movement of the face position. Fig.5 through Fig.7; virtual viewpoint changes as the driver moves their head], and
generate a display image by superimposing a vehicle interior image on a captured image obtained by capturing an image on a rear side from the vehicle [¶0088; Image processor 103 outputs, to display device 40, image 60 (or image 61) on which position image 80 has been superimposed (S15). Fig.4, Fig.7], based on setting information regarding the reference visual field and the virtual viewpoint position [¶0090; Image processor 103 performs image processing to clip, according to the face position detected by position detector 102, image 54 having range 71 corresponding to face position P1, out of combined image 50 obtained by combining images 51 to 53, and superimpose position image 80 indicating a position in vehicle 1, on image 54 at position P11 corresponding to face position P1, and outputs image 60 (or image 61) resulting from the image processing. Fig.4, Fig.7].
Yoshida does not explicitly disclose converting the captured image and the vehicle interior image arranged in a three-dimensional space into a projected coordinate system with a visual field determined by the virtual viewpoint position. However, Kosugi discloses,
generate a display image by superimposing a vehicle interior image on a captured image obtained by capturing an image on a rear side from the vehicle [¶0046-¶0048; vehicle cabin interior, in which a portion that is seen through a window can be viewed through the window, is displayed so as to be superposed on an image in which the image that is projected on the virtual screen 24. Fig.6], based on setting information regarding the reference visual field and the virtual viewpoint position [¶0017-¶0018; virtual viewpoints that are set at respective predetermined positions… projected from the virtual viewpoints onto a virtual screen that is provided at the rear of the vehicle... captured images that are respectively projected from the virtual viewpoints onto a virtual screen that is provided at the rear of the vehicle and is curved convexly rearward. Fig.3A, Fig.3B; virtual screens (24A, 24B) are at a position on a rear side from the vehicle. ¶0043; a scene that is the same as that of a case of regularly viewing the rear of the vehicle from the virtual viewpoint 22 is converted into an image that is in a state of being projected onto the virtual screen 24. ¶0046; marking the position of the virtual screen 24 that can be seen through the window from the position of the inner rearview mirror 20 of the vehicle] with converting the captured image and the vehicle interior image arranged in a three-dimensional space into a projected coordinate system with a visual field determined by the virtual viewpoint position [¶0052-¶0053; CPU 18A converts the image that shows the vehicle cabin interior and the image that shows the rear of the vehicle to become images projected onto the virtual screen 24. Fig.3A, Fig.3B; virtual screen is a three-dimensional screen. ¶0043; virtual screen 24 may be planar, but is desirably an oval shape having curvature that is convex toward the rearward direction of the vehicle].
Yoshida discloses an image processing device for a vehicle. A display image is generated by superimposing a vehicle interior image onto an image obtained by combining multiple captured images of the rear and sides of the vehicle. Initially, the center of the driver's field of view is determined and a range representing the field of view is set by the processor. The driver's face position is tracked, and the center of the field of view and the associated field-of-view range are adjusted as the face position changes over time. Yoshida does not explicitly disclose arranging the interior image and the captured image in a three-dimensional space.
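For illustration only, the following Python sketch models the clipping behavior Yoshida describes, in which the clipped range moves opposite to the driver's face motion (Yoshida ¶0061-¶0063). All identifiers, constants, and the motion gain are hypothetical and do not appear in the reference.

```python
# Illustrative sketch only; names and values are hypothetical, not from Yoshida.
import numpy as np

COMBINED_WIDTH = 3000   # width of the combined rear image (hypothetical pixels)
CLIP_WIDTH = 1000       # width of the clipped display range (hypothetical pixels)
GAIN = 5.0              # hypothetical mapping from face motion to clip motion

def clip_display_image(combined: np.ndarray, face_dx: float) -> np.ndarray:
    """Clip a display range out of the combined rear image.

    When the driver's face moves in one lateral direction (+face_dx), the
    clipped range moves in the opposite direction, emulating the view shift
    a driver would see in a physical rearview mirror.
    """
    center = combined.shape[1] // 2 - int(GAIN * face_dx)  # opposite direction
    left = int(np.clip(center - CLIP_WIDTH // 2, 0, combined.shape[1] - CLIP_WIDTH))
    return combined[:, left:left + CLIP_WIDTH]

# Example: face moves right (+dx), so the clip window shifts left.
combined = np.zeros((500, COMBINED_WIDTH, 3), dtype=np.uint8)
view = clip_display_image(combined, face_dx=20.0)
print(view.shape)  # (500, 1000, 3)
```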
Kosugi discloses an image processing device for a vehicle that combines an interior image with an image captured of the vehicle's rearward region, similar to Yoshida. Kosugi further discloses that both the interior image and the captured image are projected onto one or more virtual screens; as shown in Fig.3A through Fig.5, projecting the captured image onto the virtual screen(s) arranges the images in three dimensions. As is readily apparent from Kosugi's disclosure, the virtual screen is positioned within a three-dimensional space/coordinate system. That is, the system projects both images onto a common virtual screen surface that lies within a three-dimensional space.
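As an illustrative aid for the projection Kosugi describes, the sketch below casts a ray from a virtual viewpoint through a 3D point and intersects it with a planar virtual screen behind the vehicle. The geometry, coordinates, and function name are hypothetical; Kosugi's preferred screen is curved (¶0043), and a plane is used here only to keep the example minimal.

```python
# Illustrative sketch only; geometry and names are hypothetical, not from Kosugi.
import numpy as np

def project_to_virtual_screen(point_3d: np.ndarray,
                              viewpoint: np.ndarray,
                              screen_z: float) -> np.ndarray:
    """Project a 3D point onto a planar virtual screen at z = screen_z.

    A ray from the virtual viewpoint through the point is intersected with
    the screen plane, analogous to converting the interior image and the
    captured image into images projected onto a virtual screen.
    """
    direction = point_3d - viewpoint
    t = (screen_z - viewpoint[2]) / direction[2]  # ray/plane intersection
    return viewpoint + t * direction

viewpoint = np.array([0.0, 1.2, 0.0])    # hypothetical virtual viewpoint (meters)
cabin_pt = np.array([0.3, 1.1, -1.5])    # hypothetical interior feature (meters)
print(project_to_virtual_screen(cabin_pt, viewpoint, screen_z=-6.0))
# -> the feature's position on the virtual screen plane behind the vehicle
```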
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the apparatus disclosed by Yoshida with the three-dimensional arrangement of the images and their projection into a projected coordinate system as disclosed by Kosugi, in order to provide a driver with a sense of depth while viewing the images, which can improve the overall visibility of the rearward field of view [Kosugi ¶0005-¶0009, ¶0019, ¶0047-¶0048, ¶0053].
In regard to claim 2, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida in view of Kosugi further discloses,
wherein as the reference visual field setting, a display position setting is included [Yoshida ¶0066; image processor 103 determines, as the range in which an image is to be clipped, range 71 corresponding to angle range θ10 spread from the center position (the center position in the horizontal direction) of display device 40 in vehicle 1 as a starting point and centered on direction D1 in large imaging range R10. Kosugi ¶0018; captured images that are seen from virtual viewpoints that are set at respective predetermined positions].
See claim 1 for motivation to combine.
In regard to claim 3, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida in view of Kosugi further discloses,
wherein as the reference visual field setting, a display size setting is included [Yoshida ¶0073; converting combined image 50 so that an image of the same size as the size of range 71 clipped out of combined image 50 has the same resolution as the resolution of image 61. Kosugi ¶0046; display an image, in which the width of the rear side window glasses is wider than conventionally, by widening the virtual screen 24].
See claim 1 for motivation to combine.
In regard to claim 6, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida further discloses,
wherein the circuitry is further configured to use, as a captured image obtained by capturing an image on a rear side from the vehicle, a captured image captured by an image capturing device attached to a rear part of the vehicle and a captured image captured by an image capturing device attached to a side part of the vehicle [¶0050-¶0053. ¶0062; image processor 103 generates combined image 50 by combining images 51 to 53].
In regard to claim 8, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida further discloses,
wherein the vehicle interior image includes a computer graphics image [¶0076; Storage 104 stores position image 80. Position image 80 is smaller than clipped images 54, 55, and is a schematic diagram (for example, CG) illustrating an accessory of vehicle 1 that is located behind driver U1].
In regard to claim 9, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida further discloses,
wherein the circuitry is further configured to change a superimposed positional relationship between the captured image and the vehicle interior image in accordance with motion of the viewpoint of the driver [¶0067-¶0074. ¶0091; image processing device 10 determines, according to detected face position P1 of driver U1, position P21 of range 71 of image 54 to be clipped out of combined image 50 including images 51 to 53 captured by imaging devices 12 to 14 and position P11 of position image 80 to be superimposed on image 54].
In regard to claim 13, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida in view of Kosugi further discloses,
wherein the circuitry is further configured to superimpose the vehicle interior image on the captured image to allow the captured image to be seen through [Yoshida ¶0035-¶0038, ¶0099-¶0103. Kosugi ¶0038, ¶0053].
See claim 1 for motivation to combine.
In regard to claim 14, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida further discloses,
wherein the circuitry is configured to obtain current viewpoint of the driver and current line of sight of the driver, the circuitry is configured to determine a reference viewpoint position of the driver based on the current viewpoint of the driver and the current line of sight of the driver [¶0061-¶0063].
In regard to claim 16, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 14. Yoshida further discloses,
wherein the circuitry is configured to detect the line of sight of the driver based on an image of a pupil of the driver [¶0059-¶0063].
In regard to claim 17, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Yoshida in view of Kosugi further discloses,
wherein the reference visual field setting includes at least one of parameters corresponding to one mode of a plane mirror mode, a curved mirror mode, a right-end curved mirror mode, or a left- end curved mirror mode [Yoshida ¶0091-¶0092, Kosugi ¶0013-¶0014, ¶0043, ¶0048-¶0050].
See claim 1 for motivation to combine.
In regard to claim 18, this claim is drawn to a method corresponding to the apparatus of claim 1, wherein claim 18 contains the same limitations as claim 1 and is therefore rejected on the same basis.
In regard to claim 19, this claim is drawn to a non-transitory computer-readable medium containing instructions that, when executed by a processor, perform the functions of the apparatus of claim 1, wherein claim 19 contains the same limitations as claim 1 and is therefore rejected on the same basis.
Claims 4 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Yoshida (US 2020/0369207) in view of Kosugi (US 2018/0056870), and further in view of Tatara (US 2016/0059781).
In regard to claim 4, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Neither Yoshida nor Kosugi explicitly discloses wherein as the reference visual field setting, a compression setting of all or part of the display image in a horizontal direction is included. However, Tatara discloses,
wherein as the reference visual field setting, a compression setting of all or part of the display image in a horizontal direction is included [Fig.2B, Fig.3B, ¶0026-¶0028; video image processing circuit 17 is configured to cause video images of rear and side areas of the vehicle body 1 to be displayed on two portions of monitor screens at different compression ratios and to change the compression ratios according vehicle information.... video image Pa showing the rear of the vehicle body of the vehicle V is displayed on a left portion 13 a of the monitor screen at a compression ratio A, and a video image Pb showing the side of the vehicle body is displayed on a right portion 13 b at a compression ratio B... compression ratios A, B denote horizontal compression ratios, and the video image is processed by the video image processing circuit 17 so that the compression ratio A is larger than the compression ration B or the compression ratio A > the compression ratio B. ¶0030-¶0032; As the vehicle information when the vehicle V is turning left or right, a turn signal, information from the car navigation system, information on the line of sight of the driver, and the like can be used].
Tatara discloses an ECU that processes images captured by rearward-facing vehicle cameras to generate images for display. As noted above, Tatara discloses that, depending on vehicle state/status information, the captured images may be processed such that certain images/image regions have compression ratios ("settings") that differ from those of other images/image regions.
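Purely as an illustration of Tatara's two-region scheme (rear image Pa at compression ratio A, side image Pb at ratio B, with A > B), the sketch below applies different horizontal compression ratios to two image regions before placing them side by side on a monitor. The function names, resampling method, and ratio values are hypothetical assumptions, not taken from the reference.

```python
# Illustrative sketch only; names and ratio values are hypothetical, not from Tatara.
import numpy as np

def compose_monitor_image(rear: np.ndarray, side: np.ndarray,
                          ratio_a: float, ratio_b: float) -> np.ndarray:
    """Display two regions at different horizontal compression ratios.

    The rear-view region is compressed at ratio_a and the side-view region
    at ratio_b; choosing ratio_a > ratio_b keeps the side region wider and
    easier to recognize when vehicle information (e.g., a turn signal)
    indicates it is important.
    """
    def compress_h(img: np.ndarray, ratio: float) -> np.ndarray:
        new_w = max(1, int(img.shape[1] / ratio))  # higher ratio -> narrower
        cols = np.linspace(0, img.shape[1] - 1, new_w).astype(int)
        return img[:, cols]  # nearest-neighbor column resampling

    return np.hstack([compress_h(rear, ratio_a), compress_h(side, ratio_b)])

rear = np.zeros((480, 640, 3), dtype=np.uint8)
side = np.zeros((480, 640, 3), dtype=np.uint8)
monitor = compose_monitor_image(rear, side, ratio_a=2.0, ratio_b=1.25)
print(monitor.shape)  # (480, 832, 3): 320 rear columns + 512 side columns
```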
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the apparatus disclosed by Yoshida in view of Kosugi with the compression disclosed by Tatara in order to allow important regions of image data to be easily visualized by a driver using an inexpensive data processing technique [Tatara ¶0003-¶0012, ¶0029-¶0035]. As disclosed by Tatara, as the vehicle travels, various information areas in the captured images become important to the driver depending on the vehicle's state and direction; by compressing the images in the disclosed manner, the system can ensure that areas of importance are easily recognized by the driver.
In regard to claim 5, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Neither Yoshida nor Kosugi explicitly discloses wherein as the reference visual field setting, a compression setting of all or part of the display image in a vertical direction is included. However, Tatara discloses,
wherein as the reference visual field setting, a compression setting of all or part of the display image in a vertical direction is included [Fig.4B, Fig.4C, ¶0034-¶0035; video image processing circuit 17 divides the screen of the monitor 13 into two in a vertical direction and processes the video image that is displayed on two upper and lower portions so that the video image is displayed at different compression ratios in the vertical direction... compression ratios C, D both denote vertical compression ratios. ¶0026, ¶0032; video image processing circuit 17 is configured to cause video images of rear and side areas of the vehicle body 1 to be displayed on two portions of monitor screens at different compression ratios and to change the compression ratios according vehicle information... As the vehicle information when the vehicle V is turning left or right, a turn signal, information from the car navigation system, information on the line of sight of the driver, and the like can be used].
See claim 4 for elaboration on Tatara. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the apparatus disclosed by Yoshida in view of Kosugi with the compression disclosed by Tatara in order to allow important regions of image data to be easily visualized by a driver using an inexpensive data processing technique [Tatara ¶0003-¶0012, ¶0029-¶0035]. As disclosed by Tatara, as the vehicle travels, various information areas in the captured images become important to the driver depending on the vehicle's state and direction; by compressing the images in the disclosed manner, the system can ensure that areas of importance are easily recognized by the driver.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Yoshida (US 2020/0369207) in view of Kosugi (US 2018/0056870), and further in view of Hashimoto (US 2017/0282813) (hereinafter Hashimoto).
In regard to claim 7, Yoshida in view of Kosugi discloses the image processing apparatus according to claim 1. Neither Yoshida nor Kosugi explicitly discloses wherein the circuitry is further configured to select any reference visual field setting from a plurality of the reference visual field settings; and generate the display image on a basis of the selected reference visual field setting. However, Hashimoto discloses,
wherein the circuitry is further configured to select any reference visual field setting from a plurality of the reference visual field settings [¶0088; area determiner 110 b and the display mode determiner 110 c can determine the areas and the display modes of the vehicle exterior image Imo, the vehicle body image Imi, the additional images, and the output image Im through operation input by the occupant such as the driver. In this case, for example, the occupant such as the driver can view the output image Im of a preferred area in a preferred display mode on the display 10 a. ¶0069; display mode determiner 110 c determines the display mode of the output image Im on the displays 10 a and 24 a. The display mode determiner 110 c can set or change the display mode of the output image Im... display mode determiner 110 c may set or change the display mode of the output image Im based on detection results by other sensors and devices, signals, data, and the like... display mode determiner 110 c can set or change, for example, the transmittance α, the color, the luminance, or the chroma of the vehicle body image Imi. When a composite image contains an interior image, the display mode determiner 110 c can set or change the transmittance of the interior image, for example]; and generate the display image on a basis of the selected reference visual field setting [¶0076; distortion corrector 110 e and the viewpoint converter 110 f acquire a display mode set by the display mode determiner 110 c (S3) and correct the vehicle exterior image Imo according to the acquired display mode (S4). ¶0091; display 10 a can display, for example, the output image Im in the appropriate display mode for each of the forward travel and the backward travel of the vehicle 1].
As disclosed by Hashimoto, a display mode determiner allows selection of any display mode/setting from among a plurality of settings, wherein each setting changes the display/visual field of the output image.
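As an illustration of selecting one setting from a plurality of stored settings, the sketch below models a table of selectable visual field settings and a lookup driven by occupant input. The mode names, fields, and values are hypothetical and are not drawn from Hashimoto's disclosure.

```python
# Illustrative sketch only; mode names and fields are hypothetical, not from Hashimoto.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReferenceVisualFieldSetting:
    name: str
    body_image_transmittance: float  # alpha of the superimposed body/interior image
    horizontal_fov_deg: float        # visual field used to generate the output image

# A plurality of selectable settings; the occupant picks one and the
# display image is then generated on the basis of the selected setting.
SETTINGS = {
    "plane_mirror":  ReferenceVisualFieldSetting("plane_mirror", 0.3, 25.0),
    "curved_mirror": ReferenceVisualFieldSetting("curved_mirror", 0.3, 45.0),
    "wide_view":     ReferenceVisualFieldSetting("wide_view", 0.1, 60.0),
}

def select_setting(operator_choice: str) -> ReferenceVisualFieldSetting:
    """Return the reference visual field setting chosen by the occupant."""
    return SETTINGS[operator_choice]

print(select_setting("curved_mirror"))
```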
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the apparatus disclosed by Yoshida in view of Kosugi with the mode selection disclosed by Hashimoto in order to allow an operator to view output images in a preferred area and in a preferred display mode [Hashimoto ¶0005-¶0009, ¶0069, ¶0085-¶0097]. As disclosed by Hashimoto, different drivers may have different viewing/display preferences, and by selecting any of a plurality of display modes/settings, a driver can view the displayed images in a preferred/optimal manner.
Allowable Subject Matter
Claims 10-12 and 15 would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if a terminal disclaimer is filed to overcome the double patenting rejection noted herein.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to REBECCA A VOLENTINE whose telephone number is (571)270-7261. The examiner can normally be reached Monday through Friday, 9 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Joe Ustaris, can be reached at (571)272-7383. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/REBECCA A VOLENTINE/Primary Examiner, Art Unit 2483 March 9, 2026