DETAILED ACTION
I. Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
II. Response to Amendment
The response, filed September 5, 2025, has been entered and made of record. Claims 1-20 are pending in the application.
III. Response to Arguments
As discussed during the August interviews, Applicant’s arguments regarding the amended independent claims and Park et al. are persuasive. Therefore, all rejections with Park et al. as the primary reference have been withdrawn. The examiner also agrees with Applicant’s argument that Bando fails to disclose color information synthesis via a machine-learned model. However, see the new grounds of rejection that follow.
IV. Finality of the instant Office Action
The examiner notes that the amendment to the independent claims primarily involves limitations that were previously presented in claims 5, 6, 18, and 19. However, some dependent claims (e.g., claims 6 and 14), the full scope of which is newly presented by amendment, may be more appropriately rejected using a different reference (i.e., Bando). Therefore, the instant Office action is made final.
V. Claim Objections
A. Claim 1 is objected to because of the following informalities: On line 8, “obtaining” should be changed to “obtain.”
B. Claim 12 is objected to because of the following informalities: At the end of line 3, the examiner suggests replacing “and” with “wherein.” As currently drafted, the claim reads “[t]he camera system of claim 1, further comprising…the one or more first machine learned models are configured to….”
VI. Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A. Claims 1, 2, 4, 6, 8, 10-14, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bando (US 2013/0229544 A1) in view of Imagawa et al. (US 2021/0104031 A1) and further in view of Chen (US 2023/0291982 A1).
As to claim 1, Bando teaches a camera system (Fig. 1, image pickup device “1”), comprising:
a monochrome camera (Fig. 1, first image pickup part “11”; Fig. 2; [0023], lines 2 and 3), configured to capture a first image of a scene ([0025], lines 2 and 3);
a color camera (Fig. 1, second image pickup part “12”), disposed separately from the monochrome camera (Fig. 1; [0023], lines 5-8), configured to capture a second image of the scene ([0025], lines 4-6); and
one or more processors (Fig. 1, image processing device “2”) configured to:
align the second image to the first image ([0027], lines 1-5), and
obtain a third image by:
identifying non-aligned portions of the first image for which portions of the second image are not aligned to corresponding portions of the first image (Fig. 6; [0027], lines 10 and 11),
transferring color information of portions of the second image to corresponding portions of the first image for those portions of the second image which are aligned to corresponding portions of the first image ([0036]; {The deformation of the color image to align with the viewpoint of corresponding points of the luminance image is effectively a transfer of color information to positions of color information at corresponding points of the luminance image.}), and
applying synthesized color information to the non-aligned portions of the first image ([0038] and [0039]).
Claim 1 differs from Bando in that it requires (1) that the monochrome camera and the color camera each have a global shutter and (2) that the synthesized color information is applied via one or more first machine-learned models.
(1) However, in the same field of endeavor as the instant application, Imagawa et al. discloses a camera system (Fig. 2, system “10”) including a monochrome camera (Fig. 1, monochrome camera “20”) and a color camera (Fig. 2, color camera “30”) that respectively capture images at the same time ([0080], lines 1-3) using global shuttering ([0067], lines 9 and 10; [0068], lines 8 and 9). In light of the teaching of Imagawa et al., the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to capture Bando’s luminance image and color image at the same time using global shuttering functionality in respective image pickup parts because, as is known in the art, global shuttering can prevent artifacts from appearing in captured images due to either handshake or object motion in the captured scene. Moreover, capturing the image signals at the same time can reduce differences in scene detail between the color and luminance images, thereby improving image alignment results.
(2) Further in the same field of endeavor as the instant application, Chen discloses a camera system including a monochrome camera and a color camera in which the system transfers color from the color images to the monochrome images using a neural network model ([0030]). In light of the teaching of Chen, the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to use a machine learning/neural network model to perform Bando’s color transfer/deformation function given the numerous advantages that artificial intelligence provides in the image processing space, like improved quality, efficiency, and scalability.
As to claim 2, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1, wherein the monochrome camera and the color camera are synchronized to capture the first image and the second image at a substantially same time (see Imagawa et al., [0080], lines 1-3).
As to claim 4, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1, wherein the monochrome camera and the color camera are disposed to face in a same direction (see Bando, Fig. 1) and are disposed less than a threshold distance from each other (see Bando, Fig. 1; {As the threshold distance is not defined in claim 4, it can essentially be arbitrarily chosen. Therefore, the examiner reads the claimed threshold distance as a distance greater than the distance between Bando’s image pickup parts.}).
As to claim 6, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1, wherein chromatic channels of the third image are synthesized and a luma component of the first image is retained for obtaining the third image (see Bando, [0044], lines 1-4).
As to claim 8, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1, wherein the color camera is a red-green-blue (RGB) camera (see Bando, [0025], lines 4-11).
As to claim 10, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1, wherein the monochrome camera has a larger size than the color camera (Fig. 2; {The luminance image pickup part is larger than the color image pickup part in that it extends farther in one direction relative to an arbitrary point.}).
As to claim 11, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1, wherein
the first image includes a luma component (see Bando, [0022], lines 1-3), and
the third image includes the luma component of the first image and a chroma component based on the second image (see Bando, [0044], lines 1-4).
As to claim 12, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1, further comprising:
a processor function that aligns the second image to the first image (see Bando, [0027], lines 1-5), wherein
the one or more first machine learned models are configured to synthesize the color information for the non-aligned portions of the first image for which portions of the second image are not aligned to corresponding portions of the first image, and to apply the synthesized color information to the non-aligned portions of the first image (see Bando, [0038] and [0039]; see Chen, [0030]).
The claim differs from Bando, as modified by Imagawa et al. and Chen, in that it requires that the image-aligning processor function be a function of a machine-learned model. Imagawa et al. further discloses that a machine learning model performs image region matching between the monochrome and color images ([0148], lines 1-4). In light of this additional teaching of Imagawa et al., the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to use a machine learning/neural network model to perform Bando’s corresponding point calculation given the numerous advantages that artificial intelligence provides in the image processing space, like improved quality, efficiency, and scalability.
Claims 13 and 18 are method claims reciting steps substantially similar to the processor functions of claims 1 and 6, respectively. Therefore, they are rejected as detailed above.
As to claim 14, Bando, as modified by Imagawa et al. and Chen, teaches the method of claim 13, wherein the first image includes a single channel having a luma component (see Bando, Fig. 2; [0022], lines 1-3), and the third image includes at least three channels including a first channel having the luma component and a plurality of channels including chroma components based on the second image (see Bando, [0044], lines 1-4).
As to claim 17, Bando, as modified by Imagawa et al. and Chen, teaches the method of claim 13, wherein a difference between a time at which the color camera captures the second image and a time at which the monochrome camera captures the first image is less than an integration time of the monochrome camera (see Imagawa et al., [0080], lines 1-3).
As to claim 20, Bando, as modified by Imagawa et al. and Chen, teaches processor functions corresponding to the instructions of the claim. Chen, however, further discloses that the reference’s functions can be implemented as processor-executable instructions stored on a non-transitory medium ([0043], lines 1-8). In light of this additional disclosure of Chen, the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to implement the processor functions of Bando, as modified by Imagawa et al., as processor-executable instructions stored in memory because one of ordinary skill in the art would recognize the numerous advantages that software-based solutions provide, like ease of implementation and design flexibility.
B. Claims 3, 15, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Bando (US 2013/0229544 A1) in view of Imagawa et al. (US 2021/0104031 A1), in view of Chen (US 2023/0291982 A1), and further in view of Fruchtman et al. (US 2020/0344398 A1).
As to claim 3, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 2. The claim differs from Bando, as modified by Imagawa et al. and Chen, in that it requires that each of the first image and the second image is captured when a peak lighting condition occurs in an environment in which the monochrome camera and the color camera are disposed.
However, in the same field of endeavor as the instant application, Fruchtman et al. discloses a camera (Fig. 3) that operates with a global shutter ([0030], lines 7-10), measures a flicker period of external illumination (Fig. 12, “S1210”), and times an exposure window such that image capture occurs during a peak external illumination cycle (Fig. 12, “S1220”). In light of the teaching of Fruchtman et al., the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to include functionality in the system of Bando, as modified by Imagawa et al. and Chen, that measures a flicker period of external illumination and captures the luminance and color images during a peak illumination cycle because, as Fruchtman et al. notes in the related art section, this would prevent flicker artifacts from appearing in the captured images, thereby leading to improved alignment and faithful detection of corresponding points.
Claim 15 is a method claim reciting steps substantially similar to the processor functions of claim 3. Therefore, it is rejected as detailed above.
As to claim 16, Bando, as modified by Imagawa et al., Chen, and Fruchtman et al., teaches the method of claim 15, wherein the color camera and the monochrome camera have a substantially same field of view (see Bando, e.g., Fig. 6).
C. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Bando (US 2013/0229544 A1) in view of Imagawa et al. (US 2021/0104031 A1), in view of Chen (US 2023/0291982 A1), and further in view of Chou et al. (US 10,469,821 B2).
As to claim 5, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 4. The claim differs from Bando, as modified by Imagawa et al. and Chen, in that it requires that the threshold distance is ten centimeters or less. However, in the same field of endeavor as the instant application, Chou et al. discloses that the typical distance between lenses of a dual-camera system in a mobile device is 1-2 cm (col. 7, lines 51-53). In light of Chou’s teaching, the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to separate Bando’s cameras by a distance of 1-2 cm as this ensures a comparatively small baseline distance so as to limit the size of the undefined region.
D. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Bando (US 2013/0229544 A1) in view of Imagawa et al. (US 2021/0104031 A1), in view of Chen (US 2023/0291982 A1), and further in view of Ichihashi et al. (US 2019/0259139 A1).
As to claim 9, Bando, as modified by Imagawa et al. and Chen, teaches the camera system of claim 1. The claim differs from Bando, as modified by Imagawa et al. and Chen, in that it requires that the color camera has a lower resolution than the monochrome camera. However, in the same field of endeavor as the instant application, Ichihashi et al. discloses a camera system for transferring color from a color image to a luminance image, the system including a monochrome image sensor and a color image sensor. The color image sensor has a lower resolution than the monochrome image sensor, and the color image sensor has a wider angle of view than the monochrome image sensor ([0043]).
In light of the teaching of Ichihashi et al., the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to design the luminance image pickup part of Bando with a higher resolution than that of the reference’s color sensor. Because luminance sensors generally provide less noisy images, even in low-light environments, increasing the luminance sensor’s resolution would provide that reduced noise at a higher level of image detail.
VII. Allowable Subject Matter
Claims 7 and 19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is the examiner’s statement of reasons for the indication of allowable subject matter: As to claims 7 and 19, Park et al. discloses how, in a luminance transfer mode, certain luminance regions will have a resolution lower than that of the color image. However, there is no discussion of the reverse, where transferred/synthesized color information has a lower resolution than the luminance image, in Park et al. or in the material prior art.
VIII. Pertinent Prior Art
Stauder et al. (US 2014/0161347 A1) discloses another example of transferring color information from one image to another image, where a parallax exists between the images.
IX. Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J DANIELS whose telephone number is (571)272-7362. The examiner can normally be reached M-F 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTHONY J DANIELS/
Primary Examiner, Art Unit 2637
12/27/2025