DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: a camera module and a computing unit in claims 14 and 26.
The corresponding structures for a camera module and a computing unit are identified in the specification in paragraphs [0002], [0013], [0015], and [0041].
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 14, 15, 25, and 26 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oon et al. (US 7609291 B2) in view of Price et al. (US 20190305018 A1).
Regarding claims 14 and 26, Oon discloses a camera system (fig. 1) comprising:
a camera module configured to generate camera images of a scene (12 and 18 of fig. 1, Col. 3, lines 4-9, the lens 18 is used to focus a scene of interest onto the color image sensor 12 to capture an image of that scene. The focusing mechanism 20 operates to move the lens 18 to focus the lens with respect to the scene of interest. The focusing mechanism 20 can be controlled manually using the user input interface 14 or automatically by the processor 24); and
a computing unit configured to control the camera module and to process the camera images (24 of fig. 1, Col. 4, line 59 - Col. 5, line 7, the processor 24 also controls various active components of the imaging device 10, such as the IR flash 16, the focusing mechanism 20, the image sensor 12 and the ADC 22. The processor 24 also performs operations commanded by a user through the user input interface 14),
wherein the camera module has an image sensor configured to record light in a visible spectrum using RGB pixels and light in an infrared spectrum using IR pixels (12 of fig. 1, 304 and 306 of fig. 3, Col. 5, lines 25-48, at block 304, a grayscale image of a scene of interest is captured using a flash of infrared light, which is produced by the IR flash 16, during a first exposure period without the IR filter 19 being positioned in front of the color image sensor 12. This grayscale image is captured as a frame of R.sub.0, G.sub.0 and B.sub.0 analog image signals, which are generated by the R, G and B photosensitive elements 32 of the color image sensor 12; at block 306, a color image of the same scene of interest is captured without using a flash of infrared light during a second exposure period with the IR filter 19 being positioned in front of the color image sensor 12. The color image is rich in visible color information of the scene of interest),
wherein the camera module is configured to capture at least two individual images of the scene by an exposure of the image sensor during at least two recording processes with different exposure durations (304 and 306 of fig. 3, a first exposure period and a second exposure period),
wherein the computing unit (24 of fig. 1) is configured to control the camera module to expose the image sensor with light in the visible spectrum with a first exposure duration during at least one first recording process (306 of fig. 3, at block 306, a color image of the same scene of interest is captured without using a flash of infrared light during a second exposure period with the IR filter 19 being positioned in front of the color image sensor 12. Col. 5, lines 5-6, the processor 24 also performs operations commanded by a user through the user input interface 14, so this suggests that the operations with the second exposure period would obviously be performed first as a first exposure duration by the processor commanded by the user), wherein a first RGB image is generated by the RGB pixels (306 of fig. 3, the color image comprises a first RGB image), and
control an infrared light source to illuminate the scene with infrared light in addition to the light in the visible spectrum during at least one second recording process and
simultaneously control the camera module to expose the image sensor during the second recording process with a second exposure duration deviating from the first exposure duration (304 of fig. 3, Col. 5, lines 25-40, at block 304, a grayscale image of a scene of interest is captured using a flash of infrared light, which is produced by the IR flash 16, during a first exposure period without the IR filter 19 being positioned in front of the color image sensor 12. This grayscale image is captured as a frame of R.sub.0, G.sub.0 and B.sub.0 analog image signals, which are generated by the R, G and B photosensitive elements 32 of the color image sensor 12; Col. 5, lines 5-6, the processor 24 also performs operations commanded by a user through the user input interface 14, so this suggests that the operations with the first exposure period would obviously be performed second as a second exposure duration by the processor commanded by the user),
wherein, during the second recording process, the image sensor simultaneously generates a second RGB image using the RGB pixels and an IR image using the IR pixels (Col. 5, lines 25-40, Since each of the R.sub.0, G.sub.0 and B.sub.0 digital image signals includes both red, green or blue color component and infrared component, each of the R.sub.0, G.sub.0 and B.sub.0 digital image signals includes grayscale information, which is derived from the respective color and infrared components).
It is noted that Oon does not teach wherein the computing unit is configured to merge the at least one first RGB image and the at least one second RGB image to form an HDR image.
Price teaches wherein the computing unit is configured to merge the at least one first RGB image and the at least one second RGB image to form an HDR image (fig. 2, Images 205 and 210 include the same content but were taken at different times according to traditional HDR imaging techniques. Specifically, image 205 is an image that was generated using a prolonged exposure time (corresponding to exposure time 110 from FIG. 1), image 210 is an image that was generated using a short exposure time (corresponding to exposure time 120 from FIG. 1), and image 215 is the resulting HDR image that is formed by merging image 205 with image 210; the process of fig. 16A, [0100-0104] there is an act (act 1625) of generating a combined digital image based on the first readout and the second readout).
Taking the teachings of Oon and Price together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the camera system of Oon to merge the first RGB image and the second RGB image as taught by Price, in order to generate a digital image with reduced motion blurring artifacts ([0011] and [0034] of Price).
Regarding claim 15, Oon teaches the camera system of claim 14, wherein the camera module and the computing unit are configured to continuously record the scene with a sequence from a plurality of first and second recording processes (12, 18, and 24 of fig. 1; see also the process of fig. 3 for the recording processes, wherein the recorded images are stored in the storage device, 26 of fig. 1), wherein at least several first RGB images and second RGB images captured temporally one after the other are joined to form an RGB video stream (308-312 of fig. 3, store the output color image in the storage), several IR images captured temporally one after the other are joined to form an IR video stream (304 of fig. 3, capturing images to form a video stream), or Price teaches several HDR images generated temporally one after the other are joined to form an HDR video stream ([0100-0104], there is an act (act 1625) of generating a combined digital image based on the first readout and the second readout).
Regarding claim 25, Oon teaches the camera system of claim 14, wherein the computing unit is configured to control the camera module and the infrared light source to change, depending on an image statistic obtained by analyzing at least one RGB image, IR image, or HDR image already generated: the second exposure duration, a time duration with which the infrared light source is operated, or a sensitivity of the IR pixels during the second recording process (304 and 306 of fig. 3, for the first and second recording processes and the first and second exposure periods).
Allowable Subject Matter
Claims 16-24 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Peng et al. (US 20210195087 A1) discloses a dual sensor imaging system and an imaging method thereof. The dual sensor imaging system includes at least one color sensor, at least one infrared ray (IR) sensor, a storage device, and a processor.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO whose telephone number is (571)272-7340. The examiner can normally be reached Monday-Friday 6:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TUNG T. VO
Primary Examiner
Art Unit 2425
/TUNG T VO/Primary Examiner, Art Unit 2425