Prosecution Insights
Last updated: April 19, 2026
Application No. 18/979,941

METHOD FOR OPTIMIZING BASE IMAGE LIBRARY, APPARATUS FOR OPTIMIZING BASE IMAGE LIBRARY, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Non-Final OA: §102, §112
Filed: Dec 13, 2024
Examiner: ADEDIRAN, ABDUL-SAMAD A
Art Unit: 2621
Tech Center: 2600 — Communications
Assignee: Huike (Singapore) Holding Pte. Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 78% (Favorable)
OA Rounds: 1-2
To Grant: 2y 1m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 78% (481 granted / 617 resolved; +16.0% vs TC avg; above average)
Interview Lift: +13.9% (moderate; based on resolved cases with an interview)
Avg Prosecution: 2y 1m (fast prosecutor; 22 applications currently pending)
Career History: 639 total applications across all art units
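The headline figures above can be cross-checked from the raw counts. A minimal sketch, with the assumption (not stated on the page) that the interview lift is the with-interview allowance rate minus a without-interview baseline:

```python
# Sanity-check of the dashboard figures; variable names are illustrative.

granted, resolved = 481, 617

# Career allow rate from the raw counts shown above.
career_allow_rate = round(100 * granted / resolved, 1)
print(career_allow_rate)  # 78.0 -> the "78% Career Allow Rate" figure

# The page reports 92% with an interview and a +13.9% lift, which,
# under the assumed formula, implies a without-interview baseline.
with_interview = 92.0
interview_lift = 13.9
implied_baseline = round(with_interview - interview_lift, 1)
print(implied_baseline)  # 78.1
```

The implied 78.1% baseline sits right at the 78% career rate, which is consistent with interviews accounting for essentially all of the reported lift.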

Statute-Specific Performance

§101: 1.8% (-38.2% vs TC avg)
§103: 41.2% (+1.2% vs TC avg)
§102: 19.5% (-20.5% vs TC avg)
§112: 29.0% (-11.0% vs TC avg)

Based on career data from 617 resolved cases; comparisons are against Tech Center average estimates.

Office Action

§102, §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Acknowledgment is made of applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent Application No. 18/979,941, filed on December 13, 2024.

Oath/Declaration

The Oath/Declaration as filed on December 13, 2024 is noted by the Examiner.

Claim Objections

Claim 1 is objected to because of the following informalities: the claim recites the limitation “the calculation result” in the eighth line, but the limitation is unclear at least because there is insufficient antecedent basis for it; the claim uses the term “the calculation result” for the first time without previously reciting that exact term, leaving unclear exactly what calculation result is being referred to. The Examiner therefore suggests that the limitation be amended, without adding new matter, in a manner that resolves the antecedent basis issue. Accordingly, any claim(s) dependent on claim 1 are objected to for the same reasons.

Claim 2 is objected to because of the following informalities: the limitation “signal interference detection” in the third line of the claim renders the claim indefinite, because the meaning of this coined term is not apparent in light of the specification. See MPEP § 2173.05(a). The Examiner recommends that applicant amend the claim, without adding new matter, to positively recite in definite terms what “signal interference detection” actually is. Accordingly, any claim(s) dependent on claim 2 are objected to for the same reasons.
Claim 3 is objected to because of the following informalities: the limitation “stripe signals” in the ninth line of the claim renders the claim indefinite, because the meaning of this coined term is not apparent in light of the specification. See MPEP § 2173.05(a). The Examiner recommends that applicant amend the claim, without adding new matter, to positively recite in definite terms what “stripe signals” actually are. Accordingly, any claim(s) dependent on claim 3 are objected to for the same reasons.

Claim 6 is objected to because of the following informalities: the limitation “normalizing the plurality of residual images” in the second line of the claim renders the claim indefinite, because the meaning of this coined term is not apparent in light of the specification. See MPEP § 2173.05(a). The Examiner recommends that applicant amend the claim, without adding new matter, to positively recite in definite terms what “normalizing the plurality of residual images” actually is. Accordingly, any claim(s) dependent on claim 6 are objected to for the same reasons.

Claim 10 is objected to because of the following informalities: the limitation “a pre-trained shading interference detection network” in the second through third lines of the claim renders the claim indefinite, because the meaning of this coined term is not apparent in light of the specification. See MPEP § 2173.05(a). The Examiner recommends that applicant amend the claim, without adding new matter, to positively recite in definite terms what “a pre-trained shading interference detection network” actually is. In addition, claim 10 has an extra white-space before the period in the fifth line of the claim; the Examiner suggests removing it. Appropriate correction is required.

Claim 12 is objected to because of the following informalities: the claim recites the limitation “a base image library” in the sixth through seventh lines, but the limitation is indefinite because it is unclear whether it refers to the same base image library recited in the first line of claim 1 or to a different base image library. The Examiner therefore suggests that the limitation be amended, without adding new matter, in a manner that resolves the indefiniteness. In addition, claim 12 depends on an independent method claim yet claims an apparatus for performing the method; a single claim which claims both an apparatus and the method steps of using the apparatus is indefinite under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph (see MPEP § 2173.05(p)). The Examiner therefore suggests that the limitation be amended, without adding new matter, in a manner that resolves the indefiniteness.

Claim Interpretation – 35 USC § 112(f)

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) are:

“a determination unit configured to” recited in the second line of claim 11, which is considered to read on determination unit 301 (pg. 20, paragraph [0093]; 301, FIG. 3);

“calculation unit configured to” recited in the seventh line of claim 11, which is considered to read on calculation unit 302 (pg. 20, paragraph [0094]; 302, FIG. 3); and

“deletion unit configured to” recited in the tenth line of claim 11, which is considered to read on deletion unit 303 (pg. 20, paragraph [0095]; 303, FIG. 3).

Because these claim limitation(s) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid such interpretation (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid such interpretation.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 11 limitations “a determination unit is configured to”, “a calculation unit is configured to”, and “a deletion unit is configured to” invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to clearly disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. In particular, no structure or material capable of performing the claimed functions is present or shown in any of the figures and substantively linked to the claimed function. Therefore, claim 11 is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph. Accordingly, any claim(s) dependent on claim 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, for at least the same reasons.

Moreover, for a computer-implemented means-plus-function claim limitation invoking 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, a general purpose computer is usually sufficient as the corresponding structure only for performing a general computing function (e.g., “means for storing data”); the corresponding structure for performing a specific function must be more than simply a general purpose computer or microprocessor. See In re Katz Interactive Call Processing Patent Litigation, 639 F.3d 1303, 1316, 97 USPQ2d 1737, 1747 (Fed. Cir. 2011). Here, however, neither a specialized computer or specialized processor circuitry nor a general purpose computer or general processor circuitry for implementing the functionality of the determination unit, the calculation unit, and the deletion unit is substantively recited and directly tied, with sufficient specificity, to those units and their functionality in claim 11.

Applicant may:

(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;

(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function, so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:

(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates them to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or

(b) Stating on the record what corresponding structure, material, or acts, implicitly or inherently set forth in the written description of the specification, perform the claimed function.

For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181. Accordingly, as mentioned above, any claim(s) dependent on claim 11 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, for at least the same reasons.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 3, and 11-12 are rejected under 35 U.S.C. 102(a)(1) and 102(a)(2) as being anticipated by Flament et al., U.S. Patent Application Publication 2019/0188442 A1 (hereinafter Flament).

Regarding claim 1, Flament teaches a method for optimizing a base image library (1300, FIGS. 7A-7B, 13-15, 21, and 23-25; paragraphs [0127]-[0128] of Flament teach that if it is not determined that a darkfield candidate image should be captured, flow diagram 1300 returns to procedure 1310; in one embodiment, flow diagram 1300 delays the performance of procedure 1310 a predetermined time period, e.g., to ensure that enough time has passed that a darkfield candidate image could be captured, given satisfaction of other conditions; if it is determined that a darkfield candidate image should be captured, flow diagram 1300 proceeds to procedure 1340; at procedure 1340, a darkfield image is captured as a darkfield candidate image, where a darkfield image is an image absent an object interacting with the sensor; at procedure 1350, the darkfield estimate is updated with the darkfield candidate image; in one embodiment, as shown at procedure 1355, the darkfield candidate image is merged with the darkfield estimate; and in one embodiment, as shown at procedure 1360, provided the darkfield estimate is not stored, the darkfield candidate image is stored as the darkfield estimate; see also at least paragraphs [0072]-[0088], [0120]-[0126], [0129]-[0132], [0139], [0149], [0163], and [0172]-[0179] of Flament (i.e., Flament teaches a method for darkfield tracking and updating of a stored darkfield estimate)), comprising:

determining a first base image and a plurality of second base images from the base image library, wherein the base image library comprises a plurality of base images collected by a fingerprint sensor from a detection region without a detection object;

calculating a difference between the first base image and each of the plurality of second base images respectively (715, FIGS. 7A-9, 12-17, 21, and 23-25; paragraph [0117] of Flament teaches that in one embodiment, as soon as the darkfield candidate image is captured, a test that the darkfield candidate image is indeed a darkfield image can be performed; this test can look for structures in the image to distinguish an actual darkfield image from a fingerprint image or an object image; an additional darkfield quality verification step may be applied before merging the recently acquired darkfield candidate image; for example, an image analysis may be applied to scan for any image contributions that are not likely to constitute a darkfield; the image analysis may comprise looking for features resembling a fingerprint, or spatial frequencies related to a fingerprint; if such features are present, the darkfield candidate image may not be used, or may be used with a lesser weight; a darkfield quality factor may be determined, and the weight of the candidate darkfield in the merger may depend on the quality factor; the quality factor may also express a confidence that no object was detected; it may also be determined whether the quality of the darkfield estimate will be negatively affected by the merger of the darkfield candidate image, and based on this determination, the weight of the darkfield candidate image may be adapted; the stored darkfield estimate may be subtracted from the recently acquired darkfield candidate image, since this represents the latest acquired image of the sensor; if the darkfield procedure is working properly, the so-obtained corrected image should be nearly uniform but for a small contribution; the uniformity or quality of the image may be determined to analyze the quality of the darkfield correction, and any issues or errors may be used as feedback to automatically adapt the darkfield correction process; see also at least the Abstract and paragraphs [0072]-[0091], [0119]-[0140], [0149], [0155], [0158], [0161]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches subtracting a stored darkfield estimate from a recently acquired darkfield candidate image, which is capable of being captured by a fingerprint sensor when an object is not interacting with the fingerprint sensor; evaluating the darkfield candidate image for contamination, which includes comparing the darkfield candidate image to a reference database that includes images captured by the sensor when no finger is touching the sensor; and even merging the darkfield candidate image with previously recorded darkfield images (i.e., merging with a darkfield estimate))), and obtaining a plurality of residual images based on the calculation result (FIGS. 8-9, 12-17, 21, and 23-25; paragraph [0128] of Flament teaches that at procedure 1340, a darkfield image is captured as a darkfield candidate image, where a darkfield image is an image absent an object interacting with the sensor; at procedure 1350, the darkfield estimate is updated with the darkfield candidate image; in one embodiment, as shown at procedure 1355, the darkfield candidate image is merged with the darkfield estimate; and in one embodiment, as shown at procedure 1360, provided the darkfield estimate is not stored, the darkfield candidate image is stored as the darkfield estimate; see also at least the Abstract and paragraphs [0083]-[0091], [0117]-[0127], [0129]-[0140], [0149], [0155], [0158], [0161]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches subtracting the stored darkfield estimate from the recently acquired darkfield candidate image, which is capable of being captured by the fingerprint sensor when the object is not interacting with the fingerprint sensor; evaluating the darkfield candidate image for contamination, which includes comparing the darkfield candidate image to the reference database that includes images captured by the sensor when no finger is touching the sensor; and even monitoring, replacing, and merging the darkfield candidate image with previously recorded darkfield images on a continuous basis in order to improve performance of fingerprint matching (i.e., merging with the darkfield estimate on a continuous basis))); and

deleting, in response to determining based on the plurality of residual images that shading interference is present in the first base image, the first base image from the base image library (FIGS. 8-9, 12-17, 21, and 23-25; paragraph [0171] of Flament teaches that in some embodiments, if the darkfield contamination verification of procedure 2320 reveals a contaminated darkfield, a decision whether or not to allow authentication may be based on the level of contamination (e.g., at procedure 2340); for example, for minor contamination, the authentication may be allowed, but the dynamic update may not be allowed; when a serious contamination is detected, other measures may be taken; for example, it may be decided not to do any darkfield correction, because no correction may yield better results than a correction with an incorrect darkfield; alternatively, a different darkfield may be selected, e.g., an older darkfield; this different darkfield may be selected from a database of darkfield images acquired under similar conditions as the current operating conditions; in some embodiments, a new darkfield may be determined through measurement or simulation/modelling; and when a new measurement is required, the system may ask the user to remove his or her finger in order to acquire a correct darkfield; see also at least the Abstract and paragraphs [0083]-[0091], [0117]-[0140], [0149], [0155], [0158], [0161]-[0170], [0172]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches subtracting the stored darkfield estimate from the recently acquired darkfield candidate image, which is capable of being captured by the fingerprint sensor when the object is not interacting with the fingerprint sensor; evaluating the darkfield candidate image for contamination, which includes comparing the darkfield candidate image to the reference database that includes images captured by the sensor when no finger is touching the sensor; and even monitoring, replacing, and merging the darkfield candidate image with previously recorded darkfield images on a continuous basis in order to improve performance of fingerprint matching (i.e., merging with the darkfield estimate on a continuous basis))).

Regarding claim 3, Flament teaches the method according to claim 1, wherein the calculating the difference between the first base image and each of the plurality of second base images respectively, and obtaining the plurality of residual images based on the calculation result comprises: calculating the difference between the first base image and each of the plurality of second base images respectively to obtain a plurality of first images, and determining the plurality of first images as the residual images respectively; or selecting a plurality of second images from the plurality of first images, wherein an intensity or number of stripe signals included in the second images is smaller than a preset noise threshold, and determining the plurality of second images as the residual images respectively; or obtaining the plurality of residual images based on the plurality of second images (FIGS.
8-9, 12-17, 21, and 23-25, paragraph[0128] of Flament teaches at procedure 1340, a darkfield image is captured as a darkfield candidate image, where a darkfield image is an image absent an object interacting with the sensor; at procedure 1350, the darkfield estimate is updated with the darkfield candidate image; in one embodiment, as shown at procedure 1355, the darkfield candidate image is merged with the darkfield estimate; and in one embodiment, as shown at procedure 1360, provided the darkfield estimate is not stored, the darkfield candidate image is stored as the darkfield estimate, and See also at least ABSTRACT and paragraphs[0083]-[0091], [0117]-[0127], [0129]-[0140], [0149], [0155], [0158], [0161]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches subtracting the stored darkfield estimate from the recently acquired darkfield candidate image, which is capable of being captured by the fingerprint sensor when the object is not interacting with the fingerprint sensor, evaluating the darkfield candidate image for contamination that includes comparing the darkfield candidate image to the reference database that includes images captured by the sensor when no finger is touching the sensor, and even monitoring, replacing and merging the darkfield candidate image with previously recorded darkfield images on a continuous basis in order to improve performance of fingerprint matching (i.e., merging with the darkfield estimate on a continuous basis))). Regarding claim 11, Flament teaches an apparatus for optimizing a base image library (710 FIGS. 7A-7B, 13-15, 21, and 23-25, paragraph[0071] of Flament teaches with reference to the drawings, FIG. 
7A illustrates an example of an operational environment 700 for sensing of human touch in accordance with one or more embodiments of the disclosure; as illustrated, a device 710 includes a fingerprint sensor 715 or other type of surface sensitive to touch; in one embodiment, fingerprint sensor 715 is disposed beneath a touch-screen display device of device 710; in another embodiment, fingerprint sensor 715 is disposed adjacent or close to a touch-screen display device of device 710; in another embodiment, fingerprint sensor 715 is comprised within a touch-screen display device of device 710; in another embodiment, fingerprint sensor 715 is disposed on the side or back of the device; and it should be appreciated that device 710 includes a fingerprint sensor 715 for sensing a fingerprint of a finger interacting with device 710, and See also at least paragraphs[0072]-[0088], [0120]-[0132], [0139], [0149], [0163], and [0172]-[0179] of Flament (i.e., Flament teaches a method for darkfield tracking and updating of a stored darkfield estimate)), comprising: a determination unit configured to determine a first base image and a plurality of second base images from the base image library, wherein the base image library comprises a plurality of base images collected by a fingerprint sensor from a detection region without a detection object, and the first base image is different from the second base images; a calculation unit configured to calculate a difference between the first base image and each of the plurality of second base images respectively (715 FIGS. 
7A-9, 12-17, 21, and 23-25, paragraph[0117] of Flament teaches in one embodiment, as soon as the darkfield candidate image is captured, a test that the darkfield candidate image is indeed a darkfield image can be performed; this test can look for structures in the image to distinguish an actual darkfield image from a fingerprint image or an object Image; an additional darkfield quality verification step may be applied before merging the recently acquired darkfield candidate image; for example, an image analysis may be applied to scan for any image contribution that are not likely to constitute a darkfield; the image analysis may comprise looking for features resembling a fingerprint, or spatial frequencies related to a fingerprint; if such features are present, the darkfield candidate image may not be used, or used with a lesser weight; a darkfield quality factor may be determined, and the weight of the candidate darkfield in the merger may depend on the quality factor; the quality factor may also express a confidence in the fact that no object was detected; it may also be determined if the quality of the darkfield estimate will be negatively affected by the merger of the darkfield candidate image, and based on this determination, the weight of the darkfield candidate image may be adapted; the stored darkfield estimate may be subtracted from the recently acquired darkfield candidate image, since this represent the latest acquired image of the sensor; if the darkfield procedure is working properly, the so obtained corrected image should be nearly uniform but for a small contribution; the uniformity of quality of the image may be determined to analysis the quality of the darkfield correction, and any issue or errors may be used as feedback to automatically adapt the darkfield correction process, and See also at least ABSTRACT and paragraphs[0037]-[0045], [0071]-[0091], [0119]-[0140], [0149], [0155], [0158], [0161]-[0179], and [0185]-[0191] of Flament (i.e., Flament 
teaches a processor that executes instructions, which are included in a non-transitory processor-readable storage medium (e.g., memory), for subtracting a stored darkfield estimate from a recently acquired darkfield candidate image, which is capable of being captured by a fingerprint sensor when an object is not interacting with the fingerprint sensor, evaluating the darkfield candidate image for contamination that includes comparing the darkfield candidate image to a reference database that includes images captured by the sensor when no finger is touching the sensor, and even merging the darkfield candidate image with previously recorded darkfield images (i.e., merging with a darkfield estimate))), and obtain a plurality of residual images based on the calculation result (FIGS. 7A-9, 12-17, 21, and 23-25, paragraph[0128] of Flament teaches at procedure 1340, a darkfield image is captured as a darkfield candidate image, where a darkfield image is an image absent an object interacting with the sensor; at procedure 1350, the darkfield estimate is updated with the darkfield candidate image; in one embodiment, as shown at procedure 1355, the darkfield candidate image is merged with the darkfield estimate; and in one embodiment, as shown at procedure 1360, provided the darkfield estimate is not stored, the darkfield candidate image is stored as the darkfield estimate, and See also at least ABSTRACT and paragraphs[0037]-[0045], [0071]-[0091], [0117]-[0127], [0129]-[0140], [0149], [0155], [0158], [0161]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches subtracting the stored darkfield estimate from the recently acquired darkfield candidate image, which is capable of being captured by the fingerprint sensor when the object is not interacting with the fingerprint sensor, evaluating the darkfield candidate image for contamination that includes comparing the darkfield candidate image to the reference database that includes images captured by the sensor when no 
finger is touching the sensor, and even monitoring, replacing and merging the darkfield candidate image with previously recorded darkfield images on a continuous basis in order to improve performance of fingerprint matching (i.e., merging with the darkfield estimate on a continuous basis))); and a deletion unit configured to delete, in response to determining based on the plurality of residual images that shading interference is present in the first base image, the first base image from the base image library (FIGS. 7A-9, 12-17, 21, and 23-25, paragraph[0171] of Flament teaches in some embodiments, if the darkfield contamination verification of procedure 2320 reveals a contaminated darkfield, a decision whether or not to allow authentication may be based on the level of contamination (e.g., at procedure 2340); for example, for minor contamination, the authentication may be allowed, but the dynamic update may not be allowed; when a serious contamination is detected, other measures may be taken; for example, it may be decided not to do any darkfield correction, because no correction may yield better results than a correction with an incorrect darkfield; alternatively, a different darkfield may be selected, e.g., an older darkfield; this different darkfield may be selected from a database of darkfield images acquired under similar conditions as the current operating conditions; in some embodiments, a new darkfield may be determined, through measurement or simulation/modelling; and when a new measurement is required, the system may ask the user to remove his or her finger in order to acquire a correct darkfield, and See also at least ABSTRACT and paragraphs[0037]-[0045], [0071]-[0091], [0117]-[0140], [0149], [0155], [0158], [0161]-[0170], [0172]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches the processor that executes instructions, which are included in a non-transitory processor-readable storage medium (e.g., memory), for subtracting the stored darkfield 
estimate from the recently acquired darkfield candidate image, which is capable of being captured by the fingerprint sensor when the object is not interacting with the fingerprint sensor, evaluating the darkfield candidate image for contamination that includes comparing the darkfield candidate image to the reference database that includes images captured by the sensor when no finger is touching the sensor, and even monitoring, replacing and merging the darkfield candidate image with previously recorded darkfield images on a continuous basis in order to improve performance of fingerprint matching (i.e., merging with the darkfield estimate on a continuous basis))). Regarding claim 12, Flament teaches an electronic device, comprising: a processor, a memory, a communication interface, and (710, 760, 740 FIGS. 7A-9, 12-17, 21, and 23-25, paragraph[0078] of Flament teaches while the embodiment of FIG. 7B includes processor 760 and memory 770, as described above, it should be appreciated that various functions of processor 760 and memory 770 may reside in other components of device 710 (e.g., within always-on circuitry 730 or system circuitry 740); moreover, it should be appreciated that processor 760 may be any type of processor for performing any portion of the described functionality (e.g., custom digital logic), and See also at least ABSTRACT and paragraphs[0037]-[0045], [0071]-[0077], [0079]-[0091], [0117]-[0140], [0149], [0155], [0158], [0161]-[0170], [0171]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches an electronic device having memory and the processor at least within system circuitry, wherein the system circuitry is an interface that exchanges signals with always-on circuitry via a bus, and wherein the processor executes instructions, which are included in a non-transitory processor-readable storage medium (e.g., memory), for subtracting the stored darkfield estimate from the recently acquired darkfield candidate image (i.e., for darkfield image 
correction) that is capable of being captured by the fingerprint sensor when the object is not interacting with the fingerprint sensor, evaluating the darkfield candidate image for contamination that includes comparing the darkfield candidate image to the reference database that includes images captured by the sensor when no finger is touching the sensor, and even monitoring, replacing and merging the darkfield candidate image with previously recorded darkfield images on a continuous basis in order to improve performance of fingerprint matching (i.e., merging with the darkfield estimate on a continuous basis))) a communication bus, wherein the processor, the memory, and the communication interface implement communication with each other through the communication bus; and the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to execute the method for optimizing a base image library according to claim 1 (735 FIGS. 7A-9, 12-17, 21, and 23-25, paragraph[0073] of Flament teaches while the embodiment of FIG. 
7B includes processor 760 and memory 770, as described above, it should be appreciated that various functions of processor 760 and memory 770 may reside in other components of device 710 (e.g., within always-on circuitry 730 or system circuitry 740); moreover, it should be appreciated that processor 760 may be any type of processor for performing any portion of the described functionality (e.g., custom digital logic), and See also at least ABSTRACT and paragraphs[0037]-[0045], [0071]-[0091], [0117]-[0140], [0149], [0155], [0158], [0161]-[0170], [0171]-[0179], and [0185]-[0191] of Flament (i.e., Flament teaches an electronic device having memory and the processor at least within system circuitry, wherein the system circuitry is an interface that exchanges signals with always-on circuitry via a bus, wherein the always-on circuitry includes the fingerprint sensor, and wherein the processor executes instructions, which are included in a non-transitory processor-readable storage medium (e.g., memory), for subtracting the stored darkfield estimate from the recently acquired darkfield candidate image (i.e., for darkfield image correction) that is capable of being captured by the fingerprint sensor when the object is not interacting with the fingerprint sensor, evaluating the darkfield candidate image for contamination that includes comparing the darkfield candidate image to the reference database that includes images captured by the sensor when no finger is touching the sensor, and even monitoring, replacing and merging the darkfield candidate image with previously recorded darkfield images on a continuous basis in order to improve performance of fingerprint matching (i.e., merging with the darkfield estimate on a continuous basis))). Potentially Allowable Subject Matter Claims 2 and 4-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten to overcome applicable rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 
112 (pre-AIA), 2nd paragraph, and objection(s), if any, indicated above and if rewritten in independent form including all of the limitations of the base claim and any intervening claims, because for each of claims 2 and 4-10 the prior art references of record do not teach the combination of all element limitations as presently claimed. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABDUL-SAMAD A ADEDIRAN whose telephone number is (571)272-3128. The examiner can normally be reached Monday through Thursday, 8:00 am to 5:00 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amr Awad, can be reached at 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ABDUL-SAMAD A ADEDIRAN/Primary Examiner, Art Unit 2621
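The darkfield maintenance loop the examiner maps from Flament's paragraphs [0117] and [0128] amounts to: capture a candidate image when no object touches the sensor, screen it for contamination, merge it into a running darkfield estimate with a weight scaled by a quality factor (or skip it entirely when contamination is serious), and subtract the estimate from later captures. The sketch below illustrates that flow only; every name (`quality_factor`, `update_estimate`, the residual-spread heuristic, the thresholds) is an illustrative assumption, not Flament's or the applicant's actual implementation.

```python
import numpy as np

def quality_factor(candidate: np.ndarray, estimate: np.ndarray) -> float:
    """Illustrative stand-in for Flament's contamination analysis: a clean
    darkfield candidate should differ from the stored estimate only by a
    small, nearly uniform amount, so a large residual spread suggests a
    finger or other object was present during capture."""
    spread = float((candidate - estimate).std())
    return 1.0 / (1.0 + spread)  # in (0, 1]; 1.0 = perfectly consistent

def update_estimate(estimate: np.ndarray, candidate: np.ndarray,
                    base_weight: float = 0.2,
                    reject_below: float = 0.5) -> np.ndarray:
    """Merge the candidate into the running estimate with a weight scaled
    by its quality factor; a seriously contaminated candidate is skipped,
    mirroring the 'lesser weight / not used' behavior cited from [0117]."""
    q = quality_factor(candidate, estimate)
    if q < reject_below:
        return estimate  # contaminated: leave the stored estimate alone
    w = base_weight * q
    return (1.0 - w) * estimate + w * candidate

def darkfield_correct(image: np.ndarray, estimate: np.ndarray) -> np.ndarray:
    """Subtract the stored darkfield estimate from a newly acquired image."""
    return image - estimate
```

Under this sketch, a candidate that closely tracks the estimate is blended in with full weight, while one with a large residual spread (fingerprint-like structure) leaves the estimate untouched, which is the distinction the §102 mapping leans on.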

Prosecution Timeline

Dec 13, 2024
Application Filed
Jan 15, 2026
Non-Final Rejection — §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604613
DISPLAY DEVICE
2y 5m to grant; granted Apr 14, 2026
Patent 12592188
PIXEL CIRCUITS AND DISPLAY PANELS
2y 5m to grant; granted Mar 31, 2026
Patent 12586527
PIXEL DRIVING CIRCUIT, DISPLAY DEVICE INCLUDING THE SAME, AND METHOD FOR DRIVING THE DISPLAY DEVICE
2y 5m to grant; granted Mar 24, 2026
Patent 12586496
DISPLAY DEVICE AND METHOD OF DRIVING A DISPLAY DEVICE
2y 5m to grant; granted Mar 24, 2026
Patent 12572202
Determining IPD By Adjusting The Positions Of Displayed Stimuli
2y 5m to grant; granted Mar 10, 2026
Based on the 5 most recent grants by this examiner.


Prosecution Projections

1-2
Expected OA Rounds
78%
Grant Probability
92%
With Interview (+13.9%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 617 resolved cases by this examiner. Grant probability derived from career allow rate.
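The projections above appear to follow directly from the career figures reported earlier: 481 grants out of 617 resolved cases gives the 78% baseline, and the with-interview figure looks like that baseline plus the +13.9% interview lift, assuming the tool treats the lift as a simple additive adjustment (an assumption about its methodology, not something the page states explicitly):

```python
# Figures taken from the report itself; only the additive-lift model is assumed.
granted, resolved = 481, 617
base_allow = granted / resolved        # career allow rate
interview_lift = 0.139                 # reported lift among interviewed cases
with_interview = base_allow + interview_lift

print(round(base_allow * 100))         # 78
print(round(with_interview * 100))     # 92
```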
